Dataset schema:
- paper_id: string (9-12 chars)
- venue: string (139 classes)
- year: string (7 classes)
- paper_title: string (0-181 chars)
- paper_authors: string (4-925 chars)
- paper_abstract: string (1-5k chars)
- paper_keywords: string (2-436 chars)
- paper_content: string (0-100k chars)
- review_id: string (9-12 chars)
- review_title: string (0-500 chars)
- review_rating: string (61 classes)
- review_text: string (2-28.3k chars)
- review_confidence: string (13 classes)
- text: string (402-130k chars)
paper_id: Sy4tzwqxe
venue: ICLR.cc/2017/conference
year: 2017
paper_title: Two Methods for Wild Variational Inference
paper_authors: ["Qiang Liu", "Yihao Feng"]
paper_abstract: Variational inference provides a powerful tool for approximate probabilistic inference on complex, structured models. Typical variational inference methods, however, require the use of inference networks with computationally tractable probability density functions. This largely limits the design and implementation of variational inference methods. We consider wild variational inference methods that do not require tractable density functions on the inference networks, and hence can be applied in more challenging cases. As an example application, we treat stochastic gradient Langevin dynamics (SGLD) as an inference network, and use our methods to automatically adjust the step sizes of SGLD to maximize its convergence speed, significantly outperforming hand-designed step size schemes.
paper_keywords: ["Theory"]
paper_content:
ABSTRACT

Variational inference provides a powerful tool for approximate probabilistic inference on complex, structured models. Typical variational inference methods, however, require the use of inference networks with computationally tractable probability density functions. This largely limits the design and implementation of variational inference methods. We consider wild variational inference methods that do not require tractable density functions on the inference networks, and hence can be applied in more challenging cases. As an example application, we treat stochastic gradient Langevin dynamics (SGLD) as an inference network, and use our methods to automatically adjust the step sizes of SGLD, yielding significant improvement over hand-designed step size schemes.

1 INTRODUCTION

Probabilistic modeling provides a principled approach for reasoning under uncertainty, and has become increasingly dominant in modern machine learning, where highly complex, structured probabilistic models are often essential components for solving complex problems on increasingly large datasets. A key challenge, however, is to develop computationally efficient Bayesian inference methods to approximate, or draw samples from, the posterior distributions. Variational inference (VI) provides a powerful tool for scaling Bayesian inference to complex models and big data. The basic idea of VI is to approximate the true distribution with a simpler distribution by minimizing the KL divergence, transforming the inference problem into an optimization problem, which is often then solved efficiently using stochastic optimization techniques (e.g., Hoffman et al., 2013; Kingma & Welling, 2013). However, the practical design and application of VI are still largely restricted by the requirement of using simple approximation families, as we explain in the sequel.

Let $p(z)$ be a distribution of interest, such as the posterior distribution in Bayesian inference. VI approximates $p(z)$ with a simpler distribution $q_\eta(z)$ found in a set $\mathcal{Q} = \{q_\eta(z)\}$ of distributions indexed by a parameter $\eta$, by minimizing the KL divergence objective:

$$\min_\eta \mathrm{KL}(q_\eta \,\|\, p) \equiv \mathbb{E}_{z \sim q_\eta}[\log(q_\eta(z)/p(z))], \qquad (1)$$

where we obtain the exact result $p = q_\eta$ if $\mathcal{Q}$ is chosen to be broad enough to actually include $p$. In practice, however, $\mathcal{Q}$ should be chosen carefully to make the optimization in (1) computationally tractable; this casts two constraints on $\mathcal{Q}$:

1. A minimum requirement is that we should be able to sample from $q_\eta$ efficiently, which allows us to make estimates and predictions based on $q_\eta$ in place of the more intractable $p$. The samples from $q_\eta$ can also be used to approximate the expectation $\mathbb{E}_{q_\eta}[\cdot]$ in (1) during optimization. This means that there should exist some computable function $f(\eta; \xi)$, called the inference network, which takes a random seed $\xi$, whose distribution is denoted by $q_0$, and outputs a random variable $z = f(\eta; \xi)$ whose distribution is $q_\eta$.

2. We should also be able to calculate the density $q_\eta(z)$ or its derivative in order to optimize the KL divergence in (1). This, however, casts a much more restrictive condition, since it requires us to use only simple inference networks $f(\eta; \xi)$ and input distributions $q_0$ to ensure a tractable form for the density $q_\eta$ of the output $z = f(\eta; \xi)$.

In fact, it is this requirement of calculating $q_\eta(z)$ that has been the major constraint for the design of state-of-the-art variational inference methods.
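To make the "wild" setting concrete, the following minimal numpy sketch (ours, not the authors' code; all names and sizes are illustrative) shows an inference network $z = f(\eta; \xi)$ in the sense above: a small stochastic network that is trivial to sample from, yet whose output density $q_\eta(z)$ is intractable because of the nonlinearity.

```python
import numpy as np

def f(eta, xi):
    """A toy one-hidden-layer stochastic inference network z = f(eta; xi).

    eta: dict of weights (the variational parameters).
    xi:  random seeds drawn from q0 (here a standard Gaussian), one per row.
    Sampling z is trivial, but the density q_eta(z) of the output is
    intractable because of the nonlinearity -- the "wild" setting.
    """
    h = np.tanh(xi @ eta["W1"] + eta["b1"])
    return h @ eta["W2"] + eta["b2"]

rng = np.random.default_rng(0)
d_xi, d_h, d_z = 4, 32, 2
eta = {
    "W1": 0.1 * rng.standard_normal((d_xi, d_h)), "b1": np.zeros(d_h),
    "W2": 0.1 * rng.standard_normal((d_h, d_z)),  "b2": np.zeros(d_z),
}
xi = rng.standard_normal((100, d_xi))   # 100 seeds from q0
z = f(eta, xi)                          # 100 approximate samples from q_eta
```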
Traditional VI methods are often limited to using simple mean-field or Gaussian-based distributions as $q_\eta$, and do not perform well for approximating complex target distributions.

[Figure 1 (panels: given distribution → inference network → samples): Wild variational inference allows us to train general stochastic neural inference networks to learn to draw (approximate) samples from the target distributions, without restriction on the computational tractability of the density function of the neural inference networks.]

There is a line of recent work on variational inference with rich approximation families (e.g., Rezende & Mohamed, 2015b; Tran et al., 2015; Ranganath et al., 2015, to name only a few), all based on handcrafting special inference networks to ensure the computational tractability of $q_\eta(z)$ while simultaneously obtaining high approximation accuracy. These approaches require substantial mathematical insight and research effort, and can be difficult to understand or use for practitioners without a strong research background in VI. Methods that allow us to use arbitrary inference networks without substantial constraints can significantly simplify the design and application of VI methods, allowing practical users to focus on choosing proposals that work best with their specific tasks.

We use the term wild variational inference to refer to variants of variational methods that work with general inference networks $f(\eta; \xi)$ without tractability constraints on the output density $q_\eta(z)$; this should be distinguished from black-box variational inference (Ranganath et al., 2014), which refers to methods that work for generic target distributions $p(z)$ without significant model-by-model consideration (but still require calculating the proposal density $q_\eta(z)$). Essentially, wild variational inference makes it possible to "learn to draw samples", constructing black-box neural samplers for given distributions. This enables more adaptive and automatic design of efficient Bayesian inference procedures, replacing hand-designed inference algorithms with more efficient ones whose efficiency can improve adaptively over time based on the past tasks they performed.

In this work, we discuss two methods for wild variational inference, both based on recent works that combine kernel techniques with Stein's method (e.g., Liu & Wang, 2016; Liu et al., 2016). The first method, also discussed in Wang & Liu (2016), is based on iteratively adjusting the parameter $\eta$ to make the random output $z = f(\eta; \xi)$ mimic a Stein variational gradient direction (SVGD) (Liu & Wang, 2016) that optimally decreases its KL divergence with the target distribution. The second method is based on minimizing a kernelized Stein discrepancy, which, unlike the KL divergence, does not require calculating the density $q_\eta(z)$ during optimization thanks to its special form.

Another critical problem is to design good network architectures well suited for Bayesian inference. Ideally, the network design should leverage the information of the target distribution $p(z)$ in a convenient way. One useful perspective is that we can view existing MC/MCMC methods as (hand-designed) stochastic neural networks, which can be used to construct native inference networks for given target distributions.
On the other hand, using existing MC/MCMC methods as inference networks also allows us to adaptively adjust the hyper-parameters of these algorithms; this enables amortized inference, which leverages experience on past tasks to accelerate Bayesian computation, providing a powerful approach for designing efficient algorithms in settings where a large number of similar tasks are needed.

As an example, we leverage stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011) as the inference network, which can be treated as a special deep residual network (He et al., 2016), in which the important gradient information $\nabla_z \log p(z)$ is fed into each layer to allow efficient approximation of the target distribution $p(z)$. In our case, the network parameters are the step sizes of SGLD, and our method provides a way to adaptively improve the step sizes, yielding speed-ups on future tasks with similar structures. We show that the adaptively estimated step sizes significantly outperform hand-designed schemes such as Adagrad.

Related Work. The idea of amortized inference (Gershman & Goodman, 2014) has recently been applied in various domains of probabilistic reasoning, including both amortized variational inference (e.g., Kingma & Welling, 2013; Rezende & Mohamed, 2015a) and data-driven designs of Monte Carlo based methods (e.g., Paige & Wood, 2016), to name only a few. Most of these methods, however, require explicitly calculating $q_\eta(z)$ (or its gradient).

One notable exception is a very recent work (Ranganath et al., 2016) that also avoids calculating $q_\eta(z)$ and hence works for general inference networks; their method is based on a similar idea related to Stein discrepancy (Liu et al., 2016; Oates et al., 2017; Chwialkowski et al., 2016; Gorham & Mackey, 2015), for which we provide a more detailed discussion in Section 3.2.

The auxiliary variational inference methods (e.g., Agakov & Barber, 2004) provide an alternative approach when the variational distribution $q_\eta(z)$ can be represented as a hidden variable model. In particular, Salimans et al. (2015) used the auxiliary variational approach to leverage MCMC as a variational approximation. These approaches, however, still require writing down the likelihood function on the augmented space, and need to introduce an additional inference network related to the auxiliary variables.

There is a large literature on traditional adaptive MCMC methods (e.g., Andrieu & Thoms, 2008; Roberts & Rosenthal, 2009), which can adaptively adjust the proposal distribution of MCMC by exploiting the special theoretical properties of MCMC (e.g., by minimizing the auto-correlation). Our method is simpler, more generic, and works efficiently in practice thanks to the use of gradient-based back-propagation. Finally, connections between stochastic gradient descent and variational inference have been discussed and exploited in Mandt et al. (2016); Maclaurin et al. (2015).

Outline. Section 2 introduces background on Stein discrepancy and Stein variational gradient descent. Section 3 discusses two methods for wild variational inference. Section 4 discusses using stochastic gradient Langevin dynamics (SGLD) as the inference network. Empirical results are shown in Section 5.

2 STEIN'S IDENTITY, STEIN DISCREPANCY, STEIN VARIATIONAL GRADIENT

Stein's identity. Stein's identity plays a fundamental role in our framework. Let $p(z)$ be a positive differentiable density on $\mathbb{R}^d$, and let $\phi(z) = [\phi_1(z), \ldots, \phi_d(z)]^\top$ be a differentiable vector-valued function. Define $\nabla_z \cdot \phi = \sum_i \partial_{z_i} \phi_i$.
Stein's identity is

$$\mathbb{E}_{z \sim p}[\langle \nabla_z \log p(z), \phi(z)\rangle + \nabla_z \cdot \phi(z)] = \int_{\mathcal{X}} \nabla_z \cdot (p(z)\phi(z))\, dz = 0, \qquad (2)$$

which holds once $p(z)\phi(z)$ vanishes on the boundary of $\mathcal{X}$, by integration by parts or Stokes' theorem. It is useful to rewrite Stein's identity in a more compact way:

$$\mathbb{E}_{z \sim p}[\mathcal{T}_p \phi(z)] = 0, \quad \text{with} \quad \mathcal{T}_p \phi \overset{\mathrm{def}}{=} \langle \nabla_z \log p, \phi\rangle + \nabla_z \cdot \phi, \qquad (3)$$

where $\mathcal{T}_p$ is called a Stein operator; it acts on a function $\phi$ and returns a function $\mathcal{T}_p \phi(z)$ with zero mean under $z \sim p$. A key computational advantage of Stein's identity and the Stein operator is that they depend on $p$ only through the derivative of the log-density $\nabla_z \log p(z)$, which does not depend on the cumbersome normalization constant of $p$; that is, when $p(z) = \bar{p}(z)/Z$, we have $\nabla_z \log p(z) = \nabla_z \log \bar{p}(z)$, independent of the normalization constant $Z$. This property makes Stein's identity a powerful practical tool for handling the unnormalized distributions that appear widely in machine learning and statistics.

Stein discrepancy. Although Stein's identity ensures that $\mathcal{T}_p \phi$ has zero expectation under $p$, its expectation is generally non-zero under a different distribution $q$. Instead, for $p \neq q$, there must exist a $\phi$ that distinguishes $p$ and $q$ in the sense that $\mathbb{E}_{z \sim q}[\mathcal{T}_p \phi(z)] \neq 0$. Stein discrepancy leverages this fact to measure the difference between $p$ and $q$ by considering the "maximum violation of Stein's identity" for $\phi$ in a certain function set $\mathcal{F}$:

$$\mathbb{D}(q \,\|\, p) = \max_{\phi \in \mathcal{F}} \mathbb{E}_{z \sim q}[\mathcal{T}_p \phi(z)], \qquad (4)$$

where $\mathcal{F}$ is the set of functions that we optimize over, which decides both the discriminative power and the computational tractability of the Stein discrepancy. Kernelized Stein discrepancy (KSD) is a special Stein discrepancy that takes $\mathcal{F}$ to be the unit ball of a vector-valued reproducing kernel Hilbert space (RKHS), that is,

$$\mathcal{F} = \{\phi \in \mathcal{H}^d : \|\phi\|_{\mathcal{H}^d} \le 1\}, \qquad (5)$$

where $\mathcal{H}$ is a real-valued RKHS with kernel $k(z, z')$. This choice of $\mathcal{F}$ makes it possible to obtain a closed-form solution for the optimization in (4) (Liu et al., 2016; Chwialkowski et al., 2016; Oates et al., 2017):

$$\mathbb{D}(q \,\|\, p) = \max_{\phi \in \mathcal{H}^d} \mathbb{E}_{z \sim q}[\mathcal{T}_p \phi(z)] \ \ \text{s.t.} \ \|\phi\|_{\mathcal{H}^d} \le 1 \qquad (6)$$
$$= \sqrt{\mathbb{E}_{z, z' \sim q}[\kappa_p(z, z')]}, \qquad (7)$$

where $\kappa_p(z, z')$ is a positive definite kernel obtained by applying the Stein operator to $k(z, z')$ twice:

$$\kappa_p(z, z') = \mathcal{T}_p^{z'}\big(\mathcal{T}_p^{z} k(z, z')\big) = s_p(z)^\top s_p(z')\, k(z, z') + s_p(z)^\top \nabla_{z'} k(z, z') + s_p(z')^\top \nabla_z k(z, z') + \nabla_z \cdot (\nabla_{z'} k(z, z')), \qquad (8)$$

where $s_p(z) = \nabla_z \log p(z)$, and $\mathcal{T}_p^{z}$ and $\mathcal{T}_p^{z'}$ denote the Stein operator when treating $k(z, z')$ as a function of $z$ and $z'$, respectively; here we define $\mathcal{T}_p^{z} k(z, z') = \nabla_z \log p(z)\, k(z, z') + \nabla_z k(z, z')$, which returns a $d \times 1$ vector-valued function. It can be shown that $\mathbb{D}(q \,\|\, p) = 0$ if and only if $q = p$ when $k(z, z')$ is strictly positive definite in a proper sense (Liu et al., 2016; Chwialkowski et al., 2016). $\mathbb{D}(q \,\|\, p)$ can be treated as a variant of maximum mean discrepancy equipped with the kernel $\kappa_p(z, z')$, which depends on $p$ (this makes $\mathbb{D}(q \,\|\, p)$ asymmetric in $q$ and $p$).

The form of KSD in (6) allows us to estimate the discrepancy between a set of samples $\{z_i\}$ (e.g., drawn from $q$) and a distribution $p$ specified by $\nabla_z \log p(z)$:

$$\hat{\mathbb{D}}_u^2(\{z_i\} \,\|\, p) = \frac{1}{n(n-1)}\sum_{i \neq j} \kappa_p(z_i, z_j), \qquad \hat{\mathbb{D}}_v^2(\{z_i\} \,\|\, p) = \frac{1}{n^2}\sum_{i, j} \kappa_p(z_i, z_j), \qquad (9)$$

where $\hat{\mathbb{D}}_u^2$ provides an unbiased estimator (hence called a U-statistic) of $\mathbb{D}^2(q \,\|\, p)$, and $\hat{\mathbb{D}}_v^2$, called a V-statistic, provides a biased estimator that is guaranteed to be non-negative: $\hat{\mathbb{D}}_v^2(\{z_i\} \,\|\, p) \ge 0$.
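The estimators in (9) are straightforward to implement once $\nabla_z \log p$ is available. Below is a minimal numpy sketch (ours, not the authors' code), assuming an RBF kernel with fixed bandwidth; the four terms mirror Eq. (8).

```python
import numpy as np

def ksd_u_stat(z, score_p, h=1.0):
    """Unbiased U-statistic estimate of D^2(q || p), Eq. (9), from samples
    z ~ q, using the RBF kernel k(z, z') = exp(-||z - z'||^2 / (2 h^2)).
    score_p(z) returns s_p(z) = grad_z log p(z) row-wise, so the normalizing
    constant of p is never needed."""
    n, d = z.shape
    s = score_p(z)                                  # (n, d) scores s_p(z_i)
    diff = z[:, None, :] - z[None, :, :]            # (n, n, d): z_i - z_j
    sq = np.sum(diff**2, axis=-1)                   # squared pairwise distances
    k = np.exp(-sq / (2 * h**2))                    # kernel matrix k(z_i, z_j)
    t1 = (s @ s.T) * k                              # s_p(z)^T s_p(z') k(z, z')
    t2 = np.einsum("id,ijd->ij", s, diff) / h**2 * k    # s_p(z)^T grad_{z'} k
    t3 = -np.einsum("jd,ijd->ij", s, diff) / h**2 * k   # s_p(z')^T grad_z k
    t4 = (d / h**2 - sq / h**4) * k                 # grad_z . grad_{z'} k
    kappa = t1 + t2 + t3 + t4                       # the Stein kernel of Eq. (8)
    np.fill_diagonal(kappa, 0.0)                    # U-statistic drops i == j
    return kappa.sum() / (n * (n - 1))

# p = N(0, I) has s_p(z) = -z; samples from N(1, I) give a clearly positive KSD.
rng = np.random.default_rng(0)
print(ksd_u_stat(rng.standard_normal((200, 2)) + 1.0, lambda z: -z))
```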
Stein variational gradient descent (SVGD). The Stein operator and Stein discrepancy have a close connection with the KL divergence, which is exploited in Liu & Wang (2016) to provide a general-purpose deterministic approximate sampling method. Assume that $\{z_i\}_{i=1}^n$ is a sample (or a set of particles) drawn from $q$, and we want to update $\{z_i\}_{i=1}^n$ to make it "move closer" to the target distribution $p$ to improve the approximation quality. We consider updates of the form

$$z_i \leftarrow z_i + \epsilon \phi(z_i), \quad \forall i = 1, \ldots, n, \qquad (10)$$

where $\phi$ is a perturbation direction, or velocity field, chosen to maximally decrease the KL divergence between the distribution of the updated particles and the target distribution, in the sense that

$$\phi^* = \arg\max_{\phi \in \mathcal{F}} \left\{-\frac{d}{d\epsilon} \mathrm{KL}(q_{[\epsilon\phi]} \,\|\, p) \Big|_{\epsilon = 0}\right\}, \qquad (11)$$

where $q_{[\epsilon\phi]}$ denotes the density of the updated particle $z' = z + \epsilon\phi(z)$ when the density of the original particle $z$ is $q$, and $\mathcal{F}$ is the set of perturbation directions that we optimize over. A key observation (Liu & Wang, 2016) is that the optimization in (11) is in fact equivalent to the optimization for KSD in (4); we have

$$-\frac{d}{d\epsilon} \mathrm{KL}(q_{[\epsilon\phi]} \,\|\, p) \Big|_{\epsilon = 0} = \mathbb{E}_{z \sim q}[\mathcal{T}_p \phi(z)], \qquad (12)$$

that is, the Stein operator transforms the perturbation on the random variable (the particles) into the change of the KL divergence. Taking $\mathcal{F}$ to be the unit ball of $\mathcal{H}^d$ as in (5), the optimal solution of (11) equals that of (6), which is shown to be (e.g., Liu et al., 2016)

$$\phi^*(z') \propto \mathbb{E}_{z \sim q}[\mathcal{T}_p^z k(z, z')] = \mathbb{E}_{z \sim q}[\nabla_z \log p(z)\, k(z, z') + \nabla_z k(z, z')].$$

By approximating the expectation under $q$ with the empirical mean over the current particles $\{z_i\}_{i=1}^n$, SVGD admits a simple form of update that iteratively moves the particles towards the target distribution:

$$z_i \leftarrow z_i + \epsilon \Delta z_i, \quad \forall i = 1, \ldots, n, \qquad \Delta z_i = \hat{\mathbb{E}}_{z \in \{z_j\}_{j=1}^n}[\nabla_z \log p(z)\, k(z, z_i) + \nabla_z k(z, z_i)], \qquad (13)$$

where $\hat{\mathbb{E}}_{z \in \{z_j\}_{j=1}^n}[f(z)] = \sum_j f(z_j)/n$. The two terms in $\Delta z_i$ play two different roles: the term with the gradient $\nabla_z \log p(z)$ drives the particles towards the high-probability regions of $p(z)$, while the term with $\nabla_z k(z, z_i)$ serves as a repulsive force that encourages diversity. To see this, consider a stationary kernel $k(z, z') = k(z - z')$; then the second term reduces to $\hat{\mathbb{E}}_z \nabla_z k(z, z_i) = -\hat{\mathbb{E}}_z \nabla_{z_i} k(z, z_i)$, which can be treated as the negative gradient for minimizing the average similarity $\hat{\mathbb{E}}_z k(z, z_i)$ with respect to $z_i$.

It is easy to see from (13) that $\Delta z_i$ reduces to the typical gradient $\nabla_z \log p(z_i)$ when there is only a single particle ($n = 1$) and $\nabla_z k(z, z_i) = 0$ at $z = z_i$, in which case SVGD reduces to standard gradient ascent for maximizing $\log p(z)$ (i.e., maximum a posteriori (MAP)).
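For reference, here is a compact numpy sketch (ours, not the authors' code) of one SVGD update per Eq. (13), assuming an RBF kernel with fixed bandwidth (practical implementations often pick the bandwidth by the median heuristic).

```python
import numpy as np

def svgd_step(z, score_p, eps=0.1, h=1.0):
    """One SVGD update z_i <- z_i + eps * dz_i following Eq. (13), where
    dz_i = mean_j [ s_p(z_j) k(z_j, z_i) + grad_{z_j} k(z_j, z_i) ],
    with RBF kernel k(z, z') = exp(-||z - z'||^2 / (2 h^2)).
    score_p returns grad_z log p row-wise."""
    n, d = z.shape
    diff = z[:, None, :] - z[None, :, :]           # diff[a, b] = z_a - z_b
    k = np.exp(-np.sum(diff**2, -1) / (2 * h**2))  # k[j, i] = k(z_j, z_i)
    grad_k = -diff / h**2 * k[:, :, None]          # grad_{z_j} k(z_j, z_i) at [j, i]
    dz = (k.T @ score_p(z) + grad_k.sum(0)) / n    # driving term + repulsive term
    return z + eps * dz

# Toy run: 50 particles initialized far from the target p = N(0, I).
rng = np.random.default_rng(0)
z = rng.standard_normal((50, 2)) + 5.0
for _ in range(500):
    z = svgd_step(z, lambda z: -z)                 # s_p(z) = -z for N(0, I)
print(z.mean(0), z.std(0))                         # roughly mean 0, std 1
```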
3 TWO METHODS FOR WILD VARIATIONAL INFERENCE

Since the direct parametric optimization of the KL divergence (1) requires calculating $q_\eta(z)$, there are two essential ways to avoid calculating $q_\eta(z)$: using alternative (approximate) optimization approaches, or using a different divergence objective. We discuss one of each in this work: one based on "amortizing SVGD" (Wang & Liu, 2016), which trains the inference network $f(\eta; \xi)$ so that its output mimics the SVGD dynamics in order to decrease the KL divergence, and another based on minimizing the KSD objective (9), which does not require evaluating $q_\eta(z)$ thanks to its special form.

3.1 AMORTIZED SVGD

SVGD provides an optimal updating direction to iteratively move a set of particles $\{z_i\}$ towards the target distribution $p(z)$. We can leverage it to train an inference network $f(\eta; \xi)$ by iteratively adjusting $\eta$ so that the output of $f(\eta; \xi)$ changes along the Stein variational gradient direction, in order to maximally decrease its KL divergence with the target distribution. By doing this, we "amortize" SVGD into a neural network, which allows us to leverage past experience to adaptively improve the computational efficiency and to generalize to new tasks with similar structures. Amortized SVGD is also presented in Wang & Liu (2016); here we provide some additional discussion.

To be specific, assume $\{\xi_i\}$ are drawn from $q_0$ and $z_i = f(\eta; \xi_i)$ are the corresponding random outputs based on the current estimate of $\eta$. We want to adjust $\eta$ so that $z_i$ changes along the Stein variational gradient direction $\Delta z_i$ in (13), so as to maximally decrease the KL divergence with the target distribution. This can be done by updating $\eta$ via

$$\eta \leftarrow \arg\min_\eta \sum_{i=1}^n \|f(\eta; \xi_i) - z_i - \epsilon \Delta z_i\|_2^2. \qquad (14)$$

Essentially, this projects the non-parametric perturbation direction $\Delta z_i$ onto the change of the finite-dimensional network parameter $\eta$. If we take the step size $\epsilon$ to be small, then the $\eta$ updated by (14) should be very close to the old value, and a single step of gradient descent on (14) can provide a good approximation. This gives a simpler update rule:

$$\eta \leftarrow \eta + \epsilon \sum_i \partial_\eta f(\eta; \xi_i)\, \Delta z_i, \qquad (15)$$

which can be intuitively interpreted as a form of chain rule that back-propagates the SVGD gradient to the network parameter $\eta$. In fact, when we have only one particle, (15) reduces to standard gradient ascent for $\max_\eta \log p(f(\eta; \xi))$, in which $f$ is trained to "learn to optimize" (e.g., Andrychowicz et al., 2016) instead of to "learn to sample" from $p(z)$. Importantly, when we have more than one particle, the repulsive term $\nabla_z k(z, z_i)$ in $\Delta z_i$ becomes active and enforces an amount of diversity on the network output that is consistent with the variation in $p(z)$. The full algorithm is summarized in Algorithm 1.

Algorithm 1 Amortized SVGD and KSD minimization for wild variational inference
  for iteration t do
    1. Draw random seeds $\{\xi_i\}_{i=1}^n$, calculate $z_i = f(\eta; \xi_i)$ and the Stein variational gradient $\Delta z_i$ in (13).
    2. Update the parameter $\eta$ using (14) or (15) for amortized SVGD, or (17) for KSD minimization.
  end for

Amortized SVGD can be treated as minimizing the KL divergence using a rather special algorithm: it leverages non-parametric SVGD, which can be treated as approximately solving the infinite-dimensional optimization $\min_q \mathrm{KL}(q \,\|\, p)$ without explicitly assuming a parametric form for $q$, and iteratively projects the non-parametric update back to the finite-dimensional parameter space of $\eta$. It is an interesting direction to extend this idea to "amortize" other MC/MCMC-based inference algorithms. For example, given an MCMC method with transition probability $T(z'|z)$ whose stationary distribution is $p(z)$, we may adjust $\eta$ to make the network output move towards the updated values $z'$ drawn from the transition probability $T(z'|z)$. The advantage of using SVGD is that it provides a deterministic gradient direction that we can back-propagate conveniently, and it is particle efficient in that it reduces to "learning to optimize" with a single particle. We have been using the simple L2 loss in (14) mainly for convenience; it is possible to use other two-sample discrepancy measures, such as maximum mean discrepancy.

3.2 KSD VARIATIONAL INFERENCE

Amortized SVGD aims to minimize the KL divergence objective, but cannot be interpreted as a typical finite-dimensional optimization over the parameter $\eta$. Here we provide an alternative method based on directly minimizing the kernelized Stein discrepancy (KSD) objective, for which, thanks to its special form, typical gradient-based optimization can be performed without estimating $q_\eta(z)$ explicitly.

To be specific, take $q_\eta$ to be the density of the random output $z = f(\eta; \xi)$ when $\xi \sim q_0$, and suppose we want to find $\eta$ to minimize $\mathbb{D}(q_\eta \,\|\, p)$. Assuming $\{\xi_i\}$ are drawn i.i.d. from $q_0$, we can approximate $\mathbb{D}^2(q_\eta \,\|\, p)$ unbiasedly with a U-statistic:

$$\mathbb{D}^2(q_\eta \,\|\, p) \approx \frac{1}{n(n-1)}\sum_{i \neq j} \kappa_p\big(f(\eta; \xi_i),\, f(\eta; \xi_j)\big), \qquad (16)$$

for which standard gradient descent can be derived for optimizing $\eta$:

$$\eta \leftarrow \eta - \epsilon\, \frac{2}{n(n-1)}\sum_{i \neq j} \partial_\eta f(\eta; \xi_i)\, \nabla_{z_i} \kappa_p(z_i, z_j), \quad \text{where } z_i = f(\eta; \xi_i). \qquad (17)$$

This enables a wild variational inference method based on direct minimization with standard (stochastic) gradient descent; see Algorithm 1.
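The following sketch (ours, not the authors' code) runs the amortized-SVGD branch of Algorithm 1 for a deliberately simple linear inference network, so that the chain rule of Eq. (15) can be written by hand; with automatic differentiation the same loop works for any $f(\eta; \xi)$. The KSD branch of Algorithm 1 would instead replace $\Delta z_i$ with the $\kappa_p$-based direction from (17).

```python
import numpy as np

def svgd_direction(z, score_p, h=1.0):
    """The Stein variational gradient dz_i of Eq. (13) (RBF kernel),
    identical to the SVGD sketch above but without applying the step."""
    n = z.shape[0]
    diff = z[:, None, :] - z[None, :, :]
    k = np.exp(-np.sum(diff**2, -1) / (2 * h**2))
    grad_k = -diff / h**2 * k[:, :, None]
    return (k.T @ score_p(z) + grad_k.sum(0)) / n

def amortized_svgd_step(eta, score_p, rng, n=50, eps=1e-2):
    """One iteration of the amortized-SVGD branch of Algorithm 1 for a toy
    *linear* network f(eta; xi) = xi @ W + b, chosen by us so the chain
    rule of Eq. (15) can be written out by hand instead of with autodiff."""
    xi = rng.standard_normal((n, eta["b"].size))   # seeds xi_i ~ q0
    z = xi @ eta["W"] + eta["b"]                   # z_i = f(eta; xi_i)
    dz = svgd_direction(z, score_p)                # Delta z_i from Eq. (13)
    eta["W"] += eps * xi.T @ dz / n                # Eq. (15), averaged over i
    eta["b"] += eps * dz.mean(0)
    return eta

rng = np.random.default_rng(0)
eta = {"W": np.eye(2), "b": np.full(2, 5.0)}       # init far from target N(0, I)
for _ in range(2000):
    amortized_svgd_step(eta, lambda z: -z, rng)
print(eta["b"])                                    # drifts toward the target mean 0
```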
Note that (17) is similar in form to (15), but replaces $\Delta z_i$ with $\tilde{\Delta} z_i \propto -\sum_{j:\, j \neq i} \nabla_{z_i} \kappa_p(z_i, z_j)$. It is also possible to use the V-statistic in (9), but we find that the U-statistic performs much better in practice, possibly because of its unbiasedness.

Minimizing KSD can be viewed as minimizing a contrastive divergence objective. To see this, recall that $q_{[\epsilon\phi]}$ denotes the density of $z' = z + \epsilon\phi(z)$ when $z \sim q$. Combining (11) and (6), we can show that

$$\mathbb{D}^2(q \,\|\, p) \approx \frac{1}{\epsilon}\big(\mathrm{KL}(q \,\|\, p) - \mathrm{KL}(q_{[\epsilon\phi^*]} \,\|\, p)\big).$$

That is, KSD measures the amount of decrease in KL divergence when we update the particles along the optimal SVGD perturbation direction given by (11). If $q = p$, then the decrease in KL divergence equals zero, and $\mathbb{D}^2(q \,\|\, p)$ equals zero. In fact, as shown in Liu & Wang (2016), KSD can be explicitly represented as the magnitude of a functional gradient of the KL divergence:

$$\mathbb{D}(q \,\|\, p) = \left\|\frac{d}{d\phi} \mathrm{KL}(q_{[\phi]} \,\|\, p)\Big|_{\phi = 0}\right\|_{\mathcal{H}^d},$$

where $q_{[\phi]}$ is the density of $z' = z + \phi(z)$ when $z \sim q$, and $\frac{d}{d\phi} F(\phi)$ denotes the functional gradient of the functional $F(\phi)$ w.r.t. $\phi$ defined in the RKHS $\mathcal{H}^d$; $\frac{d}{d\phi} F(\phi)$ is itself an element of $\mathcal{H}^d$. Therefore, KSD variational inference can be treated as explicitly minimizing the magnitude of the gradient of the KL divergence, in contrast with amortized SVGD, which aims to minimize the KL divergence objective itself.

This idea is also similar to the contrastive divergence used for learning restricted Boltzmann machines (RBMs) (Hinton, 2002) (which, however, optimizes $p$ with $q$ fixed). It is possible to extend this approach by replacing $z' = z + \epsilon\phi(z)$ with other transforms, such as those given by the transition probability of a Markov chain whose stationary distribution is $p$. In fact, according to the so-called generator method for constructing Stein operators (Barbour, 1988), any generator of a Markov process defines a Stein operator that can be used to define a corresponding Stein discrepancy.

This idea is related to a very recent work by Ranganath et al. (2016), which is based on directly minimizing the variational form of Stein discrepancy in (4); Ranganath et al. (2016) assume that $\mathcal{F}$ consists of neural networks $\phi_w(z)$ parametrized by $w$, and find $\eta$ by solving the following min-max problem:

$$\min_\eta \max_w\ \mathbb{E}_{z \sim q_\eta}[\mathcal{T}_p \phi_w(z)].$$

In contrast, our method leverages the closed-form solution obtained by taking $\mathcal{F}$ to be an RKHS, and hence obtains an explicit optimization problem instead of a min-max problem, which can be computationally more expensive or have difficulty achieving convergence.

Because $\kappa_p(z, z')$ (defined in (8)) depends on the derivative $\nabla_z \log p(z)$ of the target distribution, the gradient in (17) depends on the Hessian matrix $\nabla_z^2 \log p(z)$ and is hence less convenient to implement than amortized SVGD (the method by Ranganath et al. (2016) has the same problem). However, this problem can be alleviated using automatic differentiation tools, which can be used to directly take the derivative of the objective in (16) without manually deriving its derivatives.

4 LANGEVIN INFERENCE NETWORK

With wild variational inference, we can choose more complex inference network structures to obtain better approximation accuracy. Ideally, the best network structure should leverage the special properties of the target distribution $p(z)$ in a convenient way. One way to achieve this is by viewing existing MC/MCMC methods as inference networks with hand-designed (and hence potentially suboptimal) parameters, but with good architectures that take the information of the target distribution $p(z)$ into account.
By applying wild variational inference to networks constructed from existing MCMC methods, we effectively provide hyper-parameter optimization for these existing methods. This allows us to exploit the full potential of existing Bayesian inference methods, significantly improving the results at lower computational cost and decreasing the need for hyper-parameter tuning by human experts. This is particularly useful when we need to solve a large number of similar tasks, where the computational cost spent on optimizing the hyper-parameters can significantly improve performance on future tasks.

Stochastic gradient Langevin dynamics. We first take the original stochastic gradient Langevin dynamics (SGLD) algorithm (Welling & Teh, 2011) as an example. SGLD starts with a random initialization $z_0$ and performs iterative updates of the form

$$z_{t+1} \leftarrow z_t + \epsilon_t \circ \nabla_z \log \hat{p}(z_t; M_t) + \sqrt{2\epsilon_t} \circ \zeta_t, \quad \forall t = 1, \ldots, T, \qquad (18)$$

where $\log \hat{p}(z_t; M_t)$ denotes an approximation of $\log p(z_t)$ based on, e.g., a random mini-batch $M_t$ of the observed data at the $t$-th iteration, $\zeta_t$ is a standard Gaussian random vector of the same size as $z$, and $\epsilon_t$ denotes a (vector) step size at the $t$-th iteration; here "$\circ$" denotes the element-wise product.

When running SGLD for $T$ iterations, we can treat $z_T$ as the output of a $T$-layer neural network parametrized by the collection of step sizes $\eta = \{\epsilon_t\}_{t=1}^T$, whose random inputs include the random initialization $z_0$, the mini-batch $M_t$, and the Gaussian noise $\zeta_t$ at each iteration $t$. This defines a rather complex network structure with several different types of random inputs ($z_0$, $M_t$, and $\zeta_t$), which makes it intractable to explicitly calculate the density of $z_T$, so traditional variational inference methods cannot be applied directly. But wild variational inference still allows us to adaptively improve the step sizes in this case.

General Langevin networks. Based on the original SGLD formula, we propose a more general Langevin network structure, each layer of which has the form

$$z_{t+1} \leftarrow A_t z_t + h\big(B_t B_t^\top \nabla_z \log \hat{p}(z_t; M_t) + B_t \zeta_t + D_t\big), \quad \forall t = 1, \ldots, T, \qquad (19)$$

where $A_t$, $B_t$, and $D_t$ are the network parameters at the $t$-th layer (of size $d \times d$, where $d$ is the size of $z_t$), and $h(\cdot)$ denotes a smooth element-wise nonlinearity; here $\zeta_t$ is again a standard Gaussian random vector of the same size as $z$. With this more complex network, we can use fewer layers to construct more powerful black-box samplers.
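To make the construction concrete, here is a minimal sketch (ours, not the authors' code) of SGLD unrolled as a $T$-layer network per Eq. (18), with one scalar step size per layer; the step-size vector is exactly the parameter $\eta$ that amortized SVGD or KSD minimization would train. The demo just uses a hand-set power-decay schedule of the kind the experiments below compare against.

```python
import numpy as np

def sgld_network(eps, z0, score_p, rng):
    """Unroll SGLD, Eq. (18), as a T-layer stochastic network whose only
    parameters are the per-layer step sizes eps = [eps_1, ..., eps_T]
    (scalars here for simplicity; the paper allows vector step sizes).
    score_p may be a stochastic mini-batch estimate of grad_z log p."""
    z = z0
    for e in eps:
        zeta = rng.standard_normal(z.shape)          # fresh Gaussian noise per layer
        z = z + e * score_p(z) + np.sqrt(2.0 * e) * zeta
    return z

# Demo: T = 20 layers, target p = N(0, I), and a hand-set power-decay
# schedule eps_t = 0.1 * t^(-0.55); wild VI would instead treat eps as
# eta and train it via (15) or (17).
rng = np.random.default_rng(0)
eps = 0.1 * np.arange(1, 21) ** -0.55
z0 = rng.standard_normal((100, 2)) + 5.0             # q0 far from the target
zT = sgld_network(eps, z0, lambda z: -z, rng)        # approximate samples
print(zT.mean(0))
```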
5 EMPIRICAL RESULTS

5.1 SGLD INFERENCE NETWORK

We first test our algorithm with the SGLD inference network in (18) on both a toy Gaussian mixture model and a Bayesian logistic regression example. We find that we can adaptively learn step sizes that significantly outperform existing hand-designed step size schemes, and hence save computational cost in the testing phase. In particular, we compare with the following step size schemes, for all of which we report the best results (testing accuracy in Figure 3(a); testing likelihood in Figure 3(b)) over a range of hyper-parameters:

1. Constant step size. We select the best constant step size in $\{1, 2, 2^3, \ldots, 2^{29}\} \times 10^{-6}$.
2. Power decay step size. We consider $\epsilon_t = 10^a (b + t)^{-\gamma}$, where $\gamma = 0.55$, $a \in \{-6, -5, \ldots, 1, 2\}$, and $b \in \{0, 1, \ldots, 9\}$.
3. Adagrad, RMSprop, Adadelta, all with the master step size selected in $\{1, 2, 2^3, \ldots, 2^{29}\} \times 10^{-6}$ and the other parameters set to their default values.

Gaussian mixture. We start with a simple 1D Gaussian mixture example, shown in Figure 2, where the target distribution $p(z)$ is shown by the red dashed curve. We use amortized SVGD and KSD to optimize the step size parameters of the Langevin inference network in (18) with $T = 20$ layers (i.e., SGLD with $T = 20$ iterations), with the initial $z_0$ drawn from a $q_0$ far away from the target distribution (see the green curve in Figure 2(a)); this makes it critical to choose proper step sizes to achieve a close approximation within $T = 20$ iterations. We find that amortized SVGD and KSD allow us to achieve good performance with 20 steps of SGLD updates (Figure 2(b)-(c)), while the results of the best constant step size and power decay step size are much worse (Figure 2(d)-(e)).

[Figure 2 (panels: (a) Initialization, (b) Amortized SVGD, (c) KSD Minimization, (d) Constant Stepsize, (e) Power Decay Stepsize): Results on a 1D Gaussian mixture when training the step sizes of SGLD with T = 20 iterations. The target distribution p(x) is shown by the red dashed line. (a) The distribution of the initialization z0 of SGLD (green line), visualized by a kernel density estimator. (b)-(e) The distribution of the final output zT (green line) given by the different types of step sizes, visualized by a kernel density estimator.]

[Figure 3 (x-axis: steps in {10, 50, 100}; y-axes: accuracy and log-likelihood; methods: Amortized SVGD, KSD U-statistic, Adadelta, Constant Rate, Power Decay Rate, RMSprop, Adagrad, SGLD (fully converged), SVGD (fully converged)): The testing accuracy (a) and testing likelihood (b) when training the Langevin inference network with T in {10, 50, 100} layers, respectively. The results reported are the performance of the final output zT of the last layer of the network. Both amortized SVGD and KSD minimization (with U-statistics) outperform all the hand-designed learning rates. Results averaged over 100 random trials.]

Bayesian logistic regression. We consider Bayesian logistic regression for binary classification using the same setting as Gershman et al. (2012), which assigns the regression weights $w$ a Gaussian prior $p_0(w \mid \alpha) = \mathcal{N}(w; \alpha^{-1})$ with $p_0(\alpha) = \mathrm{Gamma}(\alpha; 1, 0.01)$. Inference is performed on the posterior of $z = [w, \log \alpha]$. We test this model on the binary Covertype dataset (https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html) with 581,012 data points and 54 features.

To demonstrate that our estimated learning rates can work well on new datasets never seen by the algorithm, we partition the dataset into mini-datasets of size 50,000, and use 80% of them for training and 20% for testing. We adapt amortized SVGD/KSD to train on the whole population of training mini-datasets by randomly selecting a mini-dataset at each iteration of Algorithm 1, and evaluate the performance of the estimated step sizes on the remaining 20% of testing mini-datasets.

Figure 3 reports the testing accuracy and likelihood on the 20% testing mini-datasets when we train the Langevin network with $T = 10, 50, 100$ layers, respectively. We find that our methods outperform all the hand-designed learning rates, and allow us to get performance close to the fully converged SGLD and SVGD with a small number $T$ of iterations.

Figure 4 shows the testing accuracy and testing likelihood of all the intermediate results when training the Langevin network with $T = 100$ layers. It is interesting to observe that amortized SVGD and KSD learn rather different behaviors: KSD tends to increase the performance quickly in the first few iterations but saturates quickly, while amortized SVGD tends to increase slowly in the beginning and boosts performance quickly in the last few iterations.
Note that both algorithms are set up to optimize the performance of the last layer, while needing to decide how to make progress in the intermediate layers to achieve the best final performance.

[Figure 4 (x-axis: intermediate steps 0-100; y-axes: accuracy and log-likelihood; methods: Amortized SVGD, KSD U-statistic, Adadelta, Constant Rate, Power Decay Rate, RMSprop, Adagrad): The testing accuracy (a) and testing likelihood (b) of the outputs of the intermediate layers when training the Langevin network with T = 100 layers. Both amortized SVGD and KSD minimization target the performance of the last layer, but need to optimize the progress of the intermediate steps in order to achieve the best final results.]

5.2 GENERAL LANGEVIN INFERENCE NETWORK

We further test our algorithm with the general Langevin inference network. We first construct a single-layer general Langevin network to approximate the posterior of the Bayesian logistic regression parameters, achieving 74.58% average accuracy and -0.5216 average testing log-likelihood over 100 repeated experiments. This result shows that the proposed general Langevin inference network is quite competitive and worth exploring. Moreover, we use it as a black-box sampler to approximate more complicated Gaussian mixture distributions.

Gaussian mixture. We consider 10-component Gaussian mixture models with the mean and covariance matrix of each component randomly drawn from a uniform distribution, and we test our methods on models of different dimensions.

We construct a general Langevin network with 6 layers as a black-box sampler, and use our two proposed methods to train it to approximate the target distribution. Figure 5 shows our results on a 50-dimensional Gaussian mixture, and Figure 6 shows results for Gaussian mixtures of different dimensions. From the figures we can see that our proposed sampling structure is quite competitive compared with the NUTS sampler (Hoffman & Gelman, 2014), and that both variational inference methods can train a good black-box sampler.

[Figure 5 (panels: (a) E(cos(wx+b)), (b) E(x^2), (c) E(x); x-axis: log10 of the number of particles; y-axis: log10 MSE; methods: Langevin VGD, NUTS, KSD U-statistic): Comparison between our methods and NUTS on a 50-dimensional Gaussian mixture. (a)-(c) show the mean square errors when using different numbers of particles to estimate the expectation E(h(x)) for h(x) = x, x^2, and cos(wx+b); for cos(wx+b), we randomly draw w ~ N(0, 1) and b ~ Uniform([0, 2*pi]) and report the average MSE over 10 random draws of w and b.]

[Figure 6 (panels: (a) E(cos(wx+b)), (b) E(x^2), (c) E(x); x-axis: dimension 0-60; y-axis: log10 MSE; methods: Langevin VGD, NUTS, KSD U-statistic): Comparison between our methods and NUTS on Gaussian mixtures of different dimensions, with the same choice of h(x) and the same averaging over random w and b as in Figure 5.]

6 CONCLUSION

We consider two methods for wild variational inference that allow us to train general inference networks with intractable density functions, and apply them to adaptively estimate the step sizes of stochastic gradient Langevin dynamics.
More studies are needed to develop better methods, more applications, and theoretical understanding of wild variational inference, and we hope that the two methods discussed in this paper can motivate more ideas and studies in the field.
review_id: SJztFgf4e
review_title: Review: Two Methods for Wild Variational Inference
review_rating: 3: Clear rejection
review_text: The authors propose two variational methods based on the theme of posterior approximations which may not have a tractable density. The first is from another ICLR submission on "amortized SVGD" (Wang and Liu, 2016), where here the innovation is in using SGLD as the inference network. The second is from a NIPS paper (Ranganath et al., 2016) on minimizing the Stein divergence with a parametric approximating family, where here the innovation is in defining the test functions to be an RKHS, obtaining an analytic solution to the inner optimization problem.

The methodology is incremental. Everything up to Section 3.2 is essentially motivation, background, or related work. The notion of a "wild variational approximation" was already defined in Ranganath et al. (2016), termed a "variational program". It would be useful for the authors to comment on the difference, if any. Section 3.2 is at first interesting because it analytically solves the maximum problem faced in Ranganath et al. (2016). However, this requires use of a kernel, which will certainly not scale in high dimensions, so it is then equivalent in practice to having chosen a very simple test function family. To properly scale to high dimensions would require a deeper kernel and also learning its parameters; this is not any easier than parameterizing the test function family as a neural network to begin with, which Ranganath et al. (2016) do.

Section 4 introduces a Langevin inference network, which essentially chooses the variational approximation as an evolving sequence of Markov transition operators as in Salimans et al. (2015). I had trouble understanding this for a while because I could not understand what they mean by inference network. None of it is amortized in the usual inference network sense, which is that the parameters are given by the output of a neural network. Here, the authors simply define global parameters of the SGLD chain which are used across all the latent variables (which is strictly worse?). (What then makes it an "inference network"?) Is this not the variational approximation used in Salimans et al. (2015), but using a different objective to train it?

The experiments are limited, on a toy mixture of Gaussians posterior and Bayesian logistic regression. None of this addresses the problems one might suspect on high-dimensional and real data, such as the lack of scalability for the kernel, the comparison to Salimans et al. (2015) for the Langevin variational approximation, and any note of runtime or difficulty of training.

Minor comments

+ It's not clear if the authors understood previous work on expressive variational families or inference networks. For example, they argue Rezende & Mohamed, 2015b; Tran et al., 2015; Ranganath et al., 2015 require handcrafted inference networks. However, all of them assume use of any neural network for amortized inference. None of them even require an inference network. Perhaps the authors mean handcrafted posterior approximations, which to some extent is true; however, the three mentioned are all algorithmic in nature: in Rezende & Mohamed (2015), the main design choice is the flow length; in Tran et al. (2015), the size of the variational data; in Ranganath et al. (2015), the flow length on the auxiliary variable space. Each works well on different problems, but this is also true of variational objectives which admit intractable q (as the latter two consider, as does Salimans et al. (2015)). The paper's motivation could be better explained, and perhaps the authors could be clearer on what they mean by inference network.

+ I also recommend the authors not name a variational inference method based on the class of approximating family. While black box variational inference in Ranganath et al. (2014) assumes a mean-field family, the term itself has been used in the literature to mean any variational method that imposes few constraints on the model class.
review_confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Two Methods for Wild Variational Inference ### Paper Abstract Variational inference provides a powerful tool for approximate probabilistic inference on complex, structured models. Typical variational inference methods, however, require to use inference networks with computationally tractable probability density functions. This largely limits the design and implementation of variational inference methods. We consider wild variational inference methods that do not require tractable density functions on the inference networks, and hence can be applied in more challenging cases. As an example of application, we treat stochastic gradient Langevin dynamics (SGLD) as an inference network, and use our methods to automatically adjust the step sizes of SGLD to maximize its convergence speed, significantly outperforming the hand-designed step size schemes. ### Paper Keywords ["Theory"] ### Paper Content ABSTRACTVariational inference provides a powerful tool for approximate probabilistic in-ference on complex, structured models. Typical variational inference methods,however, require to use inference networks with computationally tractable proba-bility density functions. This largely limits the design and implementation of vari-ational inference methods. We consider wild variational inference methods thatdo not require tractable density functions on the inference networks, and hencecan be applied in more challenging cases. As an example of application, we treatstochastic gradient Langevin dynamics (SGLD) as an inference network, and useour methods to automatically adjust the step sizes of SGLD, yielding significantimprovement over the hand-designed step size schemes.1 I NTRODUCTIONProbabilistic modeling provides a principled approach for reasoning under uncertainty, and has beenincreasingly dominant in modern machine learning where highly complex, structured probabilisticmodels are often the essential components for solving complex problems with increasingly largerdatasets. A key challenge, however, is to develop computationally efficient Bayesian inferencemethods to approximate, or draw samples from the posterior distributions. Variational inference(VI) provides a powerful tool for scaling Bayesian inference to complex models and big data. Thebasic idea of VI is to approximate the true distribution with a simpler distribution by minimizing theKL divergence, transforming the inference problem into an optimization problem, which is oftenthen solved efficiently using stochastic optimization techniques (e.g., Hoffman et al., 2013; Kingma& Welling, 2013). However, the practical design and application of VI are still largely restricted bythe requirement of using simple approximation families, as we explain in the sequel.Letp(z)be a distribution of interest, such as the posterior distribution in Bayesian inference. VIapproximates p(z)with a simpler distribution q(z)found in a setQ=fq(z)gof distributionsindexed by parameter by minimizing the KL divergence objective:minKL(qjjp)Ezq[log(q(z)=p(z))]; (1)where we can get exact result p=qifQis chosen to be broad enough to actually include p. Inpractice, however, Qshould be chosen carefully to make the optimization in (1) computationallytractable; this casts two constraints on Q:1. 
A minimum requirement is that we should be able to sample from qefficiently, which allows usto make estimates and predictions based on qin placement of the more intractable p. The samplesfromqcan also be used to approximate the expectation Eq[]in (1) during optimization. This meansthat there should exist some computable function f(;), called the inference network , which takesa random seed , whose distribution is denoted by q0, and outputs a random variable z=f(;)whose distribution is q.2. We should also be able to calculate the density q(z)or it is derivative in order to optimize theKL divergence in (1). This, however, casts a much more restrictive condition, since it requires us touse only simple inference network f(;)and input distributions q0to ensure a tractable form forthe densityqof the output z=f(;).In fact, it is this requirement of calculating q(z)that has been the major constraint for the designof state-of-the-art variational inference methods. The traditional VI methods are often limited to1Under review as a conference paper at ICLR 2017Given distribution Inference network SamplesFigure 1: Wild variational inference allows us to train general stochastic neural inference networks to learn todraw (approximate) samples from the target distributions, without restriction on the computational tractabilityof the density function of the neural inference networks.using simple mean field, or Gaussian-based distributions as qand do not perform well for approx-imating complex target distributions. There is a line of recent work on variational inference withrich approximation families (e.g., Rezende & Mohamed, 2015b; Tran et al., 2015; Ranganath et al.,2015, to name only a few), all based on handcrafting special inference networks to ensure the com-putational tractability of q(z)while simultaneously obtaining high approximation accuracy. Theseapproaches require substantial mathematical insights and research effects, and can be difficult tounderstand or use for practitioners without a strong research background in VI. Methods that allowus to use arbitrary inference networks without substantial constraints can significantly simplify thedesign and applications of VI methods, allowing practical users to focus more on choosing proposalsthat work best with their specific tasks.We use the term wild variational inference to refer to variants of variational methods working withgeneral inference networks f(;)without tractability constraints on its output density q(z); thisshould be distinguished with the black-box variational inference (Ranganath et al., 2014) whichrefers to methods that work for generic target distributions p(z)without significant model-by-modelconsideration (but still require to calculate the proposal density q(z)). Essentially, wild variationalinference makes it possible to “learn to draw samples”, constructing black-box neural samplers forgiven distributions. This enables more adaptive and automatic design of efficient Bayesian infer-ence procedures, replacing the hand-designed inference algorithms with more efficient ones that canimprove their efficiency adaptively over time based on past tasks they performed.In this work, we discuss two methods for wild variational inference, both based on recent works thatcombine kernel techniques with Stein’s method (e.g., Liu & Wang, 2016; Liu et al., 2016). 
The firstmethod, also discussed in Wang & Liu (2016), is based on iteratively adjusting parameter to makethe random output z=f(;)mimic a Stein variational gradient direction (SVGD) (Liu & Wang,2016) that optimally decreases its KL divergence with the target distribution. The second method isbased on minimizing a kernelized Stein discrepancy, which, unlike KL divergence, does not requireto calculate density q(z)for the optimization thanks to its special form.Another critical problem is to design good network architectures well suited for Bayesian infer-ence. Ideally, the network design should leverage the information of the target distribution p(z)in a convenient way. One useful perspective is that we can view the existing MC/MCMC meth-ods as (hand-designed) stochastic neural networks which can be used to construct native inferencenetworks for given target distributions. On the other hand, using existing MC/MCMC methods asinference networks also allow us to adaptively adjust the hyper-parameters of these algorithms; thisenables amortized inference which leverages the experience on past tasks to accelerate the Bayesiancomputation, providing a powerful approach for designing efficient algorithms in settings when alarge number of similar tasks are needed.As an example, we leverage stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011)as the inference network, which can be treated as a special deep residential network (He et al.,2016), in which important gradient information rzlogp(z)is fed into each layer to allow efficientapproximation for the target distribution p(z). In our case, the network parameter are the step sizesof SGLD, and our method provides a way to adaptively improve the step sizes, providing speed-upon future tasks with similar structures. We show that the adaptively estimated step sizes significantlyoutperform the hand-designed schemes such as Adagrad.Related Works The idea of amortized inference (Gershman & Goodman, 2014) has been recentlyapplied in various domains of probabilistic reasoning, including both amortized variational inference2Under review as a conference paper at ICLR 2017(e.g., Kingma & Welling, 2013; Rezende & Mohamed, 2015a) and date-driven designs of MonteCarlo based methods (e.g., Paige & Wood, 2016), to name only a few. Most of these methods,however, require to explicitly calculate q(z)(or its gradient).One well exception is a very recent work (Ranganath et al., 2016) that also avoids calculating q(z)and hence works for general inference networks; their method is based on a similar idea relatedto Stein discrepancy (Liu et al., 2016; Oates et al., 2017; Chwialkowski et al., 2016; Gorham &Mackey, 2015), for which we provide a more detailed discussion in Section 3.2.The auxiliary variational inference methods (e.g., Agakov & Barber, 2004) provide an alternativeway when the variational distribution q(z)can be represented as a hidden variable model. Inparticular, Salimans et al. (2015) used the auxiliary variational approach to leverage MCMC as avariational approximation. These approaches, however, still require to write down the likelihoodfunction on the augmented spaces, and need to introduce an additional inference network related tothe auxiliary variables.There is a large literature on traditional adaptive MCMC methods (e.g., Andrieu & Thoms, 2008;Roberts & Rosenthal, 2009) which can be used to adaptively adjust the proposal distribution ofMCMC by exploiting the special theoretical properties of MCMC (e.g., by minimizing the auto-correlation). 
Our method is simpler, more generic, and works efficiently in practice thanks to theuse of gradient-based back-propagation. Finally, connections between stochastic gradient descentand variational inference have been discussed and exploited in Mandt et al. (2016); Maclaurin et al.(2015).Outline Section 2 introduces background on Stein discrepancy and Stein variational gradient de-scent. Section 3 discusses two methods for wild variational inference. Section 4 discuss usingstochastic gradient Langevin dynamics (SGLD) as the inference network. Empirical results areshown in Section 5.2 S TEIN ’SIDENTITY , STEIN DISCREPANCY , STEIN VARIATIONAL GRADIENTStein’s identity Stein’s identity plays a fundamental role in our framework. Let p(z)be a positivedifferentiable density on Rd, and(z) = [1(z);;d(z)]>is a differentiable vector-valuedfunction. Definerz=Pi@zi. Stein’s identity isEzp[hrzlogp(z);(z)i+rz(z)] =ZXrz(p(z)(z))dx= 0; (2)which holds once p(z)(z)vanishes on the boundary of Xby integration by parts or Stokes’ theo-rem; It is useful to rewrite Stein’s identity in a more compact way:Ezp[Tp(z)] = 0; withTpdef=hrzlogp;i+rz; (3)whereTpis called a Stein operator , which acts on function and returns a zero-mean functionTp(z)underzp. A key computational advantage of Stein’s identity and Stein operator isthat they depend on ponly through the derivative of the log-density rzlogp(z), which does notdepend on the cumbersome normalization constant of p, that is, when p(z) = p(z)=Z, we haverzlogp(z) =rzlog p(z), independent of the normalization constant Z. This property makesStein’s identity a powerful practical tool for handling unnormalized distributions widely appeared inmachine learning and statistics.Stein Discrepancy Although Stein’s identity ensures that Tphas zero expectation under p, itsexpectation is generally non-zero under a different distribution q. Instead, for p6=q, there must existawhich distinguishes pandqin the sense that Ezq[Tp(z)]6= 0. Stein discrepancy leveragesthis fact to measure the difference between pandqby considering the “maximum violation of Stein’sidentity” forin certain function set F:D(qjjp) = max2FEzq[Tp(z)]; (4)whereFis the set of functions that we optimize over, and decides both the discriminative powerand computational tractability of Stein discrepancy. Kernelized Stein discrepancy (KSD) is a special3Under review as a conference paper at ICLR 2017Stein discrepancy that takes Fto be the unit ball of vector-valued reproducing kernel Hilbert spaces(RKHS), that is,F=f2Hd:jjjjHd1g; (5)whereHis a real-valued RKHS with kernel k(z;z0). This choice ofFmakes it possible to get aclosed form solution for the optimization in (4) (Liu et al., 2016; Chwialkowski et al., 2016; Oateset al., 2017):D(qjjp) = max2HdEzq[Tp(z)]; s:t:jjjjHd1; (6)=qEz;z0q[p(z;z0)]; (7)wherep(z;z0)is a positive definite kernel obtained by applying Stein operator on k(z;z0)twice:p(z;z0) =Tz0p(Tzpk(z;z0));=sp(z)sp(z0)k(z;z0) +sp(z)rz0k(z;z0) +sp(z0)rzk(z;z0) +rz(rz0k(z;z0));(8)wheresp(z) =rzlogp(z)andTzpandTzpdenote the Stein operator when treating k(z;z0)as afunction ofzandz0, respectively; here we defined Tzpk(z;z0) =rxlogp(x)k(z;z0)+rxk(z;z0)which returns a d1vector-valued function. It can be shown that D(qjjp) = 0 if and only if q=pwhenk(z;z0)is strictly positive definite in a proper sense (Liu et al., 2016; Chwialkowski et al.,2016). 
D(qjjp)can treated as a variant of maximum mean discrepancy equipped with kernelp(z;z0)which depends on p(which makes D(qjjp)asymmetric on qandp).The form of KSD in (6) allows us to estimate the discrepancy between a set of sample fzig(e.g.,drawn from q) and a distribution pspecified byrzlogp(z),^D2u(fzigjjp) =1n(n1)Xi6=j[p(zi;zj)]; ^D2v(fzigjjp) =1n2Xi;j[p(zi;zj)]; (9)where ^D2u(qjjp)provides an unbiased estimator (hence called a U-statistic) for D2(qjjp), and^D2v(qjjp), calledV-statistic, provides a biased estimator but is guaranteed to be always non-negative: ^D2v(fzigjjp)0.Stein Variational Gradient Descent (SVGD) Stein operator and Stein discrepancy have a closeconnection with KL divergence, which is exploited in Liu & Wang (2016) to provide a generalpurpose deterministic approximate sampling method. Assume that fzigni=1is a sample (or a setof particles) drawn from q, and we want to update fzigni=1to make it “move closer” to the targetdistributionpto improve the approximation quality. We consider updates of formzi zi+(zi);8i= 1;:::;n; (10)whereis a perturbation direction, or velocity field, chosen to maximumly decrease the KL diver-gence between the distribution of updated particles and the target distribution, in the sense that= arg max2FddKL(q[]jjp)=0; (11)whereq[]denotes the density of the updated particle z0=z+(z)when the density of theoriginal particle zisq, andFis the set of perturbation directions that we optimize over. A key ob-servation (Liu & Wang, 2016) is that the optimization in (11) is in fact equivalent to the optimizationfor KSD in (4); we haveddKL(q[]jjp)=0=Ezq[Tp(z)]; (12)that is, the Stein operator transforms the perturbation on the random variable (the particles) to thechange of the KL divergence. Taking Fto be unit ball ofHdas in (5), the optimal solution of(11) equals that of (6), which is shown to be (e.g., Liu et al., 2016)(z0)/Ezq[Tzpk(z;z0)] =Ezq[rzlogp(z)k(z;z0) +rzk(z;z0)]:4Under review as a conference paper at ICLR 2017Algorithm 1 Amortized SVGD and KSD Minimization for Wild Variational Inferenceforiterationtdo1. Draw randomfigni=1, calculatezi=f(;i), and the Stein variational gradient ziin(13).2. Update parameter using (14) or (15) for amortized SVGD, or (17) for KSD minimization.end forBy approximating the expectation under qwith the empirical mean of the current particles fzigni=1,SVGD admits a simple form of update that iteratively moves the particles towards the target distri-bution,zi zi+zi;8i= 1;:::;n;zi=^Ez2fzigni=1[rzlogp(z)k(z;zi) +rzk(z;zi)]; (13)where ^Ezfzigni=1[f(z)] =Pif(zi)=n. 
The two terms in ziplay two different roles: the termwith the gradient rzlogp(z)drives the particles towards the high probability regions of p(z),while the term with rzk(z;zi)serves as a repulsive force to encourage diversity; to see this, con-sider a stationary kernel k(z;z0) =k(zz0), then the second term reduces to ^Ezrzk(z;zi) =^Ezrzik(z;zi), which can be treated as the negative gradient for minimizing the average similarity^Ezk(z;zi)in terms ofzi.It is easy to see from (13) that zireduces to the typical gradient rzlogp(zi)when there is only asingle particle ( n= 1) andrzk(z;zi)whenz=zi, in which case SVGD reduces to the standardgradient ascent for maximizing logp(z)(i.e., maximum a posteriori (MAP)).3 T WOMETHODS FOR WILDVARIATIONAL INFERENCESince the direct parametric optimization of the KL divergence (1) requires calculating q(z), thereare two essential ways to avoid calculating q(z): either using alternative (approximate) optimiza-tion approaches, or using different divergence objective functions. We discuss two possible ap-proaches in this work: one based on “amortizing SVGD” (Wang & Liu, 2016) which trains theinference network f(;)so that its output mimic the SVGD dynamics in order to decrease the KLdivergence; another based on minimizing the KSD objective (9) which does not require to evaluateq(z)thanks to its special form.3.1 A MORTIZED SVGDSVGD provides an optimal updating direction to iteratively move a set of particles fzigtowards thetarget distribution p(z). We can leverage it to train an inference network f(;)by iteratively ad-justingso that the output of f(;)changes along the Stein variational gradient direction in orderto maximumly decrease its KL divergence with the target distribution. By doing this, we “amortize”SVGD into a neural network, which allows us to leverage the past experience to adaptively improvethe computational efficiency and generalize to new tasks with similar structures. Amortized SVGDis also presented in Wang & Liu (2016); here we present some additional discussion.To be specific, assume figare drawn from q0andzi=f(;i)the corresponding random outputbased on the current estimation of . We want to adjust so thatzichanges along the Stein vari-ational gradient direction ziin (13) so as to maximumly decrease the KL divergence with targetdistribution. This can be done by updating via arg minnXi=1jjf(;i)zizijj22: (14)Essentially, this projects the non-parametric perturbation direction zito the change of the finitedimensional network parameter . If we take the step size to be small, then the updated by (14)should be very close to the old value, and a single step of gradient descent of (14) can provide a5Under review as a conference paper at ICLR 2017good approximation for (14). This gives a simpler update rule: +Xi@f(;i)zi; (15)which can be intuitively interpreted as a form of chain rule that back-propagates the SVGD gradientto the network parameter . In fact, when we have only one particle, (15) reduces to the stan-dard gradient ascent for maxlogp(f(;)), in whichfis trained to “learn to optimize” (e.g.,Andrychowicz et al., 2016), instead of “learn to sample” p(z). Importantly, as we have more thanone particles, the repulsive term rzk(z;zi)inzibecomes active, and enforces an amount of di-versity on the network output that is consistent with the variation in p(z). 
The full algorithm is summarized in Algorithm 1.

Amortized SVGD can be treated as minimizing the KL divergence using a rather special algorithm: it leverages the non-parametric SVGD, which can be treated as approximately solving the infinite dimensional optimization $\min_q \mathrm{KL}(q\,\|\,p)$ without explicitly assuming a parametric form on $q$, and iteratively projects the non-parametric update back onto the finite dimensional parameter space of $\eta$. It is an interesting direction to extend this idea to "amortize" other MC/MCMC-based inference algorithms. For example, given an MCMC method with transition probability $T(z'|z)$ whose stationary distribution is $p(z)$, we may adjust $\eta$ to make the network output move towards the updated values $z'$ drawn from the transition probability $T(z'|z)$. The advantage of using SVGD is that it provides a deterministic gradient direction which we can back-propagate conveniently, and that it is particle efficient in that it reduces to "learning to optimize" with a single particle. We have been using the simple L2 loss in (14) mainly for convenience; it is possible to use other two-sample discrepancy measures such as maximum mean discrepancy.

3.2 KSD VARIATIONAL INFERENCE

Amortized SVGD aims to minimize the KL divergence objective, but cannot be interpreted as a typical finite dimensional optimization on the parameter $\eta$. Here we provide an alternative method based on directly minimizing the kernelized Stein discrepancy (KSD) objective, for which, thanks to its special form, the typical gradient-based optimization can be performed without needing to estimate $q_\eta(z)$ explicitly.

To be specific, take $q_\eta$ to be the density of the random output $z = f(\eta;\xi)$ when $\xi\sim q_0$, and we want to find $\eta$ to minimize $D(q_\eta\,\|\,p)$. Assuming $\{\xi_i\}$ are i.i.d. drawn from $q_0$, we can approximate $D^2(q_\eta\,\|\,p)$ unbiasedly with a U-statistic:
$$D^2(q_\eta\,\|\,p) \approx \frac{1}{n(n-1)}\sum_{i\neq j}\kappa_p\big(f(\eta;\xi_i),\, f(\eta;\xi_j)\big), \qquad (16)$$
for which a standard gradient descent can be derived for optimizing $\eta$:
$$\eta \leftarrow \eta - \frac{2\epsilon}{n(n-1)}\sum_{i\neq j}\partial_\eta f(\eta;\xi_i)\,\nabla_{z_i}\kappa_p(z_i, z_j), \qquad \text{where } z_i = f(\eta;\xi_i). \qquad (17)$$
This enables a wild variational inference method based on direct minimization with standard (stochastic) gradient descent; see Algorithm 1. Note that (17) is similar to (15) in form, but replaces $\Delta z_i$ with $\Delta\tilde z_i \propto -\sum_{j:\,j\neq i}\nabla_{z_i}\kappa_p(z_i, z_j)$. It is also possible to use the V-statistic in (9), but we find that the U-statistic performs much better in practice, possibly because of its unbiasedness property.

Minimizing KSD can be viewed as minimizing a contrastive divergence objective function. To see this, recall that $q_{[\epsilon\phi]}$ denotes the density of $z' = z + \epsilon\phi(z)$ when $z\sim q$. Combining (11) and (6), we can show that
$$D^2(q\,\|\,p) \approx \frac{1}{\epsilon}\Big(\mathrm{KL}(q\,\|\,p) - \mathrm{KL}\big(q_{[\epsilon\phi^*]}\,\|\,p\big)\Big).$$
That is, KSD measures the amount of decrease of the KL divergence when we update the particles along the optimal SVGD perturbation direction given by (11). If $q = p$, then the decrease of the KL divergence is zero and $D^2(q\,\|\,p)$ equals zero. In fact, as shown in Liu & Wang (2016), KSD can be explicitly represented as the magnitude of a functional gradient of the KL divergence:
$$D(q\,\|\,p) = \Big\|\frac{d}{d\phi}\mathrm{KL}\big(q_{[\phi]}\,\|\,p\big)\Big|_{\phi=0}\Big\|_{\mathcal H^d},$$
where $q_{[\phi]}$ is the density of $z' = z + \phi(z)$ when $z\sim q$, and $\frac{d}{d\phi}F(\phi)$ denotes the functional gradient of a functional $F(\phi)$ w.r.t. $\phi$ defined in the RKHS $\mathcal H^d$; $\frac{d}{d\phi}F(\phi)$ is itself an element of $\mathcal H^d$. Therefore, KSD variational inference can be treated as explicitly minimizing the magnitude of the gradient of the KL divergence, in contrast with amortized SVGD, which aims to minimize the KL divergence objective itself.

This idea is also similar to the contrastive divergence used for learning restricted Boltzmann machines (RBM) (Hinton, 2002) (which, however, optimizes $p$ with fixed $q$).
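To make the objective (16) concrete, the Stein kernel $\kappa_p$ has a closed form for the RBF kernel (following the expression style in Liu et al. (2016)); the sketch below evaluates the U-statistic for a fixed sample, and in practice one would feed $z_i = f(\eta;\xi_i)$ through it and differentiate with respect to $\eta$ via automatic differentiation. The code and all names are our own, not the authors' implementation.

```python
import numpy as np

def ksd_ustat(z, scores, h):
    """U-statistic estimate of D^2(q || p) from (9)/(16) with an RBF kernel; a sketch.

    z: (n, d) samples z_i = f(eta; xi_i); scores: (n, d) array of grad_z log p(z_i)."""
    n, d = z.shape
    diff = z[:, None, :] - z[None, :, :]               # diff[i, j] = z_i - z_j
    sq = np.sum(diff ** 2, axis=-1)
    K = np.exp(-sq / h)
    t1 = (scores @ scores.T) * K                       # s_i^T s_j k(z_i, z_j)
    t2 = (2.0 / h) * np.einsum('id,ijd->ij', scores, diff) * K   # s_i^T grad_{z_j} k
    t3 = -(2.0 / h) * np.einsum('jd,ijd->ij', scores, diff) * K  # grad_{z_i} k^T s_j
    t4 = (2.0 * d / h - 4.0 * sq / h ** 2) * K         # trace(grad_{z_i} grad_{z_j} k)
    kappa = t1 + t2 + t3 + t4                          # Stein kernel kappa_p(z_i, z_j)
    return (kappa.sum() - np.trace(kappa)) / (n * (n - 1))
```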
It is possible to extend this approach by replacing $z' = z + \epsilon\phi(z)$ with other transforms, such as those given by the transition probability of a Markov chain whose stationary distribution is $p$. In fact, according to the so-called generator method for constructing Stein operators (Barbour, 1988), any generator of a Markov process defines a Stein operator that can be used to define a corresponding Stein discrepancy.

This idea is related to a very recent work by Ranganath et al. (2016), which is based on directly minimizing the variational form of the Stein discrepancy in (4); Ranganath et al. (2016) assume $\mathcal F$ consists of neural networks $\phi(z)$ with their own trainable parameters, and find $\eta$ by solving the following min-max problem:
$$\min_\eta\max_{\phi\in\mathcal F}\;\mathbb E_{z\sim q_\eta}[\mathcal T_p\phi(z)].$$
In contrast, our method leverages the closed form solution obtained by taking $\mathcal F$ to be an RKHS, and hence yields an explicit optimization problem, instead of a min-max problem that can be computationally more expensive or have difficulty achieving convergence.

Because $\kappa_p(x, x')$ (defined in (8)) depends on the derivative $\nabla_x\log p(x)$ of the target distribution, the gradient in (17) depends on the Hessian matrix $\nabla_x^2\log p(x)$ and is hence less convenient to implement than amortized SVGD (the method by Ranganath et al. (2016) has the same problem). However, this issue can be alleviated using automatic differentiation tools, which can be used to directly take the derivative of the objective in (16) without manually deriving its gradient.

4 LANGEVIN INFERENCE NETWORK

With wild variational inference, we can choose more complex inference network structures to obtain better approximation accuracy. Ideally, the best network structure should leverage the special properties of the target distribution $p(z)$ in a convenient way. One way to achieve this is by viewing existing MC/MCMC methods as inference networks with hand-designed (and hence potentially suboptimal) parameters, but with good architectures that take the information of the target distribution $p(z)$ into account. By applying wild variational inference to networks constructed from existing MCMC methods, we effectively provide a hyper-parameter optimization for these existing methods. This allows us to fully realize the potential of existing Bayesian inference methods, significantly improving their results with less computation cost and decreasing the need for hyper-parameter tuning by human experts. This is particularly useful when we need to solve a large number of similar tasks, where the computation cost spent on optimizing the hyper-parameters can significantly improve the performance on future tasks.

Stochastic Gradient Langevin Dynamics We first take the original stochastic gradient Langevin dynamics (SGLD) algorithm (Welling & Teh, 2011) as an example. SGLD starts with a random initialization $z_0$, and performs iterative updates of the form
$$z_{t+1} \leftarrow z_t + \epsilon_t\circ\nabla_z\log\hat p(z_t; M_t) + \sqrt{2\epsilon_t}\circ\xi_t, \qquad \forall t = 1,\ldots,T, \qquad (18)$$
where $\log\hat p(z_t; M_t)$ denotes an approximation of $\log p(z_t)$ based on, e.g., a random mini-batch $M_t$ of observed data at the $t$-th iteration, $\xi_t$ is a standard Gaussian random vector of the same size as $z$, and $\epsilon_t$ denotes a (vector) step size at the $t$-th iteration; here "$\circ$" denotes the element-wise product.
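Viewed this way, (18) is simply a stochastic forward pass. A minimal sketch of SGLD unrolled as a network with trainable step sizes might look as follows (the interface and names are our own assumptions):

```python
import numpy as np

def sgld_network(z0, grad_logp_hat, step_sizes, rng):
    """Unroll SGLD (18) as a T-layer stochastic network; a sketch with our naming.

    z0: (n, d) initial particles; step_sizes: length-T list of (d,) step-size
    vectors (the trainable parameters); grad_logp_hat(z, t) returns mini-batch
    scores for the t-th layer."""
    z = z0
    for t, eps in enumerate(step_sizes):
        noise = rng.standard_normal(z.shape)
        z = z + eps * grad_logp_hat(z, t) + np.sqrt(2.0 * eps) * noise
    return z  # z_T: a sample whose (intractable) density is the approximation
```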
Figure 2: Results on a 1D Gaussian mixture when training the step sizes of SGLD with $T = 20$ iterations. The target distribution $p(x)$ is shown by the red dashed line. (a) The distribution of the initialization $z_0$ of SGLD (the green line), visualized by a kernel density estimator. (b)-(e) The distribution of the final output $z_T$ (green line) given by the different types of step sizes (amortized SVGD, KSD minimization, constant step size, and power decay step size, respectively), visualized by kernel density estimators.

When running SGLD for $T$ iterations, we can treat $z_T$ as the output of a $T$-layer neural network parametrized by the collection of step sizes $\epsilon = \{\epsilon_t\}_{t=1}^T$, whose random inputs include the random initialization $z_0$, the mini-batch $M_t$ and the Gaussian noise $\xi_t$ at each iteration $t$. We can see that this defines a rather complex network structure with several different types of random inputs ($z_0$, $M_t$ and $\xi_t$). This makes it intractable to explicitly calculate the density of $z_T$, so traditional variational inference methods cannot be applied directly; wild variational inference, however, still allows us to adaptively improve the step sizes in this case.

General Langevin Networks Based on the original SGLD formula, we propose a more general Langevin network structure, in which each layer of the network has the form
$$z_{t+1} \leftarrow A_t z_t + h\big(B_t B_t^\top\nabla_z\log\hat p(z_t; M_t) + B_t\xi_t + D_t\big), \qquad \forall t = 1,\ldots,T, \qquad (19)$$
where $A_t$, $B_t$ and $D_t$ are network parameters at the $t$-th iteration (of size $d\times d$, where $d$ is the size of $z_t$), and $h(\cdot)$ denotes a smooth element-wise nonlinearity; here $\xi_t$ is again a standard Gaussian random vector with the same size as $z$. With this more complex network, we can use fewer layers to construct more powerful black-box samplers.
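A sketch of one such layer (19) is given below; for simplicity we treat $D_t$ as a bias vector, which is our own reading of the parameterization, and all function and variable names are illustrative assumptions.

```python
import numpy as np

def langevin_layer(z, A, B, D, grad_logp_hat, rng, h=np.tanh):
    """One generalized Langevin layer (19) for a batch z of shape (n, d); a sketch.

    A, B: (d, d) trainable matrices; D: (d,) bias (our simplification); h: a
    smooth element-wise nonlinearity."""
    noise = rng.standard_normal(z.shape)
    # Row-wise: B B^T grad log p_hat(z_t) + B xi_t + D, as in eq. (19).
    drift = grad_logp_hat(z) @ (B @ B.T) + noise @ B.T + D
    return z @ A.T + h(drift)
```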
5 EMPIRICAL RESULTS

5.1 SGLD INFERENCE NETWORK

We first test our algorithm with the SGLD inference network of (18) on both a toy Gaussian mixture model and a Bayesian logistic regression example. We find that we can adaptively learn step sizes that significantly outperform the existing hand-designed step size schemes, and hence save computational cost in the testing phase. In particular, we compare with the following step size schemes, for all of which we report the best results (testing accuracy in Figure 3(a); testing likelihood in Figure 3(b)) over a range of hyper-parameters:

1. Constant Step Size. We select the best constant step size in $\{1, 2, 2^3, \ldots, 2^{29}\}\times 10^{-6}$.
2. Power Decay Step Size. We consider $\epsilon_t = 10^{-a}(b + t)^{-\gamma}$ where $\gamma = 0.55$, $a\in\{-6, -5, \ldots, 1, 2\}$, $b\in\{0, 1, \ldots, 9\}$.
3. Adagrad, RMSProp, Adadelta, all with the master step size selected in $\{1, 2, 2^3, \ldots, 2^{29}\}\times 10^{-6}$, and with the other parameters set to their default values.

Gaussian Mixture We start with a simple 1D Gaussian mixture example shown in Figure 2, where the target distribution $p(z)$ is shown by the red dashed curve. We use amortized SVGD and KSD to optimize the step size parameters of the Langevin inference network in (18) with $T = 20$ layers (i.e., SGLD with $T = 20$ iterations), with an initial $z_0$ drawn from a $q_0$ far away from the target distribution (see the green curve in Figure 2(a)); this makes it critical to choose proper step sizes to achieve a close approximation within $T = 20$ iterations. We find that amortized SVGD and KSD allow us to achieve good performance with 20 steps of SGLD updates (Figure 2(b)-(c)), while the results of the best constant step size and power decay step size are much worse (Figure 2(d)-(e)).

Figure 3: The testing accuracy (a) and testing likelihood (b) when training the Langevin inference network with $T\in\{10, 50, 100\}$ layers, respectively. The results reported here are the performance of the final result $z_T$ output by the last layer of the network (curves: amortized SVGD, KSD U-statistic, Adadelta, constant rate, power decay rate, RMSProp, Adagrad, plus fully converged SGLD and SVGD for reference). We find that both amortized SVGD and KSD minimization (with U-statistics) outperform all the hand-designed learning rates. Results are averaged over 100 random trials.

Bayesian Logistic Regression We consider Bayesian logistic regression for binary classification using the same setting as Gershman et al. (2012), which assigns the regression weights $w$ a Gaussian prior $p_0(w\,|\,\alpha) = \mathcal N(w;\,0, \alpha^{-1})$ with $p_0(\alpha) = \mathrm{Gamma}(\alpha;\,1, 0.01)$. The inference is applied to the posterior of $z = [w, \log\alpha]$. We test this model on the binary Covertype dataset (available at https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html) with 581,012 data points and 54 features.

To demonstrate that our estimated learning rates can work well on new datasets never seen by the algorithm, we partition the dataset into mini-datasets of size 50,000, and use 80% of them for training and 20% for testing. We adapt our amortized SVGD/KSD to train on the whole population of the training mini-datasets by randomly selecting a mini-dataset at each iteration of Algorithm 1, and evaluate the performance of the estimated step sizes on the remaining 20% testing mini-datasets. Figure 3 reports the testing accuracy and likelihood on the 20% testing mini-datasets when we train the Langevin network with $T = 10, 50, 100$ layers, respectively. We find that our methods outperform all the hand-designed learning rates, and allow us to get performance closer to the fully converged SGLD and SVGD with a small number $T$ of iterations.

Figure 4 shows the testing accuracy and testing likelihood of all the intermediate results when training the Langevin network with $T = 100$ layers. It is interesting to observe that amortized SVGD and KSD learn rather different behaviors: KSD tends to increase the performance quickly in the first few iterations but saturates quickly, while amortized SVGD tends to increase slowly in the beginning and boosts the performance quickly in the last few iterations. Note that both algorithms are set up to optimize the performance of the last layer, while needing to decide how to make progress on the intermediate layers to achieve the best final performance.

Figure 4: The testing accuracy (a) and testing likelihood (b) of the outputs of the intermediate layers when training the Langevin network with $T = 100$ layers. Note that both amortized SVGD and KSD minimization target the performance of the last layer, but need to optimize the progress of the intermediate steps in order to achieve the best final results.

5.2 GENERAL LANGEVIN INFERENCE NETWORK

We further test our algorithm with the general Langevin inference network. We first construct a single-layer general Langevin network to approximate the posterior of the Bayesian logistic regression parameters, achieving 74.58% average accuracy and -0.5216 average testing log-likelihood over 100 repeated experiments. This result shows that the proposed general Langevin inference network is quite competitive and worth exploring further. Moreover, we use it as a black-box sampler to approximate more complicated Gaussian mixture distributions.

Gaussian Mixture We consider Gaussian mixture models with 10 components, with the mean and covariance matrix of each component randomly drawn from a uniform distribution, and we test our methods on models of different dimensions. We construct 6 layers of general Langevin networks as a black-box sampler, and use our two proposed methods to train the black-box sampler to approximate the target distribution.
Figure 5 shows our results on the 50-dimensional Gaussian mixture case, and Figure 6 shows results for Gaussian mixtures of different dimensions. From the figures we can see that our proposed sampling structure is quite competitive with the NUTS sampler (Hoffman & Gelman, 2014), and that the two proposed variational inference methods can both train a good black-box sampler.

Figure 5: Comparison between our methods (Langevin VGD and KSD U-statistic) and NUTS on a 50-dimensional Gaussian mixture. Panels (a) $\mathbb E(\cos(\omega x + b))$, (b) $\mathbb E(x^2)$ and (c) $\mathbb E(x)$ show the mean square errors (log10 scale) when using different numbers of particles to estimate the expectation $\mathbb E(h(x))$; for $\cos(\omega x + b)$, we randomly draw $\omega\sim\mathcal N(0, 1)$ and $b\sim\mathrm{Uniform}([0, 2\pi])$ and report the average MSE over 10 random draws of $\omega$ and $b$.

Figure 6: Comparison between our methods (Langevin VGD and KSD U-statistic) and NUTS on Gaussian mixtures of different dimensions. Panels (a) $\mathbb E(\cos(\omega x + b))$, (b) $\mathbb E(x^2)$ and (c) $\mathbb E(x)$ show the mean square errors (log10 scale) for estimating the expectation $\mathbb E(h(x))$; for $\cos(\omega x + b)$, we randomly draw $\omega\sim\mathcal N(0, 1)$ and $b\sim\mathrm{Uniform}([0, 2\pi])$ and report the average MSE over 10 random draws of $\omega$ and $b$.

6 CONCLUSION

We consider two methods for wild variational inference that allow us to train general inference networks with intractable density functions, and apply them to adaptively estimate the step sizes of stochastic gradient Langevin dynamics. More studies are needed to develop better methods, more applications and deeper theoretical understanding for wild variational inference, and we hope that the two methods discussed in this paper can motivate more ideas and studies in the field.<|im_end|> <|im_start|>assistant ### Review Title Review: Two Methods for Wild Variational Inference ### Review Text The authors propose two variational methods based on the theme of posterior approximations which may not have a tractable density. The first is from another ICLR submission on "amortized SVGD" (Wang and Liu, 2016), where here the innovation is in using SGLD as the inference network. The second is from a NIPS paper (Ranganath et al., 2016) on minimizing the Stein divergence with a parametric approximating family, where here the innovation is in defining the test functions to lie in an RKHS, obtaining an analytic solution to the inner optimization problem. The methodology is incremental. Everything up to Section 3.2 is essentially motivation, background, or related work. The notion of a "wild variational approximation" was already defined in Ranganath et al. (2016), termed a "variational program". It would be useful for the authors to comment on the difference, if any. Section 3.2 is at first interesting because it analytically solves the maximization problem that is faced in Ranganath et al. (2016). However, this requires use of a kernel which will certainly not scale in high dimensions, so it is then equivalent in practice to having chosen a very simple test function family.
To properly scale to high dimensions would require a deeper kernel and also learning its parameters; this is not any easier than parameterizing the test function family as a neural network to begin with, which Ranganath et al. (2016) do. Section 4 introduces a Langevin inference network, which essentially chooses the variational approximation as an evolving sequence of Markov transition operators as in Salimans et al. (2015). I had trouble understanding this for a while because I could not understand what they mean by inference network. None of it is amortized in the usual inference network sense, which is that the parameters are given by the output of a neural network. Here, the authors simply define global parameters of the SGLD chain which are used across all the latent variables (which is strictly worse?). (What then makes it an "inference network"?) Is this not the variational approximation used in Salimans et al. (2015), but using a different objective to train it? The experiments are limited, on a toy mixture of Gaussians posterior and Bayesian logistic regression. None of this addresses the problems one might suspect on high-dimensional and real data, such as the lack of scalability for the kernel, the comparison to Salimans et al. (2015) for the Langevin variational approximation, and any note of runtime or difficulty of training. Minor comments + It's not clear if the authors understood previous work on expressive variational families or inference networks. For example, they argue Rezende & Mohamed, 2015b; Tran et al., 2015; Ranganath et al., 2015 require handcrafted inference networks. However, all of them assume use of any neural network for amortized inference. None of them even requires an inference network. Perhaps the authors mean handcrafted posterior approximations, which to some extent is true; however, the three mentioned are all algorithmic in nature: in Rezende & Mohamed (2015), the main design choice is the flow length; in Tran et al. (2015), the size of the variational data; in Ranganath et al. (2015), the flow length on the auxiliary variable space. Each works well on different problems, but this is also true of variational objectives which admit intractable q (as the latter two consider, as does Salimans et al. (2015)). The paper's motivation could be better explained, and perhaps the authors could be clearer on what they mean by inference network. + I also recommend the authors not name a variational inference method after the class of approximating family. While black box variational inference in Ranganath et al. (2014) assumes a mean-field family, the term itself has been used in the literature to mean any variational method that imposes few constraints on the model class. ### Review Rating 3: Clear rejection ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
F1vEjWK-lH_
ICLR.cc/2021/Conference
2021
Gradient Vaccine: Investigating and Improving Multi-task Optimization in Massively Multilingual Models
["Zirui Wang", "Yulia Tsvetkov", "Orhan Firat", "Yuan Cao"]
Massively multilingual models subsuming tens or even hundreds of languages pose great challenges to multi-task optimization. While it is a common practice to apply a language-agnostic procedure optimizing a joint multilingual task objective, how to properly characterize and take advantage of its underlying problem structure for improving optimization efficiency remains under-explored. In this paper, we attempt to peek into the black-box of multilingual optimization through the lens of loss function geometry. We find that gradient similarity measured along the optimization trajectory is an important signal, which correlates well with not only language proximity but also the overall model performance. Such observation helps us to identify a critical limitation of existing gradient-based multi-task learning methods, and thus we derive a simple and scalable optimization procedure, named Gradient Vaccine, which encourages more geometrically aligned parameter updates for close tasks. Empirically, our method obtains significant model performance gains on multilingual machine translation and XTREME benchmark tasks for multilingual language models. Our work reveals the importance of properly measuring and utilizing language proximity in multilingual optimization, and has broader implications for multi-task learning beyond multilingual modeling.
["Multi-task Learning", "Multilingual Modeling"]
ABSTRACT

Massively multilingual models subsuming tens or even hundreds of languages pose great challenges to multi-task optimization. While it is a common practice to apply a language-agnostic procedure optimizing a joint multilingual task objective, how to properly characterize and take advantage of its underlying problem structure for improving optimization efficiency remains under-explored. In this paper, we attempt to peek into the black-box of multilingual optimization through the lens of loss function geometry. We find that gradient similarity measured along the optimization trajectory is an important signal, which correlates well with not only language proximity but also the overall model performance. Such observation helps us to identify a critical limitation of existing gradient-based multi-task learning methods, and thus we derive a simple and scalable optimization procedure, named Gradient Vaccine, which encourages more geometrically aligned parameter updates for close tasks. Empirically, our method obtains significant model performance gains on multilingual machine translation and XTREME benchmark tasks for multilingual language models. Our work reveals the importance of properly measuring and utilizing language proximity in multilingual optimization, and has broader implications for multi-task learning beyond multilingual modeling.

1 INTRODUCTION

Modern multilingual methods, such as multilingual language models (Devlin et al., 2018; Lample & Conneau, 2019; Conneau et al., 2019) and multilingual neural machine translation (NMT) (Firat et al., 2016; Johnson et al., 2017; Aharoni et al., 2019; Arivazhagan et al., 2019), have been showing success in processing tens or hundreds of languages simultaneously in a single large model. These models are appealing for two reasons: (1) Efficiency: training and deploying a single multilingual model requires much less resources than maintaining one model for each language considered; (2) Positive cross-lingual transfer: by transferring knowledge from high-resource languages (HRL), multilingual models are able to improve performance on low-resource languages (LRL) on a wide variety of tasks (Pires et al., 2019; Wu & Dredze, 2019; Siddhant et al., 2020; Hu et al., 2020).

Despite their efficacy, how to properly analyze or improve the optimization procedure of multilingual models remains under-explored. In particular, multilingual models are multi-task learning (MTL) (Ruder, 2017) in nature, but the existing literature often trains them in a monolithic manner, naively using a single language-agnostic objective on the concatenated corpus of many languages. While this approach ignores task relatedness and might induce negative interference (Wang et al., 2020b), its optimization process also remains a black-box, muffling the interaction among different languages during training and the cross-lingual transfer mechanism.

In this work, we attempt to open the multilingual optimization black-box via the analysis of loss geometry. Specifically, we aim to answer the following questions: (1) Do typologically similar languages enjoy more similar loss geometries in the optimization process of multilingual models? (2) If so, in the joint training procedure, do more similar gradient trajectories imply less interference between tasks, hence leading to better model quality?
(3) Lastly, can we deliberately encourage more geometrically aligned parameter updates to improve multi-task optimization, especially in real-world massively multilingual models that contain heavily noisy and unbalanced training data?

* Work done during an internship at Google.

Towards this end, we perform a comprehensive study on massively multilingual neural machine translation tasks, where each language pair is considered as a separate task. We first study the correlation between language and loss geometry similarities, characterized by gradient similarity along the optimization trajectory. We investigate how they evolve throughout the whole training process, and glean insights on how they correlate with cross-lingual transfer and joint performance. In particular, our experiments reveal that gradient similarities across tasks correlate strongly with both language proximities and model performance, and thus we observe that typologically close languages share similar gradients, which further leads to well-aligned multilingual structure (Wu et al., 2019) and successful cross-lingual transfer. Based on these findings, we identify a major limitation of a popular multi-task learning method (Yu et al., 2020) applied in multilingual models, and propose a preemptive method, Gradient Vaccine, that leverages task relatedness to set gradient similarity objectives and adaptively aligns task gradients to achieve such objectives. Empirically, our approach obtains significant performance gains over the standard monolithic optimization strategy and popular multi-task baselines on large-scale multilingual NMT models and multilingual language models. To the best of our knowledge, this is the first work to systematically study and improve loss geometries in multilingual optimization at scale.

2 INVESTIGATING MULTI-TASK OPTIMIZATION IN MASSIVELY MULTILINGUAL MODELS

While prior work has studied the effect of data (Arivazhagan et al., 2019; Wang et al., 2020a), architecture (Blackwood et al., 2018; Sachan & Neubig, 2018; Vázquez et al., 2019; Escolano et al., 2020) and scale (Huang et al., 2019b; Lepikhin et al., 2020) on multilingual models, their optimization dynamics are not well understood. We hereby perform a series of controlled experiments on massively multilingual NMT models to investigate how gradients interact in multilingual settings and what their impacts on model performance are, as existing work hypothesizes that gradient conflicts, defined as negative cosine similarity between gradients, can be detrimental for multi-task learning (Yu et al., 2020) and cause negative transfer (Wang et al., 2019).

2.1 EXPERIMENTAL SETUP

For training multilingual machine translation models, we mainly follow the setup in Arivazhagan et al. (2019). In particular, we jointly train multiple translation language pairs in a single sequence-to-sequence (seq2seq) model (Sutskever et al., 2014). We use the Transformer-Big (Vaswani et al., 2017) architecture containing 375M parameters described in Chen et al. (2018a), where all parameters are shared across language pairs. We use an effective batch size of 500k tokens, and utilize data parallelism to train all models over 64 TPUv3 chips. Sentences are encoded using a shared source-target SentencePiece model (Kudo & Richardson, 2018) with 64k tokens, and a <2xx> token is prepended to the source sentence to indicate the target language (Johnson et al., 2017).
The full training details can be found in Appendix B.

To study real-world multi-task optimization on a massive scale, we use an in-house training corpus (Arivazhagan et al., 2019) generated by crawling and extracting parallel sentences from the web (Uszkoreit et al., 2010), which contains more than 25 billion sentence pairs for 102 languages to and from English. (We also experiment on the publicly available WMT datasets and obtain similar observations; see Appendix C.) We select 25 languages (50 language pairs pivoted on English), containing over 8 billion sentence pairs, from 10 diverse language families and 4 different levels of data sizes (detailed in Appendix A). We then train two models for the two directions separately, namely Any→En and En→Any. Furthermore, to minimize the confounding factor of inconsistent sentence semantics across language pairs, we create a multi-way aligned evaluation set of 3k sentences for all languages; in other words, 3k semantically identical sentences are given in all 25 languages. Then, for each checkpoint at an interval of 1000 training steps, we measure pair-wise cosine similarities of the model's gradients on this dataset between all language pairs. We examine gradient similarities at various granularities, from specific layers to the entire model.
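As a concrete picture of this measurement, the per-checkpoint computation reduces to cosine similarities between flattened per-task gradient vectors; a minimal sketch (with our own function and variable names) is:

```python
import numpy as np

def pairwise_grad_cosine(task_grads):
    """Pairwise cosine similarities between per-task gradients at one checkpoint.

    task_grads: dict mapping task name -> flattened 1-D gradient vector
    (e.g. for one layer, or concatenated over the whole model)."""
    names = sorted(task_grads)
    G = np.stack([task_grads[n] / (np.linalg.norm(task_grads[n]) + 1e-12)
                  for n in names])
    return names, G @ G.T   # (num_tasks, num_tasks) similarity matrix
```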
Figure 1: Cosine similarities of encoder gradients between xx-en language pairs averaged across all training steps. Darker cells indicate more similar pair-wise gradients. Best viewed in color.

2.2 OBSERVATIONS

We make the following three main observations. Our findings are consistent across different model architectures and settings (see Appendix C and D for more results and additional discussions).

1. Gradient similarities reflect language proximities. We first examine whether close tasks enjoy similar loss geometries and vice versa. Here, we use language proximity (defined according to membership in a linguistic language family) to control task similarity, and utilize gradient similarity to measure loss geometry. We choose typological similarity because it is informative and popular, and we leave the exploration of other language similarity measurements for future work. In Figure 1, we use a symmetric heatmap to visualize pair-wise gradient similarities, averaged across all checkpoints at different training steps. Specifically, we observe strong clustering by membership closeness in the linguistic family along the diagonal of the gradient similarity matrix. In addition, all European languages form a large cluster in the upper-left corner, with an even smaller fine-grained cluster of Slavic languages inside. Furthermore, we also observe that similarities to Western European languages (which include Romance and Germanic) gradually decrease in the order West Slavic → South Slavic → East Slavic, illustrating the gradual continuum of language proximity.

2. Gradient similarities correlate positively with model quality. As gradient similarities correlate well with task proximities, it is natural to ask whether higher gradient similarities lead to better multi-task performance. In Figure 2(a), we train a joint model of all language pairs in both En→Any and Any→En directions, and compare gradient similarities between these two. While prior work has shown that En→Any is harder and less amenable to positive transfer (Arivazhagan et al., 2019), we find that gradients of tasks in En→Any are indeed less similar than those in Any→En. On the other hand, while larger batch sizes often improve model quality, we observe that models trained with smaller batches have less similar loss geometries (Appendix D). These all indicate that gradient interference poses a great challenge to the learning procedure.

To further verify this, we pair En→Fr with different language pairs (e.g. En→Es or En→Hi), and train a set of models with exactly two language pairs each (to remove confounding factors, we fix the same sampling strategy for all these models). We then evaluate their performance on the En→Fr test set, and compare their BLEU scores against the gradient similarities between the two paired tasks. As shown in Figure 2(b), gradient similarities correlate positively with model performance, again demonstrating that dissimilar gradients introduce interference and undermine model quality.

Figure 2: Comparing gradient similarity versus model performance. (a) Similarity of model gradients between xx-en (left) and en-xx (right) language pairs in a single Any→Any model. (b) BLEU scores on en-fr for a set of trilingual models versus their gradient similarities; each model is trained on en-fr and another en-xx language pair.

3. Gradient similarities evolve across layers and training steps. While the previous discussion focuses on the gradient similarity of the whole model averaged over all checkpoints, we now study it across different layers and training steps. Figure 4(c) shows the evolution of the gradient similarities throughout training. Interestingly, we observe diverse patterns for different gradient subsets. For instance, gradients between En→Fr and En→Hi gradually become less similar (from positive to negative) in layer 1 of the decoder, but more similar (from negative to positive) in the encoder of the same layer. On the other hand, gradient similarities between En→Fr and En→Es are always higher than those between En→Fr and En→Hi in the same layer, consistent with the prior observation that gradients reflect language similarities.

In addition, we evaluate the difference between gradient similarities in the multilingual encoder and decoder in Figure 4(a). We find that the gradients are more similar in the decoder (positive values) for the Any→En direction but less similar (negative values) for the En→Any direction. This is in line with our intuition that gradients should be more consistent when the decoder only needs to handle one single language. Moreover, we visualize how gradient similarities evolve across layers in Figure 4(b). We notice that similarity between gradients increases/decreases as we move from bottom to top layers for the Any→En/En→Any direction, and hypothesize that this is due to the difference in label space (English-only tokens versus tokens from many languages). These results demonstrate that the dynamics of gradients evolve over model layers and training time.

Our analysis highlights the important role of loss geometries in multilingual models.
With these points in mind, we next turn to the problem of how to improve multi-task optimization in multilingual models in a systematic way.

3 PROPOSED METHOD

Figure 3: Counts of active PCGrad (left) and GradVac (right) during the training process.

Figure 4: Evaluating gradient similarity across model architecture and training steps. (a) Difference between gradient similarities in the encoder and decoder; positive values (darker) indicate the encoder has more similar gradients. (b) Gradient similarities across layers. (c) Gradient similarities of different components and tasks across training steps.

Following our observations that inter-task loss geometries correlate well with language similarities and model quality, a natural question to ask next is how we can take advantage of such gradient dynamics and design optimization procedures superior to the standard monolithic practice. Since we train large-scale models on a real-world dataset consisting of billions of words, whose tasks are highly unbalanced and exhibit complex interactions, we propose an effective approach that not only exploits inter-task structure but is also applicable to unbalanced tasks and noisy data. To motivate our method, we first review a state-of-the-art multi-task learning method and show how the observations in Section 2 help us to identify its limitation.

3.1 GRADIENT SURGERY

An existing line of work (Chen et al., 2018b; Sener & Koltun, 2018; Yu et al., 2020) has successfully utilized gradient-based techniques to improve multi-task models. Notably, Yu et al. (2020) hypothesize that negative cosine similarities between gradients are detrimental for multi-task optimization, and propose a method to directly project conflicting gradients (PCGrad), also known as Gradient Surgery. As illustrated in the left side of Figure 5(a), the idea is to first detect gradient conflicts and then perform a "surgery" to deconflict them if needed. Specifically, for gradients $g_i$ and $g_j$ of the $i$-th and $j$-th task respectively at a specific training step, PCGrad (1) computes their cosine similarity to determine whether they are conflicting, and (2) if the value is negative, projects $g_i$ onto the normal plane of $g_j$ as:
$$g_i' = g_i - \frac{g_i\cdot g_j}{\|g_j\|^2}\,g_j. \qquad (1)$$
The altered gradient $g_i'$ replaces the original $g_i$, and this whole process is repeated across all tasks in a random order. For more details and theoretical analysis, we refer readers to the original work.
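Concretely, the projection step (1) can be sketched in a few lines (our own rendering, not the reference implementation):

```python
import numpy as np

def pcgrad(g_i, g_j):
    """Project g_i onto the normal plane of g_j when the two conflict, eq. (1)."""
    dot = float(g_i @ g_j)
    if dot < 0.0:                       # only fires for conflicting gradients
        g_i = g_i - dot / float(g_j @ g_j) * g_j
    return g_i
```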
Now, we can also interpret PCGrad from a different perspective: notice that the gradient cosine similarity will always be zero after the projection, effectively setting a target lower bound. In other words, PCGrad aims to align gradients to match a certain gradient similarity level, and implicitly makes the assumption that any two tasks must have the same gradient similarity objective of zero. However, as we show in Section 2, different language proximities result in diverse gradient similarities. In fact, many language pairs in our model share positive cosine similarities, such that the pre-condition for PCGrad is never satisfied. This is shown in the left of Figure 5(b), where PCGrad is not effective for positive gradient similarities, and accordingly it fires only sparsely during training (left of Figure 3). Motivated by this limitation, we next present our proposed method.

3.2 GRADIENT VACCINE

The limitation of PCGrad comes from the unnecessary assumption that all tasks must enjoy similar gradient interactions, ignoring complex inter-task relationships. To relax this assumption, a natural idea is to set adaptive gradient similarity objectives in some proper manner. An example is shown in the right of Figure 5(b), where two tasks have a positive gradient similarity of $\cos(\theta) = \phi_{ij}$. While PCGrad ignores such a non-negative case, the current value of $\phi_{ij}$ may still be detrimentally low for more similar tasks such as French versus Spanish. Thus, supposing we have some similarity goal $\cos(\theta') = \phi_{ij}^T > \phi_{ij}$ (e.g. the "normal" cosine similarity between these two tasks), we alter both the magnitude and direction of $g_i$ such that the resulting gradients match this gradient similarity objective. In particular, we replace $g_i$ with a vector that satisfies the condition in the vector space spanned by $g_i$ and $g_j$, i.e. $a_1 g_i + a_2 g_j$. Since there are infinitely many valid combinations of $a_1$ and $a_2$, for simplicity we fix $a_1 = 1$, and by applying the Law of Sines in the plane of $g_i$ and $g_j$, we solve for the value of $a_2$ and derive the new gradient for the $i$-th task as (see Appendix E for the derivation, practical implementation and theoretical analysis):
$$g_i' = g_i + \frac{\|g_i\|\Big(\phi_{ij}^T\sqrt{1-\phi_{ij}^2} - \phi_{ij}\sqrt{1-(\phi_{ij}^T)^2}\Big)}{\|g_j\|\sqrt{1-(\phi_{ij}^T)^2}}\,g_j. \qquad (2)$$
This formulation allows us to use an arbitrary gradient similarity objective $\phi_{ij}^T\in[-1, 1]$. The remaining question is how to set this objective properly. In the above analysis, we have seen that gradient interactions change drastically across tasks, layers and training steps. To incorporate these three factors, we exploit an exponential moving average (EMA) variable for tasks $i, j$ and parameter group $k$ (e.g. the $k$-th layer):
$$\hat\phi_{ijk}^{(t)} = (1-\beta)\,\hat\phi_{ijk}^{(t-1)} + \beta\,\phi_{ijk}^{(t)}, \qquad (3)$$
where $\phi_{ijk}^{(t)}$ is the computed gradient similarity at training step $t$, $\beta$ is a hyper-parameter, and $\hat\phi_{ijk}^{(0)} = 0$. The full method is outlined in Algorithm 1 (Appendix E). Notice that gradient surgery is a special case of our proposed method with $\phi_{ij}^T = 0$. As shown in the right of Figure 5(a) and 5(b), our method alters gradients more preemptively under both positive and negative cases, taking more proactive measures in updating the gradients (Figure 3). We therefore refer to it as Gradient Vaccine (GradVac). Notice that the resulting models have the same number of parameters to deploy as typical MNMT models and thus enjoy the same memory efficiency, while the proposed method has the same order of computational complexity as the original multi-task training paradigm.

Figure 5: Comparing PCGrad (left) with GradVac (right) in two cases. (a) For negative similarity, both methods are effective, but GradVac can utilize adaptive objectives between different tasks. (b) For positive similarity, only GradVac is active while PCGrad stays "idle".
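Putting (2) and (3) together, one GradVac step between two task gradients can be sketched as follows; the variable names and the small numerical safeguards are our own additions.

```python
import numpy as np

def gradvac_step(g_i, g_j, phi_hat, beta=1e-2):
    """Alter g_i toward the EMA similarity target phi_hat, eqs. (2)-(3); a sketch.

    Returns the new g_i and the updated EMA target for this task pair."""
    ni, nj = np.linalg.norm(g_i), np.linalg.norm(g_j)
    phi = float(g_i @ g_j) / (ni * nj + 1e-12)       # current cosine similarity
    if phi < phi_hat:                                # act whenever below target
        coef = ni * (phi_hat * np.sqrt(1.0 - phi ** 2)
                     - phi * np.sqrt(1.0 - phi_hat ** 2))
        g_i = g_i + coef / (nj * np.sqrt(1.0 - phi_hat ** 2) + 1e-12) * g_j
    phi_hat = (1.0 - beta) * phi_hat + beta * phi    # EMA update, eq. (3)
    return g_i, phi_hat
```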
4 EXPERIMENTS

We compare multi-task optimization methods with the monolithic approach in multilingual settings, and examine the effectiveness of our proposed method on multilingual NMT and multilingual language models.

4.1 GENERAL SETUP

We choose three popular scalable gradient-based multi-task optimization methods as our baselines: GradNorm (Chen et al., 2018b), MGDA (Sener & Koltun, 2018), and PCGrad (Yu et al., 2020). For fair comparison, language-specific gradients are computed for samples in each batch. The sampling temperature is also fixed at T=5 unless otherwise stated. For the baselines, we mainly follow the default settings and training procedures for hyper-parameter selection as explained in their respective papers. For our method, to study how sensitive GradVac is to the distribution of tasks, we additionally examine a variant that allows us to control which languages are considered for GradVac. Specifically, we search the following hyper-parameters on the small-scale WMT dataset and transfer them to our large-scale dataset: tasks considered for GradVac {HRL only, LRL only, all tasks}, parameter granularity {whole model, enc dec, all layer, all matrix}, EMA decay rate $\beta$ {1e-1, 1e-2, 1e-3}. We find {LRL only, all layer, 1e-2} to work generally well and use these settings in the following experiments (see Appendix F for more details and results).

4.2 RESULTS AND ANALYSIS

WMT Machine Translation. We first conduct a comprehensive analysis of our method and other baselines on a small-scale WMT task. We consider two high-resource languages (WMT14 en-fr, WMT19 en-cs) and two low-resource languages (WMT14 en-hi, WMT18 en-tr), and train two models for both to and from English. Results are shown in Table 1.

Table 1: BLEU scores on the WMT dataset. The best result for a multilingual model is bolded, underline signifies the overall best, and * means the gains over the baseline multilingual model are statistically significant with p < 0.05.

                                     En→Any                              Any→En
                                  en-fr  en-cs  en-hi  en-tr  avg    fr-en  cs-en  hi-en  tr-en  avg
Monolithic Training
(1) Bilingual Model               41.80  24.76   5.77   9.77  20.53  36.38  29.17   8.68  13.87  22.03
(2) Multilingual Model            37.24  20.22  13.69  18.77  22.48  34.29  27.66  18.48  22.01  25.61
Multi-task Training
(3) GradNorm (Chen et al., 2018b) 37.02  18.78  11.57  15.44  20.70  34.58  27.85  18.03  22.37  25.71
(4) MGDA (Sener & Koltun, 2018)   38.22  17.54  12.02  13.69  20.37  35.05  26.87  18.28  22.41  25.65
(5) PCGrad (Yu et al., 2020)      37.72  20.88  13.77  18.23  22.65  34.37  27.82  18.78  22.20  25.79
(6) PCGrad w. all layer           38.01  21.04  13.95  18.46  22.87  34.57  27.84  18.84  22.48  25.93
Our Approach
(7) GradVac w. fixed obj          38.41  21.12  13.75  18.68  22.99  34.55  27.97  18.72  22.14  25.85
(8) GradVac w. whole model        38.76  21.32  14.22  18.89  23.30  34.84  28.01  18.85  22.24  25.99
(9) GradVac w. all layer          39.27* 21.67* 14.88* 19.73* 23.89  35.28* 28.42* 19.07* 22.58* 26.34

First, we observe that while the naive multilingual baseline outperforms bilingual models on low-resource languages, it performs worse on high-resource languages due to negative interference (Wang et al., 2020b) and constrained capacity (Arivazhagan et al., 2019). Existing baselines fail to address this problem properly, as they obtain marginal or even no improvement (rows 3, 4 and 5). In particular, we look closer at the optimization process for methods that utilize gradient signals to reweight tasks, i.e. GradNorm and MGDA, and find that their computed weights are less meaningful and noisy. For example, MGDA assigns a larger weight to en-fr in the en-xx model, which results in worse performance on the other languages. This is mainly because these methods are designed under the assumption that all tasks have balanced data. Our results show that simply reweighting task weights without considering the loss geometry has limited efficacy.

By contrast, our method significantly outperforms all baselines. Compared to the naive joint training approach, the proposed method improves not only the average BLEU score but also the individual performance on all tasks. We notice that the performance gain on En→Any is larger compared to Any→En.
This is in line with our prior observation that gradients are less similar and more conflicting in the En→Any direction.

We next conduct extensive ablation studies for deeper analysis. (1) GradVac applied to all layers vs. the whole model (row 8 vs. 9): the all-layer variant outperforms the whole-model one, showing that setting fine-grained parameter objectives is important. (2) Constant objective vs. EMA (row 7 vs. 9): we also examine a variant of GradVac optimized using a constant gradient objective for all tasks (e.g. $\phi_{ij}^T = 0.5, \forall i, j$) and observe a performance drop compared to using EMA variables. This highlights the importance of setting task-aware objectives through task relatedness. (3) GradVac vs. PCGrad (rows 8-9 vs. 5-6): the two GradVac variants outperform their PCGrad counterparts, validating the effectiveness of setting preemptive gradient similarity objectives.

Massively Multilingual Machine Translation. We then scale up our experiments and transfer the best setting found on WMT to the same massive dataset used in Section 2. We visualize model performance in Figure 6, and average BLEU scores are shown in Table 2. We additionally compare with models trained with a uniform language pair sampling strategy (T=1) and find that our method outperforms both multilingual models. Most notably, while uniform sampling favors high-resource language pairs over low-resource ones, GradVac is able to improve both consistently across all tasks. We observe larger performance gains on high-resource languages, illustrating that addressing gradient conflicts can mitigate negative interference on these head language pairs. On the other hand, our model still performs worse on resourceful languages compared to bilingual baselines, most likely limited by model capacity.

Table 2: Average BLEU scores of 25 language pairs on our massively multilingual dataset.

Any→En   High   Med    Low    All
T=1      28.56  28.51  19.57  24.95
T=5      28.16  28.42  24.32  26.71
GradVac  28.99  28.94  24.58  27.21

En→Any   High   Med    Low    All
T=1      22.62  21.53  12.41  18.18
T=5      22.04  21.43  13.07  18.25
GradVac  24.20  21.83  13.30  19.08

Figure 6: Comparing multilingual models with bilingual baselines on our dataset. Language pairs are listed in the order of training data sizes (high-resource languages on the left). (a) X-En. (b) En-X.

XTREME Benchmark. We additionally apply our method to multilingual language models and evaluate on the XTREME benchmark (Hu et al., 2020). We choose tasks where training data are available for all languages, and finetune a pretrained multilingual BERT model (mBERT) (Devlin et al., 2018) on these languages jointly (see Appendix G for experiment details and additional results). As shown in Table 3, our method consistently outperforms naive joint finetuning and other multi-task baselines. This demonstrates the practicality of our approach for general multilingual tasks.

Table 3: F1 on the NER tasks of the XTREME benchmark.

            de    en    es    hi    jv    kk    mr    my    sw    te    tl    yo    avg
mBERT      83.2  77.9  87.5  82.2  77.6  87.6  82.0  75.8  87.7  78.9  83.8  90.7  82.9
+ GradNorm 83.5  77.4  87.2  82.7  78.4  87.9  81.2  73.4  85.2  78.7  83.6  91.5  82.6
+ MGDA     82.1  74.2  85.6  81.5  77.8  87.8  81.9  74.3  86.5  78.2  87.5  91.7  82.4
+ PCGrad   83.7  78.6  88.2  81.8  79.6  87.6  81.8  74.2  85.9  78.5  85.6  92.2  83.1
+ GradVac  83.9  79.4  88.2  81.8  80.5  87.4  82.1  73.9  87.8  79.3  87.8  93.0  83.8

5 RELATED WORK

Multilingual models train multiple languages jointly (Firat et al., 2016; Devlin et al., 2018; Lample & Conneau, 2019; Conneau et al., 2019; Johnson et al., 2017; Aharoni et al., 2019; Arivazhagan et al., 2019).
Follow-up work studies the cross-lingual ability of these models and what contributes to it (Pires et al., 2019; Wu & Dredze, 2019; Wu et al., 2019; Artetxe et al., 2019; Kudugunta et al., 2019; Karthikeyan et al., 2020), the limitations of such a training paradigm (Arivazhagan et al., 2019; Wang et al., 2020b), and how to further improve it by utilizing post-hoc alignment (Wang et al., 2020c; Cao et al., 2020), data balancing (Jean et al., 2019; Wang et al., 2020a), or calibrated training signals (Mulcaire et al., 2019; Huang et al., 2019a). In contrast to these studies, we directly investigate language interactions across the training process using loss geometry, and propose a language-aware method to improve the optimization procedure.

On the other hand, multilingual models can be treated as multi-task learning methods (Ruder, 2017; Zamir et al., 2018). Prior work has studied the optimization challenges of multi-task training (Hessel et al., 2019; Schaul et al., 2019), while others suggest improving training quality by learning task relatedness (Zhang & Yeung, 2012), routing task-specific paths (Rusu et al., 2016; Rosenbaum et al., 2019), altering gradients directly (Kendall et al., 2018; Chen et al., 2018b; Du et al., 2018; Yu et al., 2020), or searching for Pareto solutions (Sener & Koltun, 2018; Lin et al., 2019). However, while these methods are often evaluated on balanced task distributions, multilingual datasets are often unbalanced and noisy. As prior work has shown that training with unbalanced tasks can be prone to negative interference (Ge et al., 2014; Wang & Carbonell, 2018), we study how to mitigate it in large models trained on a highly unbalanced, massive-scale dataset.

6 CONCLUSION

In this paper, we systematically study loss geometry through the lens of gradient similarity for multilingual modeling, and propose a novel approach named GradVac based on our findings. Leveraging the linguistic proximity structure of multilingual tasks, we validate the assumption that more similar loss geometries improve multi-task optimization while gradient conflicts can hurt model performance, and demonstrate the effectiveness of more geometrically consistent updates aligned with task closeness. We analyze the behavior of the proposed approach on massive multilingual tasks with superior performance, and we believe that our approach is generic and applicable beyond multilingual settings.

ACKNOWLEDGMENTS

We want to thank Hieu Pham for tireless help to the authors at different stages of this project. We also would like to thank Zihang Dai, Xinyi Wang, Zhiyu Wang, Jiateng Xie, Yiheng Zhou, Ruochen Xu, Adams Wei Yu, Biao Zhang, Isaac Caswell, Sneha Kudugunta, Zhe Zhao, Christopher Fifty, Xavier Garcia, Ye Zhang, Macduff Hughes, Yonghui Wu, Samy Bengio and the Google Brain team for insightful discussions and support of this work. This material is based upon work supported in part by the National Science Foundation under Grants No. IIS2007960 and IIS2040926, and by a Google faculty research award.
asPunsSAari
This work takes aim at an interesting problem: optimizing multilingual neural machine translation (MNMT) models. Although MNMT is inherently a multi-task modeling approach, less emphasis has been given to achieving optimal performance on all of the tasks involved. The proposed approach (GradVac) takes into account similarity between tasks and demonstrates that better performance can be achieved by focusing on parameter updates that are geometrically aligned.
8: Top 50% of accepted papers, clear accept
Summary: Focusing on multilingual NMT (MNMT), this work investigates a better model optimization alternative for what is, in part, a multi-task optimization problem. MNMTs are quite beneficial from different perspectives (improving low-resource languages, efficiency, etc.). However, their inherently multi-task nature requires more focus on how to extract the best possible learning for each of the language pairs. With a potential impact on the optimization of other multi-task models, this work asks how modeling the similarity between model gradients is crucial in multi-task settings, and how to best optimize MNMT models by focusing on the typological similarity of languages. By analyzing the geometry of the NMT model objective function, the authors indicate that computing similarity along the gradient provides information on the relationship between languages and the overall model performance. The authors argue the analysis of the gradient helps to identify the point of limitation in multi-task learning, which the work aims to address by focusing the parameter updates on tasks that are similar or close in terms of geometrical alignment (called Gradient Vaccine /GradVac/). Experimental results are provided from multilingual tasks involving training examples on the order of 10^9 and several language pairs. Mathematical proofs and theoretical details of the proposed optimization approach GradVac are detailed in comparison with a previous approach (Gradient Surgery). Experimental results show the proposed GradVac contributes to the improvement of model performance. These findings underline the importance of taking language proximity into account for a better optimization approach and model improvements in general.

Pros / Reason for the Score: After my assessment of the proposed approach and the visible advantage of GradVac, I am voting for an accept score. Below are my points on the pros and cons of this work. It is my hope the authors will address the cons and the questions raised in the rebuttal period.
- This work raises an important question of optimization in a multi-task model, particularly for multilingual NMT models, where work on optimization is quite rare and recent progress in MNMT mainly focuses on improving performance. Hence the findings in this work can provide further insight on how to best optimize an MNMT model and potentially set a new standard training mechanism for future works in MNMT.
- From the experimental results, it is particularly interesting to see how the proposed approach (GradVac) improves the high-resource languages (on the left side of Figure 6(b)). I think in massive MNMT models, while there is a huge gain (naturally) for low-resource cases, the high-resource pairs tend to degrade. This work shows an interesting mechanism to address performance degradation for certain pairs in an MNMT model and to maintain an improvement trend for all of the language pairs involved.

Cons and Questions:
- Regarding language similarity, this work focused on typological similarity (which deals with the characteristics of language structure); was there any consideration of genetic similarity, or any other similarity measure between languages? Or why is typological similarity the primary/only choice for this work?
- As in Yu et al. 2020, where the PCGrad approach is used to project the gradient of task i onto the plane of task j: was there any motivation behind not adapting or assessing this approach in MNMT first, and do the authors have any comment on why this approach lags behind GradVac? Perhaps this is related to the assumption that PCGrad is not fit for positive gradient similarities, a common case in this work?
- One of the advantages of an MNMT model is efficiency (as also mentioned in this work); however, for model training or even inference, the paper does not mention the complexity that can be introduced by the application of GradVac. Can the authors provide details on this?
- Page 1 mentions that one of the motivations for the work is to investigate ways to optimize the single language-agnostic objective for training an MNMT model that leverages training data of multiple pairs. If this work is aiming at optimizing based on task relatedness, did it consider, for instance, training MNMT models that are language-family specific and seeing how that relatedness correlates with the approach in this work and the baseline MNMT models (such as Any->En or En->Any)?
- What is the impact of training only two MNMT models, Any->En and En->Any; why not Any<>Any? Wouldn't this make a lot more sense from the point of having multiple tasks (in terms of observing different language characteristics both on the encoder and decoder sides of the model)? Similarly, the Any<>Any model that is employed (shown in Figure 2) shows gradient similarities correlate positively with model quality. In other words, the authors clearly demonstrated that gradient similarity in the En>Any direction is quite low with respect to Any>En; in my understanding, using an Any<>Any model throughout the experiments makes more sense by constructing a real multi-tasking MNMT model, where we could also see the proposed approach's effectiveness.
- Not sure if I am missing it, but is it correct that we do not have a comparison of the proposed optimization approach with other optimization methods in the results of Figure 6? At least with PCGrad?

Comments:
- In an ideal case, I would go for evaluating a multilingual model that is not English-centric, to properly construct a real multilingual model. I understand the experimental design here; specifically, this is the data (En<>Any) available in general or in-house for the authors. Yet, with recent progress in multilingual NMT and zero-shot NMT approaches, it becomes realistic now to incrementally augment data for the non-English pairs (one can leverage monolingual data of the Any languages too), hence resulting in more pairs. Such an Any-Any setting could even further reflect how the optimization is beneficial.
- Please re-arrange the figures if possible; I see discussion of Figure 5 while Figures 3 and 4 appear beforehand.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Gradient Vaccine: Investigating and Improving Multi-task Optimization in Massively Multilingual Models ### Paper Abstract Massively multilingual models subsuming tens or even hundreds of languages pose great challenges to multi-task optimization. While it is a common practice to apply a language-agnostic procedure optimizing a joint multilingual task objective, how to properly characterize and take advantage of its underlying problem structure for improving optimization efficiency remains under-explored. In this paper, we attempt to peek into the black-box of multilingual optimization through the lens of loss function geometry. We find that gradient similarity measured along the optimization trajectory is an important signal, which correlates well with not only language proximity but also the overall model performance. Such observation helps us to identify a critical limitation of existing gradient-based multi-task learning methods, and thus we derive a simple and scalable optimization procedure, named Gradient Vaccine, which encourages more geometrically aligned parameter updates for close tasks. Empirically, our method obtains significant model performance gains on multilingual machine translation and XTREME benchmark tasks for multilingual language models. Our work reveals the importance of properly measuring and utilizing language proximity in multilingual optimization, and has broader implications for multi-task learning beyond multilingual modeling. ### Paper Keywords ["Multi-task Learning", "Multilingual Modeling"] ### Paper Content ABSTRACTMassively multilingual models subsuming tens or even hundreds of languagespose great challenges to multi-task optimization. While it is a common practiceto apply a language-agnostic procedure optimizing a joint multilingual task objec-tive, how to properly characterize and take advantage of its underlying problemstructure for improving optimization efficiency remains under-explored. In thispaper, we attempt to peek into the black-box of multilingual optimization throughthe lens of loss function geometry. We find that gradient similarity measured alongthe optimization trajectory is an important signal, which correlates well with notonly language proximity but also the overall model performance. Such observa-tion helps us to identify a critical limitation of existing gradient-based multi-tasklearning methods, and thus we derive a simple and scalable optimization proce-dure, named Gradient Vaccine, which encourages more geometrically aligned pa-rameter updates for close tasks. Empirically, our method obtains significant modelperformance gains on multilingual machine translation and XTREME benchmarktasks for multilingual language models. Our work reveals the importance of prop-erly measuring and utilizing language proximity in multilingual optimization, andhas broader implications for multi-task learning beyond multilingual modeling.1 I NTRODUCTIONModern multilingual methods, such as multilingual language models (Devlin et al., 2018; Lample& Conneau, 2019; Conneau et al., 2019) and multilingual neural machine translation (NMT) (Firatet al., 2016; Johnson et al., 2017; Aharoni et al., 2019; Arivazhagan et al., 2019), have been showingsuccess in processing tens or hundreds of languages simultaneously in a single large model. 
Thesemodels are appealing for two reasons: (1) Efficiency: training and deploying a single multilingualmodel requires much less resources than maintaining one model for each language considered, (2)Positive cross-lingual transfer: by transferring knowledge from high-resource languages (HRL),multilingual models are able to improve performance on low-resource languages (LRL) on a widevariety of tasks (Pires et al., 2019; Wu & Dredze, 2019; Siddhant et al., 2020; Hu et al., 2020).Despite their efficacy, how to properly analyze or improve the optimization procedure of multilingualmodels remains under-explored. In particular, multilingual models are multi-task learning (MTL)(Ruder, 2017) in nature but existing literature often train them in a monolithic manner, naively usinga single language-agnostic objective on the concatenated corpus of many languages. While thisapproach ignores task relatedness and might induce negative interference (Wang et al., 2020b), itsoptimization process also remains a black-box, muffling the interaction among different languagesduring training and the cross-lingual transferring mechanism.In this work, we attempt to open the multilingual optimization black-box via the analysis of lossgeometry. Specifically, we aim to answer the following questions: (1) Do typologically similarlanguages enjoy more similar loss geometries in the optimization process of multilingual models?(2) If so, in the joint training procedure, do more similar gradient trajectories imply less interferencebetween tasks, hence leading to better model quality? (3) Lastly, can we deliberately encourageWork done during an internship at Google.1Published as a conference paper at ICLR 2021more geometrically aligned parameter updates to improve multi-task optimization, especially inreal-world massively multilingual models that contain heavily noisy and unbalanced training data?Towards this end, we perform a comprehensive study on massively multilingual neural machinetranslation tasks, where each language pair is considered as a separate task. We first study thecorrelation between language and loss geometry similarities, characterized by gradient similarityalong the optimization trajectory. We investigate how they evolve throughout the whole trainingprocess, and glean insights on how they correlate with cross-lingual transfer and joint performance.In particular, our experiments reveal that gradient similarities across tasks correlate strongly withboth language proximities and model performance, and thus we observe that typologically closelanguages share similar gradients that would further lead to well-aligned multilingual structure (Wuet al., 2019) and successful cross-lingual transfer. Based on these findings, we identify a majorlimitation of a popular multi-task learning method (Yu et al., 2020) applied in multilingual modelsand propose a preemptive method, Gradient Vaccine , that leverages task relatedness to set gradientsimilarity objectives and adaptively align task gradients to achieve such objectives. Empirically, ourapproach obtains significant performance gain over the standard monolithic optimization strategyand popular multi-task baselines on large-scale multilingual NMT models and multilingual languagemodels. 
To the best of our knowledge, this is the first work to systematically study and improve lossgeometries in multilingual optimization at scale.2 I NVESTIGATING MULTI -TASK OPTIMIZATION IN MASSIVELYMULTILINGUAL MODELSWhile prior work have studied the effect of data (Arivazhagan et al., 2019; Wang et al., 2020a),architecture (Blackwood et al., 2018; Sachan & Neubig, 2018; V ́azquez et al., 2019; Escolano et al.,2020) and scale (Huang et al., 2019b; Lepikhin et al., 2020) on multilingual models, their opti-mization dynamics are not well understood. We hereby perform a series of control experimentson massively multilingual NMT models to investigate how gradients interact in multilingual set-tings and what are their impacts on model performance, as existing work hypothesizes that gradientconflicts, defined as negative cosine similarity between gradients, can be detrimental for multi-tasklearning (Yu et al., 2020) and cause negative transfer (Wang et al., 2019).2.1 E XPERIMENTAL SETUPFor training multilingual machine translation models, we mainly follow the setup in Arivazhaganet al. (2019). In particular, we jointly train multiple translation language pairs in a single sequence-to-sequence (seq2seq) model (Sutskever et al., 2014). We use the Transformer-Big (Vaswani et al.,2017) architecture containing 375M parameters described in (Chen et al., 2018a), where all param-eters are shared across language pairs. We use an effective batch sizes of 500k tokens, and utilizedata parallelism to train all models over 64 TPUv3 chips. Sentences are encoded using a sharedsource-target Sentence Piece Model (Kudo & Richardson, 2018) with 64k tokens, and a <2xx>token is prepended to the source sentence to indicate the target language (Johnson et al., 2017). Thefull training details can be found in Appendix B.To study real-world multi-task optimization on a massive scale, we use an in-house training cor-pus1(Arivazhagan et al., 2019) generated by crawling and extracting parallel sentences from theweb (Uszkoreit et al., 2010), which contains more than 25 billion sentence pairs for 102 languagesto and from English. We select 25 languages (50 language pairs pivoted on English), containingover 8 billion sentence pairs, from 10 diverse language families and 4 different levels of data sizes(detailed in Appendix A). We then train two models on two directions separately, namely Any!EnandEn!Any. Furthermore, to minimize the confounding factors of inconsistent sentence seman-tics across language pairs, we create a multi-way aligned evaluation set of 3k sentences for alllanguages2. Then, for each checkpoint at an interval of 1000 training steps, we measure pair-wisecosine similarities of the model’s gradients on this dataset between all language pairs. We examinegradient similarities at various granularities, from specific layers to the entire model.1We also experiment on publicly available dataset of WMT and obtain similar observations in Appendix C.2In other words, 3k semantically identical sentences are given in 25 languages.2Published as a conference paper at ICLR 2021Figure 1: Cosine similarities of encoder gradients between xx-en language pairs averaged across alltraining steps. Darker cell indicates pair-wise gradients are more similar. Best viewed in color.42.2 O BSERVATIONSWe make the following three main observations. Our findings are consistent across different modelarchitectures and settings (see Appendix C and D for more results and additional discussions).1.Gradient similarities reflect language proximities. 
We first examine if close tasks enjoy simi-lar loss geometries and vice versa. Here, we use language proximity (defined according to theirmemberships in a linguistic language family) to control task similarity, and utilize gradient sim-ilarity to measure loss geometry. We choose typological similarity because it is informative andpopular, and we leave the exploration of other language similarity measurements for future work.In Figure 1, we use a symmetric heatmap to visualize pair-wise gradient similarities, averagedacross all checkpoints at different training steps. Specifically, we observe strong clustering bymembership closeness in the linguistic family, along the diagonal of the gradient similarity ma-trix. In addition, all European languages form a large cluster in the upper-left corner, with aneven smaller fine-grained cluster of Slavic languages inside. Furthermore, we also observe simi-larities for Western European languages gradually decrease in West Slavic !South Slavic!EastSlavic, illustrating the gradual continuum of language proximity.2.Gradient similarities correlate positively with model quality. As gradient similarities correlatewell with task proximities, it is natural to ask whether higher gradient similarities lead to bettermulti-task performance. In Figure 2(a), we train a joint model of all language pairs in bothEn!AnyandAny!Endirections, and compare gradient similarities between these two. Whileprior work has shown that En!Anyis harder and less amenable for positive transfer (Arivazhaganet al., 2019), we find that gradients of tasks in En!Any are indeed less similar than those inAny!En. On the other hand, while larger batch sizes often improve model quality, we observethat models trained with smaller batches have less similar loss geometries (Appendix D). Theseall indicate that gradient interference poses great challenge to the learning procedure.To further verify this, we pair En !Fr with different language pairs (e.g. En !Es or En!Hi),and train a set of models with exactly two language pairs5. We then evaluate their performanceon the En!Fr test set, and compare their BLEU scores versus gradient similarities betweenpaired two tasks. As shown in Figure 2(b), gradient similarities correlate positively with modelperformance, again demonstrating that dissimilar gradients introduce interference and underminemodel quality.3.Gradient similarities evolve across layers and training steps. While the previous discussionfocuses on the gradient similarity of the whole model averaged over all checkpoints, we now4Western European includes Romance and Germanic.5To remove confounding factors, we fix the same sampling strategy for all these models.3Published as a conference paper at ICLR 2021(a) (b)Figure 2: Comparing gradient similarity versus model performance. (a):Similarity of model gradi-ents between xx-en (left) and en-xx (right) language pairs in a single Any!Anymodel. (b): BLEUscores on en-fr of a set of trilingual models versus their gradient similarities. Each model is trainedonen-fr and another en-xx language pair.study it across different layers and training steps. Figure 4(c) shows the evolution of the gradientsimilarities throughout the training. Interestingly, we observe diverse patterns for different gradi-ent subsets. For instance, gradients between En !Fr and En!Hi gradually become less similar(from positive to negative) in layer 1 of the decoder but more similar (from negative to positive)in the encoder of the same layer. 
On the other hand, gradient similarities between En→Fr and En→Es are always higher than those between En→Fr and En→Hi in the same layer, consistent with the prior observation that gradients reflect language similarities. In addition, we evaluate the difference between gradient similarities in the multilingual encoder and decoder in Figure 4(a). We find that the gradients are more similar in the decoder (positive values) for the Any→En direction but less similar (negative values) for the En→Any direction. This is in line with our intuition that gradients should be more consistent when the decoder only needs to handle one single language. Moreover, we visualize how gradient similarities evolve across layers in Figure 4(b). We notice that similarity between gradients increases/decreases as we move up from bottom to top layers for the Any→En/En→Any direction, and hypothesize that this is due to the difference in label space (English-only tokens versus tokens from many languages). These results demonstrate that the dynamics of gradients evolve over model layers and training time.

Our analysis highlights the important role of loss geometries in multilingual models. With these points in mind, we next turn to the problem of how to improve multi-task optimization in multilingual models in a systematic way.

3 PROPOSED METHOD

Figure 3: Counts of active PCGrad (left) and GradVac (right) during the training process.

Following our observations that inter-task loss geometries correlate well with language similarities and model quality, a natural question to ask next is how we can take advantage of such gradient dynamics and design optimization procedures superior to the standard monolithic practice. Since we train large-scale models on real-world datasets consisting of billions of words, whose tasks are highly unbalanced and exhibit complex interactions, we propose an effective approach that not only exploits inter-task structure but is also applicable to unbalanced tasks and noisy data. To motivate our method, we first review a state-of-the-art multi-task learning method and show how the observations in Section 2 help us to identify its limitation.

3.1 GRADIENT SURGERY

An existing line of work (Chen et al., 2018b; Sener & Koltun, 2018; Yu et al., 2020) has successfully utilized gradient-based techniques to improve multi-task models.

Figure 4: Evaluating gradient similarity across model architecture and training steps. (a): Difference between gradient similarities in the encoder and decoder. A positive value (darker) indicates the encoder has more similar gradients. (b): Gradient similarities across layers. (c): Gradient similarities of different components and tasks across training steps.

Notably, Yu et al. (2020) hypothesize that negative cosine similarities between gradients are detrimental for multi-task optimization and propose a method to directly project conflicting gradients (PCGrad), also known as Gradient Surgery. As illustrated in the left side of Figure 5(a), the idea is to first detect gradient conflicts and then perform a "surgery" to deconflict them if needed. Specifically, for gradients $g_i$ and $g_j$ of the $i$-th and $j$-th task respectively at a specific training step, PCGrad (1) computes their cosine similarity to determine if they are conflicting, and (2) if the value is negative, projects $g_i$ onto the normal plane of $g_j$ as:

$$g_i' = g_i - \frac{g_i \cdot g_j}{\|g_j\|^2}\, g_j \quad (1)$$

The altered gradient $g_i'$ replaces the original $g_i$, and this whole process is repeated across all tasks in a random order.
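To make the projection in Eq. (1) concrete, below is a minimal NumPy sketch of the PCGrad step as reconstructed above. It is an illustrative re-implementation rather than the authors' code; the flat-vector gradient representation and function names are assumptions made for clarity.

```python
import numpy as np

def pcgrad_step(g_i: np.ndarray, g_j: np.ndarray) -> np.ndarray:
    """Eq. (1): project g_i onto the normal plane of g_j when they conflict."""
    dot = np.dot(g_i, g_j)
    if dot >= 0.0:
        return g_i  # non-conflicting gradients pass through unchanged
    return g_i - dot / (np.dot(g_j, g_j) + 1e-12) * g_j

def pcgrad(task_grads):
    """De-conflict every task gradient against the others in a random order,
    then sum the results into a single update direction (Yu et al., 2020)."""
    deconflicted = [g.copy() for g in task_grads]
    for i in range(len(deconflicted)):
        for j in np.random.permutation(len(task_grads)):
            if i != j:
                deconflicted[i] = pcgrad_step(deconflicted[i], task_grads[j])
    return np.sum(deconflicted, axis=0)
```

Note that whenever the dot product is non-negative, g_i passes through untouched, which is exactly the "idle" behavior identified below as a limitation.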
For more details and theoretical analysis, we refer readers to the original work. Now, we can also interpret PCGrad from a different perspective: notice that the gradient cosine similarity will always be zero after the projection, effectively setting a target lower bound. In other words, PCGrad aims to align gradients to match a certain gradient similarity level, and implicitly makes the assumption that any two tasks must have the same gradient similarity objective of zero. However, as we showed in Section 2, different language proximities result in diverse gradient similarities. In fact, many language pairs in our model share positive cosine similarities, such that the pre-condition for PCGrad would never be satisfied. This is shown in the left of Figure 5(b), where PCGrad is not effective for positive gradient similarities, and it is accordingly active only sparsely during training (left of Figure 3). Motivated by this limitation, we next present our proposed method.

3.2 GRADIENT VACCINE

The limitation of PCGrad comes from the unnecessary assumption that all tasks must enjoy similar gradient interactions, ignoring complex inter-task relationships. To relax this assumption, a natural idea is to set adaptive gradient similarity objectives in some proper manner. An example is shown in the right of Figure 5(b), where two tasks have a positive gradient similarity of $\cos(\theta) = \phi_{ij}$. While PCGrad ignores such a non-negative case, the current value of $\phi_{ij}$ may still be detrimentally low for more similar tasks such as French versus Spanish. Thus, suppose we have some similarity goal of $\cos(\theta') = \phi_{ij}^T > \phi_{ij}$ (e.g. the "normal" cosine similarity between these two tasks); we alter both the magnitude and direction of $g_i$ such that the resulting gradients match this similarity objective. In particular, we replace $g_i$ with a vector that satisfies this condition in the vector space spanned by $g_i$ and $g_j$, i.e. $a_1 g_i + a_2 g_j$. Since there are infinitely many valid combinations of $a_1$ and $a_2$, for simplicity we fix $a_1 = 1$ and, applying the Law of Sines in the plane of $g_i$ and $g_j$, solve for the value of $a_2$, deriving the new gradient for the $i$-th task as (see Appendix E for the derivation, practical implementation, and theoretical analysis):

$$g_i' = g_i + \frac{\|g_i\| \left( \phi_{ij}^T \sqrt{1 - \phi_{ij}^2} - \phi_{ij} \sqrt{1 - (\phi_{ij}^T)^2} \right)}{\|g_j\| \sqrt{1 - (\phi_{ij}^T)^2}}\, g_j \quad (2)$$

Figure 5: Comparing PCGrad (left) with GradVac (right) in two cases. (a): For negative similarity, both methods are effective, but GradVac can utilize adaptive objectives between different tasks. (b): For positive similarity, only GradVac is active while PCGrad stays "idle".

This formulation allows us to use an arbitrary gradient similarity objective $\phi_{ij}^T \in [-1, 1]$. The remaining question is how to set this objective properly. In the above analysis, we have seen that gradient interactions change drastically across tasks, layers, and training steps. To incorporate these three factors, we exploit an exponential moving average (EMA) variable for tasks $i, j$ and parameter group $k$ (e.g. the $k$-th layer) as:

$$\hat{\phi}_{ijk}^{(t)} = (1 - \beta)\, \hat{\phi}_{ijk}^{(t-1)} + \beta\, \phi_{ijk}^{(t)} \quad (3)$$

where $\phi_{ijk}^{(t)}$ is the computed gradient similarity at training step $t$, $\beta$ is a hyper-parameter, and $\hat{\phi}_{ijk}^{(0)} = 0$. The full method is outlined in Algorithm 1 (Appendix E). Notice that gradient surgery is a special case of our proposed method with $\phi_{ij}^T = 0$. As shown in the right of Figure 5(a) and 5(b), our method alters gradients more preemptively under both positive and negative cases, taking more proactive measures in updating the gradients (Figure 3). We therefore refer to it as Gradient Vaccine (GradVac).
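A minimal sketch combining the GradVac update of Eq. (2) with the EMA objective of Eq. (3) for a single pair of task gradients is given below. It is illustrative rather than the released implementation: the per-(task-pair, parameter-group) EMA bookkeeping is collapsed into one scalar argument, and all names are hypothetical.

```python
import numpy as np

def gradvac_step(g_i, g_j, phi_ema, beta=1e-2, eps=1e-12):
    """One GradVac update of g_i w.r.t. g_j (Eqs. 2-3).

    phi_ema plays the role of the EMA similarity objective phi^T_ij for
    this task pair / parameter group; the refreshed value is returned."""
    # Current cosine similarity phi_ij between the two task gradients.
    phi = np.dot(g_i, g_j) / (np.linalg.norm(g_i) * np.linalg.norm(g_j) + eps)
    # Eq. (3): refresh the EMA similarity objective.
    phi_t = (1.0 - beta) * phi_ema + beta * phi
    # Intervene only when the current similarity falls below the objective.
    if phi < phi_t:
        # Eq. (2): coefficient a2 for the correction along g_j.
        num = np.linalg.norm(g_i) * (phi_t * np.sqrt(1.0 - phi ** 2)
                                     - phi * np.sqrt(1.0 - phi_t ** 2))
        den = np.linalg.norm(g_j) * np.sqrt(1.0 - phi_t ** 2) + eps
        g_i = g_i + (num / den) * g_j
    return g_i, phi_t
```

As a sanity check, freezing phi_ema at 0 (and skipping the EMA refresh) makes the correction reduce to $g_i - (g_i \cdot g_j / \|g_j\|^2)\, g_j$ whenever $\phi_{ij} < 0$, recovering PCGrad as the special case noted above.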
Notice that the resulting models have the same number of parameters for deployment as typical MNMT models and thus enjoy the same memory-efficiency benefits, while the proposed method has the same order of computational complexity as the original multi-task training paradigm.

4 EXPERIMENTS

We compare multi-task optimization methods with the monolithic approach in multilingual settings, and examine the effectiveness of our proposed method on multilingual NMT and multilingual language models.

4.1 GENERAL SETUP

We choose three popular scalable gradient-based multi-task optimization methods as our baselines: GradNorm (Chen et al., 2018b), MGDA (Sener & Koltun, 2018), and PCGrad (Yu et al., 2020). For fair comparison, language-specific gradients are computed for samples in each batch. The sampling temperature is also fixed at T=5 unless otherwise stated. For the baselines, we mainly follow the default settings and training procedures for hyper-parameter selection as explained in their respective papers. For our method, to study how sensitive GradVac is to the distribution of tasks, we additionally examine a variant that allows us to control which languages are considered for GradVac. Specifically, we search the following hyper-parameters on the small-scale WMT dataset and transfer them to our large-scale dataset: tasks considered for GradVac {HRL only, LRL only, all tasks}, parameter granularity {whole model, enc_dec, all layer, all matrix}, EMA decay rate {1e-1, 1e-2, 1e-3}. We find {LRL only, all layer, 1e-2} to work generally well and use these in the following experiments (see Appendix F for more details and results).

4.2 RESULTS AND ANALYSIS

WMT Machine Translation. We first conduct a comprehensive analysis of our method and other baselines on a small-scale WMT task. We consider two high-resource languages (WMT14 en-fr, WMT19 en-cs) and two low-resource languages (WMT14 en-hi, WMT18 en-tr), and train two models, one for each direction to and from English. Results are shown in Table 1.
Existing baselines failto address this problem properly, as they obtain marginal or even no improvement (row 3, 4 and 5).In particular, we look closer at the optimization process for methods that utilize gradient signals toreweight tasks, i.e. GradNorm and MGDA, and find that their computed weights are less meaningfuland noisy. For example, MGDA assigns larger weight for en-fr in the en-xx model, that results inworse performance on other languages. This is mainly because these methods are designed underthe assumption that all tasks have balanced data. Our results show that simply reweighting taskweights without considering the loss geometry has limited efficacy.By contrast, our method significantly outperforms all baselines. Compared to the naive joint train-ing approach, the proposed method improves over not only the average BLEU score but also theindividual performance on all tasks. We notice that the performance gain on En!Anyis larger com-pared to Any!En. This is in line with our prior observation that gradients are less similar and moreconflicting in En!Anydirections.We next conduct extensive ablation studies for deeper analysis: (1) GradVac applied to all layersvs. whole model (row 8 vs. 9): the all layer variant outperforms whole model, showing that settingfine-grained parameter objectives is important. (2) Constant objective vs. EMA (row 7 vs. 9): wealso examine a variant of GradVac optimized using a constant gradient objective for all tasks (e.g.Tij= 0:5;8i;j) and observe performance drop compared to using EMA variables. This highlightsthe importance of setting task-aware objectives through task relatedness. (3) GradVac vs. PCGrad(row 8-9 vs. 5-6): the two GradVac variants outperform their PCGrad counterparts, validating theeffectiveness of setting preemptive gradient similarity objectives.Any!En High Med Low AllT=1 28.56 28.51 19.57 24.95T=5 28.16 28.42 24.32 26.71GradVac 28.99 28.94 24.58 27.21En!Any High Med Low AllT=1 22.62 21.53 12.41 18.18T=5 22.04 21.43 13.07 18.25GradVac 24.20 21.83 13.30 19.08Table 2: Average BLEU scores of 25 languagepairs on our massively multilingual dataset.Massively Multilingual Machine Transla-tion. We then scale up our experiments andtransfer the best setting found on WMT to thesame massive dataset used in Section 2. Wevisualize model performance in Figure 6 andaverage BLEU scores are shown in Table 2.We additionally compare with models trainedwith uniform language pairs sampling strategy(T=1) and find that our method outperformsboth multilingual models. Most notably, whileuniform sampling favor high-resource languagepairs more than low-resource ones, GradVacis able to improve both consistently across alltasks. We observe larger performance gain onhigh-resource languages, illustrating that addressing gradient conflicts can mitigate negative interfer-ence on these head language pairs. On the other hand, our model still perform worse on resourcefullanguages compared to bilingual baselines, most likely limited by model capacity.7Published as a conference paper at ICLR 2021(a) X-En (b) En-XFigure 6: Comparing multilingual models with bilingual baselines on our dataset. 
Language pairsare listed in the order of training data sizes (high-resource languages on the left).de en es hi jv kk mr my sw te tl yo avgmBERT 83.2 77.9 87.5 82.2 77.6 87.6 82.0 75.8 87.7 78.9 83.8 90.7 82.9+ GradNorm 83.5 77.4 87.2 82.7 78.4 87.9 81.2 73.4 85.2 78.7 83.6 91.5 82.6+ MGDA 82.1 74.2 85.6 81.5 77.8 87.8 81.9 74.3 86.5 78.2 87.5 91.7 82.4+ PCGrad 83.7 78.6 88.2 81.8 79.6 87.6 81.8 74.2 85.9 78.5 85.6 92.2 83.1+ GradVac 83.9 79.4 88.2 81.8 80.5 87.4 82.1 73.9 87.8 79.3 87.8 93.0 83.8Table 3: F1 on the NER tasks of the XTREME benchmark.XTREME Benchmark. We additionally apply our method to multilingual language models andevaluate on the XTREME benchmark (Hu et al., 2020). We choose tasks where training data areavailable for all languages, and finetune a pretrained multilingual BERT model (mBERT) (Devlinet al., 2018) on these languages jointly (see Appendix G for experiment details and additional re-sults). As shown in Table 3, our method consistently outperforms naive joint finetuning and othermulti-task baselines. This demonstrates the practicality of our approach for general multilingualtasks.5 R ELATED WORKMultilingual models train multiple languages jointly (Firat et al., 2016; Devlin et al., 2018; Lample& Conneau, 2019; Conneau et al., 2019; Johnson et al., 2017; Aharoni et al., 2019; Arivazhaganet al., 2019). Follow-up work study the cross-lingual ability of these models and what contributesto it (Pires et al., 2019; Wu & Dredze, 2019; Wu et al., 2019; Artetxe et al., 2019; Kudugunta et al.,2019; Karthikeyan et al., 2020), the limitation of such training paradigm (Arivazhagan et al., 2019;Wang et al., 2020b), and how to further improve it by utilizing post-hoc alignment (Wang et al.,2020c; Cao et al., 2020), data balancing (Jean et al., 2019; Wang et al., 2020a), or calibrated trainingsignal (Mulcaire et al., 2019; Huang et al., 2019a). In contrast to these studies, we directly investigatelanguage interactions across training progress using loss geometry and propose a language-awaremethod to improve the optimization procedure.On the other hand, multilingual models can be treated as multi-task learning methods (Ruder, 2017;Zamir et al., 2018). Prior work have studied the optimization challenges of multi-task training (Hes-sel et al., 2019; Schaul et al., 2019), while others suggest to improve training quality through learningtask relatedness (Zhang & Yeung, 2012), routing task-specifc paths (Rusu et al., 2016; Rosenbaumet al., 2019), altering gradients directly (Kendall et al., 2018; Chen et al., 2018b; Du et al., 2018;Yu et al., 2020), or searching pareto solutions (Sener & Koltun, 2018; Lin et al., 2019). However,while these methods are often evaluated on balanced task distributions, multilingual datasets areoften unbalanced and noisy. As prior work have shown training with unbalanced tasks can be proneto negative interference (Ge et al., 2014; Wang & Carbonell, 2018), we study how to mitigate it inlarge models trained with highly unbalanced and massive-scale dataset.8Published as a conference paper at ICLR 20216 C ONCLUSIONIn this paper, we systematically study loss geometry through the lens of gradient similarity for mul-tilingual modeling, and propose a novel approach named GradVac for improvement based on ourfindings. 
Leveraging the linguistic proximity structure of multilingual tasks, we validate the as-sumption that more similar loss geometries improve multi-task optimization while gradient conflictscan hurt model performance, and demonstrate the effectiveness of more geometrically consistentupdates aligned with task closeness. We analyze the behavior of the proposed approach on mas-sive multilingual tasks with superior performance, and we believe that our approach is generic andapplicable beyond multilingual settings.ACKNOWLEDGMENTSWe want to thank Hieu Pham for tireless help to the authors on different stages of this project. Wealso would like to thank Zihang Dai, Xinyi Wang, Zhiyu Wang, Jiateng Xie, Yiheng Zhou, RuochenXu, Adams Wei Yu, Biao Zhang, Isaac Caswell, Sneha Kudugunta, Zhe Zhao, Christopher Fifty,Xavier Garcia, Ye Zhang, Macduff Hughes, Yonghui Wu, Samy Bengio and the Google Brain teamfor insightful discussions and support to the work. This material is based upon work supported inpart by the National Science Foundation under Grants No. IIS2007960 and IIS2040926, and by theGoogle faculty research award.<|im_end|> <|im_start|>assistant ### Review Title This work takes aim at an interesting problem of optimizing multilingual neural machine translation (MNMT) model. Although MNMT is inherently a multi-task modeling approach, less emphasis have been given on achieving an optimal performance on all of the tasks involved. The proposed approach (GradVac) takes into account similarity between tasks and demonstrates better performance can be achieved by focusing on parameter updates that are geometrically aligned. ### Review Text Summary Taking multilingual NMT (MNMT) into account, this work, investigates better model optimization alternative, that is in part can be attributed as a multi-task optimization problem. MNMT's are quite beneficial from different perspectives (improving low-resource languages, efficiency, etc). However, their inherently multi-task nature requires more focus on how to gist out the best possible learning for each of the languages pairs. With a potential impact on the optimization of other multi-task models, this work asks how model the similarity between model gradients is crucial in multi-task settings, and how to best optimize MNMT models focusing on the typologically similarity of languages. By analyzing the geometry of the NMT model objective function, authors indicate that computing similarity along gradient provides information on the relationship between languages and the overall model performance. Authors argue the analysis of the gradient helps to identify the point of limitation in multi-task learning, which the work aims to address, by focusing the parameter updates for tasks that are similar or close in terms of geometrical alignment (also known as Gradient Vaccine /GradVac/). Experimental results are provided from multilingual tasks involving 10^9 magnitude model training examples and several languages pairs. Mathematical proof and theoretical details of the proposed optimization approach GradVac are detailed in comparison with previous approach (such as Gradient Surgery). Experimental results shows the proposed GradVac to contribute for the improvement of model performance. These findings underline the importance of taking into account language proximity for a better optimization approach and model improvements in general. 
Pros / Reason for the Score

After my assessment of the proposed approach and the visible advantage of GradVac, I am voting for an accept score. Below are the pros and cons of this work. I hope the authors will address the cons and the questions raised during the rebuttal period.

- This work raises an important question about optimization in multi-task models, particularly for multilingual NMT models, where dedicated optimization approaches are quite rare and recent progress in MNMT mainly focuses on improving performance. The findings in this work can provide further insight into how to best optimize an MNMT model and potentially set a new standard training mechanism for future work in MNMT.
- From the experimental results, it is particularly interesting to see how the proposed approach (GradVac) improves the high-resource languages (on the left side of Figure 6(b)). In massive MNMT models, while there is (naturally) a large gain for low-resource cases, the high-resource pairs tend to degrade. This work shows an interesting mechanism to address performance degradation for certain pairs in an MNMT model and to maintain an improvement trend for all of the language pairs involved.

Cons and Questions

- Regarding language similarity, this work focuses on typological similarity (which deals with the characteristics of language structure). Did the authors consider genetic similarity, or any other similarity measure between languages? Why is typological similarity the primary/only choice for this work?
- As in Yu et al. 2020, where the PCGrad approach is used to project the gradient of task i onto the normal plane of task j: was there any motivation for not adapting or assessing this approach in MNMT first, and do the authors have any comment on why it lags behind GradVac? Perhaps this is related to the assumption that PCGrad does not handle positive gradient similarities, which is the case in this work?
- One of the advantages of MNMT models is efficiency (as also mentioned in this work); however, the paper does not discuss the computational overhead that applying GradVac introduces during training or even inference. Can the authors provide details on this?
- Page 1 mentions that one of the motivations of the work is to investigate ways to optimize the single language-agnostic objective for training an MNMT model that leverages training data of multiple pairs. If this work aims at optimizing based on task relatedness, did it consider, for instance, training MNMT models that are language-family specific and seeing how that relatedness correlates with the approach in this work and the baseline MNMT models (such as Any->En or En->Any)?
- What is the impact of training only two MNMT models, Any->En and En->Any, rather than Any<>Any? Wouldn't the latter make more sense from the point of view of having multiple tasks (in terms of observing different language characteristics both on the encoder and the decoder side of the model)? Similarly, for the Any<>Any model that is employed (shown in Figure 2), gradient similarities correlate positively with model quality. In other words, the authors clearly demonstrate that gradient similarity in the En->Any direction is quite low with respect to Any->En; in my understanding, using an Any<>Any model throughout the experiments would make more sense by constructing a truly multi-task MNMT model, where we could also see the proposed approach's effectiveness.
- Unless I am missing it, is it correct that the results in Figure 6 do not compare the proposed optimization approach with the other optimization baselines, at least with PCGrad?

Comments

- Ideally, I would evaluate a multilingual model that is not English-centric in order to construct a truly multilingual model. I understand the experimental design here, specifically that this (En<>Any) is the data generally available, or available in-house for the authors. Yet, with recent progress in multilingual NMT and zero-shot NMT approaches, it is now becoming realistic to incrementally augment data for the non-English pairs (monolingual data of the Any languages can be leveraged too), resulting in more pairs. Such an Any-Any setting could further reflect how beneficial the optimization is.
- Please re-arrange the figures if possible; Figure 5 is discussed while Figures 3 and 4 appear beforehand.

### Review Rating 8: Top 50% of accepted papers, clear accept ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
H1edEyBKDS
ICLR.cc/2020/Conference
2020
Plug and Play Language Models: A Simple Approach to Controlled Text Generation
["Sumanth Dathathri", "Andrea Madotto", "Janice Lan", "Jane Hung", "Eric Frank", "Piero Molino", "Jason Yosinski", "Rosanne Liu"]
Large transformer-based language models (LMs) trained on huge text corpora have shown unparalleled generation capabilities. However, controlling attributes of the generated language (e.g. switching topic or sentiment) is difficult without modifying the model architecture or fine-tuning on attribute-specific data and entailing the significant cost of retraining. We propose a simple alternative: the Plug and Play Language Model (PPLM) for controllable language generation, which combines a pretrained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM. In the canonical scenario we present, the attribute models are simple classifiers consisting of a user-specified bag of words or a single learned layer with 100,000 times fewer parameters than the LM. Sampling entails a forward and backward pass in which gradients from the attribute model push the LM's hidden activations and thus guide the generation. Model samples demonstrate control over a range of topics and sentiment styles, and extensive automated and human annotated evaluations show attribute alignment and fluency. PPLMs are flexible in that any combination of differentiable attribute models may be used to steer text generation, which will allow for diverse and creative applications beyond the examples given in this paper.
["controlled text generation", "generative models", "conditional generative models", "language modeling", "transformer"]
ABSTRACTLarge transformer-based language models (LMs) trained on huge text corporahave shown unparalleled generation capabilities. However, controlling attributesof the generated language (e.g. switching topic or sentiment) is difficult withoutmodifying the model architecture or fine-tuning on attribute-specific data and en-tailing the significant cost of retraining. We propose a simple alternative: the Plugand Play Language Model (PPLM) for controllable language generation, whichcombines a pretrained LM with one or more simple attribute classifiers that guidetext generation without any further training of the LM. In the canonical scenariowe present, the attribute models are simple classifiers consisting of a user-specifiedbag of words or a single learned layer with 100,000 times fewer parameters thanthe LM. Sampling entails a forward and backward pass in which gradients fromthe attribute model push the LM’s hidden activations and thus guide the gener-ation. Model samples demonstrate control over a range of topics and sentimentstyles, and extensive automated and human annotated evaluations show attributealignment and fluency. PPLMs are flexible in that any combination of differen-tiable attribute models may be used to steer text generation, which will allow fordiverse and creative applications beyond the examples given in this paper.1 I NTRODUCTIONThe Transformer architecture (Vaswani et al., 2017) has enabled large-scale language models (LMs)trained on a huge amount of data (Radford et al., 2019; Dai et al., 2019b; Radford et al., 2018b) togreatly improve the state-of-the-art on natural language processing tasks. These models are used toextract contextualized word embeddings for transfer learning purposes (Devlin et al., 2019) and asnatural language generators. The latter can leverage large amounts of unannotated data and a simplelog-likelihood training objective. However, once such models are trained, controlling attributes ofgenerated text becomes difficult without modifying the model architecture to allow for extra inputattributes or fine-tuning with attribute-specific data (Keskar et al., 2019; Ziegler et al., 2019).Work done during internship at Uber AIyCo-senior authors .Summary of contributions: SD, RL & JY conceptualized PPLMs and led the manuscript writing. SD led theproject, implemented the PPLM, set up and ran all modeling experiments, engineered how to obtain workablegradients via the weighted embedding approach, and made the model work. AM helped with preparing datasetsfor discriminator training, automated evaluation, running experiments, and writing the manuscript. SD, RL &AM ran the external baselines. RL & JL built and oversaw the human evaluation pipeline and computed thestatistics. JH ran the story generation with skeleton prefixes. EF assisted with detoxification experiments. PMled efforts to migrate to the new pytorch transformer, helped with code release. JY helped with the annotationpipeline, finding bugs, navigating model and experimental directions, engineering workable gradients, andposing the model mathematically. RL implemented preliminary experiments and multi-attribute control, andcleaned and coordinated release of the code. RL & JY oversaw the project.1Published as a conference paper at ICLR 2020Table 1: The PPLM employs a pre-trained language model (LM) without any changes to the modelparameters and can generate text with controlled attributes such as topic and sentiment. 
We demonstrate control with two tiny and easy-to-construct attribute models: a bag of words (BoW) related to a topic and a linear discriminator trained on top of LM latent representations to control sentiment. The underlined prefix is what the LM is conditioned on to generate a passage of text (e.g. "The potato"). The controlled attributes are colored and bracketed (e.g. [Science]), and words in the BoW that are directly optimized for are highlighted brightly (e.g. research). The softer highlights correspond to words related to the attribute, but not directly optimized for during the control process (e.g. health).

[–] The potato and cauliflower are both in season to make combo breads, mounds, or pads. For an added challenge, try some garlic mashed potatoes.

[Negative] The potato is a pretty bad idea. It can make you fat, it can cause you to have a terrible immune system, and it can even kill you. . .

[Positive] The potato chip recipe you asked for! We love making these, and I've been doing so for years. I've always had a hard time keeping a recipe secret. I think it's the way our kids love to eat them – so many little ones.

[Science] The potato was once thought to have no health problems and has been promoted as a nutritious food source since the mid-1800s, but recent reports indicate that it has many harmful health issues. In fact, researchers from Johns Hopkins University. . .

[Politics] [Positive] To conclude this series of articles, I will present three of the most popular and influential works on this topic. The first article deals with the role of women's political participation in building a political system that is representative of the will of the people.

[Politics] [Negative] To conclude, the most significant and lasting damage from the economic crisis in 2008 was that many governments, including those in the political center, lost power for the first time in modern history.

Controllable generation entails modeling p(x|a), where a is some desired controllable attribute(s) and x the generated sample. However, generative models only learn p(x). In computer vision, Plug & Play Generative Networks (PPGN) from Nguyen et al.
(2017) developed a mechanism forgenerating images with different attributes by plugging a discriminator (attribute model) p(ajx)together with a base generative model p(x)and sampling from the resulting p(xja)/p(ajx)p(x),effectively creating a conditional generative model on the fly from any supplied attribute model. Ina similar manner, we propose the Plug and Play Language Model (PPLM) for conditional languagegeneration that combines one or more simple attribute models p(ajx)—either in the form of a bag-of-words (BoW) or single layer classifiers—with a pre-trained, unconditional language model p(x).We sample from the resulting combined model by following gradients in the latent representationspace in a manner inspired by the approximate Metropolis-adjusted Langevin (MALA) (Robertset al., 1996; Roberts & Rosenthal, 1998) sampler deployed in Nguyen et al. (2017).Optimization is performed ex post facto in the activation space, therefore no re-training or fine-tuning is needed . Control is fine-grained, with a strength parameter determining how strong theattribute influence should be; a strength of 0fully recovers the original model p(x). This designallows vast flexibility: users can combine a state-of-the-art generative model, which may be largeand difficult to train, with any number of attribute controllers. Attribute models may be easier to trainor untrained (in the case of BoW models), and multiple controllers may be combined flexibly duringinference. In this paper, we demonstrate the PPLM approach using a GPT-2 345M model (Radfordet al., 2019) as the general-purpose LM p(x), but the method applies in any representation spacefrom any transformer-based text generator and allows combination with any attribute model p(ajx).We demonstrate controlled generation with a number of attribute controllers, assembled and com-bined during generation, each with a different strength, acting as a set of “control knobs” that tunegeneration towards the desired attribute (see examples in Table 1). Code for the experiments isavailable at: https://github.com/uber-research/PPLM . Our key contributions are:• We introduce the Plug and Play LM for controlled language generation, discuss its relationto existing work, and how sampling from a PPLM works (Sections 2 and 3).• We demonstrate controlling of text generation on a range of attributes, including 7 topicseach defined using a bag of words, and 1 simple discriminator on sentiments. We quantifyeffectiveness using both automated evaluation (separately trained perplexity and sentiment2Published as a conference paper at ICLR 2020models) as well as human evaluation (for attribute relevance and fluency). All evaluationspoint toward the ability of PPLMs to generate attribute controlled, fluent text (Section 4).• We compare PPLM with CTRL (Keskar et al., 2019) and GPT-2 finetuned for positivty(Ziegler et al., 2019). Our method, without any LM training, is on par and often outper-forms the baselines on attribute relevance and fluency (Section 4.2, and Section 4.3).• We show that the PPLM approach can be used to detoxify instances where generationof toxic content is likely by following the negative gradient of a model trained to detecttoxicity (Section 4.4). 
We also show how PPLM can be used for structurally constrainedstory writing (Section 4.5).2 R ELATED WORKControlled generation Current methods for controlled text generation involve either fine-tuningexisting models with Reinforcement Learning (RL) (Ziegler et al., 2019), training Generative Ad-versarial Networks (Yu et al., 2017), or training conditional generative models (Kikuchi et al., 2016;Ficler & Goldberg, 2017). Different from our approach, these methodologies are not plug andplay, since the entire model needs to be separately fine-tuned for each specific attribute. Keskaret al. (2019) train a large language model with over 50 different control codes. The results are highquality because they train exactly to maximize p(xja), but this comes at the expense of fixing controlcodes upfront and of training a very large model (1.6B parameters). Our method does not requireretraining any conditional generative model, and both the language model and the conditional modelcan be flexibly assembled. Table 2 gives a comparison of recent approaches to language modelingtuned for specific attributes. In another interesting but tangential piece of work, Subramani et al.(2019) recently showed that a pre-trained language model can be steered to recover arbitrary sen-tences. In earlier works Gu et al. (2016; 2017); Chen et al. (2018) explored the idea of using a smallneural network to steer an LM.Noisy Channel Modeling Yu et al. (2016), and more recently Yu et al. (2019); Yee et al. (2019);Ng et al. (2019), leveraged the Shannon Noisy Channel Theory (Shannon, 1948) for improvingsequence-to-sequence modeling. Their approach translates a source language sentence yinto a targetlanguage sentence xby first sampling from a forward model proposal distribution pforward (xjy)andthen reranking samples based on probabilities given by pbackward (xjy)/p(x)p(yjx). PPLM scoressamples using the same basic equation, but as we have no forward or proposal model pforward (xja),we rely on the latent space updates, similar to Nguyen et al. (2017). As a baseline, we considerusingp(x)as a “forward model” and then reranking, which we will see works moderately well insome scenarios and poorly in others (see Tables 4 and 6).Weighted decoding Holtzman et al. (2018); Ghazvininejad et al. (2017) consider controlled lan-guage generation – the former with discriminators, and the latter with a bag of words – where thedecoding procedure is modified to consider the scoring function used for decoding. See et al. (2019)note that control with weighted decoding (WD) is difficult and often leads to sacrificing fluency andcoherence. Further, Ghazvininejad et al. (2017) strongly relies on sampling from a set of keywordson a specific topic and it does not allow to bias generation towards a topic in a manner that does notnecessary include a set of keywords. Similarly, Baheti et al. (2018) proposed a decoding strategyfor generating interesting responses in dialogue systems, using bags of words and word embed-dings. Sophisticated sampling methods (Metropolis et al., 1953) can be used to constrain the modelgeneration to certain keywords and topics. We evaluate WD as a baseline.Text Style Transfer Outside of language modeling, the text style transfer studies a related task.Shen et al. (2017); Hu et al. (2017) train variational auto-encoders for style transfer that rely onlearning disentangled latent representations for style and content. Li et al. 
(2018) demonstrate the efficacy of a simple approach based on replacing attribute-related n-grams with n-grams corresponding to the desired attribute based on a conditional generative model. A key difference between the above and our approach is that we use an offline discriminator and perform optimization based on this discriminator, which as suggested by Elazar & Goldberg (2018) may outperform adversarial training approaches. More recently, Lample et al. (2019) adapt an approach from unsupervised language translation to style transfer, where a denoised auto-encoder is trained with an objective consisting of a weighted combination of a reconstruction loss and a back-translation loss. While the above approaches have shown impressive success on style transfer tasks, the main focus is not controlled language generation, and further, the methods are not plug and play.

Table 2: Comparison of the different models and distributions. All models in this table are useful in different scenarios. The particular advantage of PPLM is that very small, custom attribute models, p(a|x), may be combined with powerful, general pre-trained language models, p(x), to create cheap but still powerful conditional generative models, p(x|a).

Model type | Form of model | Samples | Example models and number of trainable params
Language Model | p(x) | Uncond. | GPT-2 medium: 345M (Radford et al., 2019)
Fine-tuned Language Model | p(x) | Uncond. | Fine-tuned GPT-2 medium: 345M (Ziegler et al., 2019)
Conditional Language Model | p(x|a) | Cond. | CTRL: 1.6B (Keskar et al., 2019)
Plug and Play Language Model (PPLM) | p(x|a) ∝ p(x) p(a|x) | Cond. | PPLM-BoW: 0 (curated word list); PPLM-Discrim: 1K/attribute (not counting pretrained p(x))

3 PLUG AND PLAY LANGUAGE MODELS

3.1 LANGUAGE MODELING WITH TRANSFORMERS

Given a sequence of tokens $X = \{x_0, \cdots, x_n\}$, LMs are trained to compute the unconditional probability of the sequence $p(X)$. This probability can be rewritten as a product of conditional probabilities by recursively applying the chain rule (Manning et al., 1999; Bengio et al., 2003) as:

$$p(X) = \prod_{i=1}^{n} p(x_i \mid x_0, \cdots, x_{i-1}) \quad (1)$$

In this paper, we use a transformer (Vaswani et al., 2017) to model the distribution of natural language. To present our approach clearly, we first briefly summarize the transformer using recurrent notation. Let us define the history matrix $H_t$ to consist of the key-value pairs from the past, i.e. $H_t = [(K_t^{(1)}, V_t^{(1)}), \cdots, (K_t^{(l)}, V_t^{(l)})]$, where $(K_t^{(i)}, V_t^{(i)})$ corresponds to the key-value pairs from the $i$-th layer generated at all time-steps from 0 to $t$. Efficient implementations of the transformer (Wolf et al., 2019) use the cached $H_t$ to generate $x_{t+1}$, given $x_t$. This recurrent interpretation of a transformer can be summarized as:

$$o_{t+1}, H_{t+1} = \text{LM}(x_t, H_t), \quad (2)$$

and then $x_{t+1}$ is sampled as $x_{t+1} \sim p_{t+1} = \text{Softmax}(W o_{t+1})$, where $W$ is a linear transformation that maps the logit vector $o_{t+1}$ to a vector of vocabulary size. This allows for efficient language generation without repeated forward passes corresponding to the prior conditioning text $x_0, \ldots, x_{t-1}$.

3.2 STEERING GENERATION: ASCENDING log p(a|x)

In order to control the output of the language model, at every generation step $t$, we shift the history $H_t$ in the direction of the sum of two gradients: one toward higher log-likelihood (LL) of the attribute $a$ under the conditional attribute model $p(a|x)$ and one toward higher LL of the unmodified language model $p(x)$. Combining these factors with a variable multiplier provides us with a controllable "knob" to guide generation in a given direction with a specified strength.
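As context for the steering procedure, the sketch below walks through the recurrent view of Eq. (2): only the newest token $x_t$ is fed at each step, while the cached key-value history $H_t$ carries the past. It uses the Hugging Face transformers interface as a stand-in for the efficient implementation cited above (Wolf et al., 2019); the exact class and argument names are an assumption about library versions, not taken from the paper.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

tokens = tokenizer("The potato", return_tensors="pt").input_ids
x_t, past = tokens, None  # H_0: empty key-value history

with torch.no_grad():
    for _ in range(20):
        out = model(input_ids=x_t, past_key_values=past, use_cache=True)
        past = out.past_key_values              # H_{t+1}: updated cache
        logits = out.logits[:, -1, :]           # W o_{t+1}
        probs = torch.softmax(logits, dim=-1)   # p_{t+1}
        x_t = torch.multinomial(probs, 1)       # x_{t+1} ~ p_{t+1}
        tokens = torch.cat([tokens, x_t], dim=1)

print(tokenizer.decode(tokens[0]))
```

PPLM keeps this loop intact and intervenes only on the cached history, which is what makes the approach plug and play.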
3.2 STEERING GENERATION: ASCENDING $\log p(a \mid x)$

In order to control the output of the language model, at every generation step $t$, we shift the history $H_t$ in the direction of the sum of two gradients: one toward higher log-likelihood (LL) of the attribute $a$ under the conditional attribute model $p(a \mid x)$, and one toward higher LL of the unmodified language model $p(x)$. Combining these factors with a variable multiplier provides us with a controllable "knob" to guide generation in a given direction with a specified strength.

The updates are restricted to $H_t$ and not the other model activations because future predictions depend on the past only via $H_t$ (note that $H_t$ is composed of all transformer key and value pairs generated up to time $t$). Taking steps in $H_t$ space leads to gradual changes to model activations – which may be thought of as gradual reinterpretations of the past – that guide future generation in the desired direction.

Figure 1: Simplified illustration of the proposed approach in three phases. In Step 1, a forward pass is performed through the language model to compute the likelihood of a desired attribute using an attribute model that predicts $p(a \mid x)$. In Step 2, a backward pass updates the internal latent representations of the LM, using gradients from the attribute model, to increase the likelihood of the passage having the desired attribute. In Step 3, a new distribution over the vocabulary ($\tilde{p}_{t+1}$) is generated from the updated latents ($\tilde{H}_t$) and the current token $x_t$. The next token is then sampled from the updated distribution. This process of updating the latents is repeated at each time-step, leading to a gradual transition towards the desired attribute. For computational efficiency, one may choose to modify only the latents within some window of the recent past, depicted in the figure as a dotted-red region.

Let $\Delta H_t$ be the update to $H_t$, such that generation with $(H_t + \Delta H_t)$ shifts the distribution of the generated text so that it is more likely to possess the desired attribute. $\Delta H_t$ is initialized at zero and updated with gradients from an attribute model that measures the extent to which the generated text possesses the desired attribute (e.g. positivity). We rewrite the attribute model $p(a \mid x)$ as $p(a \mid H_t + \Delta H_t)$ and then make gradient-based updates to $\Delta H_t$ as follows:

$\Delta H_t \leftarrow \Delta H_t + \alpha \dfrac{\nabla_{\Delta H_t} \log p(a \mid H_t + \Delta H_t)}{\left\lVert \nabla_{\Delta H_t} \log p(a \mid H_t + \Delta H_t) \right\rVert^{\gamma}}$   (3)

where $\alpha$ is the step size and $\gamma$ is the scaling coefficient for the normalization term.[1] This update step can be repeated $m$ times; in practice we use 3 to 10. Subsequently, a forward pass through the LM with the updated key-value pairs is performed to obtain the updated logits $\tilde{o}_{t+1}$ as $\tilde{o}_{t+1}, H_{t+1} = \text{LM}(x_t, \tilde{H}_t)$, where $\tilde{H}_t = H_t + \Delta H_t$. The perturbed $\tilde{o}_{t+1}$ is then used to generate a new distribution $\tilde{p}_{t+1}$ as in Equation 2.

[1] One normalization term is computed for each layer of the transformer.

A minimal sketch of this perturb-and-recompute step is given below.
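The following is a minimal, hypothetical sketch of the update in Equation 3, assuming the HuggingFace GPT-2 interface from the earlier sketch and the legacy tuple format for the key-value cache. `attr_log_prob_fn` is an assumed callable mapping next-token log-probabilities to a scalar $\log p(a \mid x)$ (concrete instances appear in Sections 4.2 and 4.3); for brevity the gradient is normalized per tensor rather than per layer, and the KL fluency term introduced in Section 3.3 below is folded into the same loss, as in the paper.

```python
import torch

def perturb_past(model, last_token, past, attr_log_prob_fn,
                 alpha=0.02, gamma=1.5, kl_scale=0.01, num_steps=3):
    # Unperturbed next-token distribution, used for the KL fluency term (Sec. 3.3).
    with torch.no_grad():
        unmod = torch.softmax(
            model(last_token, past_key_values=past, use_cache=True)
            .logits[:, -1, :], dim=-1)
    # Delta H_t: one zero-initialized tensor per cached key/value tensor (Eq. 3).
    deltas = [[torch.zeros_like(t, requires_grad=True) for t in layer]
              for layer in past]
    for _ in range(num_steps):
        perturbed = tuple(tuple(t + d for t, d in zip(layer, dl))
                          for layer, dl in zip(past, deltas))
        logits = model(last_token, past_key_values=perturbed,
                       use_cache=True).logits[:, -1, :]
        log_probs = torch.log_softmax(logits, dim=-1)
        loss = -attr_log_prob_fn(log_probs)            # ascend log p(a|x)
        probs = log_probs.exp()
        # KL(p_mod || p_unmod), scaled by lambda_KL, added before the gradient step.
        loss = loss + kl_scale * (probs * (log_probs - unmod.log())).sum()
        loss.backward()
        with torch.no_grad():
            for dl in deltas:
                for d in dl:
                    g = -d.grad                        # descent direction on the loss
                    d += alpha * g / (g.norm() ** gamma + 1e-10)
                    d.grad.zero_()
    # Return the perturbed cache H~_t = H_t + Delta H_t, detached from the graph.
    return tuple(tuple((t + d).detach() for t, d in zip(layer, dl))
                 for layer, dl in zip(past, deltas))
```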
3.3 ENSURING FLUENCY: ASCENDING $\log p(x)$

The approach described in the previous section is able to generate text tuned for a particular discriminator, but left unchecked it will quickly result in unrealistic adversarial or fooling examples (Szegedy et al., 2013; Nguyen et al., 2015) as the text moves into low-probability regions. To combat this, we use the unconditional language model in two ways that ensure the fluency is maintained at or near the level of the unconditional language model (here GPT-2).

Kullback–Leibler (KL) Divergence. We update $\Delta H_t$ to minimize the KL divergence between the output distributions of the modified and unmodified language models, in addition to the step above. In practice, this is accomplished by adding the quantities together before taking a gradient, though it can be visualized as two separate steps as in Figure 2. We scale the KL coefficient by a scalar $\lambda_{KL}$, and in practice, setting this hyperparameter to 0.01 works well in general across tasks.

Figure 2: An oversimplified view into why steps that maximize both $\log p(a \mid x)$ and $\log p(x)$ are needed. The sentence under consideration is shown as a black dot, which is first pushed in the direction of maximizing $\log p(a \mid x)$ and then in the direction of maximizing $\log p(x)$. In practice we use a single step and simply add the log probabilities; we take steps in the continuous space of hidden representations $H$ rather than in the discrete $x$ (byte pair) space, and rather than resampling the entire sentence each step, we take one step in $H$ space per byte-pair sample.

Post-norm Geometric Mean Fusion. In addition to minimizing KL divergence, which affects the past via $\Delta H_t$, we perform post-norm fusion similarly to Stahlberg et al. (2018). This does not directly affect $\Delta H_t$; rather, it serves to constantly tie the generated text to the unconditional $p(x)$ LM distribution. We accomplish this by sampling from $x_{t+1} \sim \frac{1}{\beta}\, \tilde{p}_{t+1}^{\,\gamma_{gm}}\, p_{t+1}^{\,1-\gamma_{gm}}$, where $p_{t+1}$ and $\tilde{p}_{t+1}$ are the unmodified and modified output distributions, respectively, and $\beta$ is a normalizing factor such that the result forms a valid distribution. As $\gamma_{gm} \to 1$ this converges to the distribution from the updated LM, and as $\gamma_{gm} \to 0$ it converges to the unconditional LM distribution. We find that in practice values of $\gamma_{gm}$ in the range 0.8–0.95 work well. A minimal code sketch of this fusion-and-sampling step is given below.
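Under the same assumptions as the earlier sketches, the fusion step might look as follows; the top-k filtering (k = 10, as used for all experiments in Section 4) is included for completeness.

```python
import torch

def fused_sample(p_unmod, p_mod, gamma_gm=0.9, k=10):
    # Post-norm geometric mean fusion:
    # x_{t+1} ~ (1/beta) * p_mod^gamma_gm * p_unmod^(1 - gamma_gm).
    fused = p_mod.pow(gamma_gm) * p_unmod.pow(1.0 - gamma_gm)
    fused = fused / fused.sum(dim=-1, keepdim=True)   # renormalize (the 1/beta factor)
    # Top-k filtering before sampling, as in the paper's experiments.
    topk_vals, topk_idx = fused.topk(k, dim=-1)
    return topk_idx.gather(-1, torch.multinomial(topk_vals, num_samples=1))
```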
3.4 SAMPLING AND RANKING

The attribute model $p(a \mid x)$ in PPLM provides two functionalities: first, a score that can be used to rank samples based on the LL of the desired attribute (forward pass only; Step 1, Figure 1), and second, a gradient ascent direction to perform an update in the latent space (Steps 2 & 3; Figure 1). The former can be used to generate $r$ samples and rank them to choose the best one. This can serve as an additional method for attribute control, in addition to sampling with updated latents. Further, to avoid the problem of repetitive, low-quality text (Holtzman et al., 2018), we compute the mean over the Dist-1, Dist-2 and Dist-3 scores (for the generated passage), which is an indicator of repetitiveness (Li et al., 2015), and then discard samples with a mean score below a threshold $\tau$. A sketch of the full controlled-generation loop, combining the perturbation, fusion, and sampling steps, is given below.
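Putting the pieces together, a minimal end-to-end loop might look as follows, reusing the hypothetical `perturb_past` and `fused_sample` helpers sketched above; ranking over $r$ samples and the Dist-based filter are omitted for brevity, and a multi-token prefix is assumed.

```python
def pplm_generate(model, tokenizer, prefix, attr_log_prob_fn,
                  steps=50, gamma_gm=0.9):
    ids = tokenizer.encode(prefix, return_tensors="pt")
    out = model(ids[:, :-1], use_cache=True)   # build the cache H_t for the prefix
    past, last, generated = out.past_key_values, ids[:, -1:], ids
    for _ in range(steps):
        with torch.no_grad():
            # Unmodified distribution p_{t+1} from the frozen LM.
            p_unmod = torch.softmax(
                model(last, past_key_values=past, use_cache=True)
                .logits[:, -1, :], dim=-1)
        # Shift H_t toward the attribute (Eq. 3) and recompute p~_{t+1}.
        pert = perturb_past(model, last, past, attr_log_prob_fn)
        with torch.no_grad():
            out = model(last, past_key_values=pert, use_cache=True)
        p_mod = torch.softmax(out.logits[:, -1, :], dim=-1)
        # Fuse (Sec. 3.3), sample, and advance the cache from the perturbed latents.
        last = fused_sample(p_unmod, p_mod, gamma_gm)
        past = out.past_key_values
        generated = torch.cat([generated, last], dim=1)
    return tokenizer.decode(generated[0])
```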
4 EXPERIMENTS, RESULTS, AND EVALUATION

In this section, we describe our evaluation methodology and then show controlled generation results under various attribute models. We also show use cases of PPLM in language detoxification and in controlled story telling. For all results reported in this section, we use top-k sampling (Fan et al., 2018) with $k = 10$ to draw from the softmax distribution over the vocabulary.

4.1 EVALUATION METHODS AND ABLATION STUDY

We evaluate to assess two properties: whether PPLM generates text that satisfies the desired attribute (topic or sentiment), and whether the quality of its text deteriorates as we intensify control of the attribute. Note that we can always turn the control knob down to zero to disable control of attributes and recover the fluency of the original model. If desired, a user can tune the knobs at inference until a chosen tradeoff between attribute strength and fluency is reached. We evaluate using both automated methods and human annotators:

Automated Eval. Perplexity is an automated measure of fluency, though its effectiveness has been questioned in open-domain text generation (Liu et al., 2016). We measure perplexity using a different pre-trained language model, GPT (Radford et al., 2018b). The diversity of text in the passages is measured using the number of distinct n-grams (normalized by the length of text) as in Li et al. (2015). We report Dist-1, Dist-2, and Dist-3 scores for the distinct 1-, 2-, and 3-grams (measured across all samples generated for a given attribute control task, e.g. a specific topic for topic control). Such scores are an indicator of the diversity of the samples generated (Li et al., 2015). We also use external sentiment classifiers for sentiment evaluation. A minimal sketch of these automated metrics is given at the end of this subsection.

Human Eval. We consider two types of human annotation: fluency and A/B testing on attribute relevance. Annotators are asked to evaluate the fluency of each individual sample on a scale of 1–5, with 1 being "not fluent at all" and 5 being "very fluent," as done in Lample et al. (2019). In the A/B testing for attribute relevance, we consider all combinatorial pairs of the four variants: B, BR, BC, and BCR (6 combinations). We then ask annotators to rank each pair on the desired attribute (e.g. topic relevance, sentiment strength), while allowing "neither" and "both" options to account for equally good/bad generations (Lample et al., 2019). We obtain annotations from nine external occupational annotators. Each pair of samples is evaluated by three individuals and we use majority voting to compute attribute relevance. For fluency, we use the average of the three annotations. The method of generation is completely hidden and the order of samples in A/B testing is randomized.

Table 3: Comparison of different samples generated by (top row) baseline GPT-2 and (other rows) PPLM with different BoWs corresponding to different topics (e.g. [Military]), all conditioned on a single prefix: "The issue focused". Both directly optimized and related words are highlighted in the original paper, showing how the optimization takes effect.

[–] The issue focused on the way that the city's police officers have reacted in recent years to the deaths of Michael Brown in Ferguson, Mo., Eric Garner in New York City and Sandra Bland in Texas, as well as the shooting of unarmed teen Michael Brown by a white police officer in Ferguson, Mo. ...

[Military] The issue focused on the fact that the government had spent billions on the military and that it could not deploy the troops in time. The prime minister said that the country would take back control of its airspace over Syria in the next 48 hours. \nThe military is investigating why ...

[Space] The issue focused on a series of incidents that occurred in the past few months, which included an alleged attack by Islamic State fighters on a Kurdish checkpoint, the use of drones in combat, space technology research by Russian and American space companies, and more. \nThe world ...

[Science] The issue focused on a single piece: the question "What is the meaning of life?" This question has puzzled many philosophers, who have attempted to solve it by using some of the concepts of quantum mechanics, but they have to solve it by the laws of nature themselves. ...

[Politics] The issue focused on a single section of the legislation. It's unclear whether the committee will vote to extend the law, but the debate could have wider implications. \n"The issue of the law's applicability to the United Kingdom's referendum campaign has been one of ...

[Computers] The issue focused on the role of social media as a catalyst for political and corporate engagement in the digital economy, with the aim of encouraging companies to use the power of social media and the Internet to reach out to their target market. \n ...

Ablation study and baselines. We conduct an ablation study with four variants: B, the baseline, unchanged GPT-2 LM, sampled once; BR, which is B but sampled $r$ times, with the best sample chosen based on the LL ranking and filtering based on Dist score; BC, which updates the latent representations ($\tilde{H}_t$) and then samples once; and lastly BCR, which updates the latent representations ($\tilde{H}_t$) and generates $r$ samples, choosing the best sample based on the LL score (after filtering out samples with low Dist scores). As baseline approaches we consider CTRL (Keskar et al., 2019), a recent language model; GPT2-FT-RL, a GPT-2 LM fine-tuned for human-evaluated positivity with RL (Ziegler et al., 2019); and WD, a weighted decoding baseline in which the B LM's outputs are weighted directly toward maximizing $p(a \mid x)$ (Ghazvininejad et al., 2017); see Section S7 for details, and Section S11 for hyperparameters.
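The automated metrics above are straightforward to compute; below is a minimal sketch at the level of detail given here (the Dist-n definition follows Li et al. (2015); `eval_model` and `eval_tok` are assumed to be a separately pre-trained LM and its tokenizer, standing in for GPT).

```python
import torch

def distinct_n(texts, n):
    # Dist-n: distinct n-grams divided by the total n-gram count, pooled
    # across all samples generated for one control task.
    ngrams, total = set(), 0
    for t in texts:
        toks = t.split()
        grams = [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
        ngrams.update(grams)
        total += len(grams)
    return len(ngrams) / max(total, 1)

def perplexity(text, eval_model, eval_tok):
    # Fluency proxy under a *different* pre-trained LM, as described above.
    ids = eval_tok.encode(text, return_tensors="pt")
    with torch.no_grad():
        nll = eval_model(ids, labels=ids).loss   # mean per-token negative LL
    return torch.exp(nll).item()
```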
4.2 BOW ATTRIBUTE MODELS

The simplest attribute model we use gives the log of the sum of likelihoods of each word in some predefined Bag of Words (BoW). Given a set of keywords $\{w_1, \dots, w_k\}$ that specify a topic of interest and the output distribution of the language model $p_{t+1}$, the log likelihood is:

$\log p(a \mid x) = \log\Big(\sum_{i}^{k} p_{t+1}[w_i]\Big)$   (4)

(A code sketch of this attribute model is given at the end of this subsection.) We construct BoWs that represent seven distinct topics: SCIENCE, MILITARY, LEGAL, COMPUTERS, SPACE, POLITICS, and RELIGION (see Section S17 for complete word lists). Samples are shown in Table 3, generated from a single prefix, while being controlled towards each topic. Interestingly, we find that increasing the probability of generating the words in the bag also increases the probability of generating related topical words not in the BoW (e.g. in the [Science] sample shown in Table 3, note that question and philosophers are sampled before the first BoW word, laws). Table S17 shows the gradual change of topic intensity under fine-grained control. We found that the optimization procedure works better when updating representations from the past over a finite window and using an adaptive normalization scheme (see Section S11.3).

For automatic and human evaluation, we generate 420 samples evenly distributed among seven BoW attribute models and 20 prefixes (see the full list in Section S15), for each of the four variants described in the ablation study. See Section S8 for further details on evaluation and results.

Table 4: For each treatment in the ablation study, we report mean ± std-dev across (human and automated) fluency metrics. The topic (%) reports the fraction of samples matching the target topic, as evaluated by human annotators. Table S8 provides per-topic results. Approaches BC and BCR demonstrate significant control over the topic of the generated text, while retaining similar diversity (Dist-1, Dist-2, Dist-3) scores and minimal degradation in Perplexity and Fluency evaluations vs. the baseline LM (B). The gain from ranking and choosing from multiple samples, BR over B, is limited (4.7%). The gain in topic accuracy from latent ($\tilde{H}_t$) manipulation (from B to BC) is significantly higher (35.8%). Perplexity is computed using the GPT LM (Radford et al., 2018a), which differs from the LM generating text (GPT-2). For CTRL and WD, since human evaluation is performed in comparison with BCR via A/B testing, we report the numbers for BCR from these comparisons as well, for the human-evaluated metrics. Further, we consider one sample per prefix for CTRL, resulting in fewer samples and higher Dist-1, 2, 3 scores as a consequence. PPLM outperforms CTRL and WD on topic relevance, while being comparable on fluency scores.

Method | Topic % (human, ↑ better) | Perplexity (↓ better) | Dist-1 (↑) | Dist-2 (↑) | Dist-3 (↑) | Fluency (human, ↑ better)
B | 11.1 | 39.85 ± 35.9 | 0.37 | 0.79 | 0.93 | 3.60 ± 0.82
BR | 15.8 | 38.39 ± 27.14 | 0.38 | 0.80 | 0.94 | 3.68 ± 0.77
BC | 46.9 | 43.62 ± 26.8 | 0.36 | 0.78 | 0.92 | 3.39 ± 0.95
BCR | 51.7 | 44.04 ± 25.38 | 0.36 | 0.80 | 0.94 | 3.52 ± 0.83
CTRL | 50.0 | 24.48 ± 11.98 | 0.40 | 0.84 | 0.93 | 3.63 ± 0.75
BCR | 56.0 | – | – | – | – | 3.61 ± 0.69
WD | 35.7 | 32.05 ± 19.07 | 0.29 | 0.72 | 0.89 | 3.48 ± 0.92
BCR | 47.8 | – | – | – | – | 3.87 ± 0.71

Table 4 shows that human annotators find text from BCR (51.7%) and BC (46.9%) to be significantly more on topic than B (15.8%) and BR (11.1%). With only a slight degradation in fluency scores, passages generated with manipulated latents (BCR and BC) are significantly on topic, demonstrating the desired attribute control on this task. The Dist-1, Dist-2 and Dist-3 scores, which account for the diversity of text across the generated passages, are similar across all four ablation approaches. Further, BCR slightly outperforms CTRL (51.7% vs. 50.0%) and significantly outperforms WD (36%). BC itself outperforms WD (36%). BCR, CTRL and WD all score similarly on the fluency metric. A minimal sketch of the BoW attribute model of Equation 4, wired into the earlier generation loop, is given below.
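The sketch below implements Equation 4 over next-token log-probabilities; the three-word bag and the first-BPE-token shortcut (multi-token words are truncated) are illustrative simplifications, not the paper's curated word lists.

```python
import torch

def bow_log_prob(log_probs, bow_indices):
    # Eq. 4: log p(a|x) = log( sum_i p_{t+1}[w_i] ), computed stably in log space.
    return torch.logsumexp(log_probs[:, bow_indices], dim=-1).sum()

# Hypothetical usage with the earlier sketches:
science_ids = [tokenizer.encode(" " + w)[0]
               for w in ["research", "science", "experiment"]]
text = pplm_generate(model, tokenizer, "The issue focused",
                     lambda lp: bow_log_prob(lp, science_ids))
```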
We note that gradient-based latent updates (C, with or without R) have significantly greater influence on topic relevance than reranking based on the attribute score (R, with or without C), showing that shifting meaning in latent space is more effective than shifting the output distribution directly through reweighting. The effectiveness of shifting latents is further corroborated by WD's relatively worse performance. WD directly controls the output distribution, which will not lead to an increased probability of sampling words from outside the bag that are related to the topic.

Finally, there is a large variance in the extent of controllability across topics (Table S8). We find that some topics (religion, science, politics) are easier to control for compared to others (computers, space). Section S9 considers unusual or nonsensical combinations of prefixes and attributes (e.g. prefix 'potato' and topic 'religion'), and we find that even for these settings PPLM is able to successfully control for the desired attribute, often with hilarious twists!

4.3 DISCRIMINATOR ATTRIBUTE MODELS

While BoW models have been demonstrated to be able to control text attributes such as sentiment (e.g., Li et al. (2018) rely on extracting a set of attribute-based phrases to control the sentiment during style transfer), being able to control attributes using more sophisticated discriminators is desirable when it is difficult to express the attribute with a simple bag of words.

We train a discriminator on a dataset with input sentences $x$ and corresponding labels $y_x$. For an input $x$ of length $t$, we compute $o^x_{:t}$ and train $f$ on the mean ($\bar{o}^x_t$) of the embeddings across time. All discriminators in this work consist of a single-layer classifier that predicts the target label from $\bar{o}^x_t$. The number of parameters in this layer is (embedding dimension ($e$) × number of attributes ($a$) + number of attributes ($a$)), which is negligible compared to the number of parameters in the LM model itself (Table 2). A minimal sketch of such a discriminator head is given below.
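A sketch of the single-layer head described above; the class itself, its name, and the default dimensions (1024 matches GPT-2 medium) are illustrative assumptions rather than the released model.

```python
import torch
import torch.nn as nn

class SentimentHead(nn.Module):
    # Hypothetical single-layer discriminator: e*a + a trainable parameters,
    # applied to the mean of the LM embeddings o_t across time.
    def __init__(self, embed_dim=1024, num_classes=2):
        super().__init__()
        self.linear = nn.Linear(embed_dim, num_classes)

    def forward(self, hidden_states):           # (batch, seq_len, embed_dim)
        pooled = hidden_states.mean(dim=1)      # time-averaged embeddings
        return torch.log_softmax(self.linear(pooled), dim=-1)
```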
Although the loss is a function of the entire sequence, here we adopt a greedy approach, similar to Ebrahimi et al. (2018) and Wallace et al. (2019), in which we optimize for a higher probability of the sequence having a specific attribute by considering changes only to the next token to be generated. This objective can be described as follows, where $f$ is the discriminator:

$\log p(a \mid x) = \log f(o_{:t+1}, o_{t+2})$   (5)

Note that $o_{t+2}$ is a function of $x_{t+1}$. Further, $x_{t+1} \sim \text{Softmax}(W\tilde{o}_{t+1})$, which depends on $\Delta H_t$. In the limit, minimizing the objective in Equation 5 corresponds to choosing the $x_{t+1}$ that produces the optimal $o_{t+2}$ maximizing $f(o_{:t+1}, o_{t+2})$. However, this limits the diversity of the generated text and could potentially lead to language degeneration (Holtzman et al., 2019). Alternatively, we focus on a softer optimization approach where we aim to shift the distribution $\tilde{p}_{t+1} = \text{Softmax}(W\tilde{o}_{t+1})$ towards one that in expectation has a higher likelihood of having the desired attribute $a$. Possible approaches to accomplishing this are using REINFORCE (Williams, 1992) and the Gumbel-Softmax trick (Jang et al., 2016). However, both of these would slow down convergence. Instead, as in Dai et al. (2019a), we use the distribution $\tilde{p}_{t+1}$ (instead of a hard sample $x_{t+1}$) and feed it forward to obtain a (biased) estimate of the next token's embedding, and then update $\Delta H_t$.

The sentiment discriminator here distinguishes between POSITIVE and NEGATIVE sentiment and is trained on the SST-5 dataset (Socher et al., 2013). A minimal sketch of the soft forward pass is given below.
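The soft forward pass can be sketched as follows, assuming the HuggingFace GPT-2 implementation, where `model.transformer.wte` is the token-embedding table and `inputs_embeds` replaces hard token ids; this is a simplified stand-in for the estimate used when backpropagating through the discriminator.

```python
def soft_forward(model, probs, past):
    # Instead of a hard sample x_{t+1}, feed the expected token embedding
    # under p~_{t+1}; the result stays differentiable w.r.t. Delta H_t.
    emb = probs @ model.transformer.wte.weight       # (batch, embed_dim)
    out = model(inputs_embeds=emb.unsqueeze(1),
                past_key_values=past, use_cache=True)
    return out.logits[:, -1, :], out.past_key_values
```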
Table 5 shows PPLM-Discrim generated samples in triplets: uncontrolled, controlled for POSITIVE sentiment, and controlled for NEGATIVE sentiment.

Table 5: Sentence samples in triplets, generated by {baseline GPT-2, PPLM-Discrim POSITIVE, PPLM-Discrim NEGATIVE}, conditioned on the prefixes "The chicken" and "The country". Words related to the sentiment are highlighted in the original paper. Each triplet is generated from the same random seed.

[–] The chicken is now out on the grill. \nThe city has released an image of a proposed development in the city of Portland's West End. ...

[Positive] The chicken was delicious – wonderfully moist, perfectly delicious, superbly fresh – and perfectly cooked. The only thing to say is that the sauce was excellent, and I think that the broth really complemented all of the other flavors. The best part was the sauce ...

[Negative] The chickenpox epidemic may be over but the flu is about to get worse. The United States is facing one of the worst flu seasons on record and ...

[–] The country's new chief minister, A.J. Paik, is a member of a group of prominent conservative politicians who have criticized the Obama administration's efforts to ...

[Positive] The country's largest indoor painting event! \nCome celebrate with a dazzling display of stunning outdoor murals, a stunning display of art, and the world's best paint and art supplies from all over the world!

[Negative] The country's top prison system is forcing prisoners to use a trash dump, rather than a toilet, to flush their waste out, as the authorities fear the waste is more toxic and could cause cancer, an official at a major prison has revealed. ...

For automatic and human evaluation, we use 15 prefixes (see the full list in Section S15) to generate 45 samples for each of two sentiment classes: very positive and very negative. Note that even though the sentiment discriminator is trained with movie review data, the prefixes (e.g. "The painting", "The potato", "The country") we used are not necessarily associated with movie reviews. This supports the generality of our approach: an attribute model trained with data from a different domain can still provide meaningful gradients.

Table 6 shows evaluation results. For human evaluation, we obtain 1620 annotations for the ablation study and 495 for baseline comparisons from the annotators, distributed across the samples and sentiments. Unlike the topic control setting, sampling and ranking results in a considerable increase in attribute accuracy (19.3% → 41.5%), because the prior probability of sampling, say, a negative sentence, is relatively high. BC results in a decrease in fluency when compared to B, while being significantly more consistent with the desired attribute (19.3% → 39.6%). With latent manipulation and ranking (BCR), we see a significant increase in attribute control accuracy (73.7%) while retaining fluency similar to B and BR. Further, the gain in sentiment accuracy from re-sampling is larger in the case of manipulated latents vs. non-manipulated (a 34.1% increase from BC to BCR vs. a 22.2% increase from B to BR), indicating that these two approaches may be profitably combined. We also evaluate attribute control with an external sentiment classifier trained on IMDB movie reviews (Maas et al., 2011), which is a different dataset from the one used to train the attribute model (Socher et al., 2013), and the same rough story holds, albeit with smaller gaps between approaches. We compare to the baselines CTRL, GPT2-FT-RL, and WD. BCR performs comparably to CTRL (73.7% and 80.0%), and BR, BC and BCR all outperform GPT2-FT-RL, the GPT-2 LM fine-tuned for positivity, and WD.

Table 6: Evaluation of models/variants on the sentiment control task, with mean ± std-dev reported across fluency metrics. Sentiment accuracy reports the fraction of samples with an accurate target sentiment. Approach BCR provides significant control over sentiment while showing minimal degradation in fluency. See Table S9 for full results on individual sentiments. *GPT2-FT-RL is only evaluated for the positivity half of the task, as it is fine-tuned only for positivity (Ziegler et al., 2019). For human evaluation metrics, we compare the baselines CTRL, GPT2-FT-RL and WD with BCR and perform A/B style testing. We include both numbers for comparison.

Method | Sentiment Acc. (%) (human) | Sentiment Acc. (%) (external classifier) | Perplexity (↓ better) | Dist-1 (↑) | Dist-2 (↑) | Dist-3 (↑) | Fluency (human, ↑ better)
B | 19.3 | 52.2 | 42.1 ± 33.14 | 0.37 | 0.75 | 0.86 | 3.54 ± 1.08
BR | 41.5 | 62.2 | 44.6 ± 34.72 | 0.37 | 0.76 | 0.87 | 3.65 ± 1.07
BC | 39.6 | 64.4 | 41.8 ± 34.87 | 0.33 | 0.70 | 0.86 | 2.79 ± 1.17
BCR | 73.7 | 78.8 | 46.6 ± 40.24 | 0.36 | 0.77 | 0.91 | 3.29 ± 1.07
CTRL | 76.7 | 96.6 | 37.4 ± 16.89 | 0.35 | 0.78 | 0.89 | 3.54 ± 0.77
BCR | 70.0 | – | – | – | – | – | 3.36 ± 0.82
GPT2-FT-RL* | 13.3 | 77.8 | 217.3 ± 176.4 | 0.54 | 0.91 | 0.94 | 3.31 ± 0.84
BCR | 84.4 | – | – | – | – | – | 3.68 ± 0.83
WD | 18.9 | 52.2 | 31.7 ± 28.0 | 0.33 | 0.69 | 0.83 | 3.67 ± 0.89
BCR | 61.1 | – | – | – | – | – | 3.75 ± 0.66

4.4 LANGUAGE DETOXIFICATION

Language models trained with large corpora of Internet data reflect the biases and discrimination existing in the data. A recent paper by Wallace et al. (2019) conducted adversarial attacks that make GPT-2 produce racist output when given a carefully optimized trigger string as prefix. They also find that when simply using "Blacks" as a prefix, 2% of GPT-2 samples contain explicit racism. Other prefixes (e.g., "Asians" or "Jews") are mentioned but no percentage is reported. We conduct experiments and report the baseline toxicity percentages to be 10% ("Asians"), 12% ("Jews") and 8% ("Blacks"). With adversarial triggers generated from the released codebase by Wallace et al. (2019), the average toxicity percentage is 63.6%. Further details can be found in Section S13.

PPLMs can be easily adapted for language detoxification by plugging in a toxicity classifier as the attribute control model and updating the latents with the negative gradient. We train a single-layer classifier on the toxicity data from the Toxic Comment Classification Challenge (Jigsaw) and show that, with a similar hyperparameter setting as the other PPLM-Discrim methods, it works well on both natural prompts and adversarial triggers. For natural prompts, the percentages of toxicity are 6%, 4% and 10%, respectively, and for adversarial triggers the percentage drops drastically to 4.6% on average, with statistical significance. Details on the annotation procedure and the full table of percentages and p-values can be found in Table S23 and Section S13. Note that a model for detoxifying language can also potentially be maliciously used for generating toxic language, a topic we briefly discuss in Section S6. Schematically, detoxification only flips the sign of the attribute objective, as sketched below.
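As a hypothetical illustration, `toxicity_log_prob` below is assumed to be a scorer built like `SentimentHead` above and adapted to the log-prob interface of the earlier sketches; negating it makes the update descend, rather than ascend, the toxicity gradient.

```python
# Detoxification as negative-gradient control: reuse the generation loop,
# but minimize log p(toxic|x) by negating the attribute objective.
detoxified = pplm_generate(model, tokenizer, "The issue focused",
                           lambda lp: -toxicity_log_prob(lp))
```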
4.5 CONTROLLED STORY WRITING

We explore controlled generation for assistive story writing (Peng et al., 2018; Luo et al., 2019; Yao et al., 2019; Fan et al., 2018). Using uncontrolled LMs for assistive art creation can be difficult. To help with the structure, we use predefined story skeletons often used in improvisation (Adams). We fill in the blanks between these prefixes with a PPLM. See examples in Table S20 and Table S21.

5 CONCLUSION

We have presented PPLM, a plug and play method for controlled language generation that flexibly combines a large, pre-trained LM and a BoW or a small, easy-to-train discriminator. In Section S6 we discuss the ethics of controlled LMs. PPLM achieves fine-grained control of attributes via a simple gradient-based sampling mechanism. Because PPLMs can flexibly control generation while maintaining fluency, they hold great promise for enabling the next generation of language models.

ACKNOWLEDGEMENTS

The authors are grateful to Bryan McCann for providing samples for the CTRL baseline, Joel Lehman for discussion regarding the ethical implications of this work, Jiale Zhi for help with the computational framework, Colan Chen for creating associated artwork for the blog, Avishek Joey Bose for helpful discussions, Julien Chaumond, Lysandre Debut, Thomas Wolf, and the HuggingFace team for co-producing the PPLM demo and helping integrate the code into their transformers repository, all the annotators at Uber, HKUST and Caltech for their labeling, and members of the Deep Collective research group for helpful discussion, ideas, and feedback on experiments.
HkegjQxaFH
Official Blind Review #1
3: Weak Reject
The paper proposes a Plug and Play LM for controlled natural language generation. Similar to the idea of Plug and Play Generative Networks for vision, the model plugs in a discriminator, which is either a bag-of-words model or a single-layer classifier. The added simple discriminator is then coupled with a pre-trained generative language model such as GPT-2 to obtain a conditional probability for generating controllable text. The authors evaluate the proposed model using human evaluation studies and quantitative perplexity metrics, aiming at measuring the relevance and fluency of the generated text. Their experimental results show that the generated text is fluent and aligned with the desired attributes.

The proposed method is simple and makes sense to me. The idea of how one can make good use of large, pre-trained generative language models is very neat here. However, I have two main concerns, as follows.

1. The main focuses of the generated text seem to change dramatically and unpredictably while tailoring the control attributes. In this sense, how useful these kinds of text generation techniques are is not clear to me. For example, the first two rows in Table 3 contain two paragraphs with very different main ideas to be conveyed. Similarly for sentences in Table 1. It seems to me that those sentences talk about very different topics/things, although they may reflect the desired control attributes. Is there an automatic evaluation metric to objectively evaluate the change of the focuses/ideas of two pieces of text?

2. The model is a straightforward adaptation of the Plug and Play Generative Networks from the vision community.

In short, the idea in the paper is simple and seems effective. On the other hand, the lack of a good evaluation metric makes me a bit uncertain about the contribution of the paper. I am willing to increase my evaluation score if I am convinced by other reviews and comments.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Plug and Play Language Models: A Simple Approach to Controlled Text Generation ### Paper Abstract Large transformer-based language models (LMs) trained on huge text corpora have shown unparalleled generation capabilities. However, controlling attributes of the generated language (e.g. switching topic or sentiment) is difficult without modifying the model architecture or fine-tuning on attribute-specific data and entailing the significant cost of retraining. We propose a simple alternative: the Plug and Play Language Model (PPLM) for controllable language generation, which combines a pretrained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM. In the canonical scenario we present, the attribute models are simple classifiers consisting of a user-specified bag of words or a single learned layer with 100,000 times fewer parameters than the LM. Sampling entails a forward and backward pass in which gradients from the attribute model push the LM's hidden activations and thus guide the generation. Model samples demonstrate control over a range of topics and sentiment styles, and extensive automated and human annotated evaluations show attribute alignment and fluency. PPLMs are flexible in that any combination of differentiable attribute models may be used to steer text generation, which will allow for diverse and creative applications beyond the examples given in this paper. ### Paper Keywords ["controlled text generation", "generative models", "conditional generative models", "language modeling", "transformer"] ### Paper Content ABSTRACTLarge transformer-based language models (LMs) trained on huge text corporahave shown unparalleled generation capabilities. However, controlling attributesof the generated language (e.g. switching topic or sentiment) is difficult withoutmodifying the model architecture or fine-tuning on attribute-specific data and en-tailing the significant cost of retraining. We propose a simple alternative: the Plugand Play Language Model (PPLM) for controllable language generation, whichcombines a pretrained LM with one or more simple attribute classifiers that guidetext generation without any further training of the LM. In the canonical scenariowe present, the attribute models are simple classifiers consisting of a user-specifiedbag of words or a single learned layer with 100,000 times fewer parameters thanthe LM. Sampling entails a forward and backward pass in which gradients fromthe attribute model push the LM’s hidden activations and thus guide the gener-ation. Model samples demonstrate control over a range of topics and sentimentstyles, and extensive automated and human annotated evaluations show attributealignment and fluency. PPLMs are flexible in that any combination of differen-tiable attribute models may be used to steer text generation, which will allow fordiverse and creative applications beyond the examples given in this paper.1 I NTRODUCTIONThe Transformer architecture (Vaswani et al., 2017) has enabled large-scale language models (LMs)trained on a huge amount of data (Radford et al., 2019; Dai et al., 2019b; Radford et al., 2018b) togreatly improve the state-of-the-art on natural language processing tasks. 
These models are used toextract contextualized word embeddings for transfer learning purposes (Devlin et al., 2019) and asnatural language generators. The latter can leverage large amounts of unannotated data and a simplelog-likelihood training objective. However, once such models are trained, controlling attributes ofgenerated text becomes difficult without modifying the model architecture to allow for extra inputattributes or fine-tuning with attribute-specific data (Keskar et al., 2019; Ziegler et al., 2019).Work done during internship at Uber AIyCo-senior authors .Summary of contributions: SD, RL & JY conceptualized PPLMs and led the manuscript writing. SD led theproject, implemented the PPLM, set up and ran all modeling experiments, engineered how to obtain workablegradients via the weighted embedding approach, and made the model work. AM helped with preparing datasetsfor discriminator training, automated evaluation, running experiments, and writing the manuscript. SD, RL &AM ran the external baselines. RL & JL built and oversaw the human evaluation pipeline and computed thestatistics. JH ran the story generation with skeleton prefixes. EF assisted with detoxification experiments. PMled efforts to migrate to the new pytorch transformer, helped with code release. JY helped with the annotationpipeline, finding bugs, navigating model and experimental directions, engineering workable gradients, andposing the model mathematically. RL implemented preliminary experiments and multi-attribute control, andcleaned and coordinated release of the code. RL & JY oversaw the project.1Published as a conference paper at ICLR 2020Table 1: The PPLM employs a pre-trained language model (LM) without any changes to the modelparameters and can generate text with controlled attributes such as topic and sentiment. We demon-strate control with two tiny and easy to construct attribute models: a bag of words (BoW) related to atopic and a linear discriminator trained on top of LM latent representations to control sentiment. Theunderlined prefix is what the LM is conditioned on to generate a passage of text (e.g. The potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potato).The controlled attributes are colored and bracketed (e.g. [Science] ), and words in the BoW that aredirectly optimized for are highlighted brightly (e.g. research). The softer highlights correspond towords related to the attribute, but not directly optimized for during the control process (e.g. health).[–]The potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potato and cauliflower are both in season to make combo breads, mounds, or pads. For an addedchallenge, try some garlic mashed potatoes.[Negative] The potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potato is a pretty bad idea. It can make you fat, it can cause you to have a terrible immunesystem, and it can even kill you.. . .[Positive] The potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potato chip recipe you asked for! We love making these, and I’ve been doing so for years.I’ve always had a hard time keeping a recipe secret. 
I think it’s the way our kids love to eat them – so manylittle ones.[Science] The potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potatoThe potato was once thought to have no health problems and has been promoted as a nutritiousfood source since the mid-1800s, but recent reports indicate that it has many harmful health issues. In fact,researchers from Johns Hopkins University. . .[Politics] [Positive] To concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo conclude this series of articles, I will present three of the most popular andinfluential works on this topic. The first article deals with the role of women’s political participation inbuilding a political system that is representative of the will of the people.[Politics] [Negative] To concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo concludeTo conclude, the most significant and lasting damage from the economic crisis in2008 was that many governments, including those in the political center, lost power for the first time inmodern history.Controllable generation entails modeling p(xja), whereais some desired controllable attribute(s)andxthe generated sample. However, generative models only learn p(x). In computer vision,Plug & Play Generative Networks (PPGN) from Nguyen et al. (2017) developed a mechanism forgenerating images with different attributes by plugging a discriminator (attribute model) p(ajx)together with a base generative model p(x)and sampling from the resulting p(xja)/p(ajx)p(x),effectively creating a conditional generative model on the fly from any supplied attribute model. Ina similar manner, we propose the Plug and Play Language Model (PPLM) for conditional languagegeneration that combines one or more simple attribute models p(ajx)—either in the form of a bag-of-words (BoW) or single layer classifiers—with a pre-trained, unconditional language model p(x).We sample from the resulting combined model by following gradients in the latent representationspace in a manner inspired by the approximate Metropolis-adjusted Langevin (MALA) (Robertset al., 1996; Roberts & Rosenthal, 1998) sampler deployed in Nguyen et al. (2017).Optimization is performed ex post facto in the activation space, therefore no re-training or fine-tuning is needed . Control is fine-grained, with a strength parameter determining how strong theattribute influence should be; a strength of 0fully recovers the original model p(x). This designallows vast flexibility: users can combine a state-of-the-art generative model, which may be largeand difficult to train, with any number of attribute controllers. Attribute models may be easier to trainor untrained (in the case of BoW models), and multiple controllers may be combined flexibly duringinference. 
In this paper, we demonstrate the PPLM approach using a GPT-2 345M model (Radfordet al., 2019) as the general-purpose LM p(x), but the method applies in any representation spacefrom any transformer-based text generator and allows combination with any attribute model p(ajx).We demonstrate controlled generation with a number of attribute controllers, assembled and com-bined during generation, each with a different strength, acting as a set of “control knobs” that tunegeneration towards the desired attribute (see examples in Table 1). Code for the experiments isavailable at: https://github.com/uber-research/PPLM . Our key contributions are:• We introduce the Plug and Play LM for controlled language generation, discuss its relationto existing work, and how sampling from a PPLM works (Sections 2 and 3).• We demonstrate controlling of text generation on a range of attributes, including 7 topicseach defined using a bag of words, and 1 simple discriminator on sentiments. We quantifyeffectiveness using both automated evaluation (separately trained perplexity and sentiment2Published as a conference paper at ICLR 2020models) as well as human evaluation (for attribute relevance and fluency). All evaluationspoint toward the ability of PPLMs to generate attribute controlled, fluent text (Section 4).• We compare PPLM with CTRL (Keskar et al., 2019) and GPT-2 finetuned for positivty(Ziegler et al., 2019). Our method, without any LM training, is on par and often outper-forms the baselines on attribute relevance and fluency (Section 4.2, and Section 4.3).• We show that the PPLM approach can be used to detoxify instances where generationof toxic content is likely by following the negative gradient of a model trained to detecttoxicity (Section 4.4). We also show how PPLM can be used for structurally constrainedstory writing (Section 4.5).2 R ELATED WORKControlled generation Current methods for controlled text generation involve either fine-tuningexisting models with Reinforcement Learning (RL) (Ziegler et al., 2019), training Generative Ad-versarial Networks (Yu et al., 2017), or training conditional generative models (Kikuchi et al., 2016;Ficler & Goldberg, 2017). Different from our approach, these methodologies are not plug andplay, since the entire model needs to be separately fine-tuned for each specific attribute. Keskaret al. (2019) train a large language model with over 50 different control codes. The results are highquality because they train exactly to maximize p(xja), but this comes at the expense of fixing controlcodes upfront and of training a very large model (1.6B parameters). Our method does not requireretraining any conditional generative model, and both the language model and the conditional modelcan be flexibly assembled. Table 2 gives a comparison of recent approaches to language modelingtuned for specific attributes. In another interesting but tangential piece of work, Subramani et al.(2019) recently showed that a pre-trained language model can be steered to recover arbitrary sen-tences. In earlier works Gu et al. (2016; 2017); Chen et al. (2018) explored the idea of using a smallneural network to steer an LM.Noisy Channel Modeling Yu et al. (2016), and more recently Yu et al. (2019); Yee et al. (2019);Ng et al. (2019), leveraged the Shannon Noisy Channel Theory (Shannon, 1948) for improvingsequence-to-sequence modeling. 
Their approach translates a source language sentence yinto a targetlanguage sentence xby first sampling from a forward model proposal distribution pforward (xjy)andthen reranking samples based on probabilities given by pbackward (xjy)/p(x)p(yjx). PPLM scoressamples using the same basic equation, but as we have no forward or proposal model pforward (xja),we rely on the latent space updates, similar to Nguyen et al. (2017). As a baseline, we considerusingp(x)as a “forward model” and then reranking, which we will see works moderately well insome scenarios and poorly in others (see Tables 4 and 6).Weighted decoding Holtzman et al. (2018); Ghazvininejad et al. (2017) consider controlled lan-guage generation – the former with discriminators, and the latter with a bag of words – where thedecoding procedure is modified to consider the scoring function used for decoding. See et al. (2019)note that control with weighted decoding (WD) is difficult and often leads to sacrificing fluency andcoherence. Further, Ghazvininejad et al. (2017) strongly relies on sampling from a set of keywordson a specific topic and it does not allow to bias generation towards a topic in a manner that does notnecessary include a set of keywords. Similarly, Baheti et al. (2018) proposed a decoding strategyfor generating interesting responses in dialogue systems, using bags of words and word embed-dings. Sophisticated sampling methods (Metropolis et al., 1953) can be used to constrain the modelgeneration to certain keywords and topics. We evaluate WD as a baseline.Text Style Transfer Outside of language modeling, the text style transfer studies a related task.Shen et al. (2017); Hu et al. (2017) train variational auto-encoders for style transfer that rely onlearning disentangled latent representations for style and content. Li et al. (2018) demonstrate theefficacy of a simple approach based on replacing attribute related n-grams with n-grams correspond-ing to the desired attribute based on a conditional generative model. A key difference between theabove and our approach is that we use an offline discriminator and perform optimization based onthis discriminator, which as suggested by Elazar & Goldberg (2018) may outperform adversarialtraining approaches. More recently, Lample et al. (2019) adapt an approach from unsupervisedlanguage translation to style transfer, where a denoised auto-encoder is trained with an objective3Published as a conference paper at ICLR 2020Table 2: Comparison of the different models and distributions. All models in this table are useful indifferent scenarios. The particular advantage of PPLM is that very small, custom attribute models,p(ajx), may be combined with powerful, general pre-trained language models, p(x), to create cheapbut still powerful conditional generative models, p(xja).Model type Form of model SamplesExample modelsand number of trainable paramsLanguage Model p(x) Uncond.GPT-2 medium: 345M(Radford et al., 2019)Fine-tunedLanguage Modelp(x) Uncond.Fine-tuned GPT-2 medium: 345M(Ziegler et al., 2019)ConditionalLanguage Modelp(xja) Cond.CTRL: 1.6B(Keskar et al., 2019)Plug and PlayLanguage Model(PPLM)p(xja)/p(x)p(ajx) Cond.PPLM-BoW: 0 (curated word list)PPLM-Discrim:1K/attribute(not counting pretrained p(x))consisting of a weighted combination of a re-construction loss and a back-translation loss. 
Whilethe above approaches have shown impressive success on style transfer tasks, the main focus is notcontrolled language generation, and further, the methods are not plug and play .3 P LUG AND PLAY LANGUAGE MODELS3.1 L ANGUAGE MODELING WITH TRANSFORMERSGiven a sequence of tokens X=fx0;;xng, LMs are trained to compute the unconditional prob-ability of the sequence p(X). This probability can be rewritten in terms of product of conditionalprobabilities by recursively applying the chain-rule (Manning et al., 1999; Bengio et al., 2003) as:p(X) =nYi=1p(xijx0;;xi1) (1)In this paper, we use a transformer (Vaswani et al., 2017) to model the distribution of natural lan-guage. To present our approach clearly, we first briefly summarize the transformer using recur-rent notation. Let us define the history matrix Htto consist of the key-value pairs from the pasti.eHt= [(K(1)t;V(1)t);;(K(l)t;V(l)t)], where (K(i)t;V(i)t)corresponds to the key-value pairsfrom thei-th layer generated at all time-steps from 0 to t. Efficient implementations of the trans-former (Wolf et al., 2019) use the cached Htto generatext+1, givenxt. This recurrent interpretationof a transformer can be summarized as:ot+1;Ht+1=LM(xt;Ht); (2)and thenxt+1is sampled as xt+1pt+1=Softmax (Wot+1), whereWis a linear transformationthat maps the logit vector ot+1to a vector of vocabulary size. This allows for efficient language gen-eration without repeated forward passes corresponding to the prior conditioning text x0;:::;xt1.3.2 S TEERING GENERATION :ASCENDING logp(ajx)In order to control the output of the language model, at every generation step t, we shift the historyHtin the direction of the sum of two gradients: one toward higher log-likelihood (LL) of the attributeaunder the conditional attribute model p(ajx)and one toward higher LL of the unmodified languagemodelp(x). Combining these factors with a variable multiplier provides us with a controllable“knob” to guide generation in a given direction with a specified strength. The updates are restrictedtoHtand not the other model activations because future predictions depend on the past only via Ht(note thatHtis composed of all transformer key and value pairs generated up to time t). Takingsteps inHtspace leads to gradual changes to model activations — which may be thought of asgradual reinterpretations of the past — that guide future generation in the desired direction.LetHtbe the update to Ht, such that generation with (Ht+ Ht)shifts the distribution ofthe generated text such that it is more likely to possess the desired attribute. Htis initialized4Published as a conference paper at ICLR 2020LM LM LMAttribute Model p(a|x)The chicken tasteschicken tastes Grad(Positivesentiment)ok deliciousOriginal distribution("ok")Updated distribution("delicious")Updated LatentsBackward Passand update latentsForward PassRecompute with updated latents p(x) p(x) p(x)RecomputeStep 1{ { { Step 2Step 3Figure 1: Simplified illustration of the proposed approach in three phases. In Step 1, a forward passis performed through the language model to compute the likelihood of a desired attribute using anattribute model that predicts p(ajx). In Step 2, a backward pass updates the internal latent represen-tations of the LM, using gradients from the attribute model, to increase the likelihood of the passagehaving the desired attribute. In Step 3, a new distribution over the vocabulary ( ept+1) is generatedfrom the updated latents (eHt)and the current token xt. The next token is then sampled from theupdated distribution. 
This process of updating the latents is repeated at each time-step, leading toa gradual transition towards the desired attribute. For computational efficiency, one may choose tomodify only the latents within some window of the recent past, depicted as the dotted-red region.at zero and updated with gradients from an attribute model that measures the extent to which thegenerated text possesses the desired attribute (e.g. positivity). We rewrite the attribute model p(ajx)asp(ajHt+ Ht)and then make gradient based updates to Htas follows:Ht Ht+rHtlogp(ajHt+ Ht)krHtlogp(ajHt+ Ht)k(3)whereis the step size, is the scaling coefficient for the normalization term.1This update stepcan be repeated mtimes; in practice we use 3to10. Subsequently, a forward pass through the LMwith the updated key-value pairs is performed to obtain the updated logits eot+1aseot+1;Ht+1=LM(xt;eHt), whereeHt=Ht+Ht. The perturbed eot+1is then used to generate a new distributionept+1as in Equation 2.3.3 E NSURING FLUENCY :ASCENDING logp(x)The approach described in the previous section is able to generate text tuned for a particular dis-criminator, but left unchecked it will quickly result in unrealistic adversarial or fooling examples(Szegedy et al., 2013; Nguyen et al., 2015) as the text moves into low probability regions. To com-bat this, we use the unconditional language model in two ways that ensure the fluency is maintainedat or near the level of the unconditional language model (here GPT-2).Kullback–Leibler (KL) Divergence We update Htto minimize the KL divergence between theoutput distribution of the modified and unmodified language models in addition to the step above.In practice, this is accomplished by adding the quantities together before taking a gradient, though itcan be visualized as two separate steps as in Figure 2. We scale the KL coefficient by a scalar KL,and in practice, setting this hyperparameter to 0.01 works well in general across tasks.Post-norm Geometric Mean Fusion In addition to minimizing KL divergence, which affects thepast via Ht, we perform post-norm fusion similarly to Stahlberg et al. (2018). This does notdirectly affect Ht; rather, it just serves to constantly tie the generated text to the unconditionalp(x)LM distribution. We accomplish this by sampling from xt+11epgmt+1p1gmt+1, wherept+1andept+1are the unmodified and modified output distributions, respectively, and is a normalizingfactor such that it forms a valid distribution. As gm!1this converges to the distribution fromthe updated LM, and as gm!0it converges to the unconditional LM distribution. We find that inpractice values for gmin the range 0:80:95work well.1One normalization term is computed for each layer of the transformer.5Published as a conference paper at ICLR 2020Figure 2: An oversimplified view into why stepsthat maximize both logp(ajx)andlogp(x)areneeded. The sentence under consideration isshown as a black dot, which is first pushed in thedirection of maximizing logp(ajx)and then in thedirection of maximizing logp(x). 
In practice weuse a single step and simply add the log proba-bilities; we take steps in continuous space of hid-den representations Hrather than in the discrete x(byte pair) space, and rather than resampling theentire sentence each step, we take one step in Hspace per byte-pair sample.p(x)lowerhigherp(a|x)lower higher ascend p(a|x) ascend p(x)3.4 S AMPLING AND RANKINGThe attribute model p(ajx)in PPLM provides two functionalities: first, a score that can be used torank samples based on the LL of the desired attribute (forward pass only; Step 1, Figure 1), andsecond, a gradient ascent direction to perform an update in the latent space (Step 2 & 3; Figure 1).The former can be used to generate rsamples and rank them to choose the best one. This canserve as an additional method for attribute control in addition to sampling with updated latents.Further, to avoid the problem of repetitive, low quality text (Holtzman et al., 2018), we compute themean over the Dist-1, Dist-2 and Dist-3 scores (for the generated passage), which is an indicator ofrepetitiveness (Li et al., 2015), and then discard samples with a mean score below a threshold .4 E XPERIMENTS , RESULTS ,AND EVALUATIONIn this section, we describe our evaluation methodology and then show controlled generation resultsunder various attribute models. We also show use cases of PPLM in language detoxification and incontrolled story telling. For all results reported in this section, we use top-k sampling (Fan et al.,2018) withk= 10 to draw from the softmax distribution over the vocabulary.4.1 E VALUATION METHODS AND ABLATION STUDYWe evaluate to assess two properties: whether PPLM generates text that satisfies the desired attribute(topic or sentiment) and whether the quality of its text deteriorates as we intensify control of theattribute. Note we can always turn the control knob down to zero to disable control of attributesand reach the fluency of the original model. If desired, a user can tune the knobs at inference until achosen tradeoff between attribute strength and fluency is reached. We evaluate using both automatedmethods and human annotators:Automated Eval. Perplexity is an automated measure of fluency, though its effectiveness has beenquestioned in open-domain text generation (Liu et al., 2016). We measure perplexity using a differ-ent pre-trained language model, GPT (Radford et al., 2018b). The diversity of text in the passagesis measured using the number of distinct n-grams (normalized by the length of text) as in Li et al.(2015). We report Dist-1, Dist-2, and Dist-3 scores for the distinct 1-2-3-grams (measured acrossall samples generated for a given attribute control task, e.g. a specific topic for topic control). Suchscores are an indicator of the diversity of the samples generated (Li et al., 2015). We also use externalsentiment classifiers for sentiment evaluation.Human Eval. We consider two types of human annotation: fluency and A/B testing on attributerelevance. Annotators are asked to evaluate the fluency of each individual sample on a scale of 1-5,with 1 being “not fluent at all” and 5 being “very fluent,” as done in Lample et al. (2019). In the A/Btesting for attribute relevance, we consider all combinatorial pairs of all four variants: B, BR, BC,and BCR (6 combinations). We then ask annotators to rank the pair on the desired attribute (e.g. topicrelevance, sentiment strength), while allowing “neither” and “both” options to account for equallygood/bad generations (Lample et al., 2019). 
4 EXPERIMENTS, RESULTS, AND EVALUATION

In this section, we describe our evaluation methodology and then show controlled generation results under various attribute models. We also show use cases of PPLM in language detoxification and in controlled story telling. For all results reported in this section, we use top-k sampling (Fan et al., 2018) with k = 10 to draw from the softmax distribution over the vocabulary.

4.1 EVALUATION METHODS AND ABLATION STUDY

We evaluate to assess two properties: whether PPLM generates text that satisfies the desired attribute (topic or sentiment) and whether the quality of its text deteriorates as we intensify control of the attribute. Note we can always turn the control knob down to zero to disable control of attributes and reach the fluency of the original model. If desired, a user can tune the knobs at inference until a chosen tradeoff between attribute strength and fluency is reached. We evaluate using both automated methods and human annotators:

Automated Eval. Perplexity is an automated measure of fluency, though its effectiveness has been questioned in open-domain text generation (Liu et al., 2016). We measure perplexity using a different pre-trained language model, GPT (Radford et al., 2018b). The diversity of text in the passages is measured using the number of distinct n-grams (normalized by the length of text) as in Li et al. (2015). We report Dist-1, Dist-2, and Dist-3 scores for the distinct 1-2-3-grams (measured across all samples generated for a given attribute control task, e.g. a specific topic for topic control). Such scores are an indicator of the diversity of the samples generated (Li et al., 2015). We also use external sentiment classifiers for sentiment evaluation.

Human Eval. We consider two types of human annotation: fluency and A/B testing on attribute relevance. Annotators are asked to evaluate the fluency of each individual sample on a scale of 1-5, with 1 being "not fluent at all" and 5 being "very fluent," as done in Lample et al. (2019). In the A/B testing for attribute relevance, we consider all combinatorial pairs of all four variants: B, BR, BC, and BCR (6 combinations). We then ask annotators to rank the pair on the desired attribute (e.g. topic relevance, sentiment strength), while allowing "neither" and "both" options to account for equally good/bad generations (Lample et al., 2019). We obtain annotations from nine external occupational annotators. Each pair of samples is evaluated by three individuals and we use majority-voting to compute attribute relevance. For fluency, we use the average of the three annotations. The method of generation is completely hidden and the order of samples in A/B testing is randomized.

Table 3: Comparison of different samples generated by (top row) baseline GPT-2 and (other rows) PPLM with different BoW corresponding to different topics (e.g. [Military]), all conditioned on a single prefix: "The issue focused". Both directly optimized (in red) and related words (in soft red) are highlighted, showing how the optimization takes effect.

[–] The issue focused on the way that the city's police officers have reacted in recent years to the deaths of Michael Brown in Ferguson, Mo., Eric Garner in New York City and Sandra Bland in Texas, as well as the shooting of unarmed teen Michael Brown by a white police officer in Ferguson, Mo. ...

[Military] The issue focused on the fact that the government had spent billions on the military and that it could not deploy the troops in time. The prime minister said that the country would take back control of its airspace over Syria in the next 48 hours. \nThe military is investigating why ...

[Space] The issue focused on a series of incidents that occurred in the past few months, which included an alleged attack by Islamic State fighters on a Kurdish checkpoint, the use of drones in combat, space technology research by Russian and American space companies, and more. \nThe world ...

[Science] The issue focused on a single piece: the question "What is the meaning of life?" This question has puzzled many philosophers, who have attempted to solve it by using some of the concepts of quantum mechanics, but they have to solve it by the laws of nature themselves. ...
\n"The issue of the law’sapplicability to the United Kingdom’s referendum campaign has been one of. . .[Computers] The issue focusedThe issue focusedThe issue focusedThe issue focusedThe issue focusedThe issue focusedThe issue focusedThe issue focusedThe issue focusedThe issue focusedThe issue focusedThe issue focusedThe issue focusedThe issue focusedThe issue focusedThe issue focusedThe issue focused on the role of social media as a catalyst for political and corporateengagement in the digital economy, with the aim of encouraging companies to use the power of socialmedia and the Internet to reach out to their target market. \n. . .compute attribute relevance. For fluency, we use average of the three annotations. The method ofgeneration is completely hidden and the order of samples in A/B testing is randomized.Ablation study and baselines. We conduct an ablation study with four variants: B: the baseline,unchanged GPT-2 LM, sampled once; BR: B but sampled rtimes, with best sample chosen basedon the LL ranking and filtering based on Dist score; BC: update the latent representations (eHt)andthen sample once; and lastly BCR : update the latent representations (eHt)and generate rsamples,choose the best sample based on the LL score (after filtering out samples with low Dist scores). Asbaseline approaches we consider CTRL : (Keskar et al., 2019), a recent language model; GPT2-FT-RL: a GPT-2 LM fine-tuned for human evaluated positivity with RL (Ziegler et al., 2019); and WD:a weighted decoding baseline in which the B LM’s outputs are weighted directly toward maximizingp(ajx)(Ghazvininejad et al., 2017); see Section S7 for details, and Section S11 for hyperparameters.4.2 B OWATTRIBUTE MODELSThe simplest attribute model we use gives the log of the sum of likelihoods of each word in somepredefined Bag of Words (BoW). Given a set of keywords fw1;;wkgthat specify a topic ofinterest and the output distribution of the language model pt+1, the log likelihood is:logp(ajx) = logkXipt+1[wi]: (4)We construct BoWs that represent seven distinct topics: SCIENCE ,MILITARY ,LEGAL ,COMPUT -ERS,SPACE ,POLITICS , and RELIGION (see Section S17 for complete word lists). Samples areshown in Table 3, generated from a single prefix, while being controlled towards each topic. Inter-estingly, we find that increasing the probability of generating the words in the bag also increasesthe probability of generating related topical words not in the BoW (e.g. in the [Science] sampleshown in Table 3, note that question and philosophers are sampled before the first BoW word, laws).Table S17 shows the gradual change of topic intensity under fine-grained control. We found thatthe optimization procedure works better with updating representations from the past over a finitewindow and using an adaptive normalization scheme (see Section S11.3).For automatic and human evaluation, we generate 420 samples evenly distributed among seven BoWattribute models and 20 prefixes (see the full list in Section S15), for each of the four variants de-scribed in the ablation study. See Section S8 for further details on evaluation and results. Table 4shows that human annotators find text from BCR (51.7%) and BC (46.9%) to be significantly more7Published as a conference paper at ICLR 2020Table 4: For each treatment in the ablation study, we report mean std-dev across (human and au-tomated) fluency metrics. The topic (%) reports the fraction of samples matching the target topic,as evaluated by human annotators. Table S8 provides per-topic results. 
Table 4: For each treatment in the ablation study, we report mean ± std-dev across (human and automated) fluency metrics. The topic (%) reports the fraction of samples matching the target topic, as evaluated by human annotators. Table S8 provides per-topic results. Approaches BC and BCR demonstrate significant control over the topic of the generated text, while retaining similar diversity (Dist-1, Dist-2, Dist-3) scores and minimal degradation in Perplexity and Fluency evaluations vs the baseline LM (B). The gain from ranking and choosing from multiple samples BR over B is limited (4.7%). The gain in topic-accuracy from latent ($\tilde{H}_t$) manipulation (from B to BC) is significantly higher (35.8%). Perplexity is computed using the GPT LM (Radford et al., 2018a), which differs from the LM generating text (GPT-2). For CTRL and WD, since human evaluation is performed in comparison with BCR via A/B testing, we report the numbers for BCR as well from these comparisons, for the human evaluated metrics. Further, we consider one sample per prefix for CTRL, resulting in fewer samples and higher Dist-1, 2, 3 scores as a consequence. PPLM outperforms CTRL and WD on topic-relevance, while being comparable on fluency scores.

Method | Topic % (human, ↑) | Perplexity (↓) | Dist-1 (↑) | Dist-2 (↑) | Dist-3 (↑) | Fluency (human, ↑)
B    | 11.1 | 39.85 ± 35.9  | 0.37 | 0.79 | 0.93 | 3.60 ± 0.82
BR   | 15.8 | 38.39 ± 27.14 | 0.38 | 0.80 | 0.94 | 3.68 ± 0.77
BC   | 46.9 | 43.62 ± 26.8  | 0.36 | 0.78 | 0.92 | 3.39 ± 0.95
BCR  | 51.7 | 44.04 ± 25.38 | 0.36 | 0.80 | 0.94 | 3.52 ± 0.83
CTRL | 50.0 | 24.48 ± 11.98 | 0.40 | 0.84 | 0.93 | 3.63 ± 0.75
BCR  | 56.0 | –             | –    | –    | –    | 3.61 ± 0.69
WD   | 35.7 | 32.05 ± 19.07 | 0.29 | 0.72 | 0.89 | 3.48 ± 0.92
BCR  | 47.8 | –             | –    | –    | –    | 3.87 ± 0.71

Table 4 shows that human annotators find text from BCR (51.7%) and BC (46.9%) to be significantly more on topic than B (11.1%) and BR (15.8%). With only a slight degradation in fluency scores, passages generated with manipulated latents (BCR and BC) are significantly on topic, demonstrating the desired attribute control on this task. The Dist-1, Dist-2 and Dist-3 scores, which account for diversity of text across the generated passages, are similar across all four ablation approaches. Further, BCR slightly outperforms CTRL (51.7% vs. 50.0%), and significantly outperforms WD (35.7%); BC itself also outperforms WD. BCR, CTRL and WD all score similarly on the fluency metric.

We note that gradient-based latent updates have significantly greater influence on topic relevance (C with or without R) than reranking based on the score (R with or without C), showing that shifting meaning in latent space is more effective than shifting the output distribution directly through reweighting. The effectiveness of shifting latents is further corroborated by WD's relatively worse performance. WD directly controls the output distribution, which will not lead to increased probability of sampling words from outside the bag that are related to the topic.

Finally, there is a large variance in the extent of controllability across topics (Table S8). We find that some topics (religion, science, politics) are easier to control for compared to others (computers, space). Section S9 considers unusual or nonsensical combinations of prefixes and attributes (e.g. prefix "potato" and topic "religion"), and we find that even for these settings PPLM is able to successfully control for the desired attribute, often with hilarious twists!

4.3 DISCRIMINATOR ATTRIBUTE MODELS

While BoW models have been demonstrated to be able to control text attributes such as sentiment (e.g., Li et al. (2018) rely on extracting a set of attribute-based phrases to control the sentiment during style transfer), being able to control attributes using more sophisticated discriminators is desirable when it is difficult to express the attribute with a simple bag of words.
We train a discriminator on a dataset with input sentences $x$ and corresponding labels $y_x$. For an input $x$ of length $t$, we compute $o^x_{:t}$ and train $f$ on the mean ($\bar{o}^x_t$) of the embeddings across time. All discriminators in this work consist of a single layer classifier that predicts the target label from $\bar{o}^x_t$. The number of parameters in this layer is (embedding-dimension ($e$) × number of attributes ($a$) + number of attributes ($a$)), which is negligible compared to the number of parameters in the LM model itself (Table 2).
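A minimal sketch of such a discriminator follows, under the assumption that the LM's hidden embeddings for one input are available as a `(t, e)` tensor; the class name and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class AttributeDiscriminator(nn.Module):
    """Single-layer classifier f over the time-averaged LM embeddings."""
    def __init__(self, embed_dim, num_attributes):
        super().__init__()
        self.head = nn.Linear(embed_dim, num_attributes)  # e*a weights + a biases

    def forward(self, emb):              # emb: (t, e) hidden embeddings o_{:t}
        mean_emb = emb.mean(dim=0)       # the mean of the embeddings across time
        return self.head(mean_emb)       # logits over attribute classes

f = AttributeDiscriminator(embed_dim=768, num_attributes=2)
logits = f(torch.randn(12, 768))         # one length-12 input
```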
Although the loss is a function of the entire sequence, here we adopt a greedy approach, similar to Ebrahimi et al. (2018); Wallace et al. (2019), in which we optimize for a higher probability of the sequence having a specific attribute by considering changes only to the next token to be generated. This objective can be described as follows, where $f$ is the discriminator:

$$\log p(a|x) = \log f(o_{:t+1}, o_{t+2}) \qquad (5)$$

Note that $o_{t+2}$ is a function of $x_{t+1}$. Further, $x_{t+1} \sim \text{Softmax}(W\tilde{o}_{t+1})$, which depends on $\Delta H_t$. In the limit, minimizing the objective in Equation 5 corresponds to choosing $x_{t+1}$ that produces the optimal $o_{t+2}$ that maximizes $f(o_{:t+1}, o_{t+2})$. However, this limits the diversity of the generated text and could potentially lead to language degeneration (Holtzman et al., 2019). Alternatively, we focus on a softer optimization approach where we aim to shift the distribution $\tilde{p}_{t+1} = \text{Softmax}(W\tilde{o}_{t+1})$ towards one that in expectation has a higher likelihood of having the desired attribute $a$. Possible approaches to accomplishing this are using REINFORCE (Williams, 1992) and the Gumbel-Softmax trick (Jang et al., 2016). However, both of these would slow down convergence. Instead, as in Dai et al. (2019a), we use the distribution $\tilde{p}_{t+1}$ (instead of a hard sample $x_{t+1}$), and feed it forward to obtain a (biased) estimate of the next token's embedding and then update $\Delta H_t$.
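This soft forward pass can be sketched in a few lines; here `embedding_matrix`, `o_tilde`, and the dimensions are toy stand-ins for the LM's input embedding table and perturbed logits, not actual PPLM variables.

```python
import torch

vocab_size, embed_dim = 50, 16
embedding_matrix = torch.randn(vocab_size, embed_dim)  # LM input embeddings (toy)
o_tilde = torch.randn(vocab_size)                      # perturbed output scores (toy)
p_tilde = torch.softmax(o_tilde, dim=0)                # p~_{t+1}
# Expected embedding under p~_{t+1}: a biased estimate of the next token's
# embedding, fed forward in place of a hard sample so gradients can reach dH_t.
soft_embedding = p_tilde @ embedding_matrix
```

The design choice here is pragmatic: unlike REINFORCE or Gumbel-Softmax, the expected embedding keeps the whole computation deterministic and differentiable at the cost of a bias.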
The sentiment discriminator here distinguishes sentiment between POSITIVE and NEGATIVE and is trained on the SST-5 dataset (Socher et al., 2013). Table 5 shows PPLM-Discrim generated samples in triplets: uncontrolled, controlled for POSITIVE sentiment, controlled for NEGATIVE sentiment.

Table 5: Sentence samples in triplets, generated by {baseline GPT-2, PPLM-Discrim POSITIVE, PPLM-Discrim NEGATIVE}, conditioned on prefixes: The chicken & The country. Words related to the sentiment are highlighted (in soft red). Each triplet is generated from the same random seed.

[-] The chicken is now out on the grill. \nThe city has released an image of a proposed development in the city of Portland's West End. ...

[Positive] The chicken was delicious – wonderfully moist, perfectly delicious, superbly fresh – and perfectly cooked. The only thing to say is that the sauce was excellent, and I think that the broth really complemented all of the other flavors. The best part was the sauce ...

[Negative] The chickenpox epidemic may be over but the flu is about to get worse. The United States is facing one of the worst flu seasons on record and ...

[-] The country's new chief minister, A.J. Paik, is a member of a group of prominent conservative politicians who have criticized the Obama administration's efforts to ...

[Positive] The country's largest indoor painting event! \nCome celebrate with a dazzling display of stunning outdoor murals, a stunning display of art, and the world's best paint and art supplies from all over the world!

[Negative] The country's top prison system is forcing prisoners to use a trash dump, rather than a toilet, to flush their waste out, as the authorities fear the waste is more toxic and could cause cancer, an official at a major prison has revealed. ...

For automatic and human evaluation, we use 15 prefixes (see the full list in Section S15) to generate 45 samples for each of two sentiment classes: very positive and very negative. Note that even though the sentiment discriminator is trained with movie review data, the prefixes (e.g. "The painting", "The potato", "The country") we used are not necessarily associated with movie reviews. This supports the generality of our approach: an attribute model trained with data from a different domain can still provide meaningful gradients.

Table 6 shows evaluation results. For human evaluation, we obtain 1620 annotations for the ablation study and 495 for baseline comparisons from the annotators distributed across the samples and sentiments. Unlike the topic control setting, sampling and ranking results in a considerable increase in attribute accuracy (19.3% → 41.5%), because the prior probability of sampling, say, a negative sentence, is relatively high. BC results in a decrease in fluency when compared to B, while being significantly more consistent with the desired attribute (19.3% → 39.6%). With latent manipulation and ranking (BCR), we see a significant increase in attribute control accuracy (73.7%) while retaining fluency similar to B and BR. Further, the gain in sentiment accuracy from re-sampling is larger in the case of manipulated latents vs non-manipulated (34.1% increase from BC to BCR > 22.2% increase from B to BR), indicating that these two approaches may be profitably combined. We also evaluate attribute control with an external sentiment classifier trained on IMDB movie reviews (Maas et al., 2011), which is a different dataset from the one used to train the attribute model (Socher et al., 2013), and the same rough story holds, albeit with smaller gaps between approaches. We compare to baselines CTRL, GPT2-FT-RL, and WD. BCR performs comparably to CTRL (73.7% and 80.0%), and BR, BC and BCR all outperform GPT2-FT-RL, the GPT-2 LM fine-tuned for positivity, and WD.

Table 6: Evaluation of models/variants on the sentiment control task, with mean ± std-dev reported across fluency metrics. Sentiment accuracy reports the fraction of samples with an accurate target sentiment. Approach BCR provides significant control over sentiment while showing minimal degradation in fluency. See Table S9 for full results on individual sentiments. *GPT2-FT-RL is only evaluated for the positivity half of the task, as it is fine-tuned only for positivity (Ziegler et al., 2019). For human evaluation metrics, we compare the baselines CTRL, GPT2-FT-RL and WD with BCR and perform A/B style testing. We include both numbers for comparison.

Method | Sentiment Acc. (%, human) | Sentiment Acc. (%, external classifier) | Perplexity (↓) | Dist-1 (↑) | Dist-2 (↑) | Dist-3 (↑) | Fluency (human, ↑)
B           | 19.3 | 52.2 | 42.1 ± 33.14  | 0.37 | 0.75 | 0.86 | 3.54 ± 1.08
BR          | 41.5 | 62.2 | 44.6 ± 34.72  | 0.37 | 0.76 | 0.87 | 3.65 ± 1.07
BC          | 39.6 | 64.4 | 41.8 ± 34.87  | 0.33 | 0.70 | 0.86 | 2.79 ± 1.17
BCR         | 73.7 | 78.8 | 46.6 ± 40.24  | 0.36 | 0.77 | 0.91 | 3.29 ± 1.07
CTRL        | 76.7 | 96.6 | 37.4 ± 16.89  | 0.35 | 0.78 | 0.89 | 3.54 ± 0.77
BCR         | 70.0 | –    | –             | –    | –    | –    | 3.36 ± 0.82
GPT2-FT-RL* | 13.3 | 77.8 | 217.3 ± 176.4 | 0.54 | 0.91 | 0.94 | 3.31 ± 0.84
BCR         | 84.4 | –    | –             | –    | –    | –    | 3.68 ± 0.83
WD          | 18.9 | 52.2 | 31.7 ± 28.0   | 0.33 | 0.69 | 0.83 | 3.67 ± 0.89
BCR         | 61.1 | –    | –             | –    | –    | –    | 3.75 ± 0.66

4.4 LANGUAGE DETOXIFICATION

Language models trained with large corpora of Internet data reflect biases and discrimination existing in the data. A recent paper by Wallace et al. (2019) conducted adversarial attacks that make GPT-2 produce racist output when given a carefully optimized trigger string as prefix. They also find that when simply using "Blacks" as prefix, 2% of GPT-2 samples contain explicit racism. Other prefixes (e.g., "Asians" or "Jews") are mentioned but no percentage is reported. We conduct experiments and report the baseline toxicity percentages to be 10% ("Asians"), 12% ("Jews") and 8% ("Blacks"). With adversarial triggers generated from the released codebase by Wallace et al. (2019) the average toxicity percentage is 63.6%. Further details can be found in Section S13.

PPLMs can be easily adapted for language detoxification by plugging in a toxicity classifier as the attribute control model and updating latents with the negative gradient. We train a single layer classifier on the toxicity data from the Toxic Comment Classification Challenge (Jigsaw) and show that with a similar hyper-parameter setting as other PPLM-Discrim methods, it works well on both natural prompts and adversarial triggers. For natural prompts the percentages of toxicity are 6%, 4% and 10%, respectively, and for adversarial triggers the toxicity drastically dropped to 4.6% on average, with statistical significance. Details on the annotation procedure and the full table of percentages and p-values can be found in Table S23 and Section S13. Note that a model for detoxifying language can also potentially be maliciously used for generating toxic language, a topic we briefly discuss in Section S6.

4.5 CONTROLLED STORY WRITING

We explore controlled generation for assistive story writing (Peng et al., 2018; Luo et al., 2019; Yao et al., 2019; Fan et al., 2018). Using uncontrolled LMs for assistive art creation can be difficult. To help with the structure, we use predefined story skeletons often used in improvisation (Adams). We fill in the blank between these prefixes with a PPLM.
See examples in Table S20 and Table S21.

5 CONCLUSION

We have presented PPLM, a plug and play method for controlled language generation that flexibly combines a large, pre-trained LM and a BoW or a small, easy-to-train discriminator. In Section S6 we discuss the ethics of controlled LMs. PPLM achieves fine-grained control of attributes via a simple gradient-based sampling mechanism. Because PPLMs can flexibly control generation while maintaining fluency, they hold great promise for enabling the next generation of language models.

ACKNOWLEDGEMENTS

The authors are grateful to Bryan McCann for providing samples for the CTRL baseline, Joel Lehman for discussion regarding the ethical implications of this work, Jiale Zhi for help with the computational framework, Colan Chen for creating associated artwork for the blog, Avishek Joey Bose for helpful discussions, Julien Chaumond, Lysandre Debut, Thomas Wolf, and the Hugging Face team for co-producing the PPLM demo and helping integrate the code into their transformers repository, all the annotators at Uber, HKUST and Caltech for their labeling, and members of the Deep Collective research group for helpful discussion, ideas, and feedback on experiments.<|im_end|> <|im_start|>assistant ### Review Title Official Blind Review #1 ### Review Text The paper proposes a Plug and Play LM model for controlled natural language generation. Similar to the idea of Plug and Play Generative Networks for vision, the model plugs in a discriminator, which is either a bag-of-words model or a single layer classifier. The added simple discriminator is then coupled with a pre-trained generative language model such as GPT-2 to obtain a conditional probability for generating controllable text. The authors evaluate the proposed model using human evaluation studies and quantitative perplexity metrics, aiming at measuring the relevance and fluency of the generated text. Their experimental results show that the generated text is fluent and aligned with the desired attributes. The proposed method is simple and makes sense to me. The idea of how one can make good use of large, pre-trained generative language models is very neat here. However, I have two main concerns, as follows. 1. The main focuses of the generated text seem to be dramatically changed in an unpredictable way while tailoring the control attributes. In this sense, it is not clear to me how useful these kinds of text generation techniques are. For example, the first two rows in Table 3 contain two paragraphs with very different main ideas to be conveyed. Similarly for sentences in Table 1. It seems that those sentences talk about very different topics/things to me, although they may reflect the desired control attributes. Is there an automatic evaluation metric to objectively evaluate the change of the focuses/ideas of two pieces of text? 2. The model is a straightforward adaptation of the Plug and Play Generative Networks from the vision community. In short, the idea in the paper is simple and seems effective. On the other hand, the lack of a good evaluation metric makes me a bit uncertain about the contribution of the paper. I am willing to increase my evaluation score if I am convinced by other reviews and comments. ### Review Rating 3: Weak Reject ### Review Confidence <|im_end|> <|im_end|>
SJGCiw5gl
ICLR.cc/2017/conference
2017
Pruning Convolutional Neural Networks for Resource Efficient Inference
["Pavlo Molchanov", "Stephen Tyree", "Tero Karras", "Timo Aila", "Jan Kautz"]
We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation, a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters. We focus on transfer learning, where large pretrained networks are adapted to specialized tasks. The proposed criterion demonstrates superior performance compared to other criteria, e.g. the norm of kernel weights or feature map activation, for pruning large CNNs after adaptation to fine-grained classification tasks (Birds-200 and Flowers-102), relying only on first-order gradient information. We also show that pruning can lead to more than 10x theoretical reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier. Finally, we show results for the large-scale ImageNet dataset to emphasize the flexibility of our approach.
["Deep learning", "Transfer Learning"]
ABSTRACT

We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation, a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters. We focus on transfer learning, where large pretrained networks are adapted to specialized tasks. The proposed criterion demonstrates superior performance compared to other criteria, e.g. the norm of kernel weights or feature map activation, for pruning large CNNs after adaptation to fine-grained classification tasks (Birds-200 and Flowers-102), relying only on first-order gradient information. We also show that pruning can lead to more than 10x theoretical reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier. Finally, we show results for the large-scale ImageNet dataset to emphasize the flexibility of our approach.

1 INTRODUCTION

Convolutional neural networks (CNN) are used extensively in computer vision applications, including object classification and localization, pedestrian and car detection, and video classification. Many problems like these focus on specialized domains for which there are only small amounts of carefully curated training data. In these cases, accuracy may be improved by fine-tuning an existing deep network previously trained on a much larger labeled vision dataset, such as images from ImageNet (Russakovsky et al., 2015) or videos from Sports-1M (Karpathy et al., 2014). While transfer learning of this form supports state of the art accuracy, inference is expensive due to the time, power, and memory demanded by the heavyweight architecture of the fine-tuned network.

While modern deep CNNs are composed of a variety of layer types, runtime during prediction is dominated by the evaluation of convolutional layers. With the goal of speeding up inference, we prune entire feature maps so the resulting networks may be run efficiently even on embedded devices. We interleave greedy criteria-based pruning with fine-tuning by backpropagation, a computationally efficient procedure that maintains good generalization in the pruned network.

Neural network pruning was pioneered in the early development of neural networks (Reed, 1993). Optimal Brain Damage (LeCun et al., 1990) and Optimal Brain Surgeon (Hassibi & Stork, 1993) leverage a second-order Taylor expansion to select parameters for deletion, using pruning as regularization to improve training and generalization. This method requires computation of the Hessian matrix partially or completely, which adds memory and computation costs to standard fine-tuning.

In line with our work, Anwar et al. (2015) describe structured pruning in convolutional layers at the level of feature maps and kernels, as well as strided sparsity to prune with regularity within kernels. Pruning is accomplished by particle filtering wherein configurations are weighted by misclassification rate. The method demonstrates good results on small CNNs, but larger CNNs are not addressed.

Han et al. (2015) introduce a simpler approach by fine-tuning with a strong $\ell_2$ regularization term and dropping parameters with values below a predefined threshold. Such unstructured pruning is very effective for network compression, and this approach demonstrates good performance for intra-kernel pruning.
But compression may not translate directly to faster inference since modern hardware exploits regularities in computation for high throughput. So specialized hardware may be needed for efficient inference of a network with intra-kernel sparsity (Han et al., 2016). This approach also requires long fine-tuning times that may exceed the original network training by a factor of 3 or larger. Group sparsity based regularization of network parameters was proposed to penalize unimportant parameters (Wen et al., 2016; Zhou et al., 2016; Alvarez & Salzmann, 2016; Lebedev & Lempitsky, 2016). Regularization-based pruning techniques require per-layer sensitivity analysis which adds extra computations. In contrast, our approach relies on global rescaling of criteria for all layers and does not require sensitivity estimation. Moreover, our approach is faster as we directly prune unimportant parameters instead of waiting for their values to be made sufficiently small by optimization under regularization.

Other approaches include combining parameters with correlated weights (Srinivas & Babu, 2015), reducing precision (Gupta et al., 2015; Rastegari et al., 2016) or tensor decomposition (Kim et al., 2015). These approaches usually require a separate training procedure or significant fine-tuning, but potentially may be combined with our method for additional speedups.

2 METHOD

Figure 1: Network pruning as a backward filter. (The flowchart alternates: evaluate the importance of neurons, remove the least important neuron, fine-tune, then either continue or stop pruning.)

The proposed method for pruning consists of the following steps: 1) Fine-tune the network until convergence on the target task; 2) Alternate iterations of pruning and further fine-tuning; 3) Stop pruning after reaching the target trade-off between accuracy and pruning objective, e.g. floating point operations (FLOPs) or memory utilization.

The procedure is simple, but its success hinges on employing the right pruning criterion. In this section, we introduce several efficient pruning criteria and related technical considerations.

Consider a set of training examples $\mathcal{D} = \{\mathcal{X} = \{x_0, x_1, \ldots, x_N\}, \mathcal{Y} = \{y_0, y_1, \ldots, y_N\}\}$, where $x$ and $y$ represent an input and a target output, respectively. The network's parameters¹ $\mathcal{W} = \{(w^1_1, b^1_1), (w^2_1, b^2_1), \ldots, (w^{C_\ell}_L, b^{C_\ell}_L)\}$ are optimized to minimize a cost value $\mathcal{C}(\mathcal{D}|\mathcal{W})$. The most common choice for a cost function $\mathcal{C}(\cdot)$ is a negative log-likelihood function. A cost function is selected independently of pruning and depends only on the task to be solved by the original network. In the case of transfer learning, we adapt a large network initialized with parameters $\mathcal{W}_0$ pretrained on a related but distinct dataset.

During pruning, we refine a subset of parameters which preserves the accuracy of the adapted network, $\mathcal{C}(\mathcal{D}|\mathcal{W}') \approx \mathcal{C}(\mathcal{D}|\mathcal{W})$. This corresponds to a combinatorial optimization:

$$\min_{\mathcal{W}'} \left| \mathcal{C}(\mathcal{D}|\mathcal{W}') - \mathcal{C}(\mathcal{D}|\mathcal{W}) \right| \quad \text{s.t.} \quad \|\mathcal{W}'\|_0 \le B, \qquad (1)$$

where the $\ell_0$ norm in $\|\mathcal{W}'\|_0$ bounds the number of non-zero parameters $B$ in $\mathcal{W}'$. Intuitively, if $\mathcal{W}' = \mathcal{W}$ we reach the global minimum of the error function, however $\|\mathcal{W}'\|_0$ will also have its maximum.

Finding a good subset of parameters while maintaining a cost value as close as possible to the original is a combinatorial problem. It would require $2^{|\mathcal{W}|}$ evaluations of the cost function for a selected subset of data. For current networks it would be impossible to compute: for example, VGG-16 has $|\mathcal{W}| = 4224$ convolutional feature maps. While it is impossible to solve this optimization exactly for networks of any reasonable size, in this work we investigate a class of greedy methods. Starting with a full set of parameters $\mathcal{W}$, we iteratively identify and remove the least important parameters, as illustrated in Figure 1. By removing parameters at each iteration, we ensure the eventual satisfaction of the $\ell_0$ bound on $\mathcal{W}'$.

¹ A "parameter" $(w, b) \in \mathcal{W}$ might represent an individual weight, a convolutional kernel, or the entire set of kernels that compute a feature map; our experiments operate at the level of feature maps.
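The greedy loop can be sketched abstractly as below. The helpers and the toy saliency table are hypothetical stand-ins for a real network, criterion, and fine-tuning routine; in the actual method, saliency is re-evaluated after each fine-tuning pass rather than fixed up front.

```python
def greedy_prune(feature_maps, saliency, finetune, budget):
    """Iteratively drop the least-salient feature map until `budget` remain."""
    maps = set(feature_maps)
    while len(maps) > budget:
        least = min(maps, key=saliency)   # the least important parameter
        maps.discard(least)               # prune it ...
        finetune(maps)                    # ... then fine-tune briefly
    return maps

# Toy usage: six maps with fixed saliency scores and a no-op fine-tune step.
scores = {0: 0.9, 1: 0.1, 2: 0.5, 3: 0.05, 4: 0.7, 5: 0.3}
kept = greedy_prune(scores, scores.get, lambda maps: None, budget=3)
print(sorted(kept))  # -> [0, 2, 4]
```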
Since we focus our analysis on pruning feature maps from convolutional layers, let us denote a set of image feature maps by $z_\ell \in \mathbb{R}^{H_\ell \times W_\ell \times C_\ell}$ with dimensionality $H_\ell \times W_\ell$ and $C_\ell$ individual maps (or channels).² The feature maps can either be the input to the network, $z_0$, or the output from a convolutional layer, $z_\ell$ with $\ell \in [1, 2, \ldots, L]$. Individual feature maps are denoted $z_\ell^{(k)}$ for $k \in [1, 2, \ldots, C_\ell]$. A convolutional layer $\ell$ applies the convolution operation ($*$) to a set of input feature maps $z_{\ell-1}$ with kernels parameterized by $w_\ell^{(k)} \in \mathbb{R}^{C_{\ell-1} \times p \times p}$:

$$z_\ell^{(k)} = g_\ell^{(k)} \, \mathcal{R}\left(z_{\ell-1} * w_\ell^{(k)} + b_\ell^{(k)}\right), \qquad (2)$$

where $z_\ell^{(k)} \in \mathbb{R}^{H_\ell \times W_\ell}$ is the result of convolving each of $C_{\ell-1}$ kernels of size $p \times p$ with its respective input feature map and adding bias $b_\ell^{(k)}$. We introduce a pruning gate $g_l \in \{0, 1\}^{C_l}$, an external switch which determines if a particular feature map is included or pruned during feed-forward propagation, such that when $g$ is vectorized: $\mathcal{W}' = g\mathcal{W}$.

2.1 ORACLE PRUNING

Minimizing the difference in accuracy between the full and pruned models depends on the criterion for identifying the "least important" parameters, called saliency, at each step. The best criterion would be an exact empirical evaluation of each parameter, which we denote the oracle criterion, accomplished by ablating each non-zero parameter $w \in \mathcal{W}'$ in turn and recording the cost's difference.

We distinguish two ways of using this oracle estimation of importance: 1) oracle-loss quantifies importance as the signed change in loss, $\mathcal{C}(\mathcal{D}|\mathcal{W}') - \mathcal{C}(\mathcal{D}|\mathcal{W})$, and 2) oracle-abs adopts the absolute difference, $|\mathcal{C}(\mathcal{D}|\mathcal{W}') - \mathcal{C}(\mathcal{D}|\mathcal{W})|$. While both discourage pruning which increases the loss, the oracle-loss version encourages pruning which may decrease the loss, while oracle-abs penalizes any pruning in proportion to its change in loss, regardless of the direction of change.

While the oracle is optimal for this greedy procedure, it is prohibitively costly to compute, requiring $\|\mathcal{W}'\|_0$ evaluations on a training dataset, one evaluation for each remaining non-zero parameter. Since estimation of parameter importance is key to both the accuracy and the efficiency of this pruning approach, we propose and evaluate several criteria in terms of performance and estimation cost.

2.2 CRITERIA FOR PRUNING

There are many heuristic criteria which are much more computationally efficient than the oracle. For the specific case of evaluating the importance of a feature map (and implicitly the set of convolutional kernels from which it is computed), reasonable criteria include: the combined $\ell_2$-norm of the kernel weights, the mean, standard deviation or percentage of the feature map's activation, and mutual information between activations and predictions. We describe these criteria in the following paragraphs and propose a new criterion which is based on the Taylor expansion.

² While our notation is at times specific to 2D convolutions, the methods are applicable to 3D convolutions, as well as fully connected layers.
Minimum weight. Pruning by magnitude of kernel weights is perhaps the simplest possible criterion, and it does not require any additional computation during the fine-tuning process. In case of pruning according to the norm of a set of weights, the criterion is evaluated as $\Theta_{MW} : \mathbb{R}^{C_{\ell-1} \times p \times p} \to \mathbb{R}$, with $\Theta_{MW}(w) = \frac{1}{|w|}\sum_i w_i^2$, where $|w|$ is the dimensionality of the set of weights after vectorization. The motivation to apply this type of pruning is that a convolutional kernel with low $\ell_2$ norm detects less important features than those with a high norm. This can be aided during training by applying $\ell_1$ or $\ell_2$ regularization, which will push unimportant kernels to have smaller values.

Activation. One of the reasons for the popularity of the ReLU activation is the sparsity in activation that is induced, allowing convolutional layers to act as feature detectors. Therefore it is reasonable to assume that if an activation value (an output feature map) is small then this feature detector is not important for the prediction task at hand. We may evaluate this by mean activation, $\Theta_{MA} : \mathbb{R}^{H_l \times W_\ell \times C_\ell} \to \mathbb{R}$, with $\Theta_{MA}(a) = \frac{1}{|a|}\sum_i a_i$ for activation $a = z_l^{(k)}$, or by the standard deviation of the activation, $\Theta_{MA\_std}(a) = \sqrt{\frac{1}{|a|}\sum_i (a_i - \mu_a)^2}$.

Mutual information. Mutual information (MI) is a measure of how much information is present in one variable about another variable. We apply MI as a criterion for pruning, $\Theta_{MI} : \mathbb{R}^{H_l \times W_\ell \times C_\ell} \to \mathbb{R}$, with $\Theta_{MI}(a) = MI(a, y)$, where $y$ is the target of the neural network. MI is defined for continuous variables, so to simplify computation, we exchange it with information gain (IG), which is defined for quantized variables: $IG(y|x) = H(x) + H(y) - H(x, y)$, where $H(x)$ is the entropy of variable $x$. We accumulate statistics on activations and ground truth for a number of updates, then quantize the values and compute IG.
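For concreteness, here are minimal NumPy sketches of the weight and activation criteria above. The function names are ours, not from the paper's code, and the kernel and feature-map shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def theta_mw(w):            # minimum weight: mean squared kernel weight
    return np.mean(w ** 2)

def theta_ma_mean(a):       # mean activation
    return np.mean(a)

def theta_ma_std(a):        # standard deviation of activation
    return np.std(a)

w = rng.standard_normal((64, 3, 3)).ravel()        # one kernel set, vectorized
a = np.maximum(rng.standard_normal((14, 14)), 0)   # a ReLU feature map
print(theta_mw(w), theta_ma_mean(a), theta_ma_std(a))
```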
Taylor expansion. We phrase pruning as an optimization problem, trying to find $\mathcal{W}'$ with a bounded number of non-zero elements that minimize $|\Delta \mathcal{C}(h_i)| = |\mathcal{C}(\mathcal{D}|\mathcal{W}') - \mathcal{C}(\mathcal{D}|\mathcal{W})|$. With this approach based on the Taylor expansion, we directly approximate the change in the loss function from removing a particular parameter. Let $h_i$ be the output produced from parameter $i$. In the case of feature maps, $h = \{z_0^{(1)}, z_0^{(2)}, \ldots, z_L^{(C_\ell)}\}$. For notational convenience, we consider the cost function equally dependent on parameters and outputs computed from parameters: $\mathcal{C}(\mathcal{D}|h_i) = \mathcal{C}(\mathcal{D}|(w, b)_i)$. Assuming independence of parameters, we have:

$$|\Delta \mathcal{C}(h_i)| = |\mathcal{C}(\mathcal{D}, h_i = 0) - \mathcal{C}(\mathcal{D}, h_i)|, \qquad (3)$$

where $\mathcal{C}(\mathcal{D}, h_i = 0)$ is the cost value if output $h_i$ is pruned, while $\mathcal{C}(\mathcal{D}, h_i)$ is the cost if it is not pruned. While parameters are in reality inter-dependent, we already make an independence assumption at each gradient step during training.

To approximate $\Delta \mathcal{C}(h_i)$, we use the first-degree Taylor polynomial. For a function $f(x)$, the Taylor expansion at point $x = a$ is

$$f(x) = \sum_{p=0}^{P} \frac{f^{(p)}(a)}{p!}(x - a)^p + R_p(x), \qquad (4)$$

where $f^{(p)}(a)$ is the $p$-th derivative of $f$ evaluated at point $a$, and $R_p(x)$ is the $p$-th order remainder. Approximating $\mathcal{C}(\mathcal{D}, h_i = 0)$ with a first-order Taylor polynomial near $h_i = 0$, we have:

$$\mathcal{C}(\mathcal{D}, h_i = 0) = \mathcal{C}(\mathcal{D}, h_i) - \frac{\partial \mathcal{C}}{\partial h_i} h_i + R_1(h_i = 0). \qquad (5)$$

The remainder $R_1(h_i = 0)$ can be calculated through the Lagrange form:

$$R_1(h_i = 0) = \frac{\partial^2 \mathcal{C}}{\partial h_i^2}\bigg|_{h_i = \xi} \frac{h_i^2}{2}, \qquad (6)$$

where $\xi$ is a real number between $0$ and $h_i$. However, we neglect this first-order remainder, largely due to the significant calculation required, but also in part because the widely-used ReLU activation function encourages a smaller second order term. Finally, by substituting Eq. (5) into Eq. (3) and ignoring the remainder, we have $\Theta_{TE} : \mathbb{R}^{H_l \times W_l \times C_l} \to \mathbb{R}^{+}$, with

$$\Theta_{TE}(h_i) = |\Delta \mathcal{C}(h_i)| = \left| \mathcal{C}(\mathcal{D}, h_i) - \frac{\partial \mathcal{C}}{\partial h_i} h_i - \mathcal{C}(\mathcal{D}, h_i) \right| = \left| \frac{\partial \mathcal{C}}{\partial h_i} h_i \right|. \qquad (7)$$

Intuitively, this criterion prunes parameters that have an almost flat gradient of the cost function w.r.t. feature map $h_i$. This approach requires accumulation of the product of the activation and the gradient of the cost function w.r.t. the activation, which is easily computed from the same computations for back-propagation. $\Theta_{TE}$ is computed for a multi-variate output, such as a feature map, by

$$\Theta_{TE}(z_l^{(k)}) = \left| \frac{1}{M} \sum_m \frac{\partial \mathcal{C}}{\partial z_{l,m}^{(k)}} z_{l,m}^{(k)} \right|, \qquad (8)$$

where $M$ is the length of the vectorized feature map. For a minibatch with $T > 1$ examples, the criterion is computed for each example separately and averaged over $T$.

Independently of our work, Figurnov et al. (2016) came up with a similar metric based on the Taylor expansion, called impact, to evaluate the importance of spatial cells in a convolutional layer. It shows that the same metric can be applied to evaluate the importance of different groups of parameters.

Relation to Optimal Brain Damage. The Taylor criterion proposed above relies on approximating the change in loss caused by removing a feature map. The core idea is the same as in Optimal Brain Damage (OBD) (LeCun et al., 1990). Here we consider the differences more carefully.

The primary difference is the treatment of the first-order term of the Taylor expansion, in our notation $y = \frac{\partial \mathcal{C}}{\partial h} h$ for cost function $\mathcal{C}$ and hidden layer activation $h$. After sufficient training epochs, the gradient term tends to zero: $\frac{\partial \mathcal{C}}{\partial h} \to 0$ and $E(y) = 0$. At face value $y$ offers little useful information, hence OBD regards the term as zero and focuses on the second-order term.

However, the variance of $y$ is non-zero and correlates with the stability of the local function w.r.t. activation $h$. By considering the absolute change in the cost³ induced by pruning (as in Eq. 3), we use the absolute value of the first-order term, $|y|$. Under the assumption that samples come from an independent and identical distribution, $E(|y|) = \sigma\sqrt{2/\pi}$, where $\sigma$ is the standard deviation of $y$; this is the expected value of the half-normal distribution. So, while $y$ tends to zero, the expectation of $|y|$ is proportional to the variance of $y$, a value which is empirically more informative as a pruning criterion. As an additional benefit, we avoid the computation of the second-order Taylor expansion term, or its simplification, the diagonal of the Hessian, as required in OBD.

We found it important to compare the proposed Taylor criterion to OBD. As described in the original papers (LeCun et al., 1990; 1998), OBD can be efficiently implemented similarly to the standard back propagation algorithm, doubling backward propagation time and memory usage when used together with standard fine-tuning. Efficient implementation of the original OBD algorithm might require significant changes to frameworks based on automatic differentiation, like Theano, to efficiently compute only the diagonal of the Hessian instead of the full matrix. Several researchers tried to tackle this problem with approximation techniques (Martens, 2010; Martens et al., 2012). In our implementation, we use an efficient way of computing the Hessian-vector product (Pearlmutter, 1994) and the matrix diagonal approximation proposed by Bekas et al. (2007); please refer to the appendix for more details. With our current implementation, OBD is 30 times slower than the Taylor technique for saliency estimation, and 3 times slower for iterative pruning, however with a different implementation it could be only 50% slower, as mentioned in the original paper.

³ OBD approximates the signed difference in loss, while our method approximates the absolute difference in loss. We find in our results that pruning based on absolute difference yields better accuracy.
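A small sketch of Equation 8 follows, assuming the activations and their cost gradients for one feature map have been collected over a minibatch (as they are anyway during ordinary backpropagation); shapes and names are illustrative.

```python
import torch

def taylor_criterion(acts, grads):
    """Equation 8: |mean over the map of (dC/dz * z)|, averaged over T examples."""
    per_example = (acts * grads).flatten(start_dim=1).mean(dim=1).abs()
    return per_example.mean()

acts = torch.relu(torch.randn(32, 14, 14))   # T=32 examples of one feature map
grads = torch.randn(32, 14, 14)              # cost gradients, free from backprop
print(taylor_criterion(acts, grads))
```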
Average Percentage of Zeros (APoZ). Hu et al. (2016) proposed to explore sparsity in activations for network pruning. The ReLU activation function imposes sparsity during inference, and the average percentage of positive activations at the output can determine the importance of a neuron. Intuitively, it is a good criterion, however feature maps at the first layers have similar APoZ regardless of the network's target as they learn to be Gabor-like filters. We will use APoZ to estimate the saliency of feature maps.

2.3 NORMALIZATION

Some criteria return "raw" values, whose scale varies with the depth of the parameter's layer in the network. A simple layer-wise $\ell_2$-normalization can achieve adequate rescaling across layers:

$$\hat{\Theta}\left(z_l^{(k)}\right) = \frac{\Theta\left(z_l^{(k)}\right)}{\sqrt{\sum_j \Theta\left(z_l^{(j)}\right)^2}}.$$

2.4 FLOPS REGULARIZED PRUNING

One of the main reasons to apply pruning is to reduce the number of operations in the network. Feature maps from different layers require different amounts of computation due to the number and sizes of input feature maps and convolution kernels. To take this into account we introduce FLOPs regularization:

$$\Theta\left(z_l^{(k)}\right) = \Theta\left(z_l^{(k)}\right) - \lambda \, \Theta_l^{flops}, \qquad (9)$$

where $\lambda$ controls the amount of regularization. For our experiments, we use $\lambda = 10^{-3}$. $\Theta^{flops}$ is computed under the assumption that convolution is implemented as a sliding window (see Appendix). Other regularization conditions may be applied, e.g. storage size, kernel sizes, or memory footprint.
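A sketch of both adjustments, assuming raw per-feature-map scores for a single layer; the function names and the example FLOPs value are illustrative, not from the paper's code.

```python
import numpy as np

def l2_normalize(theta_layer):
    """Layer-wise L2 rescaling of raw criterion values (Section 2.3)."""
    return theta_layer / np.sqrt(np.sum(theta_layer ** 2))

def flops_regularize(theta_layer, layer_flops, lam=1e-3):
    """Equation 9: penalize maps in expensive layers so they are pruned sooner."""
    return theta_layer - lam * layer_flops

theta = np.abs(np.random.randn(4))   # raw scores for one layer's feature maps
print(flops_regularize(l2_normalize(theta), layer_flops=3.7))
```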
Figure 2: Global statistics of oracle ranking, shown by layer for Birds-200 transfer learning.

Figure 3: Pruning without fine-tuning using oracle ranking for Birds-200 transfer learning.

3 RESULTS

We empirically study the pruning criteria and procedure detailed in the previous section for a variety of problems. We focus many experiments on transfer learning problems, a setting where pruning seems to excel. We also present results for pruning large networks on their original tasks for more direct comparison with the existing pruning literature. Experiments are performed within Theano (Theano Development Team, 2016). Training and pruning are performed on the respective training sets for each problem, while results are reported on appropriate holdout sets, unless otherwise indicated. For all experiments we prune a single feature map at every pruning iteration, allowing fine-tuning and re-evaluation of the criterion to account for dependency between parameters.

3.1 CHARACTERIZING THE ORACLE RANKING

We begin by explicitly computing the oracle for a single pruning iteration of a visual transfer learning problem. We fine-tune the VGG-16 network (Simonyan & Zisserman, 2014) for classification of bird species using the Caltech-UCSD Birds 200-2011 dataset (Wah et al., 2011). The dataset consists of nearly 6000 training images and 5700 test images, covering 200 species. We fine-tune VGG-16 for 60 epochs with learning rate 0.0001 to achieve a test accuracy of 72.2% using uncropped images.

To compute the oracle, we evaluate the change in loss caused by removing each individual feature map from the fine-tuned VGG-16 network. (See Appendix A.3 for additional analysis.) We rank feature maps by their contributions to the loss, where rank 1 indicates the most important feature map (removing it results in the highest increase in loss) and rank 4224 indicates the least important. Statistics of global ranks are shown in Fig. 2 grouped by convolutional layer. We observe: (1) Median global importance tends to decrease with depth. (2) Layers with max-pooling tend to be more important than those without. (VGG-16 has pooling after layers 2, 4, 7, 10, and 13.) However, (3) maximum and minimum ranks show that every layer has some feature maps that are globally important and others that are globally less important. Taken together with the results of subsequent experiments, we opt for encouraging a balanced pruning that distributes selection across all layers.

Next, we iteratively prune the network using the pre-computed oracle ranking. In this experiment, we do not update the parameters of the network or the oracle ranking between iterations. Training accuracy is illustrated in Fig. 3 over many pruning iterations. Surprisingly, pruning by smallest absolute change in loss (Oracle-abs) yields higher accuracy than pruning by the net effect on loss (Oracle-loss). Even though the oracle indicates that removing some feature maps individually may decrease loss, instability accumulates due to the large absolute changes that are induced. These results support pruning by absolute difference in cost, as constructed in Eq. 1.

3.2 EVALUATING PROPOSED CRITERIA VERSUS THE ORACLE

To evaluate computationally efficient criteria as substitutes for the oracle, we compute Spearman's rank correlation, an estimate of how well two predictors provide monotonically related outputs, even if their relationship is not linear. Given the difference between oracle⁴ and criterion ranks $d_i = \text{rank}(\Theta_{oracle}(i)) - \text{rank}(\Theta_{criterion}(i))$ for each parameter $i$, the rank correlation is computed:

$$S = 1 - \frac{6}{N(N^2 - 1)} \sum_{i=1}^{N} d_i^2, \qquad (10)$$

where $N$ is the number of parameters (and the highest rank). This correlation coefficient takes values in $[-1, 1]$, where $-1$ implies full negative correlation, $0$ no correlation, and $1$ full positive correlation.

We show Spearman's correlation in Table 1 to compare the oracle-abs ranking to rankings by different criteria on a set of networks/datasets, some of which are introduced later. Data-dependent criteria (all except weight magnitude) are computed on training data during the fine-tuning before or between pruning iterations. As a sanity check, we evaluate random ranking and observe 0.0 correlation across all layers. "Per layer" analysis shows ranking within each convolutional layer, while "All layers" describes ranking across layers. While several criteria do not scale well across layers with raw values, a layer-wise $\ell_2$-normalization significantly improves performance. The Taylor criterion has the highest correlation among the criteria, both within layers and across layers (with $\ell_2$ normalization). OBD shows the best correlation across layers when no normalization is used; it also shows the best results for correlation on the ImageNet dataset. (See Appendix A.2 for further analysis.)

Table 1: Spearman's rank correlation of criteria vs. oracle for convolutional feature maps of VGG-16 and AlexNet fine-tuned on Birds-200 and Flowers-102 datasets, and AlexNet trained on ImageNet.

AlexNet / Flowers-102
             | Weight | Act. Mean | Act. S.d. | APoZ | OBD  | Taylor
Per layer    | 0.17   | 0.65      | 0.67      | 0.54 | 0.64 | 0.77
All layers   | 0.28   | 0.51      | 0.53      | 0.41 | 0.68 | 0.37
(w/ ℓ2-norm) | 0.13   | 0.63      | 0.61      | 0.60 | –    | 0.75

VGG-16 / Birds-200
             | Weight | Act. Mean | Act. S.d. | APoZ | OBD  | Taylor | Mutual Info.
Per layer    | 0.27   | 0.56      | 0.57      | 0.35 | 0.59 | 0.73   | 0.28
All layers   | 0.34   | 0.35      | 0.30      | 0.43 | 0.65 | 0.14   | 0.35
(w/ ℓ2-norm) | 0.33   | 0.64      | 0.66      | 0.51 | –    | 0.73   | 0.47

AlexNet / Birds-200
             | Weight | Act. Mean | Act. S.d. | APoZ | OBD  | Taylor
Per layer    | 0.36   | 0.57      | 0.65      | 0.42 | 0.54 | 0.81
All layers   | 0.32   | 0.37      | 0.51      | 0.28 | 0.61 | 0.37
(w/ ℓ2-norm) | 0.23   | 0.54      | 0.57      | 0.49 | –    | 0.78

VGG-16 / Flowers-102
             | Weight | Act. Mean | Act. S.d. | APoZ | OBD  | Taylor
Per layer    | 0.19   | 0.51      | 0.47      | 0.36 | 0.21 | 0.6
All layers   | 0.35   | 0.53      | 0.45      | 0.61 | 0.28 | 0.02
(w/ ℓ2-norm) | 0.28   | 0.66      | 0.65      | 0.61 | –    | 0.7

AlexNet / ImageNet
             | Weight | Act. Mean | Act. S.d. | APoZ  | OBD  | Taylor
Per layer    | 0.57   | 0.09      | 0.19      | −0.06 | 0.58 | 0.58
All layers   | 0.67   | 0.00      | 0.13      | −0.08 | 0.72 | 0.11
(w/ ℓ2-norm) | 0.44   | 0.10      | 0.19      | 0.19  | –    | 0.55

Figure 4: Pruning of feature maps in VGG-16 fine-tuned on the Birds-200 dataset.

⁴ We use Oracle-abs because of better performance in a previous experiment.
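Equation 10 can be computed directly and cross-checked against SciPy's implementation; the toy rank vectors below are illustrative, and ranks are assumed to be permutations of 1..N as in the text.

```python
import numpy as np
from scipy.stats import spearmanr

oracle_rank = np.array([1, 2, 3, 4, 5])
criterion_rank = np.array([2, 1, 3, 5, 4])
d = oracle_rank - criterion_rank
N = len(d)
S = 1 - 6 * np.sum(d ** 2) / (N * (N ** 2 - 1))
print(S, spearmanr(oracle_rank, criterion_rank).correlation)  # both print 0.8
```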
3.3 PRUNING FINE-TUNED IMAGENET NETWORKS

We now evaluate the full iterative pruning procedure on two transfer learning problems. We focus on reducing the number of convolutional feature maps and the total estimated floating point operations (FLOPs). Fine-grained recognition is difficult for relatively small datasets without relying on transfer learning. Branson et al. (2014) show that training a CNN from scratch on the Birds-200 dataset achieves a test accuracy of only 10.9%. We compare results to training a randomly initialized CNN with half the number of parameters per layer, denoted "from scratch".

Fig. 4 shows pruning of VGG-16 after fine-tuning on the Birds-200 dataset (as described previously). At each pruning iteration, we remove a single feature map and then perform 30 minibatch SGD updates with batch-size 32, momentum 0.9, learning rate $10^{-4}$, and weight decay $10^{-4}$. The figure depicts accuracy relative to the pruning rate (left) and estimated GFLOPs (right). The Taylor criterion shows the highest accuracy for nearly the entire range of pruning ratios, and with FLOPs regularization demonstrates the best performance relative to the number of operations. OBD shows slightly worse performance of pruning in terms of parameters, however significantly worse in terms of FLOPs.

Figure 5: Pruning of feature maps in AlexNet fine-tuned on Flowers-102.

In Fig. 5, we show pruning of the CaffeNet implementation of AlexNet (Krizhevsky et al., 2012) after adapting to the Oxford Flowers 102 dataset (Nilsback & Zisserman, 2008), with 2040 training and 6129 test images from 102 species of flowers. Criteria correlation with oracle-abs is summarized in Table 1. We initially fine-tune the network for 20 epochs using a learning rate of 0.001, achieving a final test accuracy of 80.1%. Then pruning proceeds as previously described for Birds-200, except with only 10 mini-batch updates between pruning iterations. We observe the superior performance of the Taylor and OBD criteria in both number of parameters and GFLOPs.

We observed that the Taylor criterion shows the best performance, closely followed by OBD, which has a slightly lower Spearman's rank correlation coefficient. Implementing OBD takes more effort because of the computation of the diagonal of the Hessian, and it is 50% to 300% slower than the Taylor criterion, which relies on first-order gradients only.

Fig. 6 shows pruning with the Taylor technique and a varying number of fine-tuning updates between pruning iterations. Increasing the number of updates results in higher accuracy, but at the cost of additional runtime of the pruning procedure.

During pruning we observe a small drop in accuracy. One of the reasons is fine-tuning between pruning iterations. Accuracy of the initial network can be improved with longer fine-tuning and a search for better optimization parameters. For example, accuracy of the unpruned VGG-16 network on Birds-200 goes up to 75% after an extra 128k updates, and AlexNet on Flowers-102 goes up to 82.9% after 130k updates.
It should be noted that with further fine-tuning of pruned networks we can achieve higher accuracy as well, therefore the one-to-one comparison of accuracies is rough.

3.4 PRUNING A RECURRENT 3D-CNN NETWORK FOR HAND GESTURE RECOGNITION

Molchanov et al. (2016) learn to recognize 25 dynamic hand gestures in streaming video with a large recurrent neural network. The network is constructed by adding recurrent connections to a 3D-CNN pretrained on the Sports-1M video dataset (Karpathy et al., 2014) and fine-tuning on a gesture dataset. The full network achieves an accuracy of 80.7% when trained on the depth modality, but a single inference requires an estimated 37.8 GFLOPs, too much for deployment on an embedded GPU. After several iterations of pruning with the Taylor criterion with learning rate 0.0003, momentum 0.9, and FLOPs regularization $\lambda = 10^{-3}$, we reduce inference to 3.0 GFLOPs, as shown in Fig. 7. While pruning increases classification error by nearly 6%, additional fine-tuning restores much of the lost accuracy, yielding a final pruned network with a 12.6x reduction in GFLOPs and only a 2.5% loss in accuracy.

Figure 6: Varying the number of minibatch updates between pruning iterations with AlexNet/Flowers-102 and the Taylor criterion.

Figure 7: Pruning of a recurrent 3D-CNN for dynamic hand gesture recognition (Molchanov et al., 2016).

Figure 8: Pruning of AlexNet on ImageNet with varying number of updates between pruning iterations.

3.5 PRUNING NETWORKS FOR IMAGENET

Figure 9: Pruning of the VGG-16 network on ImageNet, with additional following fine-tuning at 11.5 and 8 GFLOPs.

We also test our pruning scheme on the large-scale ImageNet classification task. In the first experiment, we begin with a trained CaffeNet implementation of AlexNet with 79.2% top-5 validation accuracy. Between pruning iterations, we fine-tune with learning rate $10^{-4}$, momentum 0.9, weight decay $10^{-4}$, batch size 32, and drop-out 50%. Using a subset of 5000 training images, we compute oracle-abs and Spearman's rank correlation with the criteria, as shown in Table 1. Pruning traces are illustrated in Fig. 8. We observe: 1) Taylor performs better than random or minimum weight pruning when 100 updates are used between pruning iterations. When results are displayed w.r.t. FLOPs, the difference with random pruning is only 0%–4%, but the difference is higher, 1%–10%, when plotted with the number of feature maps pruned.
2) Increasing the number of updates from 100 to 1000 improves performance of pruning significantly for both the Taylor criterion and random pruning.

Table 2: Actual speed up of networks pruned by the Taylor criterion for various hardware setups. All measurements were performed with PyTorch with cuDNN v5.1.0, except R3DCNN, which was implemented in C++ with cuDNN v4.0.4. Results for the ImageNet dataset are reported as top-5 accuracy on the validation set. Results on AlexNet / Flowers-102 are reported for pruning with 1000 updates between iterations and no fine-tuning after pruning.

AlexNet / Flowers-102, 1.46 GFLOPs (unpruned) → 41% feature maps, 0.4 GFLOPs → 19.5% feature maps, 0.2 GFLOPs:
- CPU: Intel Core i7-5930K, batch 16: 80.1%, 226.4 ms → 79.8% (−0.3%), 121.4 ms (1.9x) → 74.1% (−6.0%), 87.0 ms (2.6x)
- GPU: GeForce GTX TITAN X (Pascal), batch 16: 4.8 ms → 2.4 ms (2.0x) → 1.9 ms (2.5x)
- GPU: GeForce GTX TITAN X (Pascal), batch 512: 88.3 ms → 36.6 ms (2.4x) → 27.4 ms (3.2x)
- GPU: NVIDIA Jetson TX1, batch 32: 169.2 ms → 73.6 ms (2.3x) → 58.6 ms (2.9x)

VGG-16 / ImageNet, 30.96 GFLOPs (unpruned) → 66% feature maps, 11.5 GFLOPs → 52% feature maps, 8.0 GFLOPs:
- CPU: Intel Core i7-5930K, batch 16: 89.3%, 2564.7 ms → 87.0% (−2.3%), 1483.3 ms (1.7x) → 84.5% (−4.8%), 1218.4 ms (2.1x)
- GPU: GeForce GTX TITAN X (Pascal), batch 16: 68.3 ms → 31.0 ms (2.2x) → 20.2 ms (3.4x)
- GPU: NVIDIA Jetson TX1, batch 4: 456.6 ms → 182.5 ms (2.5x) → 138.2 ms (3.3x)

R3DCNN / nvGesture, 37.8 GFLOPs (unpruned) → 25% feature maps, 3 GFLOPs:
- GPU: GeForce GT 730M, batch 1: 80.7%, 438.0 ms → 78.2% (−2.5%), 85.0 ms (5.2x)

For a second experiment, we prune a trained VGG-16 network with the same parameters as before, except enabling FLOPs regularization. We stop pruning at two points, 11.5 and 8.0 GFLOPs, and fine-tune both models for an additional five epochs with learning rate $10^{-4}$. Fine-tuning after pruning significantly improves results: the network pruned to 11.5 GFLOPs improves from 83% to 87% top-5 validation accuracy, and the network pruned to 8.0 GFLOPs improves from 77.8% to 84.5%.

3.6 SPEED UP MEASUREMENTS

During pruning we were measuring the reduction in computation by FLOPs, which is a common practice (Han et al., 2015; Lavin, 2015a;b). Improvements in FLOPs result in monotonically decreasing inference time of the networks because entire feature maps are removed from the layers. However, the time consumed by inference depends on the particular implementation of the convolution operator, the parallelization algorithm, hardware, scheduling, memory transfer rate, etc. Therefore we measure the improvement in inference time for selected networks to see the real speed up compared to unpruned networks in Table 2. We observe significant speed ups from the proposed pruning scheme.

4 CONCLUSIONS

We propose a new scheme for iteratively pruning deep convolutional neural networks. We find: 1) CNNs may be successfully pruned by iteratively removing the least important parameters (feature maps in this case) according to heuristic selection criteria; 2) a Taylor expansion-based criterion demonstrates significant improvement over other criteria; 3) per-layer normalization of the criterion is important to obtain global scaling.
SJFoMMKVe
Empirically justified pruning strategy, a few missing comparisons
7: Good paper, accept
The authors propose a strategy for pruning weights with the eventual goal of reducing GFLOP computations. The pruning strategy is well motivated using the Taylor expansion of the neural network function with respect to the feature activations. The obtained strategy removes feature maps that have both a small activation and a small gradient (eqn 7).

(A) Ideally the gradient of the output with respect to the activations should be 0 at the optimum, but as a result of stochastic gradient evaluations this would practically never be zero. Small variance in the gradient across mini-batches indicates that, irrespective of the input data, the specific network parameter is unlikely to change - intuitively these are parameters that are closer to convergence. Parameters/weights that are close to convergence and also result in a small activation are intuitively good candidates for pruning. This is essentially what eqn 7 conveys and is likely the reason why just removing weights that result in small activations is not as good a pruning strategy (as shown by results in the paper). There are two kinds of differences in the weights that are removed by activation vs. Taylor expansion: 1. Weights with high activations but very low gradients will be removed by the Taylor expansion, but not by activation alone. 2. Weights with low activations but high gradients will be removed by the activation criterion, but not by the Taylor expansion. It will be interesting to analyze which of (1) or (2) contributes more to the differences in the weights removed by the Taylor expansion vs. the activation criterion. Intuitively it seems that weights satisfying (1) are important because they have converged and contribute significantly to the network's activations. It is possible that a modified criterion - eqn (7) + \lambda feature activation (where \lambda needs to be found by cross-validation) - may lead to even better results at the cost of more parameter tuning (a concrete sketch of this combined criterion is given at the end of this review).

(B) Another interesting comparison is with the Optimal Brain Damage framework - where the first-order gradients are assumed to be zero and pruning is performed using second-order information (also discussed by the authors in the appendix). Critically, only the diagonal of the Hessian is computed. There is no comparison with Optimal Brain Damage, as the authors claim it is memory- and computation-inefficient. Back-of-the-envelope calculations suggest that this would result in only a 50% increase in memory and computation during pruning, but no loss of efficiency during testing. Therefore, from the standpoint of deployment, I don't think omitting this comparison is justified.

(C) The eventual goal of the authors is to reduce GFLOPs. Some recent papers have proposed using lower-precision computation for this. A comparison in GFLOPs of lower precision vs. pruning would be great. While the two approaches are complementary, and it is expected that combining them can lead to superior performance over either one alone, it is unclear how much pruning can be performed when operating in the low-precision regime. Any analysis of this tradeoff would be great (but not necessary).

(D) On fine-tuning, the authors report results for AlexNet and VGG on two different datasets - Flowers and Birds respectively. Why is this the case? It would be great to see the results of both networks on both datasets.

(E) The authors report there is only a small drop in performance after pruning.
Suppose the network was originally trained with N iterations, and then M fine-tuning iterations were performed during pruning. This means that pruned networks were trained for N + M iterations. The correct comparison of accuracies would require that the original network also be trained for N + M iterations. In Figure 4, does the performance at 100% parameters report the accuracy after N + M iterations or after N iterations?

Overall I think the paper is technically and empirically sound. It proposes a new strategy for pruning: (1) based on Taylor expansion, (2) feature normalization to reduce parameter-tuning effort, and (3) iterative fine-tuning. However, I would like to see the comparisons mentioned in my comments above. If those comparisons are made I would change my rating to an accept.
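To make the modified criterion suggested in (A) concrete, here is a hypothetical sketch; all names are illustrative and \lambda would be found by cross-validation:

```python
def combined_saliency(taylor_scores, mean_activations, lam):
    """Hypothetical criterion from point (A): eqn (7) plus a weighted
    mean-activation term. Both inputs map (layer, channel) -> score,
    assumed already layer-normalized; lam is tuned by cross-validation."""
    return {k: taylor_scores[k] + lam * mean_activations[k]
            for k in taylor_scores}

# Prune the feature map with the smallest combined saliency:
# scores = combined_saliency(taylor, activation, lam=0.1)
# prune(min(scores, key=scores.get))
```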
4: The reviewer is confident but not absolutely certain that the evaluation is correct
rxVXvwpN2fr
icaps-conference.org/ICAPS/2021/Workshop/HSDIP
2021
A*+BFHS: A Hybrid Heuristic Search Algorithm
["Zhaoxing Bu", "Richard Korf"]
We present a new algorithm A*+BFHS for solving problems where A* and IDA* fail due to memory limitations and/or the existence of many short cycles. A*+BFHS is based on A* and breadth-first heuristic search (BFHS). A*+BFHS combines advantages from both algorithms, namely A*’s node ordering, BFHS’s memory savings, and both algorithms’ duplicate detection. On easy problems, A*+BFHS behaves the same as A*. On hard problems, it is slower than A* but saves a large amount of memory. Compared to BFIDA*, A*+BFHS reduces the search time and/or memory requirement by several times on a variety of planning domains.
["heuristic search", "planning", "artificial intelligence"]
A*+BFHS: A Hybrid Heuristic Search Algorithm
Zhaoxing Bu, Richard E. Korf
Computer Science Department
University of California, Los Angeles
Los Angeles, CA 90095
{zbu, korf}@cs.ucla.edu

Abstract

We present a new algorithm A*+BFHS for solving problems where A* and IDA* fail due to memory limitations and/or the existence of many short cycles. A*+BFHS is based on A* and breadth-first heuristic search (BFHS). A*+BFHS combines advantages from both algorithms, namely A*'s node ordering, BFHS's memory savings, and both algorithms' duplicate detection. On easy problems, A*+BFHS behaves the same as A*. On hard problems, it is slower than A* but saves a large amount of memory. Compared to BFIDA*, A*+BFHS reduces the search time and/or memory requirement by several times on a variety of planning domains.

Introduction and Overview

A* (Hart, Nilsson, and Raphael 1968) is a classic heuristic search algorithm that is used by many state-of-the-art optimal track planners (Katz et al. 2018; Franco et al. 2017, 2018; Martinez et al. 2018). One advantage of A* is duplicate detection. A* uses a Closed list and an Open list to prune duplicate nodes. A state is a unique configuration of the problem, while a node is a data structure that represents a state reached by a particular path. Duplicate nodes represent the same state arrived at via different paths.

The second advantage of A* is node ordering. A* always picks an Open node whose f-value is minimum among all Open nodes to expand next, which guarantees an optimal solution returned by A* when using an admissible heuristic. When using a consistent heuristic, A* expands all nodes whose f-value is less than the optimal solution cost (C*). However, tie-breaking among nodes of equal f-value significantly affects the set of expanded nodes whose f-value equals C*. It is common practice to choose an Open node whose h-value is minimum among all Open nodes with the same f-value, as this strategy usually leads to fewer nodes expanded. A survey of tie-breaking strategies in A* can be found in (Asai and Fukunaga 2016).

A*'s main drawback is its exponential space requirement, as it stores in memory all nodes generated during the search. For example, A* can fill up 8 GB of memory in a few minutes on common heuristic search and planning domains. To solve hard problems where A* fails due to memory limitations, researchers have proposed various algorithms, usually by forgoing A*'s duplicate detection or node ordering. For example, Iterative-Deepening-A* (IDA*, Korf 1985) has only a linear memory requirement, at the price of no duplicate detection and a depth-first order within each search bound. However, IDA* may generate too many duplicate nodes on domains containing lots of short cycles, such as Towers of Hanoi and many planning domains, limiting its application.

This paper introduces a new algorithm for solving hard problems with many short cycles, where IDA* is not effective. First, we review previously developed algorithms. Second, we present our algorithm A*+BFHS, which is based on A* and Breadth-First Heuristic Search (Zhou and Hansen 2004). Third, we present experimental results on 32 hard instances from 18 International Planning Competition (IPC) domains. On those problems, A*+BFHS is slower than A* but requires significantly less memory.
Compared to BFIDA*, which is an algorithm that requires less memory than A*, A*+BFHS reduces the search time and/or memory requirement by several times, and sometimes by an order of magnitude, on a variety of domains.

Previous Work

IDA* with a transposition table (IDA*+TT, Sen and Bagchi 1989; Reinefeld and Marsland 1994) uses a transposition table to detect duplicate nodes. However, IDA*+TT is outperformed by other algorithms on both heuristic search (Bu and Korf 2019) and planning domains (Zhou and Hansen 2004).

A*+IDA* (Bu and Korf 2019) combines A* and IDA*, and is the state-of-the-art algorithm on the 24-Puzzle. It first runs A* until memory is almost full, then runs IDA* below each frontier node without duplicate detection. By sorting the frontier nodes with the same f-value in increasing order of h-values, A*+IDA* can significantly reduce the number of nodes generated in its last iteration. Compared to IDA*, we reported a reduction by a factor of 400 in the total number of nodes generated in the last iteration on all 50 24-Puzzle test cases in (Korf and Felner 2002). Similar to IDA*, A*+IDA* does not work well on domains with many short cycles, however, as in many planning domains.

Frontier search (Korf et al. 2005) is a family of heuristic search algorithms that work well on domains with many short cycles. Rather than storing all nodes generated, it stores only nodes that are at or near the search frontier, including all Open nodes and only one or two layers of Closed nodes. As a result, when a goal node is expanded, only the optimal cost is known. To reconstruct the solution path, frontier search keeps a middle layer of Closed nodes in memory. For example, we can save the Closed nodes at depth h(start)/2 as the middle layer. Each node generated below this middle layer has a pointer to its ancestor in the middle layer. After discovering the optimal cost, a node in the middle layer that is on an optimal path is identified. Then the same algorithm can be applied recursively to compute the solution path from the start node to the middle node, and from the middle node to the goal node. In general, however, frontier search cannot prune all duplicates in directed graphs (Korf et al. 2005; Zhou and Hansen 2004).

Divide-and-Conquer Frontier-A* (DCFA*, Korf and Zhang 2000) is a best-first frontier search based on A*. To reconstruct the solution path, DCFA* keeps a middle layer of Closed nodes that are roughly halfway along the solution path. DCFA* detects duplicates and maintains A*'s node ordering, but its memory savings compared to A* are limited on domains where the Open list is larger than the Closed list.

Breadth-First Heuristic Search (BFHS, Zhou and Hansen 2004) is a frontier search algorithm for unit-cost domains. BFHS also detects duplicates but uses a breadth-first node ordering instead of A*'s best-first ordering. At first, assume the optimal cost C* is known in advance. BFHS runs a breadth-first search (BFS) from the start node and prunes every generated node whose f-value exceeds C*. To save memory, BFHS only keeps a few layers of nodes in memory. On undirected graphs, if we store the operators used to generate each node, and do not regenerate the parents of a node via the inverses of those operators, frontier search only needs to store two layers of nodes, the currently expanding layer and their child nodes (Korf et al. 2005). On directed graphs, one previous layer besides the above-mentioned two layers is usually stored to detect duplicates (Zhou and Hansen 2004). To reconstruct the solution path, Zhou and Hansen (2004) recommend saving the layer at the 3/4 point of the solution length as the middle layer instead of the layer at the halfway point, which usually requires more memory. As shown in (Zhou and Hansen 2004), on a domain where the Open list of A* is larger than the Closed list, BFHS usually ends up storing fewer nodes than DCFA*.
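As a concrete illustration of BFHS's layer-based duplicate detection, here is a minimal Python sketch for unit-cost directed graphs, under stated assumptions: successors, h, and is_goal are problem-specific callables, and the middle-layer bookkeeping for solution reconstruction is omitted. It accepts a set of start states at a common depth so it can also serve as the subroutine of the A*+BFHS phase described below.

```python
def bfhs(start_states, start_depth, bound, successors, h, is_goal):
    """Breadth-first heuristic search with cost bound `bound` (unit costs).
    Keeps only the previous, current, and next layers, as needed for
    duplicate detection on directed graphs. Returns (goal_state, None) if
    a goal is reached within the bound, else (None, next_bound), the
    smallest f-value among nodes generated but not expanded."""
    prev_layer, current = set(), set(start_states)
    depth, next_bound = start_depth, float('inf')
    while current:
        next_layer = set()
        for state in current:
            if is_goal(state):
                return state, None
            for child in successors(state):
                if child in prev_layer or child in current or child in next_layer:
                    continue                         # duplicate detection
                f = depth + 1 + h(child)             # g(child) = depth + 1
                if f > bound:
                    next_bound = min(next_bound, f)  # candidate next cost bound
                else:
                    next_layer.add(child)
        prev_layer, current = current, next_layer    # drop older layers
        depth += 1
    return None, next_bound
```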
In general, C* is not known in advance. Breadth-First Iterative-Deepening-A* (BFIDA*, Zhou and Hansen 2004) overcomes this issue by running multiple iterations of BFHS, each with a different f-bound, starting with the heuristic value of the start node. Similar to IDA*, the last iteration of BFIDA* is often significantly larger than previous iterations, so most search time is spent on the last iteration on many domains.

Compared to A*, BFHS and BFIDA* save significant memory but generate more nodes. The main drawback of BFHS and BFIDA* is that their node ordering is almost the worst among different node ordering schemes. BFHS and BFIDA*'s breadth-first ordering means they have to expand all nodes stored at one depth before expanding any nodes at the next depth. As a result, they have to expand almost all nodes whose f-value equals C*, excepting only some nodes at the same depth as the goal node, while A* may only expand a small fraction of such nodes due to its node ordering.

Forward Perimeter Search (FPS, Schütt, Döbbelin, and Reinefeld 2013) builds a perimeter around the start node via BFS, then runs BFIDA* below each perimeter node. The authors only test FPS on the 24-Puzzle and 17-Pancake problem, and did not report any running times.

A*+BFHS

Algorithm Description

We propose a hybrid algorithm we call A*+BFHS to solve hard problems with many short cycles. A*+BFHS first runs A* until a storage threshold is reached, then runs a series of BFHS iterations on sets of frontier nodes, which are the Open nodes at the end of the A* phase.

The BFHS phase can be viewed as a doubly nested loop. Each iteration of the outer loop, which we define as an iteration of the BFHS phase, corresponds to a different cost bound for BFHS. The first cost bound is set to the smallest f-value among all frontier nodes. In each iteration of the BFHS phase, we first partition the frontier nodes whose f-value equals the cost bound into different sets according to their depths. Then the inner loop makes one call to BFHS on each set of frontier nodes, in decreasing order of their depths. This is done by initializing the BFS queue of each call to BFHS with all the nodes in the set. This inner loop continues until a solution is found or all calls to BFHS with the current bound fail to find a solution. After each call to BFHS on a set of frontier nodes, we increase the f-value of all nodes in the set to the minimum f-value of the nodes generated but not expanded in the previous call to BFHS.
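Continuing the illustrative Python from the previous sketch, this doubly nested loop might look as follows; the frontier dictionary and helper names are assumptions, and the refinements described below (combining adjacent depths, capping the number of calls per iteration, and pruning against the A* phase's stored g-values) are omitted for brevity.

```python
def bfhs_phase(frontier, successors, h, is_goal):
    """Outer loop: one iteration per cost bound. Inner loop: one call to
    bfhs per depth set of frontier nodes, deepest sets first.
    `frontier` maps state -> (f, depth), the Open list after the A* phase."""
    while True:
        bound = min(f for f, _ in frontier.values())   # next cost bound
        by_depth = {}
        for state, (f, depth) in frontier.items():     # partition by depth
            if f == bound:
                by_depth.setdefault(depth, []).append(state)
        for depth in sorted(by_depth, reverse=True):   # deepest sets first
            states = by_depth[depth]
            goal, next_f = bfhs(states, depth, bound, successors, h, is_goal)
            if goal is not None:
                return goal, bound                     # bound is the optimal cost C*
            for s in states:                           # raise stored f-values
                frontier[s] = (next_f, depth)
```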
Figure 1: An example of A*+BFHS's search frontier. Numbers are f-values. Closed nodes are gray.

Figure 1 presents an example of the Open and Closed nodes at the end of the A* phase. Node S is the start node. All edge costs are 1 and the number in each node is its f-value. Closed nodes are gray. The Open nodes B, E, F, H, I, J, K are the frontier nodes for the BFHS phase. A*+BFHS first makes a call to BFHS with a cost bound of 8 on all frontier nodes at depth 3, namely nodes H, I, J, K. If no solution is found, A*+BFHS updates the f-values of all these nodes to the minimum f-value of the nodes generated but not expanded in that call to BFHS. A*+BFHS then makes a second call to BFHS with bound 8, starting with all frontier nodes at depth 2, namely nodes E and F. If no solution is found, A*+BFHS updates the f-values of these nodes, then makes a third call to BFHS with bound 8, starting with the frontier node B at depth 1. Suppose that no solution is found with bound 8, the updated f-values for nodes E, F, H, I, J, K are 9, and the updated f-value for node B is 10. A*+BFHS then starts a new iteration of BFHS with a cost bound of 9, making two calls to BFHS on nodes at depths 3 and 2 respectively. If the solution is found in the first call to BFHS with bound 9, BFHS will not be called again on nodes E and F.

A*+BFHS is complete and admissible when using an admissible heuristic. A*+BFHS potentially makes calls to BFHS on all frontier nodes. When an optimal solution exists, one node on this optimal path will serve as one of the start nodes for one of the calls to BFHS. Such a node is guaranteed to exist by A*'s completeness and admissibility. Then, when the cost bound for the calls to BFHS equals C*, the optimal solution will be found, guaranteed by BFHS's completeness and admissibility.

A state can be regenerated in separate calls to BFHS in the same iteration. To reduce such duplicates, we can decrease the number of calls to BFHS in each iteration by making each call to BFHS on a combined set of frontier nodes at adjacent depths. For the example in Figure 1, we can make one call to BFHS on the frontier nodes at depths 2 and 3 together instead of two separate calls to BFHS, by putting the frontier nodes at depth 3 after the frontier nodes at depth 2 in the initial BFS queue.

In practice, we can specify a maximum number of calls to BFHS per iteration. Then in each iteration, we divide the number of depths of the frontier nodes by the number of calls to BFHS to get the number of depths for each call to BFHS. For example, if the depths of the frontier nodes range from 7 to 12 and we are limited to three calls to BFHS per iteration, each call to BFHS will start with frontier nodes at two depths. We used this strategy in our experiments.

For each node generated in the BFHS phase, we check whether it was generated in the A* phase. If so, we immediately prune the node if its current g-value in the BFHS phase is greater than or equal to its stored g-value in the A* phase.

The primary purpose of the A* phase is to build a frontier set, so that A*+BFHS can terminate early in its last iteration. In the A* phase we have to reserve some memory for the BFHS phase. In our experiments, we first generated pattern databases or the merge-and-shrink heuristic, then allocated 1/10 of the remaining memory of 8 GB for the A* phase.

Comparisons to BFIDA* and FPS

A*+BFHS's BFHS phase also uses the iterative-deepening concept of BFIDA*, but there are two key differences. First, in each iteration, BFIDA* always makes one call to BFHS on the start node, while we call BFHS multiple times, each on a different set of frontier nodes. Second, in each iteration, we order the frontier nodes based on their depth, and run BFHS on the deepest frontier nodes first.

These differences lead to one drawback and two advantages. The drawback is that A*+BFHS may generate more nodes than BFIDA*, as the same state can be regenerated in separate calls to BFHS in the same iteration.

The first advantage is that A*+BFHS may terminate early in its last iteration. If A*+BFHS generates a goal node in the last iteration below a relatively deep frontier node, no frontier nodes above that depth will be expanded.
Comparisons to BFIDA* and FPS

A*+BFHS's BFHS phase also uses the iterative deepening concept of BFIDA*, but there are two key differences. First, in each iteration, BFIDA* always makes one call to BFHS on the start node, while we call BFHS multiple times, each on a different set of frontier nodes. Second, in each iteration, we order the frontier nodes based on their depth, and run BFHS on the deepest frontier nodes first.

These differences lead to one drawback and two advantages. The drawback is that A*+BFHS may generate more nodes than BFIDA*, as the same state can be regenerated in separate calls to BFHS in the same iteration.

The first advantage is that A*+BFHS may terminate early in its last iteration. If A*+BFHS generates a goal node in the last iteration below a relatively deep frontier node, no frontier nodes above that depth will be expanded. Therefore, A*+BFHS may generate only a small number of nodes in its last iteration. In contrast, BFIDA* has to expand almost all nodes whose f-value is less than or equal to C* in its last iteration. As a result, A*+BFHS can be faster than BFIDA*.

The second advantage is that A*+BFHS's memory usage, which is the maximum number of nodes stored during the entire search, may be smaller than that of BFIDA* for two reasons. First, the partition of frontier nodes and separate calls to BFHS within the same iteration can reduce the maximum number of nodes stored in the BFHS phase. Second, BFIDA* stores the most nodes in its last iteration while A*+BFHS may store only a small number of nodes in the last iteration due to early termination. Thus, A*+BFHS may store the most nodes in the penultimate iteration instead.

FPS looks similar to A*+BFHS, but there are several fundamental differences. First, FPS builds the perimeter using a breadth-first approach while A*+BFHS builds the frontier via a best-first approach. FPS can also dynamically extend the perimeter, but this approach does not always speed up the search (Schütt, Döbbelin, and Reinefeld 2013). Second, in each iteration of FPS's BFIDA* phase, FPS makes one call to BFHS on each perimeter node. In contrast, in A*+BFHS each call to BFHS is on a set of frontier nodes. Third, FPS sorts the perimeter nodes at the same f-value using a max-tree-first or longest-path-first policy, while A*+BFHS sorts the frontier nodes at the same f-value in decreasing order of their depth. Fourth, FPS needs two separate searches for solution reconstruction while A*+BFHS only needs one.

Solution Reconstruction

Each node generated in A*+BFHS's BFHS phase has a pointer to its ancestral frontier node. When a goal node is generated, the solution path from the start node to the ancestral frontier node is stored in the A* phase and only one more search is needed to reconstruct the solution path from the ancestral frontier node to the goal node. This subproblem is much easier than the original problem and we can use the same heuristic function as for the original problem. Therefore, we just use A* to solve this subproblem. In addition, since we know the optimal cost of this subproblem, we can prune any node whose f-value exceeds this cost.

In BFIDA*, we have to solve two subproblems to recover the solution path from the start node to the middle node and from the middle node to the goal node. Zhou and Hansen (2004) called BFHS recursively to solve these two subproblems. However, pattern database heuristics (PDB, Culberson and Schaeffer 1998) only store heuristic values to the goal state, and not between arbitrary pairs of states, which complicates finding a path to a middle node. Similar to A*+BFHS, we use A* to solve the second subproblem. For the first subproblem, we use A* to compute the path from the start node to the middle node using the same heuristic function as for the original problem, which measures the distance to the goal node, not the middle node. To save memory, we prune any node whose g-value is greater than or equal to the depth of the middle node, and any node whose f-value exceeds the optimal cost of the original problem. Since a deeper middle layer leads to more nodes stored in this approach, we saved the layer at the 1/4 point of the solution length as the middle layer instead of the 3/4 point. In this way, we do not need to build a new heuristic function for the middle node. In our experiments, the search time for solution reconstruction in BFIDA* is usually less than 1% of the total search time.
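A minimal sketch of A*+BFHS's reconstruction step follows, under assumed interfaces: astar_search, the node attributes, and the path bookkeeping below are our own illustration, not the paper's code.

```python
def reconstruct_solution(goal_node, astar_search, heuristic):
    """Rebuild the full solution path once the BFHS phase generates a
    goal (illustrative sketch; names and interfaces are assumptions).

    Every BFHS-phase node carries a pointer to its ancestral frontier
    node, and the A* phase already stores the path from the start node
    to every frontier node.
    """
    frontier_node = goal_node.ancestral_frontier
    # Prefix: start -> frontier node, recorded by the A* phase.
    prefix = frontier_node.path_from_start
    # Suffix: frontier node -> goal. Its optimal cost is known, so any
    # node whose f-value exceeds that cost can be pruned during A*.
    suffix_cost = goal_node.g - frontier_node.g
    suffix = astar_search(start=frontier_node.state,
                          goal_state=goal_node.state,
                          h=heuristic,   # same heuristic as the original problem
                          f_bound=suffix_cost)
    return prefix + suffix
```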
Experimental Results and Analysis

We implemented BFIDA* and A*+BFHS in the planner Fast Downward 20.06 (Helmert 2006), using the existing code for node expansion and heuristic value lookups. A*+BFHS's A* phase reused the existing A* code. A* stores all nodes in one hash map. We used the same hash map implementation with the following difference. In each call to BFHS in both BFIDA* and A*+BFHS, we saved three layers of nodes for duplicate detection and we created one hash map for each layer of nodes. We did this because storing all nodes in one hash map in BFHS involves a lot of overhead, and is more complicated. Schütt, Döbbelin, and Reinefeld (2013) did not test FPS on planning domains and we do not know the optimal perimeter radius and sorting strategy for each domain, so we did not implement FPS in Fast Downward.
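As an illustration of the layered duplicate detection just described, here is a minimal sketch; the class and its method names are our own, not the Fast Downward code.

```python
class BFHSLayers:
    """Illustrative sketch of per-layer duplicate detection in BFHS:
    one hash map per stored layer, keeping only the previous, current,
    and next layers (our reconstruction, not the actual implementation)."""

    def __init__(self):
        self.previous = {}  # depth d-1: catches duplicates on directed graphs
        self.current = {}   # depth d: the layer being expanded
        self.next = {}      # depth d+1: children of the current layer

    def is_duplicate(self, state):
        # Up to three hash map lookups per generated node, versus a
        # single lookup in A*'s one global hash map.
        return (state in self.previous or state in self.current
                or state in self.next)

    def add_child(self, state, node):
        self.next[state] = node

    def advance(self):
        # Move one depth deeper: drop the oldest layer, shift the rest.
        self.previous, self.current, self.next = self.current, self.next, {}
```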
We solved about 550 problem instances from 32 unit-cost domains. We present the results of A*, BFIDA*, and A*+BFHS on the 32 hardest instances. All remaining instances were easily solved by A*. We tested two A*+BFHS versions. A*+BFHS (∞) starts each call to BFHS on frontier nodes at one depth. A*+BFHS (4) makes each call to BFHS on frontier nodes at multiple depths, with at most four calls to BFHS in each iteration. All tests were run on a 3.33 GHz Intel Xeon X5680 CPU with 236 GB of RAM. We used the landmark-cut heuristic (LM-cut, Helmert and Domshlak 2009) for the satellite domain, the merge-and-shrink heuristic (M&S) with the recommended configuration (Sievers, Wehrle, and Helmert 2014, 2016; Sievers 2018) for the tpp and hiking14 domains, and the iPDB heuristic with the default configuration (Haslum et al. 2007; Sievers, Ortlieb, and Helmert 2012) for all other domains.

We present the results in Tables 1, 2, and 3. Tables 1 and 2 contain the 26 hardest instances solved by A*. Table 3 contains the remaining 6 instances where A* terminated early without finding a solution due to the limitation of the hash map size in Fast Downward 20.06. The instances in Tables 1 and 2 are sorted by the A* running times and the instances in Table 3 are sorted by the BFIDA* running times.

All three tables have the same columns. The first column gives the domain name, the instance ID, the optimal solution cost C*, and the heuristic function used. The second column lists the different algorithms. We ran each algorithm until it found an optimal cost and returned the optimal path. The third column gives the maximum number of nodes stored by each algorithm. For A*, this is the number of nodes stored at the end of the search. For BFIDA*, this is the largest sum of the number of nodes stored in all three layers of the search, plus the nodes stored in the 1/4 layer for solution reconstruction. For A*+BFHS, this is the largest number of nodes stored in the BFHS phase plus the number of nodes stored in the A* phase. An underline means the specific algorithm needed more than 8 GB of memory to solve the problem. The fourth column is the total number of nodes generated, including the nodes generated during solution reconstruction. The fifth column is the number of nodes generated in all but the last iteration. For A*, this is the number of nodes generated before expanding an Open node whose f-value is C*. For A*+BFHS, this number includes the nodes generated in its A* phase. The sixth column is the number of nodes generated in the last iteration. For A*, this is the number of nodes generated while expanding the Open nodes whose f-value equals C*. The last column is the running time in seconds, including the time for solution reconstruction but excluding the time spent on precomputing the heuristic function, which is the same for all algorithms. For each instance, the smallest maximum number of stored nodes and shortest running time are indicated in boldface. For the A* data in Table 3, we report the numbers of nodes and running times just before A* terminated, with a > symbol to indicate such numbers.

We further compare the time and memory between A* and A*+BFHS in Figure 2, and between BFIDA* and A*+BFHS in Figure 3, where the x-axis is A*/BFIDA*'s peak stored nodes over A*+BFHS's and the y-axis is A*/BFIDA*'s running time over A*+BFHS's. Figure 2 contains the 26 instances solved by A* and Figure 3 contains all 32 instances. The red circles and green triangles correspond to A*+BFHS (4) and A*+BFHS (∞) respectively. The data points above the y = 1 line or to the right of the x = 1 line represent instances where A*+BFHS outperformed the comparison algorithm in terms of time or memory.

[Figure 2: A* vs. A*+BFHS in time and memory. x-axis: A* peak stored nodes / A*+BFHS peak stored nodes; y-axis: A* time / A*+BFHS time.]

[Figure 3: BFIDA* vs. A*+BFHS in time and memory. x-axis: BFIDA* peak stored nodes / A*+BFHS peak stored nodes; y-axis: BFIDA* time / A*+BFHS time.]

A*+BFHS vs. A*

A* was the fastest on all problem instances that it solved, but also used the most memory. Among the 32 hardest problem instances we present, A* required more than 8 GB of memory on 22 instances and could not find a solution on 6 of those after running out of the hash map used by Fast Downward 20.06. On some of these instances, A* used 30 GB to 40 GB of memory before it terminated. This means A* cannot solve these 22 instances under the current IPC memory requirement, which is 8 GB. A*+BFHS required several times, sometimes an order of magnitude, less memory than A*. As a result, A*+BFHS only used more than 8 GB of memory on one instance. An interesting comparison is the space and time trade-off. For example, on parking14, A*+BFHS increased the running time by less than 100% while saving more than an order of magnitude in memory.

A*+BFHS vs. BFIDA*

In summary, on easy problems that A*+BFHS can solve in its A* phase, A*+BFHS behaves the same as A*, and is always faster than BFIDA*. We solved around 500 such problems, which are not included here due to space limitations. On the 32 hardest problems we present, A*+BFHS is faster than BFIDA* on 27 instances and at least twice as fast on 16 of those. Furthermore, A*+BFHS requires less memory than BFIDA* on 25 of the 32 instances and saves more than half the memory on 14 of those. In addition, these time and memory reductions exist on both the relatively easy and hard ones of the 32 instances presented, demonstrating that A*+BFHS is in general better than BFIDA* on very hard problems as well as easy problems. In the following paragraphs, we compare A*+BFHS with BFIDA* in four aspects: duplicate detection, node ordering, memory, and running times.

The relative numbers of nodes generated in the previous iterations reflect the power of duplicate detection. Compared to BFIDA*, A*+BFHS (4) generated a similar number of nodes in the previous iterations on most instances.
Hiking14 2-3-6 is the only instance where A*+BFHS (4) generated at least twice as many nodes in the previous iterations as BFIDA*. However, A*+BFHS (∞) generated 2 to 7 times as many nodes in the previous iterations as BFIDA* on 11 instances. This contrast shows that, compared to BFIDA*, significantly more duplicate nodes can be generated by making each call to BFHS on frontier nodes at only one depth. However, most of those duplicate nodes can be avoided by making each call to BFHS on frontier nodes at multiple depths.

A*+BFHS can generate fewer duplicate nodes than BFIDA* due to fewer BFHS iterations and making each call to BFHS on a set of frontier nodes. A*+BFHS reduced the number of nodes in previous iterations by around 50% on freecell 06 and snake18 17, and a factor of 4 on snake18 08. To our surprise, we found that on snake18 08, the number of nodes generated in the penultimate iteration of BFIDA* was twice as many as the sum of the nodes generated in A*+BFHS's A* phase and the penultimate iteration of the BFHS phase. This means a lot of duplicate nodes were generated in BFIDA*. Snake18 generates a directed graph, in which case frontier search cannot detect all duplicate nodes (Korf et al. 2005; Zhou and Hansen 2004).

Compared to BFIDA*, A*+BFHS reduced the number of nodes in the last iteration significantly, and usually by several orders of magnitude, on 28 of the 32 instances. This large reduction proves that when ordering the frontier nodes by deepest-first, A*+BFHS can terminate early in its last iteration. On the three blocks instances and depot 11, A*+BFHS did not terminate early in its last iteration because the ancestral frontier node of the goal had a relatively low g-value. In fact, A* generated the most nodes in its last iteration on the three blocks instances, which shows that node ordering is also difficult for A* on those instances. In contrast, A* generated very few nodes in its last iteration on depot 11, suggesting that A*+BFHS may terminate early in its last iteration given more memory for its A* phase.

A*+BFHS's A* phase usually stored from 10 to 20 million nodes, with the exception of the snake18 domain where 40 to 50 million nodes were stored. Comparing the maximum number of stored nodes, A*+BFHS (∞) required less memory than BFIDA* on 25 instances and less than half the memory on 14 of those. For A*+BFHS (4), these two numbers are 23 and 11 respectively. In contrast, termes18 05 is the only instance where the maximum number of stored nodes of A*+BFHS was at least twice that of BFIDA*.

Comparing the two versions of A*+BFHS, A*+BFHS (4) was usually faster, sometimes significantly, due to the reduction in duplicate nodes. Compared to BFIDA*, A*+BFHS (4) was slightly slower on four instances and 80% slower on one instance. On the other 27 instances, A*+BFHS was faster than BFIDA*, and at least twice as fast on 16 of those. The large speedups usually were on the instances where BFIDA* generated the most nodes in its last iteration. The best result was on the logistics00 domain, where an order of magnitude speedup was achieved. This is because BFIDA* performed very poorly on this domain due to its breadth-first node ordering. Comparing A*+BFHS (∞) with BFIDA*, A*+BFHS (∞) was slower on 11 instances and at least twice as slow on three of those, but also at least twice as fast on 12 instances. The main reason for the slower cases is the presence of many duplicate nodes generated in certain domains.

Calling BFHS on Nodes at Multiple Depths

Comparing the two A*+BFHS versions, each has its pros and cons.
A*+BFHS (4) always generated fewer duplicate nodes. Comparing the number of nodes generated in the previous iterations, A*+BFHS (∞) generated at least twice as many nodes on 7 instances. A*+BFHS (∞) generated significantly fewer nodes in the last iteration than A*+BFHS (4) on 22 instances. However, the number of nodes generated in the last iteration of A*+BFHS is usually only a small portion of the total nodes generated, so the large difference in the last iteration is not very important. A*+BFHS (4) stored a larger maximum number of nodes than A*+BFHS (∞) on almost all instances. However, the difference was usually small and never more than a factor of two. For the running time, the difference was usually less than 50%. Compared to A*+BFHS (∞), A*+BFHS (4) was faster by a factor of 3 on logistics00 15-1, 2.5 on rovers 09 and 11, 4.6 on termes18 05, 3.9 on tpp 11, and never more than 30% slower.

In general, making each call to BFHS on frontier nodes at multiple depths increases both the memory usage and the number of nodes generated in the last iteration, but reduces the number of duplicate nodes and hence is often faster. Considering the memory and time trade-off, given a new problem, we recommend making each call to BFHS on frontier nodes at multiple depths. So far, we have only tested limiting BFHS to four calls in each iteration. Determining the optimal number of calls to BFHS is a subject for future work.

Instance | Algorithm | Peak stored | Total nodes | Prev. iterations | Last iteration | Time (s)
depot 14 (C*=29, iPDB) | A* | 70,504,763 | 344,658,749 | 344,639,234 | 19,515 | 233
 | BFIDA* | 17,042,841 | 1,390,466,785 | 582,348,193 | 795,336,992 | 1,708
 | A*+BFHS (∞) | 21,023,657 | 556,674,817 | 540,764,124 | 15,909,899 | 596
 | A*+BFHS (4) | 22,882,537 | 446,204,987 | 432,278,188 | 13,926,005 | 475
termes18 05 (C*=132, iPDB) | A* | 80,012,545 | 211,514,579 | 211,514,568 | 11 | 245
 | BFIDA* | 9,370,587 | 3,757,844,868 | 3,413,500,020 | 221,186,298 | 4,796
 | A*+BFHS (∞) | 30,874,300 | 10,702,979,649 | 10,701,959,808 | 911,786 | 15,415
 | A*+BFHS (4) | 30,076,170 | 2,271,661,960 | 2,270,262,609 | 1,291,296 | 3,319
freecell 06 (C*=34, iPDB) | A* | 53,080,996 | 243,947,771 | 243,244,703 | 703,068 | 250
 | BFIDA* | 38,054,162 | 1,220,132,074 | 732,920,409 | 485,268,534 | 1,883
 | A*+BFHS (∞) | 30,481,377 | 327,209,951 | 312,812,283 | 14,388,579 | 441
 | A*+BFHS (4) | 35,120,076 | 403,465,250 | 302,581,091 | 100,875,070 | 561
logistics00 14-1 (C*=71, iPDB) | A* | 57,689,357 | 107,083,712 | 106,929,666 | 154,046 | 255
 | BFIDA* | 15,441,813 | 3,137,204,256 | 106,929,666 | 3,020,315,591 | 10,381
 | A*+BFHS (∞) | 19,472,255 | 354,438,805 | 354,058,774 | 368,595 | 1,160
 | A*+BFHS (4) | 20,169,648 | 227,903,318 | 110,674,320 | 117,217,562 | 752
driverlog 12 (C*=35, iPDB) | A* | 144,065,288 | 420,609,830 | 420,609,777 | 53 | 344
 | BFIDA* | 35,034,406 | 1,718,350,515 | 678,644,177 | 1,030,180,074 | 1,676
 | A*+BFHS (∞) | 24,712,720 | 1,020,438,794 | 1,020,410,754 | 27,959 | 944
 | A*+BFHS (4) | 30,270,816 | 643,723,984 | 641,790,459 | 1,933,444 | 631
freecell 07 (C*=41, iPDB) | A* | 107,183,015 | 531,379,136 | 531,378,858 | 278 | 522
 | BFIDA* | 77,196,602 | 4,152,881,254 | 2,897,339,576 | 1,143,762,584 | 6,416
 | A*+BFHS (∞) | 54,171,433 | 3,095,608,289 | 2,370,094,738 | 725,267,629 | 4,775
 | A*+BFHS (4) | 58,058,327 | 2,430,947,097 | 1,896,369,611 | 534,331,564 | 3,769
depot 11 (C*=46, iPDB) | A* | 172,447,963 | 764,608,339 | 764,607,971 | 368 | 550
 | BFIDA* | 27,192,174 | 3,037,154,042 | 1,260,718,486 | 1,755,157,316 | 3,544
 | A*+BFHS (∞) | 37,977,775 | 6,268,318,349 | 3,092,746,859 | 3,175,552,575 | 7,314
 | A*+BFHS (4) | 46,923,423 | 3,319,995,622 | 1,262,429,685 | 2,057,547,022 | 4,078
tpp 11 (C*=51, M&S) | A* | 187,011,066 | 610,996,630 | 610,995,018 | 1,612 | 562
 | BFIDA* | 93,759,836 | 4,290,825,940 | 754,905,369 | 3,525,135,895 | 7,214
 | A*+BFHS (∞) | 30,856,159 | 5,504,314,294 | 5,504,268,064 | 46,111 | 9,550
 | A*+BFHS (4) | 33,368,912 | 1,419,143,562 | 1,285,410,734 | 133,732,709 | 2,426
mystery 14 (C*=11, iPDB) | A* | 139,924,686 | 652,569,481 | 650,036,341 | 2,533,140 | 578
 | BFIDA* | 135,963,227 | 6,213,135,253 | 727,753,687 | 5,430,082,105 | 7,628
 | A*+BFHS (∞/4) | 20,302,860 | 730,971,724 | 676,473,465 | 54,497,630 | 839
tidybot11 17 (C*=40, iPDB) | A* | 69,953,936 | 171,363,621 | 170,286,720 | 1,076,901 | 662
 | BFIDA* | 42,080,838 | 776,084,110 | 486,518,217 | 281,131,278 | 3,684
 | A*+BFHS (∞) | 33,969,968 | 661,386,777 | 467,282,853 | 194,103,710 | 3,223
 | A*+BFHS (4) | 37,090,062 | 547,745,706 | 397,125,094 | 150,620,398 | 2,694
logistics00 15-1 (C*=67, iPDB) | A* | 82,161,805 | 167,974,727 | 163,970,672 | 4,004,055 | 663
 | BFIDA* | 13,638,319 | 2,847,571,079 | 163,970,672 | 2,660,698,165 | 19,062
 | A*+BFHS (∞) | 18,827,830 | 730,154,067 | 722,390,335 | 7,763,336 | 4,897
 | A*+BFHS (4) | 18,827,830 | 251,960,077 | 198,537,096 | 53,422,585 | 1,627
pipesworld-notankage 19 (C*=24, iPDB) | A* | 123,553,926 | 284,884,903 | 284,880,335 | 4,568 | 727
 | BFIDA* | 86,818,434 | 1,227,115,669 | 634,454,295 | 576,633,809 | 4,140
 | A*+BFHS (∞) | 42,192,503 | 619,095,459 | 619,013,855 | 81,147 | 2,072
 | A*+BFHS (4) | 44,706,153 | 574,957,328 | 570,451,612 | 4,505,259 | 1,942
parking14 169-01 (C*=24, iPDB) | A* | 351,976,816 | 828,472,606 | 828,472,562 | 44 | 971
 | BFIDA* | 183,832,715 | 4,846,132,188 | 1,023,897,982 | 3,821,980,237 | 6,236
 | A*+BFHS (∞) | 30,675,587 | 1,191,570,432 | 1,191,514,776 | 55,283 | 1,468
 | A*+BFHS (4) | 51,147,740 | 1,013,776,888 | 1,011,227,268 | 2,549,247 | 1,290
visitall11 08-half (C*=43, iPDB) | A* | 407,182,291 | 795,670,561 | 795,669,929 | 632 | 1,045
 | BFIDA* | 172,474,497 | 3,159,596,842 | 1,332,828,069 | 1,824,866,109 | 4,220
 | A*+BFHS (∞) | 34,406,966 | 1,639,641,152 | 1,639,585,228 | 55,798 | 2,233
 | A*+BFHS (4) | 64,671,078 | 1,346,690,454 | 1,312,333,974 | 34,356,354 | 1,902

Table 1: Instances sorted by A* running times. An underline means more than 8 GB of memory was needed. Smallest memory and shortest times are in boldface.

Instance | Algorithm | Peak stored | Total nodes | Prev. iterations | Last iteration | Time (s)
tidybot11 16 (C*=40, iPDB) | A* | 115,965,857 | 246,756,618 | 246,756,201 | 417 | 1,086
 | BFIDA* | 86,095,996 | 1,090,011,154 | 652,777,121 | 431,816,881 | 5,512
 | A*+BFHS (∞) | 41,342,908 | 583,309,116 | 570,082,820 | 13,225,950 | 2,923
 | A*+BFHS (4) | 57,026,598 | 598,365,499 | 519,723,294 | 78,641,859 | 3,080
snake18 08 (C*=58, iPDB) | A* | 94,699,640 | 129,288,606 | 129,273,608 | 14,998 | 1,131
 | BFIDA* | 44,231,998 | 1,852,488,086 | 1,517,078,892 | 325,204,785 | 14,877
 | A*+BFHS (∞) | 44,081,853 | 391,010,354 | 390,681,641 | 328,706 | 3,445
 | A*+BFHS (4) | 51,166,308 | 356,988,514 | 348,015,242 | 8,973,265 | 3,192
hiking14 2-2-8 (C*=42, M&S) | A* | 287,192,625 | 3,299,939,168 | 3,299,937,850 | 1,318 | 1,297
 | BFIDA* | 42,570,885 | 11,376,337,161 | 5,757,334,602 | 5,582,502,874 | 10,847
 | A*+BFHS (∞) | 44,454,322 | 16,233,911,987 | 12,346,881,620 | 3,886,689,991 | 14,897
 | A*+BFHS (4) | 53,148,260 | 9,850,751,126 | 6,310,295,933 | 3,540,114,817 | 9,696
pipesworld-tankage 14 (C*=38, iPDB) | A* | 292,998,092 | 907,283,307 | 907,283,301 | 6 | 1,364
 | BFIDA* | 158,262,429 | 5,354,342,623 | 3,680,871,467 | 1,661,344,123 | 10,609
 | A*+BFHS (∞) | 84,077,693 | 5,768,933,724 | 5,763,927,002 | 5,002,176 | 11,622
 | A*+BFHS (4) | 103,288,306 | 3,300,541,977 | 3,220,772,288 | 79,765,143 | 6,896
blocks 13-1 (C*=44, iPDB) | A* | 555,864,249 | 1,185,065,570 | 205,172,261 | 979,893,309 | 1,540
 | BFIDA* | 99,782,317 | 1,742,819,669 | 463,603,038 | 1,224,383,750 | 2,142
 | A*+BFHS (∞) | 54,601,577 | 2,261,321,708 | 425,991,501 | 1,827,341,160 | 2,817
 | A*+BFHS (4) | 79,572,108 | 1,817,197,763 | 401,559,990 | 1,407,648,726 | 2,317
parking14 169-03 (C*=24, iPDB) | A* | 606,117,759 | 1,430,911,954 | 1,430,746,610 | 165,344 | 1,714
 | BFIDA* | 291,822,896 | 8,077,642,530 | 1,796,305,162 | 6,280,923,558 | 10,059
 | A*+BFHS (∞) | 48,304,204 | 2,519,414,336 | 2,328,368,930 | 191,043,484 | 3,124
 | A*+BFHS (4) | 63,455,874 | 2,151,415,198 | 1,992,188,756 | 159,224,520 | 2,679
tidybot11 18 (C*=44, iPDB) | A* | 175,574,760 | 372,772,055 | 372,771,560 | 495 | 1,730
 | BFIDA* | 114,747,861 | 1,718,896,347 | 1,093,273,564 | 613,928,542 | 8,810
 | A*+BFHS (∞) | 40,540,308 | 1,045,166,148 | 1,028,635,660 | 16,529,544 | 5,410
 | A*+BFHS (4) | 65,784,369 | 1,204,942,101 | 931,501,196 | 273,439,961 | 6,365
blocks 13-0 (C*=42, iPDB) | A* | 704,938,102 | 1,568,547,017 | 342,339,737 | 1,226,207,280 | 1,990
 | BFIDA* | 137,821,868 | 2,421,546,636 | 775,076,076 | 1,628,338,675 | 2,977
 | A*+BFHS (∞) | 81,918,224 | 3,498,922,607 | 774,231,514 | 2,710,189,950 | 4,483
 | A*+BFHS (4) | 126,629,640 | 2,615,897,101 | 698,028,054 | 1,903,367,904 | 3,378
hiking14 2-3-6 (C*=28, M&S) | A* | 368,433,117 | 6,711,042,999 | 6,710,971,209 | 71,790 | 2,480
 | BFIDA* | 124,686,777 | 38,476,138,468 | 29,175,130,389 | 8,123,329,545 | 42,379
 | A*+BFHS (∞) | 146,623,619 | 107,138,328,055 | 106,429,883,507 | 682,558,443 | 120,494
 | A*+BFHS (4) | 148,357,537 | 68,496,320,172 | 65,779,382,852 | 2,691,051,215 | 76,603
pipesworld-notankage 20 (C*=28, iPDB) | A* | 442,232,520 | 1,028,882,844 | 1,028,880,896 | 1,948 | 2,693
 | BFIDA* | 301,349,348 | 4,454,789,871 | 2,384,958,671 | 2,032,377,777 | 15,245
 | A*+BFHS (∞) | 133,708,317 | 3,325,668,014 | 3,267,529,384 | 58,132,775 | 11,499
 | A*+BFHS (4) | 148,029,967 | 2,988,248,448 | 2,728,140,813 | 260,097,006 | 10,629
snake18 17 (C*=62, iPDB) | A* | 265,033,991 | 367,639,596 | 365,927,487 | 1,712,109 | 3,967
 | BFIDA* | 60,041,363 | 2,162,411,969 | 1,464,995,207 | 639,565,966 | 20,418
 | A*+BFHS (∞) | 56,839,243 | 877,934,374 | 871,327,013 | 6,607,339 | 8,785
 | A*+BFHS (4) | 73,365,792 | 855,342,127 | 776,892,002 | 78,450,103 | 8,916
satellite 08 (C*=26, LM-cut) | A* | 107,395,076 | 463,747,690 | 463,744,251 | 3,439 | 11,834
 | BFIDA* | 20,846,202 | 3,656,980,017 | 520,525,131 | 3,125,446,334 | 398,884
 | A*+BFHS (∞) | 18,870,254 | 552,221,751 | 551,990,933 | 230,549 | 54,551
 | A*+BFHS (4) | 19,763,323 | 546,211,783 | 479,810,475 | 66,401,039 | 56,296

Table 2: Instances sorted by A* running times. An underline means more than 8 GB of memory was needed. Smallest memory and shortest times are in boldface.

Instance | Algorithm | Peak stored | Total nodes | Prev. iterations | Last iteration | Time (s)
blocks 15-0 (C*=40, iPDB) | A* (unfinished) | >814,951,324 | >1,562,632,802 | 256,247,910 | >1,306,384,892 | >2,284
 | BFIDA* | 113,471,990 | 2,408,362,561 | 579,842,889 | 1,827,125,272 | 3,058
 | A*+BFHS (∞) | 68,070,197 | 3,861,465,924 | 550,007,126 | 3,291,490,500 | 4,889
 | A*+BFHS (4) | 106,482,059 | 2,656,641,036 | 492,390,560 | 2,144,282,178 | 3,514
storage 17 (C*=26, iPDB) | A* (unfinished) | >799,907,374 | >1,741,590,894 | >1,741,590,894 | - | >2,358
 | BFIDA* | 397,798,456 | 13,297,651,168 | 4,430,334,119 | 8,825,291,425 | 19,086
 | A*+BFHS (∞) | 118,138,352 | 13,403,671,261 | 13,364,290,422 | 39,380,047 | 18,914
 | A*+BFHS (4) | 133,800,503 | 7,895,157,984 | 6,819,827,727 | 1,075,329,465 | 11,354
driverlog 15 (C*=32, iPDB) | A* (unfinished) | >786,467,847 | >2,028,764,217 | >2,028,764,217 | - | >1,853
 | BFIDA* | 453,643,579 | 24,705,660,389 | 6,388,627,692 | 18,280,039,412 | 24,297
 | A*+BFHS (∞) | 88,449,751 | 16,928,608,100 | 16,913,831,869 | 14,773,242 | 15,311
 | A*+BFHS (4) | 123,602,679 | 9,160,294,407 | 8,974,814,158 | 185,477,260 | 8,447
rovers 09 (C*=31, iPDB) | A* (unfinished) | >801,124,989 | >4,427,878,559 | >4,427,878,559 | - | >2,776
 | BFIDA* | 235,386,020 | 20,666,689,222 | 7,239,737,785 | 13,401,874,237 | 25,336
 | A*+BFHS (∞) | 96,100,365 | 34,236,064,765 | 34,235,937,332 | 123,597 | 42,290
 | A*+BFHS (4) | 99,498,513 | 12,845,107,625 | 12,752,327,728 | 92,776,061 | 16,770
rovers 11 (C*=30, iPDB) | A* (unfinished) | >766,016,316 | >3,690,650,688 | >3,690,650,688 | - | >2,378
 | BFIDA* | 274,612,697 | 18,975,576,425 | 6,574,504,656 | 12,391,406,745 | 26,022
 | A*+BFHS (∞) | 112,783,085 | 32,143,105,562 | 32,139,546,138 | 3,549,575 | 43,538
 | A*+BFHS (4) | 113,594,902 | 12,342,784,453 | 11,789,007,437 | 553,767,167 | 16,661
parking14 169-04 (C*=26, iPDB) | A* (unfinished) | >770,874,998 | >1,681,926,228 | >1,681,926,228 | - | >2,306
 | BFIDA* | 1,045,614,854 | 27,924,183,007 | 6,292,017,194 | 21,628,727,845 | 37,701
 | A*+BFHS (∞) | 156,758,802 | 9,778,837,190 | 9,777,264,498 | 1,570,687 | 12,304
 | A*+BFHS (4) | 181,535,647 | 7,588,132,706 | 7,586,728,152 | 1,402,549 | 9,813

Table 3: Instances where A* terminated without solving the problem (marked by >), sorted by BFIDA* running times. An underline means more than 8 GB of memory was needed. Smallest memory and shortest times are in boldface.
Heuristic Functions and Running Times

For each node generated, A* first does duplicate checking then looks up its heuristic value if needed. Thus for each state, A* only computes its heuristic value once, no matter how many times this state is generated. However, the situation is different in BFHS. Even in a single call to BFHS, a state's heuristic value may be calculated multiple times. For example, if a state's f-value is greater than the cost bound of BFHS, then this state is never stored in this call to BFHS and its heuristic value has to be computed every time it is generated. In addition, A* has only one hash map but our BFHS implementation has one hash map for each layer of nodes. Consequently, for each node generated, A* does only one hash map lookup while BFHS may have multiple lookups.

Due to the above differences, the number of nodes generated per second of BFIDA* and A*+BFHS was smaller than that of A*. For the iPDB and M&S heuristics, this difference was usually less than a factor of two. For the LM-cut heuristic, A* was faster by a factor of four in terms of nodes generated per second on the satellite domain. This is because computing a node's LM-cut heuristic is much more expensive than computing the iPDB and M&S heuristics. This contrast shows that the choice of heuristic function also plays an important role in comparing the running times of different algorithms.
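The following sketch illustrates why BFHS may evaluate a state's heuristic repeatedly, as described above; the function and variable names are our own illustration, not the implementation's.

```python
def generate_child(state, g, bound, h, stored):
    """Sketch of child generation in BFHS (assumed interfaces).

    `stored` maps states in the retained layers to their nodes, and `h`
    is the heuristic function."""
    if state in stored:
        return                  # duplicate: no heuristic lookup needed
    if g + h(state) > bound:    # h is evaluated on every generation
        return                  # pruned and not stored, so h(state) is
                                # evaluated again if the state is regenerated
    stored[state] = g           # kept: detected as a duplicate next time
```

In A*, by contrast, every generated state is stored, so its heuristic value is computed at most once and a single hash map lookup suffices per generated node.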
Future Work

Future work includes the following. First, test A*+BFHS on more unit-cost domains. Second, investigate what is the best memory threshold for the A* phase. Third, determine the optimal number of calls to BFHS in each iteration. Fourth, find other ways to partition the frontier nodes besides the current depth-based approach. If a set of frontier nodes is too large, we may split it into multiple smaller sets and make one call to BFHS on each such smaller set. This approach may reduce the maximum number of stored nodes but may generate more duplicate nodes. In addition, when we make each call to BFHS on frontier nodes at multiple depths, we may consider the number of frontier nodes at each depth so each call to BFHS is on a different number of depths instead of a fixed number. Fifth, find out how to apply A*+BFHS to domains with non-unit operator costs. For such domains, BFHS's BFS can be replaced by uniform-cost search or Dijkstra's algorithm (Dijkstra 1959). In this case, we can store nodes with multiple costs in each layer (Zhou and Hansen 2006). Sixth, use external memory such as magnetic disk or flash memory in A*+BFHS to solve very hard problems. For example, instead of allocating 1/10 of RAM for the A* phase, we can first run A* until RAM is almost full, then store both Open and Closed nodes in external memory and remove them from RAM. Then in the BFHS phase, we load back the set of frontier nodes for each call to BFHS from external memory. This A*+BFHS version would never perform worse than A*, since it is identical to A* until memory is exhausted, at which point the BFHS phase would begin.

Conclusions

We introduce a hybrid heuristic search algorithm A*+BFHS for solving hard problems that cannot be solved by A* due to memory limitations, or IDA* due to the existence of many short cycles. A*+BFHS first runs A* until a user-specified storage threshold is reached, then runs multiple iterations of BFHS on the frontier nodes, which are the Open nodes at the end of the A* phase. Each iteration has a unique cost bound and contains multiple calls to BFHS. Each call to BFHS within the same iteration has the same cost bound but a different set of frontier nodes to start with. Within an iteration, frontier nodes are sorted deepest-first so that A*+BFHS can terminate early in its last iteration.

On the around 500 easy problems solved, A*+BFHS behaves the same as A*, and is always faster than BFIDA*. On the 32 hard instances presented, A*+BFHS is slower than A* but uses significantly less memory. A*+BFHS is faster than BFIDA* on 27 of those 32 instances and at least twice as fast on 16 of those. Furthermore, A*+BFHS requires less memory than BFIDA* on 25 of those 32 instances and saves more than half the memory on 14 of those. Another contribution of this paper is a comprehensive testing of BFIDA* on many planning domains, which is lacking in the literature.

References

Asai, M.; and Fukunaga, A. 2016. Tiebreaking strategies for A* search: How to explore the final frontier. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, 673–679.
Bu, Z.; and Korf, R. E. 2019. A*+IDA*: a simple hybrid search algorithm. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, 1206–1212. AAAI Press.
Culberson, J. C.; and Schaeffer, J. 1998. Pattern databases. Computational Intelligence 14(3): 318–334.
Dijkstra, E. W. 1959. A note on two problems in connexion with graphs. Numerische Mathematik 1(1): 269–271.
Franco, S.; Lelis, L. H.; Barley, M.; Edelkamp, S.; Martines, M.; and Moraru, I. 2018. The Complementary2 planner in the IPC 2018. IPC-9 planner abstracts, 28–31.
Franco, S.; Torralba, A.; Lelis, L. H.; and Barley, M. 2017. On creating complementary pattern databases. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, 4302–4309.
Hart, P. E.; Nilsson, N. J.; and Raphael, B. 1968. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics 4(2): 100–107.
Haslum, P.; Botea, A.; Helmert, M.; Bonet, B.; Koenig, S.; et al. 2007. Domain-independent construction of pattern database heuristics for cost-optimal planning. In AAAI, volume 7, 1007–1012.
Helmert, M. 2006. The Fast Downward Planning System. Journal of Artificial Intelligence Research 26: 191–246.
Helmert, M.; and Domshlak, C. 2009. Landmarks, critical paths and abstractions: What's the difference anyway? Proceedings of the International Conference on Automated Planning and Scheduling 19(1): 162–169.
Katz, M.; Sohrabi, S.; Samulowitz, H.; and Sievers, S. 2018. Delfi: Online planner selection for cost-optimal planning. IPC-9 planner abstracts, 57–64.
Korf, R. E. 1985. Depth-first iterative-deepening: An optimal admissible tree search. Artificial Intelligence 27(1): 97–109.
Korf, R. E.; and Felner, A. 2002. Disjoint pattern database heuristics. Artificial Intelligence 134(1): 9–22.
Korf, R. E.; and Zhang, W. 2000. Divide-and-conquer frontier search applied to optimal sequence alignment. In AAAI/IAAI, 910–916.
Korf, R. E.; Zhang, W.; Thayer, I.; and Hohwald, H. 2005. Frontier search. Journal of the ACM 52(5): 715–748.
Martinez, M.; Moraru, I.; Edelkamp, S.; and Franco, S. 2018. Planning-PDBs planner in the IPC 2018. IPC-9 planner abstracts, 63–66.
Reinefeld, A.; and Marsland, T. A. 1994. Enhanced iterative-deepening search. IEEE Transactions on Pattern Analysis and Machine Intelligence 16(7): 701–710.
Schütt, T.; Döbbelin, R.; and Reinefeld, A. 2013. Forward perimeter search with controlled use of memory. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, 659–665. AAAI Press.
Sen, A. K.; and Bagchi, A. 1989. Fast recursive formulations for best-first search that allow controlled use of memory. In IJCAI, 297–302.
Sievers, S. 2018. Merge-and-shrink heuristics for classical planning: Efficient implementation and partial abstractions. In Eleventh Annual Symposium on Combinatorial Search, 90–98.
Sievers, S.; Ortlieb, M.; and Helmert, M. 2012. Efficient implementation of pattern database heuristics for classical planning. In Fifth Annual Symposium on Combinatorial Search, 105–111.
Sievers, S.; Wehrle, M.; and Helmert, M. 2014. Generalized label reduction for merge-and-shrink heuristics. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, 2358–2366.
Sievers, S.; Wehrle, M.; and Helmert, M. 2016. An analysis of merge strategies for merge-and-shrink heuristics. In Proceedings of the Twenty-Sixth International Conference on Automated Planning and Scheduling, 294–298.
Zhou, R.; and Hansen, E. A. 2004. Breadth-first heuristic search. In Proceedings of the 14th International Conference on Automated Planning and Scheduling (ICAPS-04), 92–100.
Zhou, R.; and Hansen, E. A. 2006. Breadth-first heuristic search. Artificial Intelligence 170(4): 385–408.
c8FSOREjAN6
Review
This paper presents an algorithm that performs an A* search until a user-specified memory limit is reached, and then performs (multiple) breadth-first heuristic searches (BFHS). The main advantage of this hybrid search is that it behaves like A* on smaller instances where the exponential memory consumption of A* is not a problem, but at the same time can solve larger, more memory-intensive tasks by using BFHS. Advantages and disadvantages of this new hybrid search algorithm are described and discussed in comparison to related algorithms. An empirical evaluation on "hard" (memory-intensive) unit-cost planning tasks shows that A*+BFHS performs favorably over A* and Breadth-First Iterative-Deepening-A*. The topic of the paper fits the workshop, as one of the characteristic topics of the HSDIP workshop is the study of "novel search techniques for domain-independent planning". The newly presented algorithm is well motivated and shows good empirical performance. Concerning the optimality and completeness of A*+BFHS, I suppose that it follows directly from the construction if an admissible heuristic is used. However, I missed such a statement or proof in the paper. At least I think it would be important to mention this somewhere in the paper so that a reader can directly see that these properties hold. I assume that an admissible but inconsistent heuristic is sufficient if reopening is performed by all the heuristic searches involved? Otherwise, the landmark-cut heuristic used in some of the experiments would be problematic. Speaking of optimality with an admissible heuristic: in some places it was difficult to understand why the generated solutions are optimal with an admissible but inconsistent heuristic, in particular in the section on solution reconstruction. It states, "We use A* to compute the path from the start node to the middle node using the same heuristic function for the original problem, which measures the distance to the goal node, not the middle node." At first glance, it is not entirely clear to me under what termination criteria this search reconstructs the optimal path to the middle node, especially when an inconsistent heuristic is used. With regard to the empirical evaluation, is there a specific reason why some heuristics were chosen for certain domains? Even though I see some value in the detailed table, the large amount of numbers is somewhat difficult to analyze at once. I would suggest visualizing some of the data as diagrams or plots (personal preference). Overall, I think the presented hybrid heuristic algorithm (A*+BFHS) is a valuable contribution to the workshop, so I recommend accepting the paper.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title A*+BFHS: A Hybrid Heuristic Search Algorithm ### Paper Abstract We present a new algorithm A*+BFHS for solving problems where A* and IDA* fail due to memory limitations and/or the existence of many short cycles. A*+BFHS is based on A* and breadth-first heuristic search (BFHS). A*+BFHS combines advantages from both algorithms, namely A*’s node ordering, BFHS’s memory savings, and both algorithms’ duplicate detection. On easy problems, A*+BFHS behaves the same as A*. On hard problems, it is slower than A* but saves a large amount of memory. Compared to BFIDA*, A*+BFHS reduces the search time and/or memory requirement by several times on a variety of planning domains. ### Paper Keywords ["heuristic search", "planning", "artificial intelligence"] ### Paper Content A*+BFHS: A Hybrid Heuristic Search AlgorithmZhaoxing Bu, Richard E. KorfComputer Science DepartmentUniversity of California, Los AngelesLos Angeles, CA 90095fzbu, korf g@cs.ucla.eduAbstractWe present a new algorithm A*+BFHS for solving problemswhere A* and IDA* fail due to memory limitations and/orthe existence of many short cycles. A*+BFHS is based on A*and breadth-first heuristic search (BFHS). A*+BFHS com-bines advantages from both algorithms, namely A*’s nodeordering, BFHS’s memory savings, and both algorithms’ du-plicate detection. On easy problems, A*+BFHS behaves thesame as A*. On hard problems, it is slower than A* but savesa large amount of memory. Compared to BFIDA*, A*+BFHSreduces the search time and/or memory requirement by sev-eral times on a variety of planning domains.Introduction and OverviewA* (Hart, Nilsson, and Raphael 1968) is a classic heuristicsearch algorithm that is used by many state-of-the-art op-timal track planners (Katz et al. 2018; Franco et al. 2017,2018; Martinez et al. 2018). One advantage of A* is du-plicate detection. A* uses a Closed list and an Open list toprune duplicate nodes. A state is a unique configuration ofthe problem while a node is a data structure that represents astate reached by a particular path. Duplicate nodes representthe same state arrived at via different paths.The second advantage of A* is node ordering. A* alwayspicks an Open node whose f-value is minimum among allOpen nodes to expand next, which guarantees an optimalsolution returned by A* when using an admissible heuris-tic. When using a consistent heuristic, A* expands all nodeswhose f-value is less than the optimal solution cost ( C).However, tie-breaking among nodes of equal f-value sig-nificantly affects the set of expanded nodes whose f-valueequals C. It is common practice to choose an Open nodewhose h-value is minimum among all Open nodes with thesame f-value, as this strategy usually leads to fewer nodesexpanded. A survey of tie-breaking strategies in A* can befound in (Asai and Fukunaga 2016).A*’s main drawback is its exponential space requirementas it stores in memory all nodes generated during the search.For example, A* can fill up 8 GB of memory in a few min-utes on common heuristic search and planning domains. Tosolve hard problems where A* fails due to memory limita-tions, researchers have proposed various algorithms, usuallyby forgoing A*’s duplicate detection or node ordering. 
Forexample, Iterative-Deepening-A* (IDA*, Korf 1985) onlyhas a linear memory requirement, at the price of no duplicatedetection and a depth-first order within each search bound.However, IDA* may generate too many duplicate nodes ondomains containing lots of short cycles, such as Towers ofHanoi and many planning domains, limiting its application.This paper introduces a new algorithm for solving hardproblems with many short cycles, where IDA* is not effec-tive. First, we review previously developed algorithms. Sec-ond, we present our algorithm A*+BFHS, which is based onA* and Breadth-First Heuristic Search (Zhou and Hansen2004). Third, we present experimental results on 32 hard in-stances from 18 International Planning Competition (IPC)domains. On those problems, A*+BFHS is slower thanA* but requires significantly less memory. Compared toBFIDA*, which is an algorithm that requires less memorythan A*, A*+BFHS reduces the search time and/or memoryrequirement by several times, and sometimes by an order ofmagnitude, on a variety of domains.Previous WorkIDA* with a transposition table (IDA*+TT, Sen and Bagchi1989; Reinefeld and Marsland 1994) uses a transposition ta-ble to detect duplicate nodes. However, IDA*+TT is outper-formed by other algorithms on both heuristic search (Bu andKorf 2019) and planning domains (Zhou and Hansen 2004).A*+IDA* (Bu and Korf 2019) combines A* and IDA*,and is the state-of-the-art algorithm on the 24-Puzzle. It firstruns A* until memory is almost full, then runs IDA* beloweach frontier node without duplicate detection. By sortingthe frontier nodes with the same f-value in increasing orderofh-values, A*+IDA* can significantly reduce the numberof nodes generated in its last iteration. Compared to IDA*,we reported a reduction by a factor of 400 in the total num-ber of nodes generated in the last iteration on all 50 24-Puzzle test cases in (Korf and Felner 2002). Similar to IDA*,A*+IDA* does not work well on domains with many shortcycles, however, as in many planning domains.Frontier search (Korf et al. 2005) is a family of heuris-tic search algorithms that work well on domains with manyshort cycles. Rather than storing all nodes generated, itstores only nodes that are at or near the search frontier, in-cluding all Open nodes and only one or two layers of Closednodes. As a result, when a goal node is expanded, only theoptimal cost is known. To reconstruct the solution path, fron-tier search keeps a middle layer of Closed nodes in mem-ory. For example, we can save the Closed nodes at depthh(start )=2as the middle layer. Each node generated belowthis middle layer has a pointer to its ancestor in the middlelayer. After discovering the optimal cost, a node in the mid-dle layer that is on an optimal path is identified. Then thesame algorithm can be applied recursively to compute thesolution path from the start node to the middle node, andfrom the middle node to the goal node. In general, however,frontier search cannot prune all duplicates in directed graphs(Korf et al. 2005; Zhou and Hansen 2004).Divide-and-Conquer Frontier-A* (DCFA*, Korf andZhang 2000) is a best-first frontier search based on A*. Toreconstruct the solution path, DCFA* keeps a middle layerof Closed nodes that are roughly halfway along the solutionpath. 
DCFA* detects duplicates and maintains A*’s node or-dering, but its memory savings compared to A* is limited ondomains where the Open list is larger than the Closed list.Breadth-First Heuristic Search (BFHS, Zhou and Hansen2004) is a frontier search algorithm for unit-cost domains.BFHS also detects duplicates but uses a breadth-first nodeordering instead of A*’s best-first ordering. At first, assumethe optimal cost Cis known in advance. BFHS runs abreadth-first search (BFS) from the start node and prunesevery generated node whose f-value exceeds C. To savememory, BFHS only keeps a few layers of nodes in memory.On undirected graphs, if we store the operators used to gen-erate each node, and do not regenerate the parents of a nodevia the inverses of those operators, frontier search only needsto store two layers of nodes, the currently expanding layerand their child nodes (Korf et al. 2005). On directed graphs,one previous layer besides the above-mentioned two lay-ers is usually stored to detect duplicates (Zhou and Hansen2004). To reconstruct the solution path, Zhou and Hansen(2004) recommend saving the layer at the 3/4 point of thesolution length as the middle layer instead of the layer atthe halfway point, which usually requires more memory. Asshown in (Zhou and Hansen 2004), on a domain where theOpen list of A* is larger than the Closed list, BFHS usuallyends up storing fewer nodes than DCFA*.In general, Cis not known in advance. Breadth-FirstIterative-Deepening-A* (BFIDA*, Zhou and Hansen 2004)overcomes this issue by running multiple iterations ofBFHS, each with a different f-bound, starting with theheuristic value of the start node. Similar to IDA*, the last it-eration of BFIDA* is often significantly larger than previousiterations, so most search time is spent on the last iterationon many domains.Compared to A*, BFHS and BFIDA* save significantmemory but generate more nodes. The main drawback ofBFHS and BFIDA* is that their node ordering is almost theworst among different node ordering schemes. BFHS andBFIDA*’s breadth-first ordering means they have to expandall nodes stored at one depth before expanding any nodes inthe next depth. As a result, they have to expand almost allnodes whose f-value equals C, excepting only some nodesat the same depth as the goal node, while A* may only ex-pand a small fraction of such nodes due to its node ordering.Forward Perimeter Search (FPS, Sch ̈utt, D ̈obbelin, and6S7A7D8H 8I8E8B 7C8F 7G8J 8KFigure 1: An example of A*+BFHS’s search frontier. Num-bers are f-values. Closed nodes are gray.Reinefeld 2013) builds a perimeter around the start node viaBFS, then runs BFIDA* below each perimeter node. The au-thors only test FPS on the 24-Puzzle and 17-Pancake prob-lem, and did not report any running times.A*+BFHSAlgorithm DescriptionWe propose a hybrid algorithm we call A*+BFHS to solvehard problems with many short cycles. A*+BFHS first runsA* until a storage threshold is reached, then runs a seriesof BFHS iterations on sets of frontier nodes, which are theOpen nodes at the end of the A* phase.The BFHS phase can be viewed as a doubly nested loop.Each iteration of the outer loop, which we define as an it-eration of the BFHS phase, corresponds to a different costbound for BFHS. The first cost bound is set to the small-estf-value among all frontier nodes. In each iteration of theBFHS phase, we first partition the frontier nodes whose f-value equals the cost bound into different sets according totheir depths. 
Then the inner loop makes one call to BFHSon each set of frontier nodes, in decreasing order of theirdepths. This is done by initializing the BFS queue of eachcall to BFHS with all the nodes in the set. This inner loopcontinues until a solution is found or all calls to BFHS withthe current bound fail to find a solution. After each call toBFHS on a set of frontier nodes, we increase the f-valueof all nodes in the set to the minimum f-value of the nodesgenerated but not expanded in the previous call to BFHS.Figure 1 presents an example of the Open and Closednodes at the end of the A* phase. Node S is the start node.All edge costs are 1 and the number in each node is its f-value. Closed nodes are gray. The Open nodes B, E, F, H, I,J, K are the frontier nodes for the BFHS phase. A*+BFHSfirst makes a call to BFHS with a cost bound of 8 on allfrontier nodes at depth 3, namely nodes H, I, J, K. If no so-lution is found, A*+BFHS updates the f-values of all thesenodes to the minimum f-value of the nodes generated butnot expanded in that call to BFHS. A*+BFHS then makes asecond call to BFHS with bound 8, starting with all frontiernodes at depth 2, namely nodes E and F. If no solution isfound, A*+BFHS updates the f-values of these nodes, thenmakes a third call to BFHS with bound 8, starting with thefrontier node B at depth 1. Suppose that no solution is foundwith bound 8, the updated f-values for nodes E, F, H, I, J, Kare 9, and the updated f-value for node B is 10. A*+BFHSthen starts a new iteration of BFHS with a cost bound of 9,making two calls to BFHS on nodes at depth 3 and 2 respec-tively. If the solution is found in the first call to BFHS withbound 9, BFHS will not be called again on nodes E and F.A*+BFHS is complete and admissible when using anadmissible heuristic. A*+BFHS potentially makes calls toBFHS on all frontier nodes. When an optimal solution exists,one node on this optimal path will serve as one of the startnodes for one of the calls to BFHS. Such a node is guaran-teed to exist by A*’s completeness and admissibility. Thenwhen the cost bound for the calls to BFHS equals C, theoptimal solution will be found, guaranteed by BFHS’s com-pleteness and admissibility.A state can be regenerated in separate calls to BFHS in thesame iteration. To reduce such duplicates, we can decreasethe number of calls to BFHS in each iteration by makingeach call to BFHS on a combined set of frontier nodes atadjacent depths. For the example in Figure 1, we can makeone call to BFHS on the frontier nodes at depths 2 and 3together instead of two separate calls to BFHS, by puttingthe frontier nodes at depth 3 after the frontier nodes at depth2 in the initial BFS queue.In practice, we can specify a maximum number of callsto BFHS per iteration. Then in each iteration, we divide thenumber of depths of the frontier nodes by the number ofcalls to BFHS to get the number of depths for each call toBFHS. For example, if the depths of the frontier nodes rangefrom 7 to 12 and we are limited to three calls to BFHS periteration, each call to BFHS will start with frontier nodes attwo depths. We used this strategy in our experiments.For each node generated in the BFHS phase, we check if itwas generated in the A* phase. 
If so, we immediately prunethe node if its current g-value in the BFHS phase is greaterthan or equal to its stored g-value in the A* phase.The primary purpose of the A* phase is to build a frontierset, so that A*+BFHS can terminate early in its last iteration.In the A* phase we have to reserve some memory for theBFHS phase. In our experiments, we first generated patterndatabases or the merge-and-shrink heuristic, then allocated1/10 of the remaining memory of 8 GB for the A* phase.Comparisons to BFIDA* and FPSA*+BFHS’s BFHS phase also uses the iterative deepeningconcept of BFIDA*, but there are two key differences. First,in each iteration, BFIDA* always makes one call to BFHSon the start node, while we call BFHS multiple times, eachon a different set of frontier nodes. Second, in each iteration,we order the frontier nodes based on their depth, and runBFHS on the deepest frontier nodes first.These differences lead to one drawback and two advan-tages. The drawback is that A*+BFHS may generate morenodes than BFIDA*, as the same state can be regenerated inseparate calls to BFHS in the same iteration.The first advantage is that A*+BFHS may terminate earlyin its last iteration. If A*+BFHS generates a goal node inthe last iteration below a relatively deep frontier node, nofrontier nodes above that depth will be expanded. Therefore,A*+BFHS may generate only a small number of nodes in itslast iteration. In contrast, BFIDA* has to expand almost allnodes whose f-value is less than or equal to Cin its lastiteration. As a result, A*+BFHS can be faster than BFIDA*.The second advantage is that A*+BFHS’s memory us-age, which is the maximum number of nodes stored duringthe entire search, may be smaller than that of BFIDA* fortwo reasons. First, the partition of frontier nodes and sepa-rate calls to BFHS within the same iteration can reduce themaximum number of nodes stored in the BFHS phase. Sec-ond, BFIDA* stores the most nodes in its last iteration whileA*+BFHS may store only a small number of nodes in thelast iteration due to early termination. Thus, A*+BFHS maystore the most nodes in the penultimate iteration instead.FPS looks similar to A*+BFHS, but there are several fun-damental differences. First, FPS builds the perimeter usinga breadth-first approach while A*+BFHS builds the frontiervia a best-first approach. FPS can also dynamically extendthe perimeter but this approach does not always speed up thesearch (Sch ̈utt, D ̈obbelin, and Reinefeld 2013). Second, ineach iteration of FPS’s BFIDA* phase, FPS makes one callto BFHS on each perimeter node. In contrast, in A*+BFHSeach call to BFHS is on a set of frontier nodes. Third, FPSsorts the perimeter nodes at the same f-value using a max-tree-first or longest-path-first policy, while A*+BFHS sortsthe frontier nodes at the same f-value in decreasing orderof their depth. Fourth, FPS needs two separate searches forsolution reconstruction while A*+BFHS only needs one.Solution ReconstructionEach node generated in A*+BFHS’s BFHS phase has apointer to its ancestral frontier node. When a goal node isgenerated, the solution path from the start node to the an-cestral frontier node is stored in the A* phase and only onemore search is needed to reconstruct the solution path fromthe ancestral frontier node to the goal node. This subproblemis much easier than the original problem and we can use thesame heuristic function as for the original problem. There-fore, we just use A* to solve this subproblem. 
In addition,since we know the optimal cost of this subproblem, we canprune any node whose f-value exceeds this cost.In BFIDA*, we have to solve two subproblems to re-cover the solution path from the start node to the middlenode and from the middle node to the goal node. Zhou andHansen (2004) called BFHS recursively to solve these twosubproblems. However, pattern database heuristics (PDB,Culberson and Schaeffer 1998) only store heuristic valuesto the goal state, and not between arbitrary pairs of states,which complicates finding a path to a middle node. Simi-lar to A*+BFHS, we use A* to solve the second subprob-lem. For the first subproblem, we use A* to compute thepath from the start node to the middle node using the sameheuristic function as for the original problem, which mea-sures the distance to the goal node, not the middle node. Tosave memory, we prune any node whose g-value is greaterthan or equal to the depth of the middle node, and any nodewhose f-value exceeds the optimal cost of the original prob-lem. Since a deeper middle layer leads to more nodes storedin this approach, we saved the layer at the 1/4 point of thesolution length as the middle layer instead of the 3/4 point.In this way, we do not need to build a new heuristic functionfor the middle node. In our experiments, the search time forsolution reconstruction in BFIDA* is usually less than 1%of the total search time.Experimental Results and AnalysisWe implemented BFIDA* and A*+BFHS in the planner FastDownward 20.06 (Helmert 2006), using the existing codefor node expansion and heuristic value lookups. A*+BFHS’sA* phase reused the existing A* code. A* stores all nodes inone hash map. We used the same hash map implementationwith the following difference. In each call to BFHS in bothBFIDA* and A*+BFHS, we saved three layers of nodes forduplicate detection and we created one hash map for eachlayer of nodes. We did this because storing all nodes in onehash map in BFHS involves a lot of overhead, and is morecomplicated. Sch ̈utt, D ̈obbelin, and Reinefeld (2013) did nottest FPS on planning domains and we do not know the op-timal perimeter radius and sorting strategy for each domain,so we did not implement FPS in Fast Downward.We solved about 550 problem instances from 32 unit-cost domains. We present the results of A*, BFIDA*, andA*+BFHS on the 32 hardest instances. All remaining in-stances were easily solved by A*. We tested two A*+BFHSversions. A*+BFHS ( 1) starts each call to BFHS on fron-tier nodes at one depth. A*+BFHS (4) makes each call toBFHS on frontier nodes at multiple depths with at most fourcalls to BFHS in each iteration. All tests were run on a 3.33GHz Intel Xeon X5680 CPU with 236 GB of RAM. We usedthe landmark-cut heuristic (LM-cut, Helmert and Domshlak2009) for the satellite domain, the merge-and-shrink heuris-tic (M&S) with the recommended configuration (Sievers,Wehrle, and Helmert 2014, 2016; Sievers 2018) for the tppand hiking14 domains, and the iPDB heuristic with the de-fault configuration (Haslum et al. 2007; Sievers, Ortlieb, andHelmert 2012) for all other domains.We present the results in Tables 1, 2, and 3. Tables 1 and 2contain the 26 hardest instances solved by A*. Table 3 con-tains the remaining 6 instances where A* terminated earlywithout finding a solution due to the limitation of the hashmap size in Fast Downward 20.06. The instances in Tables1 and 2 are sorted by the A* running times and the instancesin Table 3 are sorted by the BFIDA* running times.All three tables have the same columns. 
The first columngives the domain name, the instance ID, the optimal solutioncostC, and the heuristic function used. The second columnlists the different algorithms. We ran each algorithm until itfound an optimal cost and returned the optimal path. Thethird column gives the maximum number of nodes stored byeach algorithm. For A*, this is the number of nodes stored atthe end of the search. For BFIDA*, this is the largest sum ofthe number of nodes stored in all three layers of the search,plus the nodes stored in the 1/4 layer for solution recon-struction. For A*+BFHS, this is the largest number of nodesstored in the BFHS phase plus the number of nodes storedin the A* phase. An underline means the specific algorithmneeded more than 8 GB of memory to solve the problem.The fourth column is the total number of nodes generated,including the nodes generated during solution reconstruc-tion. The fifth column is the number of nodes generated in all0 2 4 6 8 10 1200:20:40:60:8x= 1A* peak stored # =A*+BFHS peak stored #A* time =A*+BFHS timeA*+BFHS (4) A*+BFHS ( 1)Figure 2: A* vs. A*+BFHS in time and memory.but the last iteration. For A*, this is the number of nodes gen-erated before expanding an Open node whose f-value is C.For A*+BFHS, this number includes the nodes generated inits A* phase. The sixth column is the number of nodes gener-ated in the last iteration. For A*, this is the number of nodesgenerated while expanding the Open nodes whose f-valueequals C. The last column is the running time in seconds,including the time for solution reconstruction but excludingthe time spent on precomputing the heuristic function, whichis the same for all algorithms. For each instance, the smallestmaximum number of stored nodes and shortest running timeare indicated in boldface. For the A* data in Table 3, we re-port the numbers of nodes and running times just before A*terminated, with a >symbol to indicate such numbers.We further compare the time and memory between A*and A*+BFHS in Figure 2, and between BFIDA* andA*+BFHS in Figure 3, where the x-axis is A*/BFIDA*’speak stored nodes over A*+BFHS’s and the y-axis isA*/BFIDA*’s running time over A*+BFHS’s. Figure 2 con-tains the 26 instances solved by A* and Figure 3 contains all32 instances. The red circles and green triangles correspondto A*+BFHS (4) and A*+BFHS ( 1) respectively. The datapoints above the y= 1 line or to the right of the x= 1line represent instances where A*+BFHS outperformed thecomparison algorithm in terms of time or memory.A*+BFHS vs. A*A* was the fastest on all problem instances that it solved,but also used the most memory. Among the 32 hardest prob-lem instances we present, A* required more than 8 GB ofmemory on 22 instances and could not find a solution on6 of those after running out of the hash map used by FastDownward 20.06. On some of these instances, A* used 30GB to 40 GB of memory before it terminated. This meansA* cannot solve these 22 instances under the current IPCmemory requirement, which is 8 GB. A*+BFHS requiredseveral times, sometimes an order of magnitude, less mem-0 2 4 6051015y= 1x= 1BFIDA* peak stored # =A*+BFHS peak stored #BFIDA* time =A*+BFHS timeA*+BFHS (4) A*+BFHS ( 1)Figure 3: BFIDA* vs. A*+BFHS in time and memory.ory than A*. As a result, A*+BFHS only used more than 8GB of memory on one instance. An interesting comparisonis the space and time trade-off. For example, on parking14,A*+BFHS increased the running time by less than 100%while saving more than an order of magnitude in memory.A*+BFHS vs. 
A*+BFHS vs. BFIDA*

In summary, on easy problems that A*+BFHS can solve in its A* phase, A*+BFHS behaves the same as A*, and is always faster than BFIDA*. We solved around 500 such problems, which are not included here due to space limitations. On the 32 hardest problems we present, A*+BFHS is faster than BFIDA* on 27 instances and at least twice as fast on 16 of those. Furthermore, A*+BFHS requires less memory than BFIDA* on 25 of the 32 instances and saves more than half the memory on 14 of those. In addition, these time and memory reductions exist on both the relatively easy and hard ones of the 32 instances presented, demonstrating that A*+BFHS is in general better than BFIDA* on very hard problems as well as easy problems. In the following paragraphs, we compare A*+BFHS with BFIDA* in four aspects: duplicate detection, node ordering, memory, and running times.

The relative numbers of nodes generated in the previous iterations reflect the power of duplicate detection. Compared to BFIDA*, A*+BFHS (4) generated a similar number of nodes in the previous iterations on most instances. Hiking14 2-3-6 is the only instance where A*+BFHS (4) generated at least twice as many nodes in the previous iterations as BFIDA*. However, A*+BFHS (∞) generated 2 to 7 times as many nodes in the previous iterations as BFIDA* on 11 instances. This contrast shows that, compared to BFIDA*, significantly more duplicate nodes can be generated by making each call to BFHS on frontier nodes at only one depth. However, most of those duplicate nodes can be avoided by making each call to BFHS on frontier nodes at multiple depths.

A*+BFHS can generate fewer duplicate nodes than BFIDA* due to fewer BFHS iterations and making each call to BFHS on a set of frontier nodes. A*+BFHS reduced the number of nodes in previous iterations by around 50% on freecell 06 and snake18 17, and by a factor of 4 on snake18 08. To our surprise, we found that on snake18 08, the number of nodes generated in the penultimate iteration of BFIDA* was twice as many as the sum of the nodes generated in A*+BFHS's A* phase and the penultimate iteration of the BFHS phase. This means a lot of duplicate nodes were generated in BFIDA*. Snake18 generates a directed graph, in which case frontier search cannot detect all duplicate nodes (Korf et al. 2005; Zhou and Hansen 2004).

Compared to BFIDA*, A*+BFHS reduced the number of nodes in the last iteration significantly, and usually by several orders of magnitude, on 28 of the 32 instances. This large reduction proves that when ordering the frontier nodes by deepest-first, A*+BFHS can terminate early in its last iteration. On the three blocks instances and depot 11, A*+BFHS did not terminate early in its last iteration because the ancestral frontier node of the goal had a relatively low g-value. In fact, A* generated the most nodes in its last iteration on the three blocks instances, which shows that node ordering is also difficult for A* on those instances. In contrast, A* generated very few nodes in its last iteration on depot 11, suggesting that A*+BFHS may terminate early in its last iteration given more memory for its A* phase.

A*+BFHS's A* phase usually stored from 10 to 20 million nodes, with the exception of the snake18 domain, where 40 to 50 million nodes were stored. Comparing the maximum number of stored nodes, A*+BFHS (∞) required less memory than BFIDA* on 25 instances and less than half the memory on 14 of those. For A*+BFHS (4), these two numbers are 23 and 11 respectively.
In contrast, termes18 05 is the only instance where the maximum number of stored nodes of A*+BFHS was at least twice that of BFIDA*.

Comparing the two versions of A*+BFHS, A*+BFHS (4) was usually faster, sometimes significantly, due to the reduction in duplicate nodes. Compared to BFIDA*, A*+BFHS (4) was slightly slower on four instances and 80% slower on one instance. On the other 27 instances, A*+BFHS was faster than BFIDA*, and at least twice as fast on 16 of those. The large speedups usually were on the instances where BFIDA* generated the most nodes in its last iteration. The best result was on the logistics00 domain, where an order of magnitude speedup was achieved. This is because BFIDA* performed very poorly on this domain due to its breadth-first node ordering. Comparing A*+BFHS (∞) with BFIDA*, A*+BFHS (∞) was slower on 11 instances and at least twice as slow on three of those, but also at least twice as fast on 12 instances. The main reason for the slower cases is the presence of many duplicate nodes generated in certain domains.

Calling BFHS on Nodes at Multiple Depths

Comparing the two A*+BFHS versions, each has its pros and cons. A*+BFHS (4) always generated fewer duplicate nodes. Comparing the number of nodes generated in the previous iterations, A*+BFHS (∞) generated at least twice as many nodes on 7 instances. A*+BFHS (∞) generated significantly fewer nodes in the last iteration than A*+BFHS (4) on 22 instances. However, the number of nodes generated in the last iteration of A*+BFHS is usually only a small portion of the total nodes generated, so the large difference in the last iteration is not very important. A*+BFHS (4) stored a larger maximum number of nodes than A*+BFHS (∞) on almost all instances. However, the difference was usually small and never more than a factor of two. For the running time, the difference was usually less than 50%. Compared to A*+BFHS (∞), A*+BFHS (4) was faster by a factor of 3 on logistics00 15-1, 2.5 on rovers 09 and 11, 4.6 on termes18 05, and 3.9 on tpp 11, and never more than 30% slower.

In general, making each call to BFHS on frontier nodes at multiple depths increases both the memory usage and the number of nodes generated in the last iteration, but reduces the number of duplicate nodes and hence is often faster. Considering the memory and time trade-off, given a new problem, we recommend making each call to BFHS on frontier nodes at multiple depths; a code sketch of this depth grouping follows Table 3 below. So far, we have only tested limiting BFHS to four calls in each iteration. Determining the optimal number of calls to BFHS is a subject for future work.

Algorithm | Peak stored | Total nodes | Prev. iterations | Last iteration | Time (s)

depot 14 (C=29, iPDB):
A* | 70,504,763 | 344,658,749 | 344,639,234 | 19,515 | 233
BFIDA* | 17,042,841 | 1,390,466,785 | 582,348,193 | 795,336,992 | 1,708
A*+BFHS (∞) | 21,023,657 | 556,674,817 | 540,764,124 | 15,909,899 | 596
A*+BFHS (4) | 22,882,537 | 446,204,987 | 432,278,188 | 13,926,005 | 475

termes18 05 (C=132, iPDB):
A* | 80,012,545 | 211,514,579 | 211,514,568 | 11 | 245
BFIDA* | 9,370,587 | 3,757,844,868 | 3,413,500,020 | 221,186,298 | 4,796
A*+BFHS (∞) | 30,874,300 | 10,702,979,649 | 10,701,959,808 | 911,786 | 15,415
A*+BFHS (4) | 30,076,170 | 2,271,661,960 | 2,270,262,609 | 1,291,296 | 3,319

freecell 06 (C=34, iPDB):
A* | 53,080,996 | 243,947,771 | 243,244,703 | 703,068 | 250
BFIDA* | 38,054,162 | 1,220,132,074 | 732,920,409 | 485,268,534 | 1,883
A*+BFHS (∞) | 30,481,377 | 327,209,951 | 312,812,283 | 14,388,579 | 441
A*+BFHS (4) | 35,120,076 | 403,465,250 | 302,581,091 | 100,875,070 | 561

logistics00 14-1 (C=71, iPDB):
A* | 57,689,357 | 107,083,712 | 106,929,666 | 154,046 | 255
BFIDA* | 15,441,813 | 3,137,204,256 | 106,929,666 | 3,020,315,591 | 10,381
A*+BFHS (∞) | 19,472,255 | 354,438,805 | 354,058,774 | 368,595 | 1,160
A*+BFHS (4) | 20,169,648 | 227,903,318 | 110,674,320 | 117,217,562 | 752

driverlog 12 (C=35, iPDB):
A* | 144,065,288 | 420,609,830 | 420,609,777 | 53 | 344
BFIDA* | 35,034,406 | 1,718,350,515 | 678,644,177 | 1,030,180,074 | 1,676
A*+BFHS (∞) | 24,712,720 | 1,020,438,794 | 1,020,410,754 | 27,959 | 944
A*+BFHS (4) | 30,270,816 | 643,723,984 | 641,790,459 | 1,933,444 | 631

freecell 07 (C=41, iPDB):
A* | 107,183,015 | 531,379,136 | 531,378,858 | 278 | 522
BFIDA* | 77,196,602 | 4,152,881,254 | 2,897,339,576 | 1,143,762,584 | 6,416
A*+BFHS (∞) | 54,171,433 | 3,095,608,289 | 2,370,094,738 | 725,267,629 | 4,775
A*+BFHS (4) | 58,058,327 | 2,430,947,097 | 1,896,369,611 | 534,331,564 | 3,769

depot 11 (C=46, iPDB):
A* | 172,447,963 | 764,608,339 | 764,607,971 | 368 | 550
BFIDA* | 27,192,174 | 3,037,154,042 | 1,260,718,486 | 1,755,157,316 | 3,544
A*+BFHS (∞) | 37,977,775 | 6,268,318,349 | 3,092,746,859 | 3,175,552,575 | 7,314
A*+BFHS (4) | 46,923,423 | 3,319,995,622 | 1,262,429,685 | 2,057,547,022 | 4,078

tpp 11 (C=51, M&S):
A* | 187,011,066 | 610,996,630 | 610,995,018 | 1,612 | 562
BFIDA* | 93,759,836 | 4,290,825,940 | 754,905,369 | 3,525,135,895 | 7,214
A*+BFHS (∞) | 30,856,159 | 5,504,314,294 | 5,504,268,064 | 46,111 | 9,550
A*+BFHS (4) | 33,368,912 | 1,419,143,562 | 1,285,410,734 | 133,732,709 | 2,426

mystery 14 (C=11, iPDB):
A* | 139,924,686 | 652,569,481 | 650,036,341 | 2,533,140 | 578
BFIDA* | 135,963,227 | 6,213,135,253 | 727,753,687 | 5,430,082,105 | 7,628
A*+BFHS (∞/4) | 20,302,860 | 730,971,724 | 676,473,465 | 54,497,630 | 839

tidybot11 17 (C=40, iPDB):
A* | 69,953,936 | 171,363,621 | 170,286,720 | 1,076,901 | 662
BFIDA* | 42,080,838 | 776,084,110 | 486,518,217 | 281,131,278 | 3,684
A*+BFHS (∞) | 33,969,968 | 661,386,777 | 467,282,853 | 194,103,710 | 3,223
A*+BFHS (4) | 37,090,062 | 547,745,706 | 397,125,094 | 150,620,398 | 2,694

logistics00 15-1 (C=67, iPDB):
A* | 82,161,805 | 167,974,727 | 163,970,672 | 4,004,055 | 663
BFIDA* | 13,638,319 | 2,847,571,079 | 163,970,672 | 2,660,698,165 | 19,062
A*+BFHS (∞) | 18,827,830 | 730,154,067 | 722,390,335 | 7,763,336 | 4,897
A*+BFHS (4) | 18,827,830 | 251,960,077 | 198,537,096 | 53,422,585 | 1,627

pipesworld-notankage 19 (C=24, iPDB):
A* | 123,553,926 | 284,884,903 | 284,880,335 | 4,568 | 727
BFIDA* | 86,818,434 | 1,227,115,669 | 634,454,295 | 576,633,809 | 4,140
A*+BFHS (∞) | 42,192,503 | 619,095,459 | 619,013,855 | 81,147 | 2,072
A*+BFHS (4) | 44,706,153 | 574,957,328 | 570,451,612 | 4,505,259 | 1,942

parking14 169-01 (C=24, iPDB):
A* | 351,976,816 | 828,472,606 | 828,472,562 | 44 | 971
BFIDA* | 183,832,715 | 4,846,132,188 | 1,023,897,982 | 3,821,980,237 | 6,236
A*+BFHS (∞) | 30,675,587 | 1,191,570,432 | 1,191,514,776 | 55,283 | 1,468
A*+BFHS (4) | 51,147,740 | 1,013,776,888 | 1,011,227,268 | 2,549,247 | 1,290

visitall11 08-half (C=43, iPDB):
A* | 407,182,291 | 795,670,561 | 795,669,929 | 632 | 1,045
BFIDA* | 172,474,497 | 3,159,596,842 | 1,332,828,069 | 1,824,866,109 | 4,220
A*+BFHS (∞) | 34,406,966 | 1,639,641,152 | 1,639,585,228 | 55,798 | 2,233
A*+BFHS (4) | 64,671,078 | 1,346,690,454 | 1,312,333,974 | 34,356,354 | 1,902

Table 1: Instances sorted by A* running times. An underline means more than 8 GB of memory was needed. Smallest memory and shortest times are in boldface.

Algorithm | Peak stored | Total nodes | Prev. iterations | Last iteration | Time (s)

tidybot11 16 (C=40, iPDB):
A* | 115,965,857 | 246,756,618 | 246,756,201 | 417 | 1,086
BFIDA* | 86,095,996 | 1,090,011,154 | 652,777,121 | 431,816,881 | 5,512
A*+BFHS (∞) | 41,342,908 | 583,309,116 | 570,082,820 | 13,225,950 | 2,923
A*+BFHS (4) | 57,026,598 | 598,365,499 | 519,723,294 | 78,641,859 | 3,080

snake18 08 (C=58, iPDB):
A* | 94,699,640 | 129,288,606 | 129,273,608 | 14,998 | 1,131
BFIDA* | 44,231,998 | 1,852,488,086 | 1,517,078,892 | 325,204,785 | 14,877
A*+BFHS (∞) | 44,081,853 | 391,010,354 | 390,681,641 | 328,706 | 3,445
A*+BFHS (4) | 51,166,308 | 356,988,514 | 348,015,242 | 8,973,265 | 3,192

hiking14 2-2-8 (C=42, M&S):
A* | 287,192,625 | 3,299,939,168 | 3,299,937,850 | 1,318 | 1,297
BFIDA* | 42,570,885 | 11,376,337,161 | 5,757,334,602 | 5,582,502,874 | 10,847
A*+BFHS (∞) | 44,454,322 | 16,233,911,987 | 12,346,881,620 | 3,886,689,991 | 14,897
A*+BFHS (4) | 53,148,260 | 9,850,751,126 | 6,310,295,933 | 3,540,114,817 | 9,696

pipesworld-tankage 14 (C=38, iPDB):
A* | 292,998,092 | 907,283,307 | 907,283,301 | 6 | 1,364
BFIDA* | 158,262,429 | 5,354,342,623 | 3,680,871,467 | 1,661,344,123 | 10,609
A*+BFHS (∞) | 84,077,693 | 5,768,933,724 | 5,763,927,002 | 5,002,176 | 11,622
A*+BFHS (4) | 103,288,306 | 3,300,541,977 | 3,220,772,288 | 79,765,143 | 6,896

blocks 13-1 (C=44, iPDB):
A* | 555,864,249 | 1,185,065,570 | 205,172,261 | 979,893,309 | 1,540
BFIDA* | 99,782,317 | 1,742,819,669 | 463,603,038 | 1,224,383,750 | 2,142
A*+BFHS (∞) | 54,601,577 | 2,261,321,708 | 425,991,501 | 1,827,341,160 | 2,817
A*+BFHS (4) | 79,572,108 | 1,817,197,763 | 401,559,990 | 1,407,648,726 | 2,317

parking14 169-03 (C=24, iPDB):
A* | 606,117,759 | 1,430,911,954 | 1,430,746,610 | 165,344 | 1,714
BFIDA* | 291,822,896 | 8,077,642,530 | 1,796,305,162 | 6,280,923,558 | 10,059
A*+BFHS (∞) | 48,304,204 | 2,519,414,336 | 2,328,368,930 | 191,043,484 | 3,124
A*+BFHS (4) | 63,455,874 | 2,151,415,198 | 1,992,188,756 | 159,224,520 | 2,679

tidybot11 18 (C=44, iPDB):
A* | 175,574,760 | 372,772,055 | 372,771,560 | 495 | 1,730
BFIDA* | 114,747,861 | 1,718,896,347 | 1,093,273,564 | 613,928,542 | 8,810
A*+BFHS (∞) | 40,540,308 | 1,045,166,148 | 1,028,635,660 | 16,529,544 | 5,410
A*+BFHS (4) | 65,784,369 | 1,204,942,101 | 931,501,196 | 273,439,961 | 6,365

blocks 13-0 (C=42, iPDB):
A* | 704,938,102 | 1,568,547,017 | 342,339,737 | 1,226,207,280 | 1,990
BFIDA* | 137,821,868 | 2,421,546,636 | 775,076,076 | 1,628,338,675 | 2,977
A*+BFHS (∞) | 81,918,224 | 3,498,922,607 | 774,231,514 | 2,710,189,950 | 4,483
A*+BFHS (4) | 126,629,640 | 2,615,897,101 | 698,028,054 | 1,903,367,904 | 3,378

hiking14 2-3-6 (C=28, M&S):
A* | 368,433,117 | 6,711,042,999 | 6,710,971,209 | 71,790 | 2,480
BFIDA* | 124,686,777 | 38,476,138,468 | 29,175,130,389 | 8,123,329,545 | 42,379
A*+BFHS (∞) | 146,623,619 | 107,138,328,055 | 106,429,883,507 | 682,558,443 | 120,494
A*+BFHS (4) | 148,357,537 | 68,496,320,172 | 65,779,382,852 | 2,691,051,215 | 76,603

pipesworld-notankage 20 (C=28, iPDB):
A* | 442,232,520 | 1,028,882,844 | 1,028,880,896 | 1,948 | 2,693
BFIDA* | 301,349,348 | 4,454,789,871 | 2,384,958,671 | 2,032,377,777 | 15,245
A*+BFHS (∞) | 133,708,317 | 3,325,668,014 | 3,267,529,384 | 58,132,775 | 11,499
A*+BFHS (4) | 148,029,967 | 2,988,248,448 | 2,728,140,813 | 260,097,006 | 10,629

snake18 17 (C=62, iPDB):
A* | 265,033,991 | 367,639,596 | 365,927,487 | 1,712,109 | 3,967
BFIDA* | 60,041,363 | 2,162,411,969 | 1,464,995,207 | 639,565,966 | 20,418
A*+BFHS (∞) | 56,839,243 | 877,934,374 | 871,327,013 | 6,607,339 | 8,785
A*+BFHS (4) | 73,365,792 | 855,342,127 | 776,892,002 | 78,450,103 | 8,916

satellite 08 (C=26, LM-cut):
A* | 107,395,076 | 463,747,690 | 463,744,251 | 3,439 | 11,834
BFIDA* | 20,846,202 | 3,656,980,017 | 520,525,131 | 3,125,446,334 | 398,884
A*+BFHS (∞) | 18,870,254 | 552,221,751 | 551,990,933 | 230,549 | 54,551
A*+BFHS (4) | 19,763,323 | 546,211,783 | 479,810,475 | 66,401,039 | 56,296

Table 2: Instances sorted by A* running times. An underline means more than 8 GB of memory was needed. Smallest memory and shortest times are in boldface.
Algorithm | Peak stored | Total nodes | Prev. iterations | Last iteration | Time (s)

blocks 15-0 (C=40, iPDB):
A* (unfinished) | >814,951,324 | >1,562,632,802 | 256,247,910 | >1,306,384,892 | >2,284
BFIDA* | 113,471,990 | 2,408,362,561 | 579,842,889 | 1,827,125,272 | 3,058
A*+BFHS (∞) | 68,070,197 | 3,861,465,924 | 550,007,126 | 3,291,490,500 | 4,889
A*+BFHS (4) | 106,482,059 | 2,656,641,036 | 492,390,560 | 2,144,282,178 | 3,514

storage 17 (C=26, iPDB):
A* (unfinished) | >799,907,374 | >1,741,590,894 | >1,741,590,894 | - | >2,358
BFIDA* | 397,798,456 | 13,297,651,168 | 4,430,334,119 | 8,825,291,425 | 19,086
A*+BFHS (∞) | 118,138,352 | 13,403,671,261 | 13,364,290,422 | 39,380,047 | 18,914
A*+BFHS (4) | 133,800,503 | 7,895,157,984 | 6,819,827,727 | 1,075,329,465 | 11,354

driverlog 15 (C=32, iPDB):
A* (unfinished) | >786,467,847 | >2,028,764,217 | >2,028,764,217 | - | >1,853
BFIDA* | 453,643,579 | 24,705,660,389 | 6,388,627,692 | 18,280,039,412 | 24,297
A*+BFHS (∞) | 88,449,751 | 16,928,608,100 | 16,913,831,869 | 14,773,242 | 15,311
A*+BFHS (4) | 123,602,679 | 9,160,294,407 | 8,974,814,158 | 185,477,260 | 8,447

rovers 09 (C=31, iPDB):
A* (unfinished) | >801,124,989 | >4,427,878,559 | >4,427,878,559 | - | >2,776
BFIDA* | 235,386,020 | 20,666,689,222 | 7,239,737,785 | 13,401,874,237 | 25,336
A*+BFHS (∞) | 96,100,365 | 34,236,064,765 | 34,235,937,332 | 123,597 | 42,290
A*+BFHS (4) | 99,498,513 | 12,845,107,625 | 12,752,327,728 | 92,776,061 | 16,770

rovers 11 (C=30, iPDB):
A* (unfinished) | >766,016,316 | >3,690,650,688 | >3,690,650,688 | - | >2,378
BFIDA* | 274,612,697 | 18,975,576,425 | 6,574,504,656 | 12,391,406,745 | 26,022
A*+BFHS (∞) | 112,783,085 | 32,143,105,562 | 32,139,546,138 | 3,549,575 | 43,538
A*+BFHS (4) | 113,594,902 | 12,342,784,453 | 11,789,007,437 | 553,767,167 | 16,661

parking14 169-04 (C=26, iPDB):
A* (unfinished) | >770,874,998 | >1,681,926,228 | >1,681,926,228 | - | >2,306
BFIDA* | 1,045,614,854 | 27,924,183,007 | 6,292,017,194 | 21,628,727,845 | 37,701
A*+BFHS (∞) | 156,758,802 | 9,778,837,190 | 9,777,264,498 | 1,570,687 | 12,304
A*+BFHS (4) | 181,535,647 | 7,588,132,706 | 7,586,728,152 | 1,402,549 | 9,813

Table 3: Instances where A* terminated without solving the problem (marked by >), sorted by BFIDA* running times. An underline means more than 8 GB of memory was needed. Smallest memory and shortest times are in boldface.
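The depth grouping recommended above can be sketched compactly. The code below is our reading of the paper's description, not the authors' implementation: `frontier` pairs each Open node with its depth, `max_calls=None` stands in for the (∞) version, and the contiguous split of depths over calls is an assumption, since the paper does not pin down how depths are distributed over the calls.

```python
from collections import defaultdict

def plan_bfhs_calls(frontier, max_calls=None):
    """Group frontier nodes by depth for one iteration of BFHS calls.

    frontier is an iterable of (node, depth) pairs. max_calls=None mimics
    A*+BFHS (infinity), one call per depth; an integer k mimics A*+BFHS (k).
    """
    by_depth = defaultdict(list)
    for node, depth in frontier:
        by_depth[depth].append(node)
    depths = sorted(by_depth, reverse=True)      # deepest-first ordering
    if max_calls is None:
        return [by_depth[d] for d in depths]     # one depth per call
    per_call = -(-len(depths) // max_calls)      # ceil(len(depths) / max_calls)
    return [[n for d in depths[i:i + per_call] for n in by_depth[d]]
            for i in range(0, len(depths), per_call)]
```

The deepest-first ordering matters for the early-termination behavior discussed earlier: the call that contains the goal's ancestral frontier node tends to come first.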
Heuristic Functions and Running Times

For each node generated, A* first does duplicate checking and then looks up its heuristic value if needed. Thus for each state, A* only computes its heuristic value once, no matter how many times this state is generated. However, the situation is different in BFHS. Even in a single call to BFHS, a state's heuristic value may be calculated multiple times. For example, if a state's f-value is greater than the cost bound of BFHS, then this state is never stored in this call to BFHS and its heuristic value has to be computed every time it is generated. In addition, A* has only one hash map, but our BFHS implementation has one hash map for each layer of nodes. Consequently, for each node generated, A* does only one hash map lookup while BFHS may need multiple lookups.

Due to the above differences, the number of nodes generated per second of BFIDA* and A*+BFHS was smaller than that of A*. For the iPDB and M&S heuristics, this difference was usually less than a factor of two. For the LM-cut heuristic, A* was faster by a factor of four in terms of nodes generated per second on the satellite domain. This is because computing a node's LM-cut heuristic is much more expensive than computing the iPDB and M&S heuristics. This contrast shows that the choice of heuristic function also plays an important role in comparing the running times of different algorithms.
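Both effects, repeated heuristic evaluations and per-layer duplicate lookups, can be seen in a minimal sketch of one BFHS layer expansion. Everything here is illustrative: the dict-per-layer structure mirrors the description above, but the names and signatures are not Fast Downward's actual interfaces.

```python
def expand_bfhs_layer(current, next_layer, previous, bound, h, successors):
    """Expand one breadth-first layer with per-layer duplicate detection.

    current, next_layer and previous are the three layer hash maps (dicts
    keyed by state) kept for duplicate detection.
    """
    for state, g in current.items():
        for child, cost in successors(state):
            # A child pruned by the cost bound is never stored, so its
            # heuristic is recomputed every time it is regenerated; A*, by
            # contrast, evaluates each stored state only once.
            if g + cost + h(child) > bound:
                continue
            # Up to three hash-map lookups per generated node here, versus
            # A*'s single lookup in one global hash map.
            if child in previous or child in current or child in next_layer:
                continue
            next_layer[child] = g + cost
```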
Future Work

Future work includes the following. First, test A*+BFHS on more unit-cost domains. Second, investigate what the best memory threshold for the A* phase is. Third, determine the optimal number of calls to BFHS in each iteration. Fourth, find other ways to partition the frontier nodes besides the current depth-based approach. If a set of frontier nodes is too large, we may split it into multiple smaller sets and make one call to BFHS on each such smaller set. This approach may reduce the maximum number of stored nodes but may generate more duplicate nodes. In addition, when we make each call to BFHS on frontier nodes at multiple depths, we may consider the number of frontier nodes at each depth so that each call to BFHS is on a different number of depths instead of a fixed number. Fifth, find out how to apply A*+BFHS to domains with non-unit operator costs. For such domains, BFHS's BFS can be replaced by uniform-cost search or Dijkstra's algorithm (Dijkstra 1959). In this case, we can store nodes with multiple costs in each layer (Zhou and Hansen 2006). Sixth, use external memory such as magnetic disk or flash memory in A*+BFHS to solve very hard problems. For example, instead of allocating 1/10 of RAM for the A* phase, we can first run A* until RAM is almost full, then store both Open and Closed nodes in external memory and remove them from RAM. Then in the BFHS phase, we load back the set of frontier nodes for each call to BFHS from external memory. This A*+BFHS version would never perform worse than A*, since it is identical to A* until memory is exhausted, at which point the BFHS phase would begin.

Conclusions

We introduce a hybrid heuristic search algorithm A*+BFHS for solving hard problems that cannot be solved by A* due to memory limitations, or by IDA* due to the existence of many short cycles. A*+BFHS first runs A* until a user-specified storage threshold is reached, then runs multiple iterations of BFHS on the frontier nodes, which are the Open nodes at the end of the A* phase. Each iteration has a unique cost bound and contains multiple calls to BFHS. Each call to BFHS within the same iteration has the same cost bound but a different set of frontier nodes to start with. Within an iteration, frontier nodes are sorted deepest-first so that A*+BFHS can terminate early in its last iteration.

On the around 500 easy problems solved, A*+BFHS behaves the same as A*, and is always faster than BFIDA*. On the 32 hard instances presented, A*+BFHS is slower than A* but uses significantly less memory. A*+BFHS is faster than BFIDA* on 27 of those 32 instances and at least twice as fast on 16 of those. Furthermore, A*+BFHS requires less memory than BFIDA* on 25 of those 32 instances and saves more than half the memory on 14 of those. Another contribution of this paper is a comprehensive testing of BFIDA* on many planning domains, which is lacking in the literature.

References

Asai, M.; and Fukunaga, A. 2016. Tiebreaking strategies for A* search: How to explore the final frontier. 673–679.
Bu, Z.; and Korf, R. E. 2019. A*+IDA*: a simple hybrid search algorithm. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, 1206–1212. AAAI Press.
Culberson, J. C.; and Schaeffer, J. 1998. Pattern databases. Computational Intelligence 14(3): 318–334.
Dijkstra, E. W. 1959. A note on two problems in connexion with graphs. Numerische Mathematik 1(1): 269–271.
Franco, S.; Lelis, L. H.; Barley, M.; Edelkamp, S.; Martines, M.; and Moraru, I. 2018. The Complementary2 planner in the IPC 2018. IPC-9 planner abstracts 28–31.
Franco, S.; Torralba, A.; Lelis, L. H.; and Barley, M. 2017. On creating complementary pattern databases. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, 4302–4309.
Hart, P. E.; Nilsson, N. J.; and Raphael, B. 1968. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics 4(2): 100–107.
Haslum, P.; Botea, A.; Helmert, M.; Bonet, B.; Koenig, S.; et al. 2007. Domain-independent construction of pattern database heuristics for cost-optimal planning. In AAAI, volume 7, 1007–1012.
Helmert, M. 2006. The Fast Downward Planning System. Journal of Artificial Intelligence Research 26: 191–246.
Helmert, M.; and Domshlak, C. 2009. Landmarks, critical paths and abstractions: what's the difference anyway? Proceedings of the International Conference on Automated Planning and Scheduling 19(1): 162–169.
Katz, M.; Sohrabi, S.; Samulowitz, H.; and Sievers, S. 2018. Delfi: Online planner selection for cost-optimal planning. IPC-9 planner abstracts 57–64.
Korf, R. E. 1985. Depth-first iterative-deepening: An optimal admissible tree search. Artificial Intelligence 27(1): 97–109.
Korf, R. E.; and Felner, A. 2002. Disjoint pattern database heuristics. Artificial Intelligence 134(1): 9–22.
Korf, R. E.; and Zhang, W. 2000. Divide-and-conquer frontier search applied to optimal sequence alignment. In AAAI/IAAI, 910–916.
Korf, R. E.; Zhang, W.; Thayer, I.; and Hohwald, H. 2005. Frontier search. Journal of the ACM (JACM) 52(5): 715–748.
Martinez, M.; Moraru, I.; Edelkamp, S.; and Franco, S. 2018. Planning-PDBs planner in the IPC 2018. IPC-9 planner abstracts 63–66.
Reinefeld, A.; and Marsland, T. A. 1994. Enhanced iterative-deepening search. IEEE Transactions on Pattern Analysis and Machine Intelligence 16(7): 701–710.
Schütt, T.; Döbbelin, R.; and Reinefeld, A. 2013. Forward perimeter search with controlled use of memory. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, 659–665. AAAI Press.
Sen, A. K.; and Bagchi, A. 1989. Fast Recursive Formulations for Best-First Search That Allow Controlled Use of Memory. In IJCAI, 297–302.
Sievers, S. 2018. Merge-and-shrink heuristics for classical planning: Efficient implementation and partial abstractions. In Eleventh Annual Symposium on Combinatorial Search, 90–98.
Sievers, S.; Ortlieb, M.; and Helmert, M. 2012.
Efficient implementation of pattern database heuristics for classical planning. In Fifth Annual Symposium on Combinatorial Search, 105–111.
Sievers, S.; Wehrle, M.; and Helmert, M. 2014. Generalized label reduction for merge-and-shrink heuristics. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, 2358–2366.
Sievers, S.; Wehrle, M.; and Helmert, M. 2016. An analysis of merge strategies for merge-and-shrink heuristics. In Proceedings of the Twenty-Sixth International Conference on Automated Planning and Scheduling, 294–298.
Zhou, R.; and Hansen, E. A. 2004. Breadth-first heuristic search. In Proceedings of the 14th International Conference on Automated Planning and Scheduling (ICAPS-04), 92–100.
Zhou, R.; and Hansen, E. A. 2006. Breadth-first heuristic search. Artificial Intelligence 170(4): 385–408.<|im_end|> <|im_start|>assistant ### Review Title Review ### Review Text This paper presents an algorithm that performs an A* search until a user-specified memory limit is reached, and then performs (multiple) breadth-first heuristic searches (BFHS). The main advantage of this hybrid search is that it behaves like A* on smaller instances, where the exponential memory consumption of A* is not a problem, but at the same time can solve larger, more memory-intensive tasks by using BFHS. Advantages and disadvantages of this new hybrid search algorithm are described and discussed in comparison to related algorithms. An empirical evaluation on "hard" (memory-intensive) unit-cost planning tasks shows that A*+BFHS performs favorably over A* and Breadth-First Iterative-Deepening A*. The topic of the paper fits the workshop, as one of the characteristic topics of the HSDIP workshop is the study of "novel search techniques for domain-independent planning". The newly presented algorithm is well motivated and shows good empirical performance. Concerning the optimality and completeness of A*+BFHS, I suppose that it follows directly from the construction if an admissible heuristic is used. However, I missed such a statement or proof in the paper. At least I think it would be important to mention this somewhere in the paper so that a reader can directly see that these properties hold. I assume that an admissible but inconsistent heuristic is sufficient if reopening is performed by all the heuristic searches involved? Otherwise, the landmark-cut heuristic used in some of the experiments would be problematic. Speaking of optimality with an admissible heuristic: in some places it was difficult to understand why the generated solutions are optimal with an admissible but inconsistent heuristic, in particular in the section on solution reconstruction. It states: "We use A* to compute the path from the start node to the middle node using the same heuristic function for the original problem, which measures the distance to the goal node, not the middle node." At first glance, it is not entirely clear to me under what termination criteria this search reconstructs the optimal path to the middle node, especially when an inconsistent heuristic is used. With regard to the empirical evaluation, is there a specific reason why some heuristics were chosen for certain domains? Even though I see some value in the detailed tables, the large amount of numbers is somewhat difficult to analyze at once. I would suggest visualizing some of the data as diagrams or plots (personal preference).
Overall, I think the presented hybrid heuristic algorithm (A*+BFHS) is a valuable contribution to the workshop, so I recommend accepting the paper. ### Review Rating ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
B1EjKsRqtQ
ICLR.cc/2019/Conference
2019
Hierarchical Attention: What Really Counts in Various NLP Tasks
["Zehao Dou", "Zhihua Zhang"]
Attention mechanisms in sequence to sequence models have shown great ability and wonderful performance in various natural language processing (NLP) tasks, such as sentence embedding, text generation, machine translation, machine reading comprehension, etc. Unfortunately, existing attention mechanisms only learn either high-level or low-level features. In this paper, we think that the lack of hierarchical mechanisms is a bottleneck in improving the performance of the attention mechanisms, and propose a novel Hierarchical Attention Mechanism (Ham) based on the weighted sum of different layers of a multi-level attention. Ham achieves a state-of-the-art BLEU score of 0.26 on Chinese poem generation task and a nearly 6.5% averaged improvement compared with the existing machine reading comprehension models such as BIDAF and Match-LSTM. Furthermore, our experiments and theorems reveal that Ham has greater generalization and representation ability than existing attention mechanisms.
["attention", "hierarchical", "machine reading comprehension", "poem generation"]
ABSTRACT

Attention mechanisms in sequence to sequence models have shown great ability and wonderful performance in various natural language processing (NLP) tasks, such as sentence embedding, text generation, machine translation, machine reading comprehension, etc. Unfortunately, existing attention mechanisms only learn either high-level or low-level features. In this paper, we think that the lack of hierarchical mechanisms is a bottleneck in improving the performance of the attention mechanisms, and propose a novel Hierarchical Attention Mechanism (Ham) based on the weighted sum of different layers of a multi-level attention. Ham achieves a state-of-the-art BLEU score of 0.26 on the Chinese poem generation task and a nearly 6.5% averaged improvement compared with the existing machine reading comprehension models such as BIDAF and Match-LSTM. Furthermore, our experiments and theorems reveal that Ham has greater generalization and representation ability than existing attention mechanisms.

1 INTRODUCTION

In recent years, long short-term memory, convolutional and recurrent networks with attention mechanisms have been very successful on a variety of NLP tasks. Most of the tasks in NLP can be formulated as sequence to sequence problems. Moreover, encoders and decoders are vital components in sequence to sequence models for processing the inputs and outputs.

An attention mechanism works as a connector between the encoders and decoders and helps the decoder to decide which parts of the source text to pay attention to. Thus an attention mechanism can integrate sequence models and transduction models, so it is able to connect two words in a single passage or paragraph without regard to their positions.

Attention mechanisms have become integral components in various NLP models. For example, in machine translation tasks, attention mechanism-based models [1, 2, 3] have been the state-of-the-art; in sentence embedding, a self attention based model is now the state-of-the-art [4]; in machine reading comprehension, almost every recently published model, such as BIDAF [5], Match-LSTM [6], Reinforcement Ranker Reader [7], and R-NET [8], contains an attention mechanism; in the abstractive summarization model [12], which has also once been the state-of-the-art, an attention mechanism is very necessary; and in poem generation [9], attention mechanisms are also widely used. More surprisingly, Vaswani, et al. (2017) showed that their model, the Transformer, which relies solely on attention mechanisms, can outperform those RNN or LSTM-based existing models in machine translation tasks. Thus, they stated that "Attention is all you need".

However, we note that the potential issue with the existing attention mechanisms is that the basic attention mechanism learns only the low-level features while the multi-level attention mechanism learns only the high-level features. This may make it difficult for the model to capture the intermediate feature information, especially when the source texts are long. In order to address this issue, we present Ham, which introduces a hierarchical mechanism into the existing multi-level attention mechanisms. Each time we perform a multi-level attention, instead of using the result of the last attention level only, we use the weighted sum of the results of all the attention levels as the final output.

We show that Ham can learn all levels of features among the tokens in the input sequence and give a proof of its monotonicity and convergence.
This work presents the design and implementation of Ham, and our implementation performs well on a range of tasks by replacing the existing attention mechanisms in different models of different tasks.

We are able to achieve results comparable to or better than existing state-of-the-art models. On Chinese poem generation, our model scores 0.246 BLEU, an improvement of 21.78% over a RNN-based Poem Generator model. On the machine reading comprehension task, our implementation is more effective: the model with Ham has achieved an average improvement of 6.5% compared to previous models.

The implementation of the Hierarchical Attention Mechanism is not difficult and the code will be available on http://github.com after the acceptance.

2 ATTENTION MECHANISMS

The attention mechanism can be described as a function whose input is a query and a set of key-value pairs, where the query and keys are vectors with the same dimension (denoted $d_k$), and the values are defined as $d_v$-vectors. Note that in most types of attention mechanisms, the values are equal to the keys. Through the mapping of the attention mechanism, the input can be mapped to a single vector, which serves as the output.

2.1 THE VANILLA ATTENTION MECHANISM (VAM)

Given a query $q \in \mathbb{R}^{d_k}$ and an input sequence $K = [k_1, k_2, \ldots, k_n] \in \mathbb{R}^{d_k \times n}$, where $k_i \in \mathbb{R}^{d_k}$ denotes the word embedding of the $i$-th word of the sequence, the vanilla attention mechanism aims at using a compatibility function $f(k_i, q)$ to compute a relativity score between the query $q$ and each word $k_i$. This score is treated as the attention value of $q$ to $k_i$. Then we have $n$ attention scores $f(k_i, q)$ for $i = 1, 2, \ldots, n$. Now we apply the softmax function to define a categorical distribution:
$$p(i \mid K, q) = \mathrm{softmax}(f(k_i, q)) = \frac{\exp(f(k_i, q))}{\sum_{j=1}^{n} \exp(f(k_j, q))}.$$
Further, we compute the output, which is represented as the weighted sum of the input sequence:
$$s = \sum_{i=1}^{n} p(i \mid K, q)\, k_i.$$
The attention mechanism above is the original version, first proposed by Bahdanau, et al. (2014). In the Scaled Dot-Product Attention Mechanism, the compatibility function $f$ is defined as the scaled dot product $f(k_i, q) = \frac{\langle k_i, q \rangle}{\sqrt{d_k}}$. Here the scaling factor $\frac{1}{\sqrt{d_k}}$ is used to prevent the dot product from growing too large in magnitude.
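The mechanism above fits in a few lines of NumPy. This is a minimal sketch under our own conventions, not the authors' code: keys are stored as rows rather than the paper's columns, and the keys double as the values.

```python
import numpy as np

def vanilla_attention(q, K):
    """VAM with the scaled dot-product compatibility function.

    q has shape (d_k,); K has shape (n, d_k), one row per word embedding.
    """
    scores = K @ q / np.sqrt(q.shape[0])   # f(k_i, q) = <k_i, q> / sqrt(d_k)
    p = np.exp(scores - scores.max())
    p /= p.sum()                           # the categorical p(i | K, q)
    return p @ K                           # s = sum_i p(i | K, q) * k_i
```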
2.2 SOFT, HARD AND LOCAL ATTENTION MECHANISMS

There are three different types of mechanisms: soft, hard and local. The main difference between them is the region over which the attention function is calculated. The VAM belongs to soft attention, where the categorical distribution $p(i \mid K, q)$ is computed over the whole input sequence of words; thus it is also referred to as global attention. The resulting distribution can reflect the relatedness or importance between the query and every word in the input sequence, and we use these importance scores as the weights of the output sum. Soft attention takes every word in the input sequence, no matter what kind of word it is, into consideration. Soft attention is differentiable and parameter-free, but is computationally expensive and less accurate.

In order to overcome the weakness of soft attention, hard attention is a natural alternative. Contrary to the widely studied soft attention, hard attention locates accurately onto only one key $k_{i_0}$. In other words, the probability of choosing the special key $k_{i_0}$ is 1 and the others are 0. This implies that the choice of the one key means everything to the performance of the model. The action of choosing is not differentiable, so one uses reinforcement learning methods instead, such as the policy gradient method.

As we have seen, soft and hard attentions are two extreme cases. Xu, et al. (2015) proposed a hybrid attention mechanism. Instead of choosing every key or only one key, one chooses a subset of all the keys from the input sequence. When computing the attention, one can just focus on the important part of the keys and discard the rest; thus it is also referred to as local attention. This attention mechanism combines wideness and accuracy when choosing keys. The subset-choosing process is non-differentiable and reinforcement learning methods are also needed.

2.3 MULTI-HEAD ATTENTION AND MULTI-LEVEL ATTENTION MECHANISMS

The multi-head attention mechanism proposed by Vaswani, et al. (2017) plays an important role in the Transformer model, which is state-of-the-art in neural machine translation. Instead of calculating a single attention function with queries, keys and values, it linearly projects the queries, keys and values $h$ times to $d_k$, $d_k$ and $d_v$ dimensions, respectively. On each version of the linear projections, the attention function is performed in parallel, yielding several $d_v$-dimensional scaled dot-product attentions. Subsequently, these attention values are concatenated and once again projected, resulting in the final value as the output of the multi-head attention mechanism. That is,
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V,$$
$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)W^O,$$
where $\mathrm{head}_i = \mathrm{Attention}(QW_i^Q, KW_i^K, VW_i^V)$ and $W_i^Q, W_i^K, W_i^V, W^O$ are all projection matrices.

The multi-level attention mechanism is another variety of attention mechanisms. Instead of increasing the number of heads, the multi-level attention increases the number of levels. For example, in a two-level attention mechanism, we calculate the attention value of the query $q$ and the keys, which is represented as $\mathrm{Attention}(q, K, K)$. This output has the same dimension as the query, giving the first level. In the second level, we treat the output as the new query and calculate the attention value with the input sequence (or keys) $K$ again. The result can be represented as $\mathrm{Attention}(\mathrm{Attention}(q, K, K), K, K)$. The second attention can learn a higher level of internal features among the words of the input text.

Based on a long line of previous attempts, Cui et al. (2017) proposed a novel way of treating various documents in neural machine translation. They used a self-attention mechanism to encode the words in every document and then used a second attention over different documents to learn a higher level of features among the words of different documents. Yang, et al. (2016) also proposed a 2-level hierarchical attention network for the document classification task. In the recently proposed Transformer model [1], the authors repeated the attention mechanism $N$ times over the input sequence in order to learn higher level features. This is an $N$-level attention mechanism through which the input sequences can be changed again and again into sequences much more suitable for feature extraction and decoder input. This is why it is called the Transformer.
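Both constructions in this subsection are easy to sketch. The NumPy code below mirrors the formulas above; the projection matrices are stand-ins for learned parameters, and the closing comment shows the two-level composition.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention; Q, K, V are (n, d) arrays.
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

def multi_head(Q, K, V, heads, W_o):
    """heads is a list of (W_q, W_k, W_v) projection triples, one per head;
    W_o projects the concatenated heads back to the model dimension. All
    weights would be learned in a real model."""
    outs = [attention(Q @ Wq, K @ Wk, V @ Wv) for Wq, Wk, Wv in heads]
    return np.concatenate(outs, axis=-1) @ W_o

# A two-level attention instead reuses the output as the next query:
# Attention(Attention(q, K, K), K, K).
```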
2.4 SELF ATTENTION MECHANISM

In the self attention mechanism, the query and key are the same. In other words, the query $q$ stems from the input sequence $K$ itself. Using the self attention mechanism, we are able to learn the relatedness of different parts of the input sequence, no matter what their distance is. With self-attention, a long text can be encoded into a more suitable input for the decoder. Similar to the original attention mechanism, self-attentions also come in the three types, soft, hard and local, as well as in multi-head and multi-level versions.

3 HIERARCHICAL ATTENTION MECHANISM (HAM)

In this section we present two Hierarchical Attention models built on the vanilla attention and self attention, respectively.

3.1 HIERARCHICAL VANILLA ATTENTION MECHANISM (HAM-V)

Figure 1: Hierarchical Vanilla Attention Mechanism

We have mentioned above that multi-level attention mechanisms can learn a deeper level of features among all the tokens of the input sequence and the query. Our model takes the multi-level construction as a reference, but, differently from the existing Multi-Level Attention Mechanisms, our Ham-V focuses on all the intermediate attention results rather than just the result of the last attention level. As shown in Figure 1, given the query and the input sequence which consists of $n$ keys, we calculate the Vanilla Attention Mechanism result of them and get Query 1. We then continue to calculate the attention result of Query 1 and the keys and get Query 2. Repeating this calculation $d$ times, we form a $d$-depth attention. Finally, the output of our Ham-V is the weighted sum of the above $d$ attention results, where the $d$ weights are the softmax values of $d$ trainable parameters. The softmax is used to convert these $d$ weights into probabilities. These weights can tell us the relative importance of the $d$ intermediate attention results Query $i$; in other words, the relative importance of the $d$ attention levels.

3.2 HIERARCHICAL SELF ATTENTION MECHANISM (HAM-S)

In Self Attention Mechanisms, the query stems from the input sequence $K$ itself, so it can be treated as a special case of attention mechanisms. Similarly, the self-version of Hierarchical Attention Mechanisms, which is shown in Figure 2, takes only the sequence $K$ as input. We calculate the self-attention results of the input sequence $d$ times consecutively. Finally, the output of our Ham-S is the weighted sum of these $d$ attention results, where the $d$ weights are the softmax values of $d$ trainable parameters $w_1, w_2, \ldots, w_d$, the same as in Ham-V. Through the $d$ levels of self-attention, our model can learn different levels of deep features among all the tokens of the input sequence, and through the $d$ trainable parameters and the weighted sum mechanism, our model can learn the relative importance of the $d$ self-attention levels.

Figure 2: Hierarchical Self Attention Mechanism
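A minimal sketch of Ham-V under the scaled dot-product compatibility function follows. The mixing parameters `w` are trainable in the paper; plain floats stand in for them here, and the helper recomputes one attention level inline so the snippet is self-contained.

```python
import numpy as np

def _attend(q, K):
    # One vanilla attention level with scaled dot-product compatibility.
    s = K @ q / np.sqrt(q.shape[0])
    p = np.exp(s - s.max())
    return (p / p.sum()) @ K

def ham_v(q, K, w):
    """Ham-V sketch: d = len(w) stacked attention levels whose intermediate
    outputs Query 1, ..., Query d are mixed by softmax(w)."""
    levels, query = [], q
    for _ in range(len(w)):
        query = _attend(query, K)     # Query 1, Query 2, ..., Query d
        levels.append(query)
    w = np.asarray(w, dtype=float)
    alpha = np.exp(w - w.max())
    alpha /= alpha.sum()              # softmax over the d level weights
    return (alpha[:, None] * np.stack(levels)).sum(axis=0)
```

Concentrating the softmax mass on level 1 recovers the VAM output, while concentrating it on level $d$ recovers plain multi-level attention, which is exactly the two-extreme-cases argument made in Section 4 below.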
4 ANALYSIS

We present a theoretical analysis of our Ham mainly in two aspects: representation ability and convergence. The representation ability of Ham is obviously higher than that of Vanilla Attention Mechanisms and Multi-Level Attention Mechanisms, because these two attention mechanisms are just two extreme cases of Ham. When we set $w_1 = 1$, $w_2 = w_3 = \cdots = w_d = 0$, our Ham model is equivalent to the former. When we set $w_d = 1$, $w_1 = w_2 = \cdots = w_{d-1} = 0$, our model is equivalent to the latter. Thus Ham is much more general and its representation ability is much higher. Using the weighted linear combination of these $d$ intermediate attention results, our model takes every level of features into consideration.

We are going to prove that the global minimum value of our loss function $L(\mathrm{Ham}(c_1, \ldots, c_d), X, \theta)$ of the whole Ham model decreases monotonically and finally converges as the hierarchical attention depth $d$ increases. Here, $\mathrm{Ham}(c_1, \ldots, c_d)$ denotes a Ham with attention depth $d$ and $d$ trainable parameters $c_1, \ldots, c_d$, $X$ denotes all the input data including the queries and keys, and $\theta$ denotes all the parameters in the other parts of the whole model. It is also worth emphasizing that all loss functions in NLP tasks have positive values.

Theorem 1. In the Vanilla Attention Mechanism, we have that
$$\min_{1 \le i \le n} \|k_i\|_2 \le \|\mathrm{Attention}(q, K, K)\|_2 \le \max_{1 \le i \le n} \|k_i\|_2.$$

This result is very obvious according to the definition of the Vanilla Attention Mechanism: $\mathrm{Attention}(q, K, K)$ is the weighted sum of $k_1, k_2, \ldots, k_n$, where the weights are nonnegative and sum to 1. Denote the weights as $\alpha_1, \ldots, \alpha_n$. Then
$$\|\mathrm{Attention}(q, K, K)\|_2 = \Big\|\sum_{i=1}^{n} \alpha_i k_i\Big\|_2 \le \sum_{i=1}^{n} \alpha_i \|k_i\|_2 \le \max_{1 \le i \le n} \|k_i\|_2.$$
The left-hand side of the inequality can be proved similarly. This theorem tells us that through multiple attention layers, the vectors of intermediate attention levels will neither explode nor vanish as the attention depth $d$ increases.

Theorem 2. Let $A_d = \min_{c_i, \theta} L(\mathrm{Ham}(c_1, \ldots, c_d), X, \theta)$ be the global minimal value of the loss function. Then $\{A_d\}_{d=1}^{+\infty}$ is a monotonically decreasing sequence and it converges.

It is easy to note that
$$A_d = \min_{c_i (1 \le i \le d),\, \theta} L(\mathrm{Ham}(c_1, \ldots, c_d), X, \theta) = \min_{c_i (1 \le i \le d),\, \theta} L(\mathrm{Ham}(c_1, \ldots, c_d, c_{d+1} = -\infty), X, \theta) \ge \min_{c_i (1 \le i \le d+1),\, \theta} L(\mathrm{Ham}(c_1, \ldots, c_d, c_{d+1}), X, \theta) = A_{d+1}.$$
This establishes the monotonicity of the sequence $\{A_d\}_{d=1}^{+\infty}$. On the other hand, since our loss function always has positive values, the sequence $\{A_d\}$ has 0 as a lower bound. Therefore, the monotonically decreasing sequence $\{A_d\}_{d=1}^{+\infty}$ converges.
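Theorem 1's upper bound is easy to check numerically. The snippet below is a small sanity check of ours, not part of the paper; it recomputes one vanilla attention level inline. Note that the stated lower bound additionally relies on the keys not cancelling each other out in the weighted sum.

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.normal(size=(8, 16))              # n = 8 keys of dimension d_k = 16
q = rng.normal(size=16)
s = K @ q / np.sqrt(16.0)
p = np.exp(s - s.max())
p /= p.sum()
out = p @ K                               # Attention(q, K, K)
norms = np.linalg.norm(K, axis=1)
assert np.linalg.norm(out) <= norms.max() + 1e-9   # the upper bound holds
```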
During our experiments, we set also different attention depths dto showthe influence of attention depth.5.2 C HINESE POEM GENERATIONIn this work we generate Chinese quatrains, each of whose lines has the same length of 5 or 7characters. The baseline model we use is Planning based Poetry Generation (PPG) proposed byWang, et al. (2016), which generates Chinese poetries with a planning based neural network. Oncewe input a Chinese text, this model will generate a highly-related Chinese quatrains as follows.Firstly, the model extracts keywords from this input text with TextRank algorithm proposed byMihalcea, et al. (2004). Next, if the number of extracted keywords is not enough for a whole quatrain,more keywords will be created by Knowledge-based method . Then it comes to the final step, poemgeneration. The quatrain is generated line by line and each line corresponds a keyword. Whengenerating a single line, one uses a bi-directional Gated Recurrent Unit (GRU) model proposed byCho, et al. (2014 ) as encoder and another GRU model as decoder. Between encoder and decoder, anattention mechanism is used for connection.It is worth emphasizing that the dataset used by PPG consists of 76,859 quatrains from the InternetandPPG randomly chooses 2000 quatrains for testing, 2000 for validation and the rest for training. Inthe encoder part of PPG , the word embedding dimensionality is set as 512 and initialized by word2vec(Mikolov, et al. (2013)). In both GRU models, the hidden layers also contain 512 hidden units butthey are initialized randomly. For more details, please read Wang, et al. (2016). The code and datasetofPPG model can be found from https://github.com/Disiok/poetry-seq2seq .6Under review as a conference paper at ICLR 2019Model Dataset BLEU-4 ROUGE-LBIDAF DuReader 33.95 44.20BIDAF with 2-level Ham DuReader 34.02 44.39BIDAF with 5-level Ham DuReader 34.79 45.10BIDAF with 10-level Ham DuReader 35.96 47.33BIDAF with 20-level Ham DuReader 35.79 47.41Model Dataset EM F1Match-LSTM SQUAD 54.29 66.87Match-LSTM with 2-level Ham SQUAD 54.41 66.99Match-LSTM with 5-level Ham SQUAD 55.29 69.03Match-LSTM with 10-level Ham SQUAD 58.37 70.70Match-LSTM with 20-level Ham SQUAD 58.47 70.81Model Dataset BLEU-1 ROUGE-LR-NET MSMARCO 41.29 43.38R-NET with 2-level Ham MSMARCO 41.37 43.89R-NET with 5-level Ham MSMARCO 43.13 45.67R-NET with 10-level Ham MSMARCO 43.48 45.75R-NET with 20-level Ham MSMARCO 43.62 45.78Table 1: Evaluation results for MRC modelsIn our experiment, we replace the attention part of PPG from a Vanilla Attention Mechanism to ourHam-V and set the compatibility function fto be the scaled dot product function, while keeping otherparts and dataset unchanged as PPG except the evaluation part. The evaluation of poem generation inPPG is done by experts and we can not keep evaluation method unchanged since it is not convincingto find experts for evaluation. Our evaluation algorithm is based on BLEU-2 score which is calculatedasBLEU =133Xi=1BLEUi;where BLEUidenotes the BLEU-2 score computed for the next (i+ 1) th lines given the previous igoldstandard lines. This averaged BLEU can tell us how much correlated the lines of a generatedquatrain are. We will show some quatrains generated by our Ham-based PPG in the appendix.6 E XPERIMENTAL RESULTS AND QUALITATIVE ANALYSIS6.1 M ACHINE READING COMPREHENSIONAs clearly visible in Table 1, the proposed model is much better than conventional model at Chineseand English machine reading comprehension. 
6 EXPERIMENTAL RESULTS AND QUALITATIVE ANALYSIS

6.1 MACHINE READING COMPREHENSION

Model | Dataset | BLEU-4 | ROUGE-L
BIDAF | DuReader | 33.95 | 44.20
BIDAF with 2-level Ham | DuReader | 34.02 | 44.39
BIDAF with 5-level Ham | DuReader | 34.79 | 45.10
BIDAF with 10-level Ham | DuReader | 35.96 | 47.33
BIDAF with 20-level Ham | DuReader | 35.79 | 47.41

Model | Dataset | EM | F1
Match-LSTM | SQUAD | 54.29 | 66.87
Match-LSTM with 2-level Ham | SQUAD | 54.41 | 66.99
Match-LSTM with 5-level Ham | SQUAD | 55.29 | 69.03
Match-LSTM with 10-level Ham | SQUAD | 58.37 | 70.70
Match-LSTM with 20-level Ham | SQUAD | 58.47 | 70.81

Model | Dataset | BLEU-1 | ROUGE-L
R-NET | MSMARCO | 41.29 | 43.38
R-NET with 2-level Ham | MSMARCO | 41.37 | 43.89
R-NET with 5-level Ham | MSMARCO | 43.13 | 45.67
R-NET with 10-level Ham | MSMARCO | 43.48 | 45.75
R-NET with 20-level Ham | MSMARCO | 43.62 | 45.78

Table 1: Evaluation results for MRC models

As clearly visible in Table 1, the proposed model is much better than the conventional models at Chinese and English machine reading comprehension. This is likely due to the fact that human language has a kind of hierarchical relationship within itself, both structurally and semantically, and our hierarchical attention mechanism can more easily capture the inherent structural and semantic hierarchical relationships in the source texts because of their innate similarity. We also set the attention depth $d$ to 1 (which is equivalent to ordinary models without Ham), 2, 5, 10 and 20.

From Table 1, we can find that our Ham plays a significant role in the whole model. With the increase of attention depth $d$, the performance rises quickly at first and starts to converge when $d$ grows larger. The biggest improvements on these three models are 7.26%, 7.76% and 5.64% respectively. Their average is over 6.5%, which is a huge improvement.

6.2 CHINESE POEM GENERATION

The results of our BLEU-based evaluation are summarized in Table 2. We compare our Ham-based PPG with several relevant baselines like Statistical Machine Translation (SMT) proposed by He, et al. (2012) and the RNN-based Poem Generator (RNNPG) proposed by Zhang, et al. (2014). In the former model, a poem is generated iteratively by translating the previous line into the next line. In the latter model, all the lines are generated based on a context vector encoded from the previous lines. We also set different attention depths in order to learn the relationship between overall performance and the attention depth $d$.

Model | BLEU1 5-Char | BLEU1 7-Char | BLEU2 5-Char | BLEU2 7-Char | BLEU3 5-Char | BLEU3 7-Char | BLEU 5-Char | BLEU 7-Char
SMT | 0.056 | 0.124 | 0.052 | 0.150 | 0.054 | 0.176 | 0.054 | 0.150
RNNPG | 0.058 | 0.187 | 0.062 | 0.210 | 0.067 | 0.207 | 0.062 | 0.202
PPG | 0.061 | 0.185 | 0.069 | 0.193 | 0.073 | 0.198 | 0.068 | 0.192
5-level Ham PPG | 0.063 | 0.210 | 0.070 | 0.237 | 0.075 | 0.226 | 0.070 | 0.224
10-level Ham PPG | 0.062 | 0.217 | 0.075 | 0.267 | 0.076 | 0.259 | 0.072 | 0.244
20-level Ham PPG | 0.062 | 0.221 | 0.074 | 0.258 | 0.076 | 0.260 | 0.071 | 0.246

Table 2: BLEU-based evaluation results

When the attention depth $d$ is not large enough, the larger $d$ is, the better performance our model will achieve. PPG with 10-level Ham has almost 25% improvement on the averaged BLEU score compared with ordinary PPG. On the other hand, through the last two rows of the table above and Theorem 2, we know that with the continuing increase of $d$, our performance will converge sooner or later. So in order to balance performance and training cost, we suggest the attention depth $d$ to be between 5 and 10.

Figure 3: Three quatrains generated by the 10-level Ham PPG model

7 CONCLUDING REMARKS

In this paper we have developed the Hierarchical Attention Mechanism (Ham). So far as we know, this is the first attention mechanism which introduces hierarchical mechanisms into attention mechanisms and takes the weighted sum of different attention levels as the output, so it combines low-level features and high-level features of input sequences to output a more suitable intermediate result for decoders.

We tested the proposed model Ham on the tasks of Chinese poem generation and machine reading comprehension. The experiments revealed that the proposed Ham outperforms the conventional models significantly, achieving state-of-the-art results. In the future, we would like to study more applications of Ham on other NLP tasks such as neural machine translation, abstractive summarization, paraphrase generation and so on.

Recall that Ham belongs to soft attention, where every token of the input sequence is processed by the attention function.
We will extend Ham to hard attention and local attention, to show whether the performance can be better and whether Ham can fit reinforcement learning environments better. Also, we will attempt to extend the Multi-head Attention Mechanism of Vaswani, et al. (2017) to its hierarchical version and apply it to neural machine translation.
BkeUUGo937
A simple extension of multi-level attention, but needs more extensive comparison to existing methods
4: Ok but not good enough - rejection
The paper introduces hierarchical attention, where the authors propose a weighted combination of all the intermediate layers of multi-level attention. The idea is simple and seems to be promising; however, the originality seems incremental. In order to fully demonstrate the significance of the proposed algorithm, the authors should conduct more comparisons, for example, to multi-level attention. Just comparing with one-level attention seems unfair given the significant increase in computation. Another aspect of comparison may be to consider computation and performance improvements together and discuss the best trade-off. The authors should also include some standard benchmark datasets for comparisons. The current ones are good, but it is not so clear what the best state-of-the-art results on them are when compared with all other methods. The analysis of the network's representation and convergence is nice, but it does not bring much insight. The argument for a decreasing global minimum of the loss function in terms of increasing parameter size can be made for nearly all models, but it is of little practical use since there is no guarantee one can reach the global optimum of these models. I recommend the authors analyze/demonstrate how effective this weighted combination is. For example, the paper can benefit from some clear examples that show the learned weights across the layers and which ones are more important. The presentation of the paper needs some polishing; for example, there are numerous typos and grammatical errors throughout.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Hierarchical Attention: What Really Counts in Various NLP Tasks ### Paper Abstract Attention mechanisms in sequence to sequence models have shown great ability and wonderful performance in various natural language processing (NLP) tasks, such as sentence embedding, text generation, machine translation, machine reading comprehension, etc. Unfortunately, existing attention mechanisms only learn either high-level or low-level features. In this paper, we think that the lack of hierarchical mechanisms is a bottleneck in improving the performance of the attention mechanisms, and propose a novel Hierarchical Attention Mechanism (Ham) based on the weighted sum of different layers of a multi-level attention. Ham achieves a state-of-the-art BLEU score of 0.26 on Chinese poem generation task and a nearly 6.5% averaged improvement compared with the existing machine reading comprehension models such as BIDAF and Match-LSTM. Furthermore, our experiments and theorems reveal that Ham has greater generalization and representation ability than existing attention mechanisms. ### Paper Keywords ["attention", "hierarchical", "machine reading comprehension", "poem generation"] ### Paper Content ABSTRACT Attention mechanisms in sequence to sequence models have shown great ability and wonderful performance in various natural language processing (NLP) tasks, such as sentence embedding, text generation, machine translation, machine reading comprehension, etc. Unfortunately, existing attention mechanisms only learn either high-level or low-level features. In this paper, we think that the lack of hierarchical mechanisms is a bottleneck in improving the performance of the attention mechanisms, and propose a novel Hierarchical Attention Mechanism (Ham) based on the weighted sum of different layers of a multi-level attention. Ham achieves a state-of-the-art BLEU score of 0.26 on the Chinese poem generation task and a nearly 6.5% averaged improvement compared with the existing machine reading comprehension models such as BIDAF and Match-LSTM. Furthermore, our experiments and theorems reveal that Ham has greater generalization and representation ability than existing attention mechanisms. 1 INTRODUCTION In recent years, long short-term memory, convolutional and recurrent networks with attention mechanisms have been very successful on a variety of NLP tasks. Most of the tasks in NLP can be formulated as sequence to sequence problems. Moreover, encoders and decoders are vital components in sequence to sequence models for processing the inputs and outputs. An attention mechanism works as a connector between the encoders and decoders and helps the decoder to decide which parts of the source text to pay attention to. Thus an attention mechanism can integrate sequence models and transduction models, so it is able to connect two words in a single passage or paragraph without regard to their positions. Attention mechanisms have become integral components in various NLP models.
For example, in machine translation tasks, attention mechanism-based models [1, 2, 3] have previously been the state of the art; in sentence embedding, a self-attention-based model is now the state of the art [4]; in machine reading comprehension, almost every recently-published model, such as BIDAF [5], Match-LSTM [6], Reinforcement Ranker Reader [7], and R-NET [8], contains an attention mechanism; in the abstractive summarization model [12], which has also once been the state of the art, the attention mechanism is essential; and in poem generation [9], attention mechanisms are also widely used. More surprisingly, Vaswani et al. (2017) showed that their Transformer model, which relies solely on attention mechanisms, can outperform existing RNN- or LSTM-based models in machine translation tasks. Thus, they stated that "Attention is all you need".
However, we note that the potential issue with the existing attention mechanisms is that the basic attention mechanism learns only low-level features while the multi-level attention mechanism learns only high-level features. This may make it difficult for the model to capture intermediate feature information, especially when the source texts are long. In order to address this issue, we present Ham, which introduces a hierarchical mechanism into the existing multi-level attention mechanisms. Each time we perform a multi-level attention, instead of using the result of the last attention level only, we use the weighted sum of the results of all the attention levels as the final output.
We show that Ham can learn all levels of features among the tokens in the input sequence, and we give a proof of its monotonicity and convergence. This work presents the design and implementation of Ham, and our implementation performs well on a range of tasks when used to replace the existing attention mechanisms in the models of different tasks.
We are able to achieve results comparable to or better than existing state-of-the-art models. On Chinese poem generation, our model scores 0.246 BLEU, an improvement of 21.78% over an RNN-based poem generator model. On the machine reading comprehension task, our implementation is even more effective: models with Ham achieve an average improvement of 6.5% compared to previous models.
The implementation of the Hierarchical Attention Mechanism is not difficult and the code will be available on http://github.com after acceptance.

2 ATTENTION MECHANISMS
The attention mechanism can be described as a function whose input is a query and a set of key-value pairs, where the query and keys are vectors with the same dimension (denoted d_k), and the values are d_v-dimensional vectors. Note that in most types of attention mechanisms, the values are equal to the keys. Through the mapping of the attention mechanism, the input is mapped to a single vector, which serves as the output.

2.1 THE VANILLA ATTENTION MECHANISM (VAM)
Given a query q ∈ R^{d_k} and an input sequence K = [k_1, k_2, ..., k_n] ∈ R^{d_k × n}, where k_i ∈ R^{d_k} denotes the word embedding of the i-th word of the sequence, the vanilla attention mechanism aims at using a compatibility function f(k_i, q) to compute a relativity score between the query q and each word k_i. This score is treated as the attention value of q to k_i. Then we have n attention scores f(k_i, q) for i = 1, 2, ..., n.
Now we apply the softmax function to define a categorical distribution:

p(i | K, q) = softmax(f(k_i, q)) = exp(f(k_i, q)) / Σ_{j=1}^{n} exp(f(k_j, q)).

Further, we compute the output, which is represented as the weighted sum of the input sequence:

s = Σ_{i=1}^{n} p(i | K, q) k_i.

The attention mechanism above is the original version, first proposed by Bahdanau et al. (2014). In the Scaled Dot-Product Attention Mechanism, the compatibility function f is defined as the scaled dot product f(k_i, q) = ⟨k_i, q⟩ / √d_k. Here the scaling factor 1/√d_k is used to prevent the dot product from growing too large in magnitude.

2.2 SOFT, HARD AND LOCAL ATTENTION MECHANISMS
There are three different types of mechanisms: soft, hard and local. The main difference between them is the region over which the attention function is calculated. The VAM belongs to soft attention, where the categorical distribution p(i | K, q) is computed over the whole input sequence of words; thus it is also referred to as global attention. The resulting distribution can reflect the relatedness or importance between the query and every word in the input sequence, and we use these importance scores as the weights of the output sum. Soft attention takes every word in the input sequence, no matter what kind of word it is, into consideration. Soft attention is differentiable and parameter-free, but is computationally expensive and less accurate.
In order to overcome the weaknesses of soft attention, hard attention is a natural alternative. Contrary to the widely-studied soft attention, hard attention locates exactly one key k_{i_0}. In other words, the probability of choosing the special key k_{i_0} is 1 and the probabilities of the others are 0. This implies that the choice of this one key means everything to the performance of the model. The action of choosing is not differentiable, so one uses reinforcement learning methods instead, such as policy gradient methods.
As we have seen, soft and hard attention are two extreme cases. Xu et al. (2015) proposed a hybrid attention mechanism. Instead of choosing every key or only one key, one chooses a subset of all the keys from the input sequence. When computing the attention, one can focus on just the important part of the keys and discard the rest; thus it is also referred to as local attention. This attention mechanism combines wideness and accuracy when choosing keys. The subset-choosing process is non-differentiable, and reinforcement learning methods are also needed.

2.3 MULTI-HEAD ATTENTION AND MULTI-LEVEL ATTENTION MECHANISMS
The multi-head attention mechanism proposed by Vaswani et al. (2017) plays an important role in the Transformer model, which is state of the art in neural machine translation. Instead of calculating a single attention function with queries, keys and values, it linearly projects the queries, keys and values h times to d_k, d_k and d_v dimensions, respectively. On each version of the linear projections, the attention function is performed in parallel, yielding several d_v-dimensional scaled dot-product attentions. Subsequently, these attention values are concatenated and once again projected, resulting in the final value as the output of the multi-head attention mechanism. That is,

Attention(Q, K, V) = softmax(Q K^T / √d_k) V,
MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O, where head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V),

and W_i^Q, W_i^K, W_i^V and W^O are all projection matrices.
The multi-level attention mechanism is another variety of attention mechanisms.
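As a concrete illustration of the scaled dot-product and multi-head operations defined above, here is a minimal NumPy sketch. The array shapes, the random projection matrices, and the per-head parameter lists in the usage example are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (n_q, n_k) compatibility scores f(k_i, q)
    return softmax(scores) @ V       # (n_q, d_v) weighted sums of the values

def multi_head(Q, K, V, Wq, Wk, Wv, Wo):
    """Multi-head attention; Wq, Wk, Wv are lists of per-head projection matrices."""
    heads = [attention(Q @ wq, K @ wk, V @ wv) for wq, wk, wv in zip(Wq, Wk, Wv)]
    return np.concatenate(heads, axis=-1) @ Wo  # concatenate heads, then project

# Tiny usage example with made-up sizes: n = 4 keys, d_k = d_v = 8, h = 2 heads.
rng = np.random.default_rng(0)
K = rng.normal(size=(4, 8))
q = rng.normal(size=(1, 8))
Wq, Wk, Wv = ([rng.normal(size=(8, 8)) for _ in range(2)] for _ in range(3))
Wo = rng.normal(size=(16, 8))
out = multi_head(q, K, K, Wq, Wk, Wv, Wo)  # shape (1, 8)
```

With V = K, attention(q, K, K) is exactly the vanilla/scaled dot-product mechanism of Section 2.1, which is the form the hierarchical variants below build on.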
Instead of increasing the number of heads, the multi-level attention increases the number of levels. For example, in a two-level attention mechanism, we calculate the attention value of the query q and the keys, which is represented as Attention(q, K, K). This output has the same dimension as the query, giving the first level. In the second level, we treat the output as the new query and calculate the attention value with the input sequence (or keys) K again. The result can be represented as Attention(Attention(q, K, K), K, K). The second attention can learn a higher level of internal features among the words of the input text.
Based on a long line of previous attempts, Cui et al. (2017) proposed a novel way of treating various documents in neural machine translation. They used a self-attention mechanism to encode the words in every document and then used a second attention over the different documents to learn a higher level of features among the words of different documents. Yang et al. (2016) also proposed a 2-level hierarchical attention network for the document classification task. In the recently proposed Transformer model [1], the authors repeated the attention mechanism N times over the input sequence in order to learn higher-level features. This is an N-level attention mechanism through which the input sequences can be transformed again and again into sequences much more suitable for feature extraction and decoder input; hence the name Transformer.

2.4 SELF ATTENTION MECHANISM
In the self attention mechanism the query and keys are the same. In other words, the query q stems from the input sequence K itself. Using the self attention mechanism, we are able to learn the relatedness of different parts of the input sequence, no matter what their distance is. With self-attention, a long text can be encoded into a more suitable input for the decoder. Similar to the original attention mechanism, self-attention also comes in three different types (soft, hard and local), as well as in multi-head and multi-level versions.

3 HIERARCHICAL ATTENTION MECHANISM (HAM)
In this section we present two hierarchical attention models, built on the vanilla attention and the self attention, respectively.

3.1 HIERARCHICAL VANILLA ATTENTION MECHANISM (HAM-V)

Figure 1: Hierarchical Vanilla Attention Mechanism

We have mentioned above that multi-level attention mechanisms can learn a deeper level of features among all the tokens of the input sequence and the query. In our model, we take the multi-level design as a reference point but differ from the existing multi-level attention mechanisms: our Ham-V focuses on all the intermediate attention results rather than just the result of the last attention level. As shown in Figure 1, given the query and the input sequence which consists of n keys, we calculate the vanilla attention result of the two and get Query 1. We then continue to calculate the attention result of Query 1 and the keys and get Query 2. We repeat this calculation d times, thus forming a d-depth attention. Finally, the output of our Ham-V is the weighted sum of the above d attention results, where the d weights are the softmax values of d trainable parameters. The softmax is used to convert these d weights into probabilities. These weights can tell us the relative importance of the d intermediate attention results Query i, in other words, the relative importance of the d attention levels.

3.2 HIERARCHICAL SELF ATTENTION MECHANISM (HAM-S)
In self attention mechanisms, the query stems from the input sequence K itself.
So it can be treated as a special case of attention mechanisms. Similarly, the self version of the hierarchical attention mechanism, which is shown in Figure 2, takes only the sequence K as input. We calculate the self-attention results of the input sequence d times consecutively. Finally, the output of our Ham-S is the weighted sum of these d attention results, where the d weights are the softmax values of d trainable parameters w_1, w_2, ..., w_d, the same as in Ham-V. Through the d levels of self-attention, our model can learn different levels of deep features among all the tokens of the input sequence, and through the d trainable parameters and the weighted-sum mechanism, our model can learn the relative importance of the d self-attention levels.

Figure 2: Hierarchical Self Attention Mechanism

4 ANALYSIS
We present a theoretical analysis of our Ham mainly in two respects: representation ability and convergence. The representation ability of Ham is clearly higher than that of the vanilla attention mechanism and the multi-level attention mechanisms, because these two are just extreme cases of Ham: when we set w_1 = 1 and w_2 = w_3 = ... = w_d = 0, our Ham model is equivalent to the former; when we set w_d = 1 and w_1 = w_2 = ... = w_{d-1} = 0, our model is equivalent to the latter. Thus Ham is much more general and its representation ability is much higher. Using the weighted linear combination of these d intermediate attention results, our model takes every level of features into consideration.
We are going to prove that the global minimum value of the loss function L(Ham(c_1, ..., c_d), X, θ) of the whole Ham model decreases monotonically and finally converges as the hierarchical attention depth d increases. Here, Ham(c_1, ..., c_d) denotes a Ham with attention depth d and d trainable parameters c_1, ..., c_d, X denotes all the input data including the queries and keys, and θ denotes all the parameters in the other parts of the whole model. It is also worth emphasizing that all loss functions in NLP tasks take positive values.

Theorem 1. In the vanilla attention mechanism, we have that

min_{1≤i≤n} ||k_i||_2 ≤ ||Attention(q, K, K)||_2 ≤ max_{1≤i≤n} ||k_i||_2.

This result follows directly from the definition of the vanilla attention mechanism: Attention(q, K, K) is the weighted sum of k_1, k_2, ..., k_n, and the weights are nonnegative and sum to 1. Denote the weights as α_1, ..., α_n. Then

||Attention(q, K, K)||_2 = ||Σ_{i=1}^{n} α_i k_i||_2 ≤ Σ_{i=1}^{n} α_i ||k_i||_2 ≤ max_{1≤i≤n} ||k_i||_2.

The left-hand side of the inequality can be proved similarly. This theorem tells us that through multiple attention layers, the vectors of the intermediate attention levels will neither explode nor vanish as the attention depth d increases.

Theorem 2. Let A_d = min_{c_i, θ} L(Ham(c_1, ..., c_d), X, θ) be the global minimum value of the loss function. Then {A_d}_{d=1}^{+∞} is a monotonically decreasing sequence and it converges.

It is easy to note that

A_d = min_{c_i (1≤i≤d), θ} L(Ham(c_1, ..., c_d), X, θ)
    = min_{c_i (1≤i≤d), θ} L(Ham(c_1, ..., c_d, c_{d+1} = -∞), X, θ)   (the (d+1)-th level receives zero weight after the softmax)
    ≥ min_{c_i (1≤i≤d+1), θ} L(Ham(c_1, ..., c_d, c_{d+1}), X, θ) = A_{d+1}.

This establishes the monotonicity of the sequence {A_d}_{d=1}^{+∞}. On the other hand, since our loss function always takes positive values, the sequence {A_d} has 0 as a lower bound. Therefore, the monotonically decreasing sequence {A_d}_{d=1}^{+∞} converges.

5 EXPERIMENTS
Attention mechanisms are widely used in various NLP tasks. In our experiments, we would like to replace existing attention mechanisms with our novel Ham-V and existing self-attention mechanisms with our Ham-S, to show the power and generalization ability of our model.
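As a minimal sketch of the Ham-V and Ham-S computations described above (reusing softmax and attention from the earlier NumPy sketch), one plausible reading is the following; the level-weight array w stands in for the trainable parameters w_1, ..., w_d, which a real implementation would learn jointly with the rest of the model.

```python
def ham_v(q, K, w):
    """Ham-V: softmax-weighted sum of the d intermediate attention results."""
    probs = softmax(np.asarray(w, dtype=float))  # level weights -> probabilities
    out, query = np.zeros_like(q), q
    for level in range(len(w)):
        query = attention(query, K, K)    # Query (level+1), same shape as q
        out = out + probs[level] * query  # accumulate the weighted combination
    return out

def ham_s(K, w):
    """Ham-S: like Ham-V, but each level is a self-attention over the sequence.
    The paper leaves the keys at deeper levels ambiguous (fixed K vs. the
    updated sequence); this sketch re-attends over the updated sequence."""
    probs = softmax(np.asarray(w, dtype=float))
    out, seq = np.zeros_like(K), K
    for level in range(len(w)):
        seq = attention(seq, seq, seq)    # self-attention: query = keys = values
        out = out + probs[level] * seq
    return out
```

Concentrating almost all of the probability mass on the first level recovers (approximately) the vanilla one-level attention, while concentrating it on level d recovers plain multi-level attention, matching the two extreme cases discussed in Section 4.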
We conduct our experiments on two different NLP tasks, Chinese poem generation and machine reading comprehension.

5.1 MACHINE READING COMPREHENSION
The first NLP task we used to test our Ham is machine reading comprehension (MRC). We conduct our experiment on both English MRC and Chinese MRC. The baseline models we use include BIDAF [5], Match-LSTM [6] and R-NET [8]. Here, BIDAF is used for Chinese while the other two are used for English. They share a major similarity: all of them use an attention mechanism as the connection between their encoders and decoders, and they are all open source. Their code is available at http://github.com/baidu/DuReader, https://github.com/MurtyShikhar/Question-Answering and https://github.com/NLPLearn/R-net. What we do is replace their attention mechanisms with Ham and compare the difference in performance. Specifically, the R-NET model contains two attention mechanisms, for question-passage matching and passage self-matching. Here, we replace them with Ham and Ham-S, respectively.
The Chinese dataset we use for the MRC experiments is DuReader, introduced by He et al. (2017), and the English datasets we use include SQuAD, which can be downloaded from https://rajpurkar.github.io/SQuAD-explorer, and MS-MARCO, which can be downloaded from http://www.msmarco.org. We randomly choose 10 percent of the question-answer data as the testing set and the rest as the training set. The evaluation metrics we use are BLEU-4 and ROUGE-L for Chinese, and Exact Match (EM) and F1 score for English, where EM measures the percentage of predictions that match the ground truth exactly and F1 measures the overlap between prediction and ground truth. During our experiments, we also set different attention depths d to show the influence of attention depth.

5.2 CHINESE POEM GENERATION
In this work we generate Chinese quatrains, each of whose lines has the same length of 5 or 7 characters. The baseline model we use is Planning-based Poetry Generation (PPG) proposed by Wang et al. (2016), which generates Chinese poetry with a planning-based neural network. Once we input a Chinese text, this model generates a highly related Chinese quatrain as follows. Firstly, the model extracts keywords from the input text with the TextRank algorithm proposed by Mihalcea et al. (2004). Next, if the number of extracted keywords is not enough for a whole quatrain, more keywords are created by a knowledge-based method. Then comes the final step, poem generation. The quatrain is generated line by line and each line corresponds to a keyword. When generating a single line, one uses a bi-directional Gated Recurrent Unit (GRU) model proposed by Cho et al. (2014) as the encoder and another GRU model as the decoder. Between encoder and decoder, an attention mechanism is used for connection.
It is worth emphasizing that the dataset used by PPG consists of 76,859 quatrains from the Internet, and PPG randomly chooses 2000 quatrains for testing, 2000 for validation and the rest for training. In the encoder part of PPG, the word embedding dimensionality is set to 512 and initialized by word2vec (Mikolov et al. (2013)). In both GRU models, the hidden layers also contain 512 hidden units, but they are initialized randomly. For more details, please read Wang et al. (2016).
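To make the proposed replacement concrete, here is a hedged sketch of the decoder-side change, reusing attention and ham_v from the sketches above. The names decoder_context, dec_hidden and encoder_states are illustrative stand-ins rather than identifiers from the PPG codebase, and the dimension 512 is assumed only because it matches the hidden size quoted above.

```python
def decoder_context(dec_hidden, encoder_states, level_weights=None):
    """Context vector for one GRU decoding step.
    Baseline PPG: one level of vanilla attention over the encoder states.
    Ham-V variant: a weighted mixture of all d attention levels instead."""
    q = dec_hidden[None, :]  # (1, 512) query taken from the decoder hidden state
    if level_weights is None:
        return attention(q, encoder_states, encoder_states)[0]  # vanilla baseline
    return ham_v(q, encoder_states, level_weights)[0]           # hierarchical variant
```

Everything else in the encoder/decoder pipeline stays as in PPG, which is consistent with the paper's framing of Ham as a drop-in replacement for the attention connector.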
The code and dataset of the PPG model can be found at https://github.com/Disiok/poetry-seq2seq.

Model                     Dataset    BLEU-4   ROUGE-L
BIDAF                     DuReader   33.95    44.20
BIDAF with 2-level Ham    DuReader   34.02    44.39
BIDAF with 5-level Ham    DuReader   34.79    45.10
BIDAF with 10-level Ham   DuReader   35.96    47.33
BIDAF with 20-level Ham   DuReader   35.79    47.41

Model                          Dataset   EM      F1
Match-LSTM                     SQUAD     54.29   66.87
Match-LSTM with 2-level Ham    SQUAD     54.41   66.99
Match-LSTM with 5-level Ham    SQUAD     55.29   69.03
Match-LSTM with 10-level Ham   SQUAD     58.37   70.70
Match-LSTM with 20-level Ham   SQUAD     58.47   70.81

Model                     Dataset    BLEU-1   ROUGE-L
R-NET                     MSMARCO    41.29    43.38
R-NET with 2-level Ham    MSMARCO    41.37    43.89
R-NET with 5-level Ham    MSMARCO    43.13    45.67
R-NET with 10-level Ham   MSMARCO    43.48    45.75
R-NET with 20-level Ham   MSMARCO    43.62    45.78

Table 1: Evaluation results for MRC models

In our experiment, we replace the attention part of PPG from a vanilla attention mechanism to our Ham-V and set the compatibility function f to be the scaled dot product function, while keeping the other parts and the dataset unchanged from PPG, except for the evaluation part. The evaluation of poem generation in PPG is done by experts, and we cannot keep the evaluation method unchanged, since evaluation by experts we recruit ourselves would not be convincing. Our evaluation algorithm is based on the BLEU-2 score, which is calculated as

BLEU = (1/3) Σ_{i=1}^{3} BLEU_i,

where BLEU_i denotes the BLEU-2 score computed for the (i+1)-th line given the previous i gold-standard lines. This averaged BLEU can tell us how correlated the lines of a generated quatrain are. We will show some quatrains generated by our Ham-based PPG in the appendix.

6 EXPERIMENTAL RESULTS AND QUALITATIVE ANALYSIS
6.1 MACHINE READING COMPREHENSION
As is clearly visible in Table 1, the proposed model is much better than the conventional models at Chinese and English machine reading comprehension. This is likely due to the fact that human language has a kind of hierarchical relationship within itself, both structurally and semantically, and our hierarchical attention mechanism can more easily capture the inherent structural and semantic hierarchical relationships in the source texts because of their innate similarity. We also set the attention depth d to 1 (which is equivalent to the ordinary models without Ham), 2, 5, 10 and 20.
From Table 1, we can find that our Ham plays a significant role in the whole model. With the increase of attention depth d, the performance rises quickly at first and starts to converge when d grows larger. The biggest improvements on these three models are 7.26%, 7.76% and 5.64% respectively. Their average is over 6.5%, which is substantial progress.

6.2 CHINESE POEM GENERATION
The results of our BLEU-based evaluation are summarized in Table 2. We compare our Ham-based PPG with several relevant baselines such as Statistical Machine Translation (SMT) proposed by He et al. (2012) and the RNN-based Poem Generator (RNNPG) proposed by Zhang et al. (2014). In the former model, a poem is generated iteratively by translating the previous line into the next line.
In the latter model, all the lines are generated based on a context vector encoded from the previous lines. We also set different attention depths in order to learn the relationship between the overall performance and the attention depth d.

Model              BLEU_1          BLEU_2          BLEU_3          BLEU
                   5-Char  7-Char  5-Char  7-Char  5-Char  7-Char  5-Char  7-Char
SMT                0.056   0.124   0.052   0.150   0.054   0.176   0.054   0.150
RNNPG              0.058   0.187   0.062   0.210   0.067   0.207   0.062   0.202
PPG                0.061   0.185   0.069   0.193   0.073   0.198   0.068   0.192
5-level Ham PPG    0.063   0.210   0.070   0.237   0.075   0.226   0.070   0.224
10-level Ham PPG   0.062   0.217   0.075   0.267   0.076   0.259   0.072   0.244
20-level Ham PPG   0.062   0.221   0.074   0.258   0.076   0.260   0.071   0.246

Table 2: BLEU-based evaluation results

When the attention depth d is not too large, the larger d is, the better the performance our model achieves. PPG with 10-level Ham shows an improvement of almost 25% in the averaged BLEU score compared with ordinary PPG. On the other hand, from the last two rows of the table above and Theorem 2, we know that with a continuing increase of d, the performance will converge sooner or later. So in order to balance performance against training cost, we suggest the attention depth d be between 5 and 10.

Figure 3: Three quatrains generated by the 10-level Ham PPG model

7 CONCLUDING REMARKS
In this paper we have developed the Hierarchical Attention Mechanism (Ham). As far as we know, this is the first attention mechanism which introduces hierarchical mechanisms into attention mechanisms and takes the weighted sum of different attention levels as the output, so it combines low-level features and high-level features of input sequences to output a more suitable intermediate result for decoders.
We tested the proposed Ham on the tasks of Chinese poem generation and machine reading comprehension. The experiments revealed that the proposed Ham outperforms the conventional models significantly, achieving state-of-the-art results. In the future, we would like to study more applications of Ham in other NLP tasks such as neural machine translation, abstractive summarization, paraphrase generation and so on.
Recall that Ham belongs to soft attention, where every token of the input sequence enters the attention function. We will extend Ham to hard attention and local attention, to show whether the performance can be better and whether Ham can fit reinforcement learning environments better. Also, we will attempt to extend the multi-head attention mechanism of Vaswani et al. (2017) to its hierarchical version and apply it to neural machine translation.<|im_end|> <|im_start|>assistant ### Review Title A simple extension of multi-level attention, but needs more extensive comparison to existing methods ### Review Text The paper introduces hierarchical attention, proposing a weighted combination of all the intermediate layers of a multi-level attention. The idea is simple and seems promising; however, the originality seems incremental. In order to fully demonstrate the significance of the proposed algorithm, the authors should conduct more comparisons, for example, to multi-level attention. Just comparing with one-level attention seems unfair given the significant increase in computation. Another aspect of comparison may be to consider computation and performance improvements together and discuss the best trade-off. The authors should also include some standard benchmark datasets for comparisons.
The current ones are good, but it is not clear what the best state-of-the-art results on them are when compared with all other methods. The analysis of the network's representation and convergence is nice, but it does not bring much insight. The argument for a decreasing global minimum of the loss function as parameter size increases can be made for nearly all models, but it is of little practical use since there is no guarantee one can reach the global optimum of these models. I recommend the authors analyze/demonstrate how effective this weighted combination is. For example, the paper can benefit from some clear examples that show the learned weights across the layers and which ones are more important. The presentation of the paper needs some polishing; for example, there are numerous typos and grammatical errors throughout. ### Review Rating 4: Ok but not good enough - rejection ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
SklGryBtwr
ICLR.cc/2020/Conference
2020
Environmental drivers of systematicity and generalization in a situated agent
["Felix Hill", "Andrew Lampinen", "Rosalia Schneider", "Stephen Clark", "Matthew Botvinick", "James L. McClelland", "Adam Santoro"]
The question of whether deep neural networks are good at generalising beyond their immediate training experience is of critical importance for learning-based approaches to AI. Here, we consider tests of out-of-sample generalisation that require an agent to respond to never-seen-before instructions by manipulating and positioning objects in a 3D Unity simulated room. We first describe a comparatively generic agent architecture that exhibits strong performance on these tests. We then identify three aspects of the training regime and environment that make a significant difference to its performance: (a) the number of object/word experiences in the training set; (b) the visual invariances afforded by the agent's perspective, or frame of reference; and (c) the variety of visual input inherent in the perceptual aspect of the agent's perception. Our findings indicate that the degree of generalisation that networks exhibit can depend critically on particulars of the environment in which a given task is instantiated. They further suggest that the propensity for neural networks to generalise in systematic ways may increase if, like human children, those networks have access to many frames of richly varying, multi-modal observations as they learn.
["systematicitiy", "systematic", "generalization", "combinatorial", "agent", "policy", "language", "compositionality"]
ABSTRACTThe question of whether deep neural networks are good at generalising beyondtheir immediate training experience is of critical importance for learning-basedapproaches to AI. Here, we consider tests of out-of-sample generalisation that re-quire an agent to respond to never-seen-before instructions by manipulating andpositioning objects in a 3D Unity simulated room. We first describe a compara-tively generic agent architecture that exhibits strong performance on these tests.We then identify three aspects of the training regime and environment that makea significant difference to its performance: (a) the number of object/word expe-riences in the training set; (b) the visual invariances afforded by the agent’s per-spective, or frame of reference; and (c) the variety of visual input inherent in theperceptual aspect of the agent’s perception. Our findings indicate that the degreeof generalisation that networks exhibit can depend critically on particulars of theenvironment in which a given task is instantiated. They further suggest that thepropensity for neural networks to generalise in systematic ways may increase if,like human children, those networks have access to many frames of richly varying,multi-modal observations as they learn.1 I NTRODUCTIONSince the earliest days of research on neural networks, a recurring point of debate is whether neuralnetworks exhibit generalisation beyond their training experience, in a systematic way (Smolensky,1988; Fodor & Pylyshyn, 1988; Marcus, 1998; McClelland et al., 1987). This debate has beenre-energized over the past few years, given a resurgence in neural network research overall (Lake& Baroni, 2017; Bahdanau et al., 2018; Lake, 2019). Generalisation in neural networks is notabinary question; since there are cases where networks generalise well and others where they donot, the pertinent research question is when and under what conditions neural networks are ableto generalise. Here, we establish that a conventional neural-network-based agent exposed to rawvisual input and symbolic (language-like) instructions readily learns to exhibit generalisation thatapproaches systematic behaviour, and we explore the conditions supporting its emergence. First, weshow in a 3D simulated room that an agent trained to find all objects from a set and lift onlysome of them can lift withheld test objects never lifted during training. Second, we show thatthe same agent trained to lift all of the objects and put only some of them during training canputwithheld test objects, zero-shot, in the correct location. That is, the model learns to re-composeknown concepts (verbs and nouns) in novel combinations.In order to better understand this generalisation, we conduct several experiments to isolate its con-tributing factors. We find three to be critical: (a) the number of words and objects experienced duringtraining; (b) a bounded frame of reference or perspective; and (c) the diversity of perceptual inputafforded by the temporal aspect of the agent’s perspective. Crucially, these factors can be enhancedby situating an agent in a realistic environment, rather than an abstract, simplified setting. Theseresults serve to explain differences between our findings and studies showing poor generalisation,where networks were typically trained in a supervised fashion on abstract or idealised stimuli from aWork carried out during internship at DeepMind.1Published as a conference paper at ICLR 2020single modality (e.g. Lake & Baroni, 2017). 
They also suggest that the human capacity to exploit thecompositionality of the world, when learning to generalise in systematic ways, might be replicatedin artificial neural networks if those networks are afforded access to a rich, interactive, multimodalstream of stimuli that better matches the experience of an embodied human learner (Clerkin et al.,2017; Kellman & Arterberry, 2000; James et al., 2014; Yurovsky et al., 2013; Anderson, 2003).Our results suggest that robust systematic generalisation may be an emergent property of an agentinteracting with a rich, situated environment (c.f. McClelland et al., 2010).1.1 S YSTEMATICITY AND GENERALISATIONSystematicity is the property of human cognition whereby “the ability to entertain a given thoughtimplies the ability to entertain thoughts with semantically related contents” (Fodor & Pylyshyn,1988). As an example of systematic thinking, Fodor & Pylyshyn (1988) point out that any humanwho can understand John loves Mary will also understand the phrase Mary loves John , whetheror not they have heard the latter phrase before. Systematic generalisation (Plaut, 1999; Bahdanauet al., 2018; Lake et al., 2019) is a desirable characteristic for a computational model because itsuggests that if the model can understand components (or words) in certain combinations, it shouldalso understand the same components in different combinations. Note that systematic generalisationis also sometimes referred to as ‘combinatorial generalisation’ (O’Reilly, 2001; Battaglia et al.,2018). However, even human reasoning is often not perfectly systematic (O’Reilly et al., 2013). Wetherefore consider this issue from the perspective of generalisation, and ask to what degree a systemgeneralises in accordance with the systematic structure implied by language.Recent discussions around systematicity and neural networks have focused on the issue of how bestto encourage this behaviour in trained models. Many recent contributions argue that generalisingsystematically requires inductive biases that are specifically designed to support some form of sym-bolic computation (such as graphs (Battaglia et al., 2018), modular-components defined by symbolicparses (Andreas et al., 2016; Bahdanau et al., 2018), explicit latent variables (Higgins et al., 2017) orother neuro-symbolic hybrid methods (Mao et al., 2019)). On the other hand, some recent work hasreported instances of strong generalisation in the absence of such specific inductive biases (Chaplotet al., 2018; Yu et al., 2018; Lake, 2019). In the following sections, we first add to this latter literatureby reporting several novel cases of emergent generalisation. Unlike in previous work, the examplesthat we present here involve tasks involving the manipulation of objects via motor-policies, as wellas language and vision. This is followed by an in-depth empirical analysis of the environmentalconditions that stimulate generalisation in these cases.2 A MINIMAL MULTI -MODAL AGENTFigure 1: Schematic of the architecture used in all ex-periments. The blue components show some criticaldifferences that differentiate it from more abstract stud-ies that reported failures in systematic generalization.All our experiments use the same agent archi-tecture, a minimal set of components for visual(pixels) perception, language (strings) percep-tion, and policy prediction contingent on cur-rent and past observations. 
The simplicity of the architecture is intended to emphasize the generality of the findings; for details see App. C.
Visual processing: Visual observations at each timestep come in the form of a W × H × 3 real-valued tensor (W and H depend on the particular environment), which is processed by a 3-layer convolutional network with 64, 64 and 32 channels in the first, second and third layers respectively. The flattened output of this network is concatenated with an embedding of the language observation.
Language processing: Language instructions are received at every timestep as a string. The agent splits these on whitespace and processes them with a (word-level) LSTM network with hidden state size 128. The final hidden state is concatenated with the output of the visual processor to yield a multimodal representation of the stimulus at each timestep.
Memory, action and value prediction: The multimodal representation is passed to a 128-unit LSTM. At each timestep, the state of this LSTM is multiplied by a weight matrix containing A × 128 weights; the output of this operation is passed through a softmax to yield a distribution over actions (A is the environment-dependent size of the action set). The memory state is also multiplied by a 1 × 128 weight matrix to yield a value prediction.
Training algorithm: We train the agent using an importance-weighted actor-critic algorithm with a central learner and distributed actors (Espeholt et al., 2018).

3 DEMONSTRATING GENERALISATION
A key aspect of language understanding is the ability to flexibly combine predicates with arguments; verb-noun binding is perhaps the canonical example. Verbs typically refer to processes and actions, so we study this phenomenon in a 3D room built in the Unity game engine.[1] In this environment, the agent observes the world from a first-person perspective, the Unity objects are 3D renderings of everyday objects, the environment has simulated physics enabling objects to be picked up, moved and stacked, and the agent's action space consists of 26 actions that allow the agent to move its location and its field of vision, and to grip, lift, lower and manipulate objects. Executing a simple instruction like find a toothbrush (which can be accomplished on average in six actions by a well-trained agent in our corresponding grid world) requires an average of around 20 action decisions.

3.1 A GENERAL NOTION OF LIFTING
Lifting is a simple motor process that corresponds to a verb and is easily studied in this environment. We consider the example instruction lift a helicopter to be successfully executed if the agent picks up and raises the helicopter above a height of 0.5m for 2 seconds, a sequence which requires multiple actions once the helicopter is located. Similarly, the instruction find a helicopter is realised if the agent moves within two metres of a helicopter and fixes its gaze (as determined by a ray cast from the agent's visual field) while remaining within the two-metre proximity for five timesteps, without lifting the object during this time.
To measure how general the acquired notion of lifting is, we trained the agent to find each of a set X = X1 ∪ X2 of different objects (allowing the agent to learn to ground objects to their names) and to lift a subset X1 of those objects, in trials in a small room containing two objects positioned at random (one 'correct' according to the instruction, and one 'incorrect').
The agent receives apositive reward if it finds or lifts the correct object, and the episode ends with no reward if the agentfinds or lifts the incorrect object. We then evaluate its ability to extend its notion of lifting (zero-shot) to objects x2X2. In a variant of this experiment, we trained the agent to lift all of the objectsinXwhen referring to them by their color ( lift a green object ), and to find all of the objectsinXaccording to either shape or color ( find a pencil orfind a blue object ). We againtested on whether the agent lifted objects x2X2according to their shape (so, the test trials in bothvariants are the same). As shown in Fig 2(a), in both variants the agent generalises with near-perfectaccuracy. The agent therefore learned a notion of what it is to lift an object (and how this binds tothe word lift) with sufficient generality that it can, without further training, apply it to novel objects,or to familiar objects with novel modes of linguistic reference.3.2 A GENERAL NOTION OF PUTTINGWe took our study of predicate-object binding further by randomly placing two large, flat objects(a bed and a tray) in the room and training an agent to place one of three smaller objects on top.As before, the agent received a positive reward if it placed the correct small object on the bedor the tray according to the instruction. If it placed an incorrect object on the bed or tray, theepisode ended with no reward. To test generalisation in this case, we trained the agent to lift eachof a set X=X1[X2of smaller objects (as in the previous experiment) and then to put some1http://unity3d.com3Published as a conference paper at ICLR 2020Figure 2: Test and training performance as an agent learns zero-shot predicate-argument binding for (a) ‘lifting’in two variants: (1) training instructions are e.g. find a spaceship ,find/lift a pencil , testing instruc-tions are e.g. lift a spaceship . (2) training instructions are e.g. find a spaceship/lift a greenobject , testing instructions are e.g. lift a spaceship . (b) ‘putting’: training instructions are e.g. put aspaceship on the bed ,lift a pencil , testing instructions are e.g. put a pencil on the bed .subset X1Xof those objects on both the bed and the tray as instructed. We then measured itsperformance (with no further learning) in trials where it was instructed to put objects from X2ontoeither the bed or the tray. Surprisingly, we found that the agent was able to place objects with over90% accuracy onto the bed or the tray zero-shot as instructed (Fig 2(b)). This generalisation requiresthe agent to bind its knowledge of objects (referred to by nouns, and acquired in the lifting trials)to a complex control process (acquired in the training putting trials) – involving locating the correcttarget receptacle, moving the object above it (avoiding obstacles like the bed-head) and droppingit gently. 
An important caveat here is that control in this environment, while finer-grained than in many common synthetic environments, is far simpler than in the real world; in particular, the process required to lift an object does not depend on the shape of that object (only on its extent, to a degree). Once the objects are held by the agent, however, their shape becomes somewhat more important, and placing something on top of something else, avoiding possible obstacles that could knock the object from the agent's grasp, is a somewhat object-dependent process.

4 UNDERSTANDING THE DRIVERS OF GENERALISATION
4.1 NUMBER OF TRAINING INSTRUCTIONS
To emphasize most explicitly the role of the diversity of training experience in the emergence of systematic generalisation, we consider the abstract notion of negation. Rumelhart et al. (1986) showed that a two-layer feedforward network with sufficient hidden units can learn to predict the negation of binary patterns. Here, we adapt this experiment to an embodied agent with continuous visual input and, importantly, focus not only on learning, but also on generalisation. We choose to consider negation since it is an example of an operator for which we found that, for our standard environment configuration, our agent unequivocally fails to exhibit an ability to generalize in a systematic way.
To see this, we generated trials in the Unity room with two different objects positioned at random and instructions of the form find a box (in which case the agent was rewarded for locating and fixing its gaze on the box) and find [something that is] not a box (in which case there was a box in the room but the agent was rewarded for locating the other object). Like Rumelhart et al. (1986), we found that it was unproblematic to train an agent to respond correctly to both positive and negative training inputs. To explore generalisation, we then trained the agent to follow instructions find a x for all x ∈ X and negated instructions find a not x for only x ∈ X1 (where X = X1 ∪ X2 and X1 ∩ X2 = ∅), and tested how it interpreted negative commands for x ∈ X2.
When X1 contained only a few objects (|X1| = 6), the agent interpreted negated instructions (involving objects from X2) with below-chance accuracy. In other words, for objects x2 ∈ X2, in response to the instruction find a not x2, the agent was more likely to find the object x2 than the correct referent of the instruction. This is an entirely un-systematic interpretation of the negation predicate.[2] Interestingly, however, for |X1| = 40 the agent achieved above-chance interpretation of held-out negated instructions, and for |X1| = 100 performance on held-out negated instructions increased to 0.78 (Table 1).

Training set   Train accuracy   Test accuracy
6 words        1.00             0.40
40 words       0.97             0.60
100 words      0.91             0.78

Table 1: Accuracy extending a negation predicate to novel arguments (test accuracy) when agents are trained to negate different numbers of words/objects.

These results show that, for a test set of fixed size, the degree of systematicity exhibited by an agent when generalizing can grow with the variety of words/objects experienced in the training set, even for highly non-compositional operators such as negation. Of course, the mere fact that larger training sets yield better generalisation in neural networks is not novel or unexpected.
On the other hand, we find the emergence of a logical operator like negation in the agent in a reasonably systematic way (noting that adult humans are far from perfectly systematic (Lake et al., 2019)), given experience of 100 objects (again, not orders of magnitude different from typical human experience), to be notable, particularly given the history of research into learning logical operators in connectionist models and the importance of negation in language processing (Steedman, 1999; 2011). In what follows, we complement these observations by establishing that not only the amount, but also the type of training data (or agent experience) can have a significant impact on emergent generalisation.

[2] We suspect in such cases that the agent is simply learning a non-compositional interpretation during training, along the lines of not x referring to the set {y ∈ X1 : x ≠ y}.

4.2 3D VERSUS 2D ENVIRONMENT

Figure 3: Screenshots from the agent's perspective of an episode in the grid-world and Unity environments. In both cases the instruction is put the picture frame on the bed.

Our first observation is that the specifics of the environment (irrespective of the logic of the task) can play an important role in emergent generalisation. To show this most explicitly we consider a simple color-shape generalisation task. Prior studies in both 2D and 3D environments have shown that neural-network-based agents can correctly interpret instructions referring to both the color and shape of objects (find a red ball) zero-shot, when they have never encountered that particular combination during training (Chaplot et al., 2018; Hill et al., 2017; Yu et al., 2018). We replicate the demonstration of Hill et al. (2017) in the 3D DeepMind-Lab environment (Beattie et al., 2016), and for comparison implement an analogous task in a 2D grid-world (compare Fig 3 (a) and Fig 4).
As in the original experiments, we split the available colors and shapes in the environment into sets S = s ∪ ŝ and C = c ∪ ĉ. We then train the agent on episodes with instructions sampled from one of the sets c × s, ĉ × s or c × ŝ, and, once trained, we evaluate its performance on instructions from ĉ × ŝ. All episodes involve the agent in a small room faced with two objects, one of which matches the description in the instruction. Importantly, both the color and the shape word in the instruction are needed to resolve the task, and during both training and testing the agent faces trials in which the confounding object either matches the color of the target object or the shape of the target object. While it was not possible to have exactly the same shapes in the set S in both the grid world and the 3D environment, the sizes of C and S and all other aspects of the environment engine were the same in both conditions. As shown in Table 2 (top), we found that training performance was marginally worse in the 3D environment, but that test performance in 3D (M = 0.97, SD = 0.04) was six percentage points higher than in 2D (M = 0.91, SD = 0.08) (a suggestive but not significant difference given the small sample of agents; t(8) = 1.38, p = 0.20).
To probe this effect further, we devised an analogue of the 'putting' task (Section 3.2) in the 2D grid-world. In both the 3D and 2D environments, the agent was trained on 'lifting' trials, in which it had to visit a specific object, and on 'putting' trials, in which it had to pick up a specific object and move it to the bed.
To match the constraints of the grid-world, we reduced the total global object set in the Unity room to ten, allocating three to both lifting and putting trials during training, and the remaining seven only to lifting trials. The evaluation trials then involved putting the remaining seven objects on the bed (see Figure 3 for an illustration of the two environments). While the grid-world and the 3D room tasks are identical at a conceptual (and linguistic) level, the experience of an agent is quite different in the two environments. In the grid world, the agent observes the entire environment (including itself) from above at every timestep, and can move to the 81 = 9 × 9 possible locations by choosing to move up, down, left or right. To lift an object in the grid-world, the agent simply has to move to the square occupied by that object, while 'putting' requires the agent to lift the object and then walk to the square occupied by the white (striped) bed.

                        Train accuracy   Test accuracy
Color-shape task
Grid world              0.99             0.91 ± 0.09
3D room (DM-Lab)        0.98             0.97 ± 0.04
Putting task
Grid world              0.99             0.40 ± 0.14
Grid world, scrolling   0.93             0.60 ± 0.14
3D room (Unity)         0.99             0.63 ± 0.06

Table 2: Tests of systematic generalisation in 2D and 3D environments; five randomly-initialized agent replicas in each condition.

As shown in Table 2 (bottom), agents in both conditions achieved near-perfect performance in training. However, on test trials, performance of the agent in the Unity room (M = 0.63, SD = 0.06) was significantly better than that of the agent in the grid world (M = 0.40, SD = 0.14); t(8) = 3.48, p < 0.005. In failure cases, the agent in the grid world can be observed exhibiting less certainty, putting the wrong object on the bed or running out of time without acting decisively with the objects.

4.3 VISUAL INVARIANCES IN AGENTS' PERSPECTIVES
To understand more precisely why agents trained in 3D worlds generalise better, we ran a further condition in which we gave the agent in the grid-world an ego-centric perspective, as it has in the 3D world. Specifically, we adapted the grid-world so that the agent's field of view was 5 × 5 and centred on the agent (rather than 9 × 9 and fixed). While this egocentric constraint made it harder for the agent to learn the training task, performance on the test set improved significantly (t(8) = 2.35, p < 0.05), accounting for most of the difference in generalisation performance between the 3D world and the (original) 2D world. This suggests that the visual invariances introduced when bounding the agent's perspective, which in our case was implemented using an ego-centric perspective, may improve an agent's ability to factorise experience and behaviour into chunks that can be re-used effectively in novel situations. The relevant difference is likely that the agent is always in an invariant location in the visual input, which reduces some of the difficulty of perception, rather than the fact that the agent is centred per se. See Appendix D for control experiments showing that partial observability alone does not induce the same boost to generalisation, nor does a difference in the information available from a single frame.

4.4 TEMPORAL ASPECT OF PERCEPTION
Egocentric vs. allocentric vision is not the only difference between a grid-world and a 3D world. Another difference is that, in the 3D world, the agent experiences a much richer variety of (highly correlated) visual stimuli in any particular episode.

Figure 4: (a) Four (independent) training inputs from the training data for the classifier model.
(b) Four timesteps of input from a single episode for the situated agent.

To isolate the effect of this factor on generalisation, we return to the color-shape task, which is appropriate for the present experiment because a single train/test experiment (defined in terms of the instructions and objects that the agent experiences) can be constructed either as an interactive MDP for a situated agent or as a supervised classification problem.

Regime       Train accuracy   Test accuracy
Classifier   0.95             0.80 ± 0.05
Agent        1.00             1.00 ± 0.00

Table 3: Generalisation accuracy achieved by a vision-and-language classifier trained on single screenshots versus a situated agent trained in the DMLab environment.

In the vision+language classifier condition, a supervised model must predict either left or right in response to a still image (the first frame of an episode) of two objects and a language instruction of the form find a red ball. In the agent condition, our situated RL agent begins an episode facing two objects and is trained to move towards and bump into the object specified by the instruction. Importantly, the architectures for the vision+language classifier and the agent were identical, except that the final (action and value-prediction) layer in the agent is replaced by a single linear layer and softmax over two possible outcomes in the classifier.
On the same set of training instructions, we trained the classifier to maximise the likelihood of its object predictions, and the agent to maximise the return from selecting the correct object. As shown in Table 3, the accuracy of the classifier on the training instructions converged at 0.95, compared to 1.0 for the agent (which may be explained by greater difficulty in detecting differences between objects given a more distant viewpoint). More importantly, performance of the classifier on test episodes (M = 0.80, SD = 0.05) was significantly worse than that of the agent (M = 1.00, SD = 0.00); t(8) = 8.61, p < 0.0001. We conclude that an agent that can move its location and its gaze (receiving a richer variety of views for a given set of object stimuli) learns not only to recognize those objects better, but also to generalise its understanding of their shape and color with greater systematicity than an agent that can only observe a single snapshot for each instruction.[3] This richer experience can be thought of as effectively an implicit form of data augmentation.

[3] See Yurovsky et al. (2013) for a discussion of the importance of this factor in children's visual and word-learning development.

4.5 THE ROLE OF LANGUAGE
Thus far, we have considered tasks that involve both (synthetic) language stimuli and visual observations, which makes it possible to pose diverse and interesting challenges requiring systematic behaviour. However, this approach raises the question of whether the highly regular language experienced by our agents during training contributes to the consistent systematic generalisation that we observe at test time. Language can provide a form of supervision for how to break down the world and/or learned behaviours into meaningful sub-parts, which in turn might stimulate systematicity and generalisation. Indeed, prior work has found that language can serve to promote compositional behaviour in deep RL agents (Andreas et al., 2017).
To explore this hypothesis, we devised a simple task in the grid world that can be solved either with or without relying on language.
The agent begins each episode in a random position in the gridcontaining eight randomly-positioned objects, four of one color and shape and four of another. Ineach episode, one of the object types was randomly designated as ‘correct’ and the agent received apositive +1 reward for collecting objects of this type. The episode ended if the agent collected all ofthe correct objects (return = +4 ) or if it collected a single incorrect object (reward = 0). Withoutaccess to language, the optimal policy is to simply select an object type at random and then (if pos-sible) the remainder of that type (which returns 2 on average). In the language condition, however,the target object type (e.g. ‘red square’) was named explicitly, so the agent could learn to achieve areturn of 4 on all episodes. To test generalisation in both conditions, as in the color-shape referenceexpressions described above, we trained the agent on episodes involving half of the possible color-shape combinations (as correct or distractor objects) and tested it on episodes involving the otherhalf. Note that, both during training and testing, in all episodes the incorrect object differed fromthe correct object in terms of either color or shape, but not both (so that awareness of both color andshape was necessary to succeed in general).For fair comparison between conditions, when measuring agent performance we consider perfor-mance only on episodes in which the first object selected by the agent was correct. As shown inFigure 5, the non-linguistic agent performed worse on the training set (when excluding the 50% oftrails in which it failed on the first object). Performance on the test episodes was also worse overallfor the non-linguistic agent, but by a similar amount to the discrepancy on the training set (thus thetest error was similar in both conditions). With increasing training, we observed this discrepancyto diminish to a negligible amount (Figure 5, right). Importantly, both linguistic and non-linguisticagents exhibited test generalisation that was substantially above chance. While not conclusive, thisanalysis raises the possibility that language may not be playing a significant role (and is certainlynot the unique cause) of the systematic generalisation that we have observed emerging in otherexperiments.Figure 5: Left An episode of the grid-world task that can be posed with or without language. Right Nor-malised episode returns on training and evaluation trials as the agent learns, adjusted to ignore episodes wherethe first object that the agent collects is incorrect.5 D ISCUSSIONWe have shown that a neural-network-based agent with standard architectural components can learnto execute goal-directed motor behaviours in response to novel instructions zero-shot. This suggeststhat, during training, the agent learns not only how to follow training instructions, but also generalinformation about how word-like symbols compose and how the combination of those words af-fects what the agent should do in its world. With respect to generalisation, our findings contrastwith other recent contributions suggesting that neural networks do not generalise well in these set-tings (Lake, 2019; Bahdanau et al., 2018), and are more aligned with older empirical analyses ofneural networks (McClelland et al., 1987; Frank, 2006). 
Our work builds on those earlier studies8Published as a conference paper at ICLR 2020by considering not only the patterns or functions that neural networks can learn, but also how theycompose familiar patterns to interpret entirely novel stimuli. Relative to more recent experimentson color-shape generalisation (Higgins et al., 2017; Chaplot et al., 2018; Yu et al., 2018), we studya wider range of phenomena, involving abstract modifying predicates (negation) and verbs (‘to lift’,‘to put’) corresponding to complex behaviours requiring approximately 50movement or manipu-lation actions and awareness of objects and their relations. By careful experimentation, we furtherestablish that visual invariances afforded by a bounded frame of reference, in our case implementedusing a first-person perspective of an agent acting over time, plays an important role in the emer-gence of this generalisation.An obvious limitation of our work, which technology precludes us from resolving at present, is thatwe demonstrate the impact of realism on generalisation by comparing synthetic environments ofdiffering degrees of realism, while even our most realistic environment is considerably more simpleand regular than the world itself. This is in keeping with a long history of using simulation for thescientific analysis of learning machines (Minsky & Papert, 1969; Marcus, 1998). Nonetheless, as thepossibilities for learning in situated robots develop, future work should explore whether the general-isation that we observe here is even closer to perfectly systematic under even more realistic learningconditions. An additional limitation of our work is that – much like closely-related work (Lake &Baroni, 2017; Bahdanau et al., 2018) – we do not provide a nomological explanation for the gen-eralisation (or lack thereof) that we observe. Depending on the desired level of analysis, such anexplanation may ultimately come from theoretical work on neural network generalisation (Aroraet al., 2018; Lampinen & Ganguli, 2018; Allen-Zhu et al., 2018; Arora et al., 2019). Our focus here,however, is on providing careful experiments that isolate the significance of environmental factorson systematicity and to provide measures of the robustness (i.e. variance) of these effects.We also emphasize that our agent in no way exhibits complete systematicity. The forms of gen-eralisation it exhibits do not encompass the full range of systematicity of thought/behaviour thatone might expect of a mature adult human. However, this is unsurprising given that our network hassubstantially less experience than an adult human, and learns in a substantially simpler environment.It is also worth noting that none of our experiments reflect the human ability to learn (rather thanextend existing knowledge) quickly (as in, e.g. Lake et al. (2015); Botvinick et al. (2019)). Finally,depending on one’s perspective, it is always possible to characterise the generalisation we describehere as interpolation. Rather than engage with this thorny philosophical distinction (which touchesupon, e.g., the problem of induction (Vickers, 2009)), we simply emphasize that in our experimentsthere is a categorical difference between the data on which the agent was trained and test stimulito which they respond. Finally, our experiments concur with a message expressed most clearly byAnderson’s famous contribution in Science (Anderson, 1972). 
Our focus here, however, is on providing careful experiments that isolate the significance of environmental factors on systematicity and to provide measures of the robustness (i.e. variance) of these effects.

We also emphasize that our agent in no way exhibits complete systematicity. The forms of generalisation it exhibits do not encompass the full range of systematicity of thought/behaviour that one might expect of a mature adult human. However, this is unsurprising given that our network has substantially less experience than an adult human, and learns in a substantially simpler environment. It is also worth noting that none of our experiments reflect the human ability to learn (rather than extend existing knowledge) quickly (as in, e.g. Lake et al. (2015); Botvinick et al. (2019)). Finally, depending on one's perspective, it is always possible to characterise the generalisation we describe here as interpolation. Rather than engage with this thorny philosophical distinction (which touches upon, e.g., the problem of induction (Vickers, 2009)), we simply emphasize that in our experiments there is a categorical difference between the data on which the agent was trained and the test stimuli to which it responds. Finally, our experiments concur with a message expressed most clearly by Anderson's famous contribution in Science (Anderson, 1972). Conclusions that are reached when experimenting with pared-down or idealised stimuli may be different from those reached when considering more complex or naturalistic data, since the simplicity of the stimuli can stifle potentially important emergent phenomena. We suggest that strong generalisation may emerge when an agent is situated in a rich environment, rather than a simple, abstract setting.
rJxuMyU1ir
Official Blind Review #4
6: Weak Accept
This work studies factors which promote combinatorial generalization in a "neural network agent" embodied in a 3D simulation environment. The authors present interesting experiments and some insightful empirical findings on how a richer environment and a first-person egocentric perspective can aid a simple neural net to generalize better over previously unseen tasks. While I truly commend the effort undertaken to perform the experiments, I have several concerns which I explain below, and I would be happy to raise my score if they can all be addressed satisfactorily:

1) While the authors interpret the experimental results in sec 4.1 in a positive way, the results don't seem to necessarily indicate good systematic generalization. For instance, after learning with 40 words the agent only achieves 60% test accuracy. While the accuracy increases to 78% on training with 100 words, the gap between training and test accuracy indicates that the performance is still far from any kind of systematic generalization. The results instead seem to be hinting that neural nets do not in fact perform combinatorial generalization on their own, but can be forced towards it by supplying them huge amounts of diverse data (which is not true for humans). Also, the fact that increasing the number of words helps in generalizing better is true for most ML models and does not come as a surprise. So the results in this subsection are somewhat trivial and do not necessarily contribute any new understanding.

2) For the experiments regarding the egocentric frame in sec 4.3, I feel that the results are not really conclusive (even including the control experiments in appendix D). Could it be that if one uses any frame rigidly attached (i.e. with fixed displacement and rotational coordinates) to the agent's egocentric frame, one would achieve the same generalization performance? It is also possible that, as suggested by the authors in sec 4.4, it is just the motion of the egocentric frame which might be giving diverse views of the environment to the agent. So the frame might not even need to be egocentric, but just a moving frame which gives richer and more diverse views whenever the agent moves. Please include experiments to test for these possibilities.

3) In section 4.4, the authors have trained the non-embodied classifier with just a single image frame. But this does not necessarily justify the conclusion that active perception helps in generalization. This is because the motion of the RL agent gives it both a varied set of views AND control over which views to obtain by taking actions. In order to better understand which of these factors (or perhaps both) aid in generalization, another set of experiments is required which shows the classifier agent more images while keeping the desired object in view. In one experiment, these images should be chosen with random movements, but the number of such images provided to the classifier should be increased in sub-experiments to gauge whether giving more varied views bridges the gap between the classifier's and the RL agent's generalization performance. In a second experiment, one might want to first train the RL agent, then extract a few (say 10) frames out of its enacted policy for all pairs of objects and use these frames as part of the training set for the classifier agent. This would allow one to gauge whether both varied views and actively selecting to interact with the environment can help bridge the generalization gap.
4) Lastly, sec 4.5 seems to be hinting at a potentially very incorrect conclusion: "language is not a large factor (and certainly not a necessary cause) of the systematic generalisation...". This cannot be concluded from the single small experiment presented in sec 4.5. For instance, that experiment has been devised in a way that an optimal policy can be found with or without language. However, if a language input is provided to explicitly state the desired object, that might speed up the training of the RL agents significantly. In such a case, it might be helpful to see if learning the policy with the language input is accomplished with a much lower number of frames during training, as opposed to when no language input is provided. Please provide the training error plots. But regardless of the plots, the experiments can still be quite inconclusive, since language helps in systematic generalization in a variety of other ways apart from what has been tested for. In general, language starts helping humans once it has been acquired to a sufficient extent, since noun-concept linkages, verb-action linkages, etc. need to have been acquired a priori before the benefits of language emerge in combinatorial generalization. Training an LSTM to understand the language commands in tandem with learning policies for picking desired objects could lead to sub-optimal or heavily over-fitted language models which may not help in generalization. Testing for the true role of language will require many more experiments, which may be somewhat out of scope for this paper given the space constraints of a single paper. But I would advise the authors to refrain from drawing hasty inferences about the role of language without thorough experimentation.

Minor issues:

1) What are the 26 actions in the Unity 3D environment in section 3? It is important to know the action space to understand how easy or hard it is for the agent to learn generalizable policies.

2) The x-axis of Figure 2 is not readable at all. Please rectify those graphs and reduce the number of ticks.

-------------------------- Update after interaction during author feedback period -------------------------------

I appreciate the efforts that the authors have undertaken to address my concerns. While the paper is far from perfect, it is still a very thought-provoking work and I believe that it would make a valuable contribution to the line of work on systematic generalization in embodied agents. I am updating my score to reflect the same.
S1iiddyDG
ICLR.cc/2018/Workshop
2018
Negative eigenvalues of the Hessian in deep neural networks
["Guillaume Alain", "Nicolas Le Roux", "Pierre-Antoine Manzagol"]
We study the loss function of a deep neural network through the eigendecomposition of its Hessian matrix. We focus on negative eigenvalues, how important they are, and how to best deal with them. The goal is to develop an optimization method specifically tailored for deep neural networks.
["Optimization", "Hessian matrix", "Neural Networks", "Negative Curvature"]
ABSTRACT

We study the loss function of a deep neural network through the eigendecomposition of its Hessian matrix. We focus on negative eigenvalues, how important they are, and how to best deal with them. The goal is to develop an optimization method specifically tailored for deep neural networks.

1 INTRODUCTION

The current mode of operation in the field of Deep Learning is that we accept the fact that saddle points are everywhere (Choromanska et al., 2015) and that many local minima are of such high quality that we do not need to worry about not having the global minimum. Practitioners sweep a large collection of hyperparameter configurations, they use early stopping to prevent overfitting, and they train their models with optimization methods such as RMSProp (Tieleman & Hinton, 2012) and ADAM (Kingma & Ba, 2015).

Most optimization methods used in deep learning today were developed with the convex setting in mind. We currently do not have an efficient way to specifically manage the negative eigenvalues of the Hessian matrix (which contains the second-order derivatives and describes the curvature of the loss). We want to develop specific methods adapted to our particular kind of non-convex problems. Such methods will handle regions of negative curvature in a particular way, because this phenomenon is not present in convex optimization.

We present here experimental results that

- help us better understand what is happening in the directions of negative curvature,
- suggest that we should be using a much larger step size in those directions.

2 EXPERIMENTS

2.1 METHODOLOGY

Since we are working purely in an optimization context, we are not interested in the generalization error. We want to focus on the challenges of minimizing a loss that features saddle points and local minima.

The size of the Hessian matrix scales proportionally to the square of the number of parameters, so there is no way to compute and store the entire Hessian. We can still extract certain properties of the Hessian despite this, but we find ourselves limited to smaller models and datasets.

We are going to use the architecture of the classic LeNet (LeCun et al., 1989) convolutional neural network, but with ReLU as the activation function. It has two convolutional layers, two fully connected layers, and a softmax on the last layer, for a total of approximately $d = 3.3 \times 10^6$ parameter coefficients. We performed experiments with MNIST (LeCun, 1998). (This work was done during an internship with the Google Brain team in Montreal.)

We have to keep in mind that there is no guarantee that phenomena observed in this setup will also be found in a much larger convolutional neural network such as Inception (Szegedy et al., 2015), or one with a different design such as ResNet (He et al., 2016).

While we are training our model using the typical minibatch gradient descent with RMSProp (batch size 32), it makes sense for our analysis to study the loss $L(\theta)$ averaged over the full training set instead of minibatches. The same applies for the gradient $g(\theta) \in \mathbb{R}^d$ and the Hessian matrix $H(\theta) \in \mathbb{R}^{d \times d}$. We made the decision to concatenate all the parameters from all layers into a single vector in $\mathbb{R}^d$. Though results on the Hessian of individual layers were not included in this study, we believe they would also be of interest for a better understanding of deep neural networks.

Note that all the eigenvalues are real-valued because of the symmetry of the Hessian matrix, so they can be ordered as $\lambda_1 \le \lambda_2 \le \dots \le \lambda_d$.
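The extreme eigenpairs used throughout are obtained from Hessian-vector products rather than from the explicit (intractably large) Hessian. As a rough illustrative sketch of this general approach (not the authors' code, whose details are in the Appendix A referenced below), the following assumes PyTorch and SciPy, uses double backpropagation for the Hessian-vector product (the "Jacobian vector product" trick credited in the acknowledgments, also known as Pearlmutter's trick), and runs SciPy's Lanczos solver on top of it. The `LeNetReLU` class and its layer sizes are hypothetical stand-ins and do not reproduce the paper's $d \approx 3.3 \times 10^6$ parameters.

```python
# Illustrative sketch only, not the authors' Appendix A code.
# Assumes PyTorch and SciPy; layer sizes are hypothetical.
import numpy as np
import torch
import torch.nn as nn
from scipy.sparse.linalg import LinearOperator, eigsh

class LeNetReLU(nn.Module):
    """LeNet-style network with ReLU: two conv layers, two fully
    connected layers; the softmax is folded into the loss below."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 4 * 4, 120), nn.ReLU(),
            nn.Linear(120, 10))

    def forward(self, x):
        return self.classifier(self.features(x))

def extreme_hessian_eigenpairs(model, inputs, targets, k=10, which="SA"):
    """Return the k algebraically smallest ('SA') or largest ('LA')
    eigenpairs of the Hessian of the full-batch loss, using only
    Hessian-vector products (the Hessian itself is never stored)."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = nn.functional.cross_entropy(model(inputs), targets)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    d = flat_grad.numel()

    def hvp(v):
        # Differentiating g(theta)^T v once more yields H(theta) v.
        v_t = torch.as_tensor(np.asarray(v).reshape(-1), dtype=flat_grad.dtype)
        hv = torch.autograd.grad(flat_grad @ v_t, params, retain_graph=True)
        return torch.cat([h.reshape(-1) for h in hv]).detach().numpy().astype(np.float64)

    op = LinearOperator((d, d), matvec=hvp, dtype=np.float64)
    return eigsh(op, k=k, which=which)  # Lanczos iteration
```

Calling `extreme_hessian_eigenpairs(model, X, y, k=10, which="SA")` would return the ten most negative eigenvalues and their eigenvectors (each Lanczos step costing one extra backward pass), while `which="LA"` gives the largest.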
See Appendix section A for details on how we can compute the $k$ largest or smallest eigenpairs $(\lambda_i, v_i)$.

2.2 NEGATIVE CURVATURE IS ONLY LOCAL

At any training step $t$, we can select an eigenpair $(\lambda_i, v_i)$ and measure the loss function when we project the gradient $g(\theta_t)$ in the direction $v_i$. With a step size of $\alpha \in \mathbb{R}$, we look at

$$L\big(\theta_t - \alpha\, (g(\theta_t)^\top v_i)\, v_i\big). \qquad (1)$$

This becomes particularly interesting when $\lambda_i$ is negative and when we make the mild assumption that $v_i$ is not perfectly orthogonal to the gradient (i.e. $g(\theta_t)^\top v_i \neq 0$).

Since we observed a common behaviour all along the optimization, we show here the results for an arbitrary iteration ($t = 50$). We use $\alpha \in [-0.1, 0.1]$ in Figure 1 and $\alpha \in [-1, 1]$ in Figure 2. We compare the exact empirical loss (orange curve) alongside the quadratic approximation (green/blue curve) of the same function given by the negative eigenvalue $\lambda_i$.

For small values of $\alpha$, the actual loss matches the curvature sufficiently well, but for larger values of $\alpha$ the two are qualitatively different. Because the loss is bounded below, it would be impossible for the loss to go down to $-\infty$. When using a regularizer such as an L2-norm penalty, the loss grows to $+\infty$ when $\|\theta\| \to \infty$.

Note that, if we were to optimize for long enough, we would get into the neighborhood of a local minimum and we would not observe any negative eigenvalues anymore. In that later regime, there is nothing to gain from having an optimizer designed to deal with negative eigenvalues. However, there are no theoretical results clarifying when that regime starts. In practice, when early stopping is used as an approach to avoid overfitting, it is also unclear in what regime we stop training.

Figure 1: Looking at the total loss when moving by $\alpha$ in the direction of most negative curvature. Evaluated at training step $t = 50$. Zoomed in.

Figure 2: Same direction of negative curvature as Figure 1, but zoomed out.

2.3 MINIMIZING LOSS IN DIRECTIONS OF NEGATIVE CURVATURE

What is the connection between $\lambda_i$ and the optimal step size to be taken in the direction of $v_i$? We go back to the question of finding the optimal $\alpha$ to minimize the line search problem in Equation (1). It is simple enough (albeit costly) to run through the whole training set and evaluate the loss at multiple values of $\alpha$, spanning a few orders of magnitude. For all the eigenpairs $(\lambda_i, v_i)$ that we have access to, we can look at:

- what is the best loss decrease that we can obtain by moving along $v_i$? (see Figure 3)
- what is the optimal step size to achieve it? (see Figure 4)

[Plot: "Comparing $\lambda_i$ with best loss minimization along $v_i$"; x-axis: Hessian eigenvalues $\lambda_i \in [-1, 1]$; y-axis: best loss minimization along $v_i$.]

Figure 3: Best loss decrease possible (y-axis) when following the eigenvector associated with $\lambda_i$ (x-axis). Lower is better. Directions of negative curvature (left side) were empirically observed to bring larger improvements in the loss than directions of positive curvature (right side). Earlier time steps $t$ are shown in blue, later ones in green. In terms of Equation (1), this plot shows the relation between $\lambda_i$ and $L(\cdot)$.

[Plot: "Comparing the step in $(g^\top v_i)\, v_i$ with the theoretical value of $1/\lambda_i$"; x-axis: Hessian eigenvalues $\lambda_i \in [-1, 1]$; y-axis: $g^\top v_i$ / (best empirical step).]

Figure 4: Reporting the actual optimal step sizes found empirically. In terms of the variables involved in Equation (1), this plot shows the relation between $\lambda_i$ (x-axis) and $1/\alpha$ (y-axis). On the right side of the plot, we can report that in directions of positive curvature we have $\alpha \approx 1/\lambda_i$. On the left side of the plot, the small values reported mean that the optimal step sizes were quite large. Earlier time steps $t$ are shown in red, later ones in yellow.
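As a concrete reading of the grid search over $\alpha$ described above (before Figures 3 and 4), here is a minimal sketch in an assumed NumPy setting; `full_training_loss` is a hypothetical helper, not from the paper, that evaluates the loss averaged over the whole training set at a given flat parameter vector.

```python
# Sketch of the line search in Equation (1): evaluate the full training
# loss at theta_t - alpha * (g^T v_i) v_i for step sizes alpha spanning
# several orders of magnitude, on both sides of zero.
# `full_training_loss` is a hypothetical helper, not from the paper.
import numpy as np

def line_search_along_eigvec(theta, grad, v_i, full_training_loss):
    g_dot_v = float(np.dot(grad, v_i))  # scalar g(theta)^T v_i
    grid = np.logspace(-4, 1, 30)       # 1e-4 ... 10
    alphas = np.concatenate([-grid[::-1], grid])
    losses = [full_training_loss(theta - a * g_dot_v * v_i) for a in alphas]
    best = int(np.argmin(losses))
    return alphas[best], losses[best]   # optimal step and achieved loss
```

Plotting $1/\alpha^*$ from such a search against $\lambda_i$ is, roughly, what Figure 4 reports.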
Figures 3 and 4 suggest that important gains are to be made in directions of negative curvature, and that in those directions the optimal step sizes are of a greater order of magnitude than in directions of positive curvature. Refer to Appendix section C for a longer discussion about optimal step sizes. Note that Figures 3 and 4 show only a certain range, in which we find eigenvalues $\lambda \in [-1, 1]$. This is the most informative range for us, but we are not showing everything here. Keep in mind also that we are using numerical methods that report the eigenvalues with the largest magnitude $|\lambda|$, so those figures are missing more than 99.99% of the eigenvalues, which have very small magnitude. This is why those figures do not show any points around the origin.

3 FUTURE WORK AND CONCLUSION

The current results need to be validated in more settings of architectures and optimizers.

Considerable work was required for us to extract negative eigenvalues for every checkpoint of training. This is not a practical thing to do during training. In Appendix E we propose a new method that maintains an estimate of the most negative eigenvector and uses it to update the parameters. We have not yet tried this method in practice.

The main contribution of our work is that we have observed and studied an example where the directions of negative curvature are not being exploited properly by a popular convex optimizer. We have seen how great gains could be made in those directions. This reinforces the belief that there are opportunities to develop new optimization techniques that capitalize on the specific case of neural networks.

ACKNOWLEDGMENTS

We thank Bart van Merriënboer for fruitful discussions about optimization and the problem of saddle points. We thank Ying Xiao for initial discussions about his preliminary work on studying eigenvalues, and for providing his code to quickly get the "Jacobian vector product" trick working.
Hy5LN8eFG
finds existence proof (for 1 model and 1 dataset) where learning rate should be higher than Dauphin et al's suggested 1 / abs(eigenval) for negative curvature directions
6: Marginally above acceptance threshold
The main contribution of this work is their simple empirical observation in Figure 4, that the optimal alpha (learning rate) appears to often NOT be 1 / |eigval| when the eigval is negative (contradicting the hypothesis of Dauphin et al. (2014)), but instead should be much larger. This seems to this reader to be the only new insight offered by this paper, so not mentioning that connection to that well-known related work until Appendix C is concerning (it should be mentioned in the main body). They mention a way to estimate the negative eigenvector, but it is disappointing that they have “not yet tried this method in practice”. This paper is clearly premature / under-developed for a standard conference publication; whether it is suitable for a workshop is debatable. On the one hand, it would be likely to prompt useful discussions at the workshop. However, overall, given the lack of trying to use this insight in any actual optimization algorithm on any data, it could easily be seen as more flag planting at this point than a solid contribution. It would be more suitable if their insights (Figure 4) were established for more than “only one model and one dataset” (as the authors admit). Also, there has been a fair amount of related past work on examining the Hessian (and tracking eigenvalues over time), but the authors do not cite any. As one example, LeCun has periodically shown such experiments, including seminal work at NIPS https://papers.nips.cc/paper/589-automatic-learning-rate-maximization-by-on-line-estimation-of-the-hessians-eigenvectors.pdf and more recent ones on ArXiv. In short, this might be marginally suitable as a workshop paper, but is overall under-developed and should do a better job of being very clear about its very modest contribution to date (and cite more related work).
4: The reviewer is confident but not absolutely certain that the evaluation is correct
H1lxVyStPH
ICLR.cc/2020/Conference
2020
Generalized Convolutional Forest Networks for Domain Generalization and Visual Recognition
["Jongbin Ryu", "Gitaek Kwon", "Ming-Hsuan Yang", "Jongwoo Lim"]
When constructing random forests, it is of prime importance to ensure high accuracy and low correlation of individual tree classifiers for good performance. Nevertheless, it is typically difficult for existing random forest methods to strike a good balance between these conflicting factors. In this work, we propose generalized convolutional forest networks to learn a feature space that maximizes the strength of individual tree classifiers while minimizing their correlation. The feature space is iteratively constructed by a probabilistic triplet sampling method based on the distribution obtained from the splits of the random forest. The sampling process is designed to pull the data of the same label together for higher strength and push away the data frequently falling into the same leaf nodes. We perform extensive experiments on five image classification and two domain generalization datasets with ResNet-50 and DenseNet-161 backbone networks. Experimental results show that the proposed algorithm performs favorably against state-of-the-art methods.
["convolutional forest networks", "domain generalization", "visual recognition", "individual tree classifiers", "feature space", "data", "random forests", "prime importance", "high accuracy", "low correlation"]
ABSTRACT

When constructing random forests, it is of prime importance to ensure high accuracy and low correlation of individual tree classifiers for good performance. Nevertheless, it is typically difficult for existing random forest methods to strike a good balance between these conflicting factors. In this work, we propose generalized convolutional forest networks to learn a feature space that maximizes the strength of individual tree classifiers while minimizing their correlation. The feature space is iteratively constructed by a probabilistic triplet sampling method based on the distribution obtained from the splits of the random forest. The sampling process is designed to pull the data of the same label together for higher strength and push away the data frequently falling into the same leaf nodes. We perform extensive experiments on five image classification and two domain generalization datasets with ResNet-18, ResNet-50 and DenseNet-161 backbone networks. Experimental results show that the proposed algorithm performs favorably against state-of-the-art methods.

1 INTRODUCTION

Random forests have been applied to various problems ranging from object classification (Bosch et al., 2007), object detection (Gall & Lempitsky, 2013), image segmentation (Schroff et al., 2008; Shotton et al., 2008), and pedestrian detection (Marin et al., 2013; Tang et al., 2012) to semantic hashing (Qiu et al., 2018). In addition, random forests have been applied to head pose (Fanelli et al., 2011), landmark (Cootes et al., 2012) and age (Shen et al., 2018) estimation as regressors, and to sparse feature matching problems (Lepetit & Fua, 2006; Ozuysal et al., 2010).

The main reason for the robust performance of random forests is the decision tree ensemble. While each decision tree may achieve mediocre performance, the aggregated random forest performs significantly better, and is less correlated, if the decision trees are heterogeneous. The overall accuracy of the trees can be considered as strength, and the heterogeneity of the trees can be measured by correlation. In (Breiman, 2001), the upper bound of the generalization error of random forests is expressed in terms of strength and correlation. High strength and low correlation are important properties for minimizing the generalization error of a random forest. However, these two conflicting factors make it difficult to improve strength and lower correlation simultaneously. If the individual decision trees in a random forest are strengthened independently, the trees are likely to resemble the strongest tree in the forest, and consequently the correlation of the forest becomes high. To reduce correlation, the decision trees must have different shapes, and the strength of individual trees would not be as high as that of the best decision tree.

In this work, we introduce generalized learning for random forests with convolutional neural networks to address these issues. The proposed method iteratively improves the generalization ability of random forests for higher strength and lower correlation by probabilistic triplet sampling. For a triplet of an anchor, a positive, and a negative sample, the loss function is designed to pull the anchor and the positive closer and to push the anchor and the negative apart. The positive is sampled among the data with the same label as the anchor but in different leaf nodes of the decision trees, and the negative is from the data in the same leaf nodes as the anchor.
The former contributes to improving the classification accuracy by positive sampling, whereas the latter discourages the algorithm from constructing similar decision trees by negative sampling. Note that both data points of the same label and of different labels with respect to the anchor can be in the negative training sample set. To directly improve the strength of the random forest only, we may design a method where negative examples are sampled among the data in the same leaf nodes and with different labels from the anchors. However, in practice, this suffers from early saturation and local minima. We describe the details of the proposed learning algorithm and experimental results in the following sections.

The main contributions of the proposed work are summarized as follows:
- We propose a generalization algorithm of random forests with convolutional neural networks. The proposed method minimizes a triplet loss function which 1) encourages same-labeled data in different leaf nodes to move closer, and 2) pushes away data points that frequently fall in the same leaf nodes regardless of their class labels.
- We consider both the strength and the correlation of the random forest simultaneously. These two conflicting properties are handled by triplet sampling. We demonstrate in the experiments that the proposed method increases strength while maintaining correlation.
- We show that the proposed algorithm performs well on domain generalization and image recognition against baseline random forest methods. Furthermore, we show that the proposed method performs favorably against state-of-the-art methods on the same tasks.

2 PRELIMINARIES

2.1 RANDOM FOREST

Since the introduction of random forests (Breiman, 2001), numerous methods have been developed to increase their performance. We discuss recent and relevant methods for vision tasks in this section. In (Gall & Lempitsky, 2013), the Hough transform is used to tally probabilistic votes of detection hypotheses of parts from a random forest for object detection. (Bosch et al., 2007) use local shape and image appearance together to enhance the discriminative strength of the random forest; a visual bag-of-words model and gradient features are utilized for encoding the shape and local information. In (Zhang & Suganthan, 2014), a method that uses linear discriminant analysis to increase the strength of a decision split is proposed. However, in most cases, increasing the strength does not maximize the performance of the random forest because the correlation also increases. The conflicting factors of strength and correlation in designing decision trees have been studied in recent methods (Rodriguez-Galiano et al., 2012) to analyze the performance of random forests. (Yao et al., 2011) propose to use an SVM and image patches to enhance strength while maintaining the correlation of the random forest: the SVM is used as the split function of each node of a decision tree to increase strength, and randomly selected image patches are used to decrease correlation. While this method performs well on fine-grained classification, it is not clear how this approach can be generalized to other tasks. In contrast, we propose a generalized learning algorithm of random forests that can be applied to various tasks using deep neural networks while considering both strength and correlation.

2.2 GENERALIZED ERROR BOUND OF RANDOM FORESTS

(Breiman, 2001) shows that the generalized error PE* of a random forest is bounded by:

    PE* ≤ ρ̄ (1 − s²) / s²,   (1)

where ρ̄ is the correlation and s is the strength of the random forest.
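For intuition (an illustrative calculation, not a number from the paper): with strength s = 0.6 and correlation ρ̄ = 0.3, the bound gives PE* ≤ 0.3 × (1 − 0.36)/0.36 ≈ 0.53. Halving the correlation to ρ̄ = 0.15 halves the bound to ≈ 0.27, while raising the strength to s = 0.8 at fixed ρ̄ = 0.3 gives ≈ 0.17; both levers matter, which is why the two factors must be balanced.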
The strength s is the expectation of the margin function mr(·) with respect to a feature X and its label Y:

    s = E_{X,Y}[ mr(X, Y) ],  where  mr(X, Y) = P_Θ(h(X, Θ) = Y) − max_{j ≠ Y} P_Θ(h(X, Θ) = j),

where h(·) is a classifier and Θ is a random vector used to generate decision trees. In addition, P_Θ(·) is the probability over the random parameter Θ for the classifier h. The strength represents the expected margin by which the random forest makes a correct classification rather than a wrong one.

[Figure 1: Overall framework of the proposed learning algorithm. Once the random forest is constructed with CNN features, we sample the triplets based on the probability mass functions of the split results. The networks are then updated via the loss function of the sampled triplets.]

To define the correlation, the raw margin function is used:

    rmg(Θ, X, Y) = I( h(X, Θ) = Y ) − I( h(X, Θ) = ĵ(X, Y) ),

where ĵ(X, Y) = argmax_{j ≠ Y} P_Θ(h(X, Θ) = j), and I(·) is an indicator function. When ρ(Θ, Θ′) is the correlation between rmg(Θ, X, Y) and rmg(Θ′, X, Y), the correlation ρ̄ of a random forest is the mean value of the correlations over Θ. More details on the generalized error bound of random forests can be found in (Breiman, 2001).

From Eq. 1, it is straightforward to see that high strength of the random forest reduces the upper bound of the generalization error if the correlation is suppressed (Breiman, 2001). However, these two conditions cannot be simultaneously satisfied in practice.

2.3 TRIPLET LOSS

In general, the triplet loss function is defined as:

    Σ_{(a,p,n) ∈ S} max( 0, ‖f(a) − f(p)‖²₂ − ‖f(a) − f(n)‖²₂ + b ),

where f(a), f(p) and f(n) are the feature vectors of the anchor, positive, and negative data for a triplet (a, p, n) in the training set S, and b is a user-specified margin. The goal of the training process is to find the best feature f that minimizes the loss function. In the optimization process, the feature positions of the anchor and the positive sample are pulled closer, and those of the anchor and the negative sample are pushed apart.

The triplet loss function has been used in numerous applications including face recognition (Parkhi et al., 2015; Schroff et al., 2015), image retrieval (Zhao et al., 2015), person re-identification (Cheng et al., 2016; Zhang et al., 2016), and metric learning (Norouzi et al., 2012; Wang et al., 2014), to name a few. The triplet loss function uses both positive and negative samples at the same time, and thus achieves improved performance. In existing methods, positive samples are the data with the same labels as the anchor, and negative samples are the data with different labels. For face recognition, person identities are the labels; for person re-identification, the tracklet determines the positive and negative sample sets. Minimizing the triplet loss then gathers same-labeled data together and separates differently-labeled data on the learned feature space, so that a classifier can easily partition the data.

Compared to the above approaches, our focus is on how to sample effective triplets to improve both the classification ability and the heterogeneity of random forests. We show that, in random forest training, simply clustering data according to the labels does not bear the best result, since it increases both the strength and the correlation of the random forest; we present supporting experimental results in Section 4.
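As a reference point, here is a minimal sketch of the standard hinge triplet loss defined above. It is a generic PyTorch illustration; the function name and the margin value are ours, not the paper's.

```python
import torch

def triplet_loss(f_a, f_p, f_n, margin=0.2):
    """Hinge triplet loss over a batch of (anchor, positive, negative) embeddings."""
    d_pos = (f_a - f_p).pow(2).sum(dim=1)  # ||f(a) - f(p)||^2
    d_neg = (f_a - f_n).pow(2).sum(dim=1)  # ||f(a) - f(n)||^2
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()
```

Note that the GCFN objective in Eq. (2) below uses the unhinged form of this loss, without the max(0, ·) or the margin b.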
Algorithm 1: Learning algorithm for generalized convolutional forest networks.
Input: training images I, class labels Y
Output: generalized neural network N
1: for i ← 1 to maximum iterations do
2:   F′ ← N_i(I)
3:   construct decision trees from F′ and Y
4:   construct P_p and P_n from the split results of the decision trees
5:   S ← sample triplets by P_p and P_n
6:   N_{i+1} ← update by minimizing the triplet loss on S

2.4 DOMAIN GENERALIZATION

Generalizing models learned from one domain to another is an important topic in machine learning and computer vision. Learning deeply connected neural networks with a large-scale dataset helps improve the generalization ability, such that CNN features trained on the ImageNet dataset are used as a generic representation for various visual domains (Sharif Razavian et al., 2014). However, it is still difficult to adapt a model to different domains when large domain gaps exist. In addition, it is even more challenging if we do not have any data from the target domain, or if we have a mixture of source and target domain data. Hence, numerous domain generalization methods have recently been developed to tackle this problem, including regularization with meta-learning (Balaji et al., 2018), domain-invariant conditional learning (Li et al., 2018b), adversarial back-propagation (Li et al., 2018a), and an episodic training algorithm (Li et al., 2019).

3 PROPOSED ALGORITHM

In this section, we formulate the problem of improving the strength and maintaining the correlation of random forests, and describe the generalized feature learning algorithm that addresses these two factors simultaneously. We first present a feature learning algorithm that only considers strengthening the features of random forests. A modified feature learning algorithm on CNNs is then introduced to achieve high strength and low correlation at the same time, yielding the proposed Generalized Convolutional Forest Network (GCFN). The overall framework of the proposed GCFN method is shown in Fig. 1 and Algorithm 1.

3.1 STRENGTHENED LEARNING ALGORITHM OF RANDOM FORESTS

The classification result of an input x by a random forest is determined as:

    c* = argmax_c (1/T) Σ_t P( c | ℓ_t(x) ),

where c is the label, T is the number of decision trees, and ℓ_t(x) denotes the leaf node of tree t into which x falls. Here, P(c | ·) is the conditional probability of x belonging to class c. In other words, ℓ can be thought of as a mapping function from x to a probability distribution on the label space. To maximize the strength of a decision tree, each leaf node should only contain data with a single label, i.e., the distribution should have a single entry with probability one and zeros elsewhere. Intuitively, if the data with the same labels converge together in the space, the leaf nodes are more likely to contain single-labeled data, and we can design a triplet loss to maximize this data clustering in the learned space.

The networks are updated via probabilistic triplet sampling. To construct a triplet sample set, we randomly sample anchors {a_i}, and for each anchor a_i one positive sample p_i and one negative sample n_i are randomly drawn according to the probability mass functions (PMFs) for the positive and negative pools of the anchor, i.e., p_i ∼ P_p(a_i) and n_i ∼ P_n(a_i).
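Concretely, the per-tree leaf assignments that drive this sampling can be read off a fitted forest. The following is a minimal illustration using scikit-learn as a stand-in for the forest implementation; the function name is ours, not the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_forest_and_leaves(feats, labels, n_trees=50):
    """Fit a random forest on CNN features and return per-tree leaf assignments.

    feats:  (n_samples, n_dims) feature matrix F' = N_i(I)
    labels: (n_samples,) class labels Y
    Returns the forest and leaf_ids of shape (n_samples, n_trees):
    the leaf index of each sample in each tree.
    """
    forest = RandomForestClassifier(n_estimators=n_trees, n_jobs=-1).fit(feats, labels)
    return forest, forest.apply(feats)
```

The `leaf_ids` matrix is all that the triplet sampler sketched further below needs.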
The positive pool consists of the data with the same label as the anchor a but in different nodes of the trees, and the negative pool contains the data in the same nodes but with different labels. Given the anchor a, the PMFs of the positive and negative samples are defined by:

    P_p(x; a) ∝ Σ_t I( ℓ_t(x) ≠ ℓ_t(a) ∧ y(x) = y(a) ),  and
    P_n(x; a) ∝ Σ_t I( ℓ_t(x) = ℓ_t(a) ∧ y(x) ≠ y(a) ),

where I(·) is an indicator function returning 1 if true and 0 otherwise, and y(x) returns the label of x. Both PMFs need to be normalized to sum to one.

[Figure 2: Probability mass functions for sampling triplets. The PMFs for positive and negative samples (P_p and P_n, respectively) are constructed from the sample distribution with the anchors in leaf nodes. Squares and triangles represent training data in leaf nodes of the decision trees, and the shapes represent their labels. For an anchor (black square), the positive pool contains the data in different leaf nodes with the same label (red and blue squares), and the negative pool contains the data in the same leaf node, with either the same or different labels. The PMFs are the normalized histograms of the positive and negative pools.]

The networks N are updated using the triplet samples S by minimizing the loss function in Eq. 2:

    L = Σ_{(a,p,n) ∈ S} ‖N(a) − N(p)‖²₂ − ‖N(a) − N(n)‖²₂.   (2)

The proposed random forest on the strengthened feature space shows improved performance compared to that on the canonical feature space, but the improvement saturates quickly and sometimes it fails to converge. The reasons can be attributed to the following:
- the correlation of the random forest increases rapidly along with the strength improvement, and thus the gain in overall performance is limited; and
- the optimization process often falls into local minima.
The data points with the same labels are pulled together, and naturally the individual decision trees become stronger, but at the same time the decision trees become similar. The growth of correlation is apparent because, if two same-labeled data points are in one leaf node, they are likely to stay close and belong to the same leaf nodes in the next iteration. In practice, the local minimum issue affects performance more critically. The update process of strengthened feature learning is analogous to the steepest descent algorithm in optimization, in the sense that both positive and negative samples concentrate on strengthening the random forest.

3.2 GENERALIZED LEARNING ALGORITHM OF RANDOM FORESTS

To alleviate the above-discussed issues, we present a new triplet sampling method. As the positive sampling rule is effective in enhancing the strength, we design the negative sampling rule to deal with the correlation and the local minima, as shown in Fig. 2. The PMF for negative sampling is defined as:

    P_n(x; a) ∝ Σ_t I( ℓ_t(x) = ℓ_t(a) ).

The role of negative sampling is two-fold. First, it prevents the correlation of the random forest from growing quickly. If two data points belong to the same nodes of many decision trees (which causes high correlation), they are likely to be sampled as negative examples and pushed away from each other. The probability of these data points belonging to the same nodes in the next iteration becomes smaller, and the correlation of the decision trees decreases. Second, it helps prevent the update process from getting stuck in local minima and contributes to achieving higher strength and classification accuracy than the strengthened feature learning algorithm (a concrete sampling sketch is given below).
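A minimal sketch of the split-based triplet sampling, assuming the `leaf_ids` matrix from the earlier scikit-learn snippet; the `generalized` flag switches between the Section 3.1 and Section 3.2 negative PMFs. Names are illustrative, not the paper's.

```python
import numpy as np

def sample_pair(anchor, leaf_ids, labels, rng, generalized=True):
    """Draw one positive and one negative index for `anchor` from the split-based PMFs.

    P_p: number of trees where x falls in a different leaf than the anchor, same label.
    P_n (Sec. 3.1): trees where x shares the anchor's leaf, different label.
    P_n (Sec. 3.2): trees where x shares the anchor's leaf, regardless of label.
    """
    n, T = leaf_ids.shape
    same_leaf = (leaf_ids == leaf_ids[anchor]).sum(axis=1)  # trees shared with anchor
    same_label = labels == labels[anchor]

    p_p = (T - same_leaf) * same_label
    p_n = same_leaf.copy() if generalized else same_leaf * ~same_label
    p_p[anchor] = p_n[anchor] = 0  # never sample the anchor itself

    pos = rng.choice(n, p=p_p / p_p.sum())  # assumes both pools are nonempty
    neg = rng.choice(n, p=p_n / p_n.sum())
    return pos, neg

# Usage: rng = np.random.default_rng(0)
# pos, neg = sample_pair(0, leaf_ids, labels, rng)
```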
Since the negative sampling diffuses the data, it operates in a way similar to a regularization term in optimization for dealing with local minima issues. Hard negative examples can cause the learning process to fall into a local minimum, and a recent method (Schroff et al., 2015) suggests excluding from sampling the negative data points that are too close to the anchors. In this work, strengthened feature learning gives high negative-sampling probability to the hard negative examples, since they will be in the same nodes in most decision trees, but it is difficult to detach them from the anchors in the strengthened feature space. In the proposed triplet sampling strategy, the weight of the hard negative examples is spread out to other positive samples in the same nodes. Thus it learns a generic feature space for random forests without getting stuck in local minima. The proposed GCFN is designed to address the above-discussed issues. In the following, we present various experimental validations of the proposed method.

4 EXPERIMENTAL RESULTS

In this section, we present experimental results of the proposed method for domain generalization and visual recognition tasks using the same backbone networks. In comparison with random forests based on different feature spaces, the GCFN method performs favorably in all classification tasks. We also present detailed discussions on the strength and correlation of the trained random forests and the properties of the optimized learned space. Finally, we compare the proposed random forests with state-of-the-art methods on these tasks. Due to space constraints, the details of the experimental settings are discussed in the appendix.

Table 1: Comparison of random forests on the canonical (can), strengthened (str) and generalized (gen) spaces. We measure the classification accuracy with T = 1, 10, 50 for three datasets. In all datasets, random forests on the generalized feature space perform well. The best result for each number of trees is marked in bold. It is worth noticing that when the number of trees is 1, the random forest with the strengthened space performs better. As the number of trees grows, the correlation of this random forest increases, and the proposed method with the generalized space performs better.

Space | MIT-Indoor (T=1/10/50) | Scene-15 (T=1/10/50) | 4D-Light (T=1/10/50)
can   | 26.1 / 54.6 / 65.6     | 61.7 / 83.4 / 88.1   | 40.0 / 65.6 / 73.6
str   | 48.4 / 66.9 / 70.8     | 84.3 / 88.6 / 89.3   | 68.3 / 73.9 / 76.9
gen   | 46.0 / 69.3 / 74.0     | 80.1 / 90.2 / 91.6   | 66.9 / 78.3 / 79.7

Table 2: Comparison of random forests on the canonical (can), strengthened (str) and generalized (gen) spaces. We measure the classification accuracy with T = 50 for five datasets (F and S indicate the split function). Random forests with the generalized space achieve the best result in all settings.

Network  | Space | MIT-Indoor (F/S) | Scene-15 (F/S) | 4D-Light (F/S) | DTD (F/S)   | Stanford-Dog (F/S)
ResNet   | can   | 65.6 / 72.5      | 88.1 / 91.7    | 73.6 / 79.7    | 65.1 / 70.9 | 85.1 / 86.2
ResNet   | str   | 70.8 / 71.7      | 89.3 / 90.2    | 76.9 / 78.3    | 70.3 / 71.4 | 83.4 / 84.1
ResNet   | gen   | 74.0 / 75.7      | 91.6 / 92.4    | 79.7 / 81.1    | 71.5 / 72.5 | 85.7 / 86.7
DenseNet | can   | 63.7 / 67.8      | 89.2 / 91.2    | 75.6 / 77.2    | 67.6 / 66.2 | 77.7 / 83.1
DenseNet | str   | 71.9 / 72.6      | 89.8 / 90.4    | 77.8 / 81.4    | 69.8 / 70.9 | 83.0 / 84.8
DenseNet | gen   | 74.6 / 77.2      | 91.7 / 92.0    | 79.2 / 81.1    | 70.7 / 72.2 | 85.4 / 86.8

4.1 EVALUATION WITH BASELINE FORESTS

We first compare the proposed feature learning algorithms for random forests in Fig. 3 of the appendix. In most cases, the random forest with the generalized features outperforms that with the strengthened features in strength, correlation, PE*, and classification accuracy (a sketch of how the strength can be estimated from tree votes is given below).
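The strength can be estimated directly from per-tree votes, following the margin definition in Section 2.2. A hedged sketch, assuming integer-encoded class predictions collected from each tree:

```python
import numpy as np

def empirical_strength(tree_preds, labels, n_classes):
    """Estimate s = E[ P(h(X)=Y) - max_{j != Y} P(h(X)=j) ] from tree votes.

    tree_preds: (n_samples, n_trees) integer class predictions, one column per tree
    labels:     (n_samples,) integer class labels
    """
    margins = []
    for v, y in zip(tree_preds, labels):
        frac = np.bincount(v, minlength=n_classes) / v.size  # vote share per class
        p_true = frac[y]
        frac[y] = -1.0                                       # mask the true class
        margins.append(p_true - frac.max())
    return float(np.mean(margins))
```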
Although the proposed feature learning method using the split results breaks the independence of the decision trees, these metrics are still valid for demonstrating the implications of the proposed method, since the split results are used only for feature learning and not when building each decision tree from the learned features.

Although random forests on the strengthened feature space outperform canonical random forests in classification accuracy, the performance reaches a plateau quickly. On the other hand, the performance of random forests on the generalized feature space increases steadily in classification accuracy and strength, and the correlation is maintained at a lower level than that of the strengthened feature space. Since the strengthened feature learning method aims to improve the strength only, the strength grows rapidly at the beginning, but the correlation also gets higher. After several iterations, this approach falls into local minima, whereas the generalized feature learning method continuously reduces the upper bound of the generalization error and reaches much higher accuracy. It is worth noticing that the strength of the generalized feature space is often much higher than that of the strengthened feature space. This can be explained by the fact that the learning process on the strengthened feature space is easily caught in local minima, whereas the proposed method can escape or avoid them. More importantly, the correlation of the random forests on generalized features stays similar to that of the canonical feature space while the strength is much higher; hence the generalization error PE* of the proposed method is much smaller. It can also be elucidated in terms of overfitting and the bias-variance dilemma. The proposed positive sampling scheme reduces bias, while the negative sampling process alleviates the overfitting problem. Although the random forest reduces variance through the bagging process, it can quickly overfit if the positive sampling scheme reduces the bias too fast. The negative sampling process, on the other hand, regularizes the steepest reduction of the bias; thus, the proposed method enjoys the merits of the random forest with low bias and low variance for the classification task.

Tables 1 and 2 summarize the results of the canonical and proposed random forests for various deep features, numbers of trees, split functions, and classification datasets. In all cases, the proposed random forest method on the generalized feature space performs significantly better than the canonical random forest schemes. In addition, the experimental results show that the proposed method is not designed for a specific task, but can be applied to numerous classification tasks. This shows the generalization ability of the proposed method, along with the experimental results on domain generalization in the next subsection.

4.2 EVALUATION WITH STATE-OF-THE-ART METHODS

We evaluate the performance of the GCFN method against previous state-of-the-art methods for domain generalization and various visual recognition tasks. Here we train the GCFN method with the generalized feature learning algorithm and the split function 'S'. We use a depth value depending on the number of training samples, and sufficient iterations of the learning stage, to maximize the performance of the GCFN.

The domain generalization task is evaluated in Tables 3 and 4.
We compare GCFN with recent domain generalization methods such as D-SAMs (D'Innocente & Caputo, 2018), DANN (Ganin et al., 2016), MetaReg (Balaji et al., 2018), MMD-AAE (Li et al., 2018a) and Epi-FCR (Li et al., 2019). The proposed GCFN algorithm performs favorably against state-of-the-art methods on both datasets. The results show that the proposed GCFN algorithm learns a generic distribution for classifying unseen domain data.

The results in Tables 5-9 also show that GCFN compares well with state-of-the-art methods on visual recognition tasks. We compare the classification accuracy with recent state-of-the-art results from FV-CNN (Cimpoi et al., 2015), DeepTen (Zhang et al., 2017), DEP (Xue et al., 2018) and DFT (Ryu et al., 2018) for the MIT-Indoor, 4D-Light and DTD datasets in Tables 5, 6 and 8. For these experiments, the multiscale training scheme is applied to the ResNet-50 backbone networks. The details of how the multiscale scheme and backbone networks are utilized differ slightly between methods, but we expect each to use the best settings for its method. The GCFN method is trained on the ResNet-152 backbone network for the Scene-15 dataset, in comparison with the ResNet+weighted_layout (Weng et al., 2016) scheme in Table 7; although they do not use the multiscale scheme, spatial pyramid pooling is applied to use the spatially multi-level features. The PC(ResNet-50), PC(DenseNet-161) (Dubey et al., 2018) and MAMC(ResNet-50) (Sun et al., 2018) methods are evaluated against the GCFN(ResNet-50) algorithm in Table 9. Overall, the proposed GCFN method performs favorably against state-of-the-art methods. Since the random forest has been one of the most widely used classifiers for visual data, we show the generic performance of the proposed GCFN on five datasets from three visual domains. These experimental results demonstrate the generalization ability of the proposed algorithm.

Table 3: Experiments on the OfficeHome dataset with the ResNet-18 backbone.
Method           | Art  | Clipart | Product | Real-world | Average
Deep All (feat.) | 52.7 | 48.4    | 71.4    | 71.5       | 61.0
Deep All         | 55.6 | 42.4    | 70.3    | 70.9       | 59.8
D-SAMs           | 58.0 | 44.4    | 69.2    | 71.5       | 60.8
GCFN             | 61.9 | 44.8    | 75.2    | 76.8       | 64.7

Table 4: Experiments on the VLCS dataset with the AlexNet backbone.
Method  | Pascal | Labelme | Caltech | Sun  | Average
DANN    | 66.4   | 64.0    | 92.6    | 63.6 | 71.7
MetaReg | 65.0   | 60.2    | 92.3    | 64.2 | 70.4
MMD-AAE | 67.7   | 62.6    | 94.4    | 64.4 | 72.3
Epi-FCR | 67.1   | 64.3    | 94.1    | 65.9 | 72.9
GCFN    | 73.8   | 61.7    | 93.9    | 67.5 | 74.2

Table 5: Experiments on the MIT-Indoor dataset.
Method | DeepTen | DFT  | DFT+ | GCFN
Acc    | 76.2    | 78.6 | 80.2 | 80.3

Table 6: Experiments on the 4D-Light dataset.
Method | FV-CNN | Deep-Ten | GCFN
Acc    | 77.6   | 81.4     | 82.2

Table 7: Experiments on the Scene-15 dataset.
Method | ResNet+SVM | ResNet+wl | GCFN
Acc    | 92.3       | 94.5      | 94.3

Table 8: Experiments on the DTD dataset.
Method | FV-CNN | Deep-Ten | DEP  | GCFN
Acc    | 72.3   | 69.6     | 73.2 | 76.8

Table 9: Experiments on the Stanford-Dog dataset. The results are obtained with the ResNet-50 and DenseNet-161 backbones for PC, MAMC, and GCFN.
PC (ResNet) | PC (DenseNet) | MAMC (ResNet) | GCFN (ResNet) | GCFN (DenseNet)
73.4        | 83.6          | 84.8          | 86.7          | 86.8

5 CONCLUSIONS

In this paper, we propose the GCFN method, which iteratively learns a generalized feature space such that the discrimination strength of each tree classifier is increased while the correlation is suppressed. The proposed learning algorithm uses triplet sampling on the probability distributions of the split results of the decision trees.
The data with the same label as the anchor but in different nodes are likely to be positive samples, which increase strength, while the data in the same nodes as the anchor are likely to be negative samples, which suppress correlation and diffuse the data to avoid falling into local minima. We experimentally show that the proposed method outperforms baseline random forests in various experiments. Furthermore, the proposed algorithm performs favorably against state-of-the-art methods for domain generalization and visual recognition tasks.

ACKNOWLEDGEMENTS

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2019R1A4A1029800), the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1A6A3A11031193), and NSF CAREER Grant No. 1149783.
HklgYgX-5S
Official Blind Review #2
6: Weak Accept
The paper aims to improve random forest performance by iteratively constructing and feeding more powerful features into the random forest learning process, where a random subset of features is chosen from the current feature pool when making growing/stopping decisions at each split node. The idea is new and interesting, and its usefulness has been empirically shown. On the other hand, it is not clear how this additional procedure interacts with the good properties of RFs, such as being less susceptible to overfitting and bias. It would be very helpful if the paper could shed some light in this regard.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Generalized Convolutional Forest Networks for Domain Generalization and Visual Recognition ### Paper Abstract When constructing random forests, it is of prime importance to ensure high accuracy and low correlation of individual tree classifiers for good performance. Nevertheless, it is typically difficult for existing random forest methods to strike a good balance between these conflicting factors. In this work, we propose a generalized convolutional forest networks to learn a feature space to maximize the strength of individual tree classifiers while minimizing the respective correlation. The feature space is iteratively constructed by a probabilistic triplet sampling method based on the distribution obtained from the splits of the random forest. The sampling process is designed to pull the data of the same label together for higher strength and push away the data frequently falling to the same leaf nodes. We perform extensive experiments on five image classification and two domain generalization datasets with ResNet-50 and DenseNet-161 backbone networks. Experimental results show that the proposed algorithm performs favorably against state-of-the-art methods. ### Paper Keywords ["convolutional forest networks", "domain generalization", "visual recognition", "individual tree classifiers", "feature space", "data", "random forests", "prime importance", "high accuracy", "low correlation"] ### Paper Content ABSTRACTWhen constructing random forests, it is of prime importance to ensure high ac-curacy and low correlation of individual tree classifiers for good performance.Nevertheless, it is typically difficult for existing random forest methods to strikea good balance between these conflicting factors. In this work, we propose ageneralized convolutional forest networks to learn a feature space to maximizethe strength of individual tree classifiers while minimizing the respective correla-tion. The feature space is iteratively constructed by a probabilistic triplet samplingmethod based on the distribution obtained from the splits of the random forest. Thesampling process is designed to pull the data of the same label together for higherstrength and push away the data frequently falling to the same leaf nodes. Weperform extensive experiments on five image classification and two domain general-ization datasets with ResNet-18, ResNet-50 and DenseNet-161 backbone networks.Experimental results show that the proposed algorithm performs favorably againststate-of-the-art methods.1 I NTRODUCTIONRandom forests have been applied to various problems ranging from object classification (Bosch et al.,2007), object detection (Gall & Lempitsky, 2013), image segmentation (Schroff et al., 2008; Shottonet al., 2008), pedestrian detection (Marin et al., 2013; Tang et al., 2012) to semantic hashing (Qiuet al., 2018). In addition, random forests have been applied to the head pose (Fanelli et al., 2011),landmark (Cootes et al., 2012) and age (Shen et al., 2018) estimation as a regressor and the sparsefeature matching problems (Lepetit & Fua, 2006; Ozuysal et al., 2010).The main reason for the robust performance of random forests is the decision tree ensembles. Whileeach decision tree may achieve mediocre performance, the aggregated random forest performssignificantly better and less correlated if the decision trees are heterogeneous. 
The overall accuracyof the trees can be considered as strength , and the heterogeneity of the trees can be measured bycorrelation . In (Breiman, 2001), the upper bound of the generalization error of random forests isexpressed in terms of strength and correlation. High strength and low correlation are importantproperties to minimize the generalization error of a random forest. However, these two conflictingfactors make it is difficult to improve strength and lower correlation simultaneously. If the individualdecision trees in a random forest are strengthened independently, it is likely for the trees to resemblethe strongest tree in the forest, and consequently the correlation of the forest becomes high. To reducecorrelation, the decision trees must be in different shapes, and the strength of individual trees wouldnot be as high as the best decision tree.In this work, we introduce generalized learning for random forests with convolutional neural networksto address these issues. The proposed method iteratively improves the generalized ability of randomforests for higher strength and lowers correlation by probabilistic triplet sampling. For a triplet ofan anchor, a positive, and a negative sample, the loss function is designed to pull the anchor and thepositive closer and to push the anchor and the negative apart. The positive is sampled among theCorresponding author.1Published as a conference paper at ICLR 2020data with the same label with the anchor but in different leaf nodes of the decision trees, and thenegative is from the data in the same leaf nodes with the anchor. The former contributes to improvingthe classification accuracy by positive sampling, whereas the latter discourages the algorithm fromconstructing similar decision trees by negative sampling. Note that both data points of the same labeland of different labels with respect to the anchor can be in the negative training sample set. To directlyimprove the strength of the random forest only, we may design a method where negative examples aresampled among the data in the same leaf nodes and with different labels from the anchors. However,in practice, it suffers from early saturation and local minima. We describe the details of the proposedlearning algorithm and experimental results in the following sections.The main contribution of the proposed work are summarized as follows:We propose a generalization algorithm of random forests with convolutional neural networks.The proposed method minimizes the triplet loss function which 1) encourages to have same-labeled data in different leaf nodes move close and 2) pushes away data points that frequentlyfall in the same leaf nodes regardless of their class labels.We consider both strength and correlation of random forest simultaneously. These twoconflicting properties are handled by triple sampling. We demonstrate that the proposedmethod increases strength while maintaining a correlation in the experiments.We show that the proposed algorithm performs well on domain generalization and imagerecognition against the baseline random forest methods. Furthermore, we show the proposedmethod performs favorably against state-of-the-art methods for the same tasks.2 P RELIMINARIES2.1 R ANDOM FORESTSince the introduction of random forests (Breiman, 2001), numerous methods have been developedto increase the performance. 
We discuss recent and relevant methods for vision tasks in this section.In (Gall & Lempitsky, 2013), the Hough transform is used to tally probabilistic voting of detectionhypotheses of parts from a random forest for object detection. (Bosch et al., 2007) use local shapeand image appearance together to enhance the discriminative strength of the random forest. Visualbag-of-words model and gradient feature are utilized for encoding the shape and local information.In (Zhang & Suganthan, 2014), a method that uses linear discriminant analysis to increase thestrength of a decision split is proposed. However, in most cases, increasing the strength doesnot maximize the performance of the random forest because the correlation also increases. Theconflicting factors of strength and correlation in designing decision trees have been studied in recentmethods (Rodriguez-Galiano et al., 2012) to analyze the performance of random forest. (Yao et al.,2011) propose to use SVM and image patches to enhance strength while maintaining the correlationof random forest. In this method, the SVM is used as a split function of each node at a decision tree toincrease strength and use randomly selected image patches to decrease correlation. While this methodperforms well on fine-grained classification, it is not clear how this approach can be generalizedto other tasks. In contrast, we propose a generalized learning algorithm of random forests that canbe applied to various tasks using deep neural networks while considering both strength and correlation.2.2 G ENERALIZED ERROR BOUND OF RANDOM FORESTS(Breiman, 2001) shows that the generalized error PEof a random forest is bounded by:PE(1s2)s2; (1)where is the correlation and sis the strength of the random forest. The strength sis the expectationof the margin function mr()with respect to a feature Xand its label Y:s=EX;Y[mr(X;Y) ];andfmr(X;Y) =P(h(X;)=Y)maxj6=YP(h(X;)=j);whereh()is a classifier, and is a random vector used to generate decision trees. In addition,PTheta ()is the probability described the parameter for a classifier h. The strength represents the2Published as a conference paper at ICLR 2020Figure 1: Overall framework of the proposed learning algorithm. Once the random forest is con-structed with CNN features, we sample the triplets based on the probability mass function of the splitresults. The networks are then updated via the loss function of sampled triplets.expected margin of probability that the random forest makes a correct classification than a wrongclassification.To define the correlation, the raw margin function is used:frmg(;X;Y)=I (h( X;)=Y)Ih(X;)=^j(X;Y);where ^j(X;Y) = arg max j6=YP(h(X;)=j), andI()is an indicator function. When (;0)is the correlation between rmg(;X;Y)andrmg(0;X;Y), the correlation of a random forest isthe mean value of the correlations over . More details on the generalized error bound of randomforests can be found in (Breiman, 2001).From Eq. 1, it is straightforward to see that high strength of random forest reduces the upper boundof the generalization error if the correlation is suppressed (Breiman, 2001). However, these twoconditions cannot be simultaneously satisfied in practice.2.3 T RIPLET LOSSIn general, the loss function of the triplet is defined as:X(a;p;n )2Smax0;kf(a)f(p)k22kf(a)f(n)k22+b;wheref(a),f(p)andf(n)are the feature vectors of the anchor, positive, and negative data for atriplet (a;p;n )in the training set S, andbis a user-specified margin. 
The goal of the training processis to find the best feature fthat minimizes the loss function. In optimization process, the featurepositions of the anchor and the positive sample are pulled closer and those of the anchor and thenegative sample are pushed away.The triplet loss function has been used in numerous applications including face recognition (Parkhiet al., 2015; Schroff et al., 2015), image retrieval (Zhao et al., 2015), person re-identification (Chenget al., 2016; Zhang et al., 2016), and metric learning (Norouzi et al., 2012; Wang et al., 2014), toname a few. The triplet loss function uses both positive and negative samples at the same time,thus achieves improved performance. In the existing methods, positive samples are the data of thesame labels with the anchor, and the negative samples are the data with different labels. For facerecognition, person identities are the labels, and for person re-identification, the tracklet determinesthe positive and negative sample sets. Minimizing the triplet loss then gathers the same-labeled datatogether and separates differently-labeled data apart on the learned feature space, so that a classifiercan easily partition the data.Compared to the above approaches, our focus on how to sample the effective triplets to improve bothclassification ability and heterogeneity of random forests. We show that in random forest trainingsimply clustering data according to the labels does not bear the best result. Since it increases both thestrength and correlation of the random forest, and we present the supportive experimental results.3Published as a conference paper at ICLR 2020Algorithm 1 Learning algorithm for generalized convolutional forest networks.Input: Training Image I, Class label YOutput: Generalized Neural Networks N1:fori 1to maximum iterations do2:F0 Ni(I)3: Construct decision trees from F0andY4: ConstructPpandPnfrom split results by the decision trees5:S Sample triplets by PpandPn6:Ni+1 Update by minimizing triplet loss on S2.4 D OMAIN GENERALIZATIONGeneralizing models learned from one domain to another is an important topic in machine learningand computer vision. Learning deeply connected neural networks with a large-scale dataset helpsimprove the generalized ability such that CNN features trained on the ImageNet dataset are used asthegeneric representation for various visual domains (Sharif Razavian et al., 2014). However, it isstill difficult to adapt a model to different domains when large domain gaps exist. In addition, it iseven more challenging if we do not have any data from the target domain, or if we have a mixture ofsource and target domains data. Hence, numerous domain generalization methods have recently beendeveloped to tackle this problem. Several approaches have been developed including regularizationwith meta-learning (Balaji et al., 2018), domain-invariant conditional learning (Li et al., 2018b),adversarial back-propagation (Li et al., 2018a), and episodic training algorithm (Li et al., 2019).3 P ROPOSED ALGORITHMIn this section, we formulate the problem of improving strength and maintaining the correlation ofrandom forests and describe the generalized feature learning algorithm to address these two factorssimultaneously.We first present a feature learning algorithm which only considers strengthened features of randomforests. The modified feature learning algorithm on CNNs is then introduced to achieve high strengthand low correlation at the same time for the proposed Generalized Convolutional Forest Network(GCFN). 
The overall framework of the proposed GCFN method is shown in Fig. 1 and Algorithm 1.

3.1 STRENGTHENED LEARNING ALGORITHM OF RANDOM FORESTS

The classification result of an input $x$ by a random forest is determined as
$$c^* = \arg\max_c \frac{1}{T} \sum_t P\big(c \mid \ell_t(x)\big),$$
where $c$ is the label, $T$ is the number of decision trees, and $\ell_t(x)$ denotes the leaf node of tree $t$ into which $x$ falls. Here, $P(c \mid \cdot)$ is the conditional probability of $x$ belonging to class $c$. In other words, $\ell$ can be thought of as a mapping function from $x$ to a probability distribution on the label space. To maximize the strength of a decision tree, each leaf node should only contain data with a single label, i.e., the distribution should have a single entry with probability one and zeros elsewhere. Intuitively, if the data with the same labels converge together in the space, the leaf nodes are more likely to contain single-labeled data, and we can design a triplet loss that maximizes such data clustering in the learned space.

The networks are updated via probabilistic triplet sampling. To construct a triplet sample set, we randomly sample anchors $\{a_i\}$, and for each anchor $a_i$ one positive sample $p_i$ and one negative sample $n_i$ are randomly drawn according to the probability mass functions (PMFs) for the positive and negative pools of the anchor, i.e., $p_i \sim P_p(a_i)$ and $n_i \sim P_n(a_i)$. The positive pool consists of the data with the same label as the anchor $a$ but in different nodes of the tree, and the negative pool contains the data in the same node but with different labels. Given the anchor $a$, the PMFs of the positive and negative samples are defined by
$$P_p(x; a) \propto \sum_t I\big(\ell_t(x) \ne \ell_t(a) \wedge y(x) = y(a)\big)$$
and
$$P_n(x; a) \propto \sum_t I\big(\ell_t(x) = \ell_t(a) \wedge y(x) \ne y(a)\big),$$
where $I(\cdot)$ is an indicator function returning 1 if its argument is true and 0 otherwise, and $y(x)$ returns the label of $x$. Both PMFs need to be normalized to sum to one.

Figure 2: Probability mass functions for sampling triplets. Probability mass functions for positive and negative samples ($P_p$ and $P_n$, respectively) are constructed from the sample distribution relative to the anchors in leaf nodes. Squares and triangles represent training data in leaf nodes of the decision trees, and the shapes represent their labels. For an anchor (black square), the positive pool contains the data in different leaf nodes with the same label (red and blue squares), and the negative pool contains the data in the same leaf node, with either the same or different labels. The probability mass functions are the normalized histograms of the positive and negative pools.

The networks $N$ are updated using the triplet samples $S$ by minimizing the loss function in Eq. 2:
$$L = \sum_{(a,p,n) \in S} \|N(a) - N(p)\|_2^2 - \|N(a) - N(n)\|_2^2. \qquad (2)$$

The proposed random forest on the strengthened feature space shows improved performance compared to that on the canonical feature space, but the improvement saturates quickly, and sometimes the training fails to converge. The reasons can be attributed to the following:
- the correlation of the random forest increases rapidly along with the strength improvement, and thus the gain in overall performance is limited, and
- the optimization process often falls into local minima.

The data points with the same labels are pulled together, and naturally the individual decision trees become stronger, but at the same time the decision trees become similar. The growth of correlation is apparent because if two same-labeled data points are in one leaf node, they are likely to stay close and belong to the same leaf nodes in the next iteration. In practice, the local minimum issue affects performance more critically. The update process of learning the strengthened features is analogous to the steepest descent algorithm in optimization, in the sense that both positive and negative samples concentrate on strengthening the random forest.
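To make the sampling above concrete, here is a small numpy sketch of the strengthened-variant PMFs; the leaf-assignment matrix layout and the helper names are illustrative assumptions rather than the authors' implementation.

import numpy as np

def triplet_pmfs(leaf_ids, labels, anchor):
    """PMFs for strengthened triplet sampling.

    leaf_ids: (N, T) leaf index of each sample in each of T trees.
    labels:   (N,) class labels.
    anchor:   index of the anchor sample.
    Returns (P_p, P_n), each a length-N probability vector.
    """
    same_leaf = (leaf_ids == leaf_ids[anchor]).sum(axis=1)  # tree co-occurrence counts
    diff_leaf = (leaf_ids != leaf_ids[anchor]).sum(axis=1)
    same_label = labels == labels[anchor]

    p_pos = diff_leaf * same_label          # same label, different leaves
    p_neg = same_leaf * (~same_label)       # same leaf, different label
    p_pos[anchor] = p_neg[anchor] = 0       # never pair the anchor with itself
    return p_pos / max(p_pos.sum(), 1), p_neg / max(p_neg.sum(), 1)

A positive sample is then drawn, for example, with np.random.choice(len(labels), p=P_p).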
3.2 GENERALIZED LEARNING ALGORITHM OF RANDOM FORESTS

To alleviate the issues discussed above, we present a modified triplet sampling method. As the positive sampling rule is effective in enhancing the strength, we design the negative sampling rule to deal with the correlation and the local minima, as shown in Fig. 2. The PMF for negative sampling is defined as
$$P_n(x; a) \propto \sum_t I\big(\ell_t(x) = \ell_t(a)\big).$$
The role of this negative sampling is two-fold. First, it prevents the correlation of the random forest from growing quickly. If two data points belong to the same nodes of many decision trees (which causes high correlation), they are likely to be sampled as negative examples and pushed away from each other. The probability of these data points belonging to the same nodes in the next iteration then becomes smaller, and the correlation of the decision trees decreases.

Second, it helps prevent the update process from getting stuck in local minima and contributes to achieving higher strength and classification accuracy than the strengthened feature learning algorithm. Since the negative sampling diffuses the data, it operates in a way similar to a regularization term in optimization for dealing with local minima. Hard negative examples can cause the learning process to fall into a local minimum, and a recent method (Schroff et al., 2015) suggests excluding negative data points that are too close to the anchors from sampling. In this work, strengthened feature learning gives high negative-sampling probability to the hard negative examples, since they will be in the same nodes in most decision trees, but it is difficult to detach them from the anchors in the strengthened feature space. In the proposed triplet sampling strategy, the weight of the hard negative examples is spread out to other positive samples in the same nodes. Thus the method learns a generic feature space for random forests without getting stuck in local minima. The proposed GCFN is designed to address the above-discussed issues; a minimal sketch of the generalized rule follows. In the subsequent sections, we present various experimental validations of the proposed method.
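Below is a sketch of the generalized negative-sampling rule, reusing the assumed layout from the earlier snippet; note that the label check is simply dropped, so any sample sharing leaves with the anchor can be a negative.

import numpy as np

def generalized_negative_pmf(leaf_ids, labels, anchor):
    """Generalized negative PMF: samples sharing leaf nodes with the
    anchor are candidate negatives, regardless of their label."""
    same_leaf = (leaf_ids == leaf_ids[anchor]).sum(axis=1)
    same_leaf[anchor] = 0                   # exclude the anchor itself
    return same_leaf / max(same_leaf.sum(), 1)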
4 EXPERIMENTAL RESULTS

In this section, we present experimental results of the proposed method for domain generalization and visual recognition tasks using the same backbone networks. Compared with random forests based on different feature spaces, the GCFN method performs favorably on all classification tasks. We also present detailed discussions of the strength and correlation of the trained random forests and the properties of the optimized learned space. Finally, we compare the proposed random forests with state-of-the-art methods on these tasks. Due to space constraints, the details of the experimental settings are discussed in the appendix.

Table 1: Comparison of random forests on the canonical (can), strengthened (str) and generalized (gen) spaces. We measure classification accuracy with T = 1, 10, 50 on three datasets. On all datasets, random forests on the generalized feature space perform well. The best result for each number of trees is marked in bold. It is worth noticing that when the number of trees is 1, the random forest with the strengthened space performs better; as the forest grows, the correlation of this random forest increases, and the proposed method with the generalized space performs better.

        MIT-Indoor        Scene-15          4D-Light
Space   1     10    50    1     10    50    1     10    50
can     26.1  54.6  65.6  61.7  83.4  88.1  40.0  65.6  73.6
str     48.4  66.9  70.8  84.3  88.6  89.3  68.3  73.9  76.9
gen     46.0  69.3  74.0  80.1  90.2  91.6  66.9  78.3  79.7

Table 2: Comparison of random forests on the canonical (can), strengthened (str) and generalized (gen) spaces. We measure classification accuracy with T = 50 on five datasets. Random forests with the generalized space achieve the best result in all settings.

                   MIT-Indoor   Scene-15     4D-Light     DTD          Stanford-Dog
Network   Space    F     S      F     S      F     S      F     S      F     S
ResNet    can      65.6  72.5   88.1  91.7   73.6  79.7   65.1  70.9   85.1  86.2
          str      70.8  71.7   89.3  90.2   76.9  78.3   70.3  71.4   83.4  84.1
          gen      74.0  75.7   91.6  92.4   79.7  81.1   71.5  72.5   85.7  86.7
DenseNet  can      63.7  67.8   89.2  91.2   75.6  77.2   67.6  66.2   77.7  83.1
          str      71.9  72.6   89.8  90.4   77.8  81.4   69.8  70.9   83.0  84.8
          gen      74.6  77.2   91.7  92.0   79.2  81.1   70.7  72.2   85.4  86.8

4.1 EVALUATION WITH BASELINE FORESTS

We first compare the proposed feature learning algorithms for random forests in Fig. 3 of the appendix. In most cases, the random forest with the generalized features outperforms that with the strengthened features in strength, correlation, $PE^*$, and classification accuracy. Although the proposed feature learning method using the split results breaks the independence of the decision trees, these metrics remain valid for demonstrating the implications of the proposed method, since the split results are used only for the feature learning and are not used when building each decision tree from the learned features.

Although random forests on the strengthened feature space outperform canonical random forests in classification accuracy, the performance quickly reaches a plateau. On the other hand, the performance of random forests on the generalized feature space increases steadily in classification accuracy and strength, and the correlation is maintained at lower levels than on the strengthened feature space. Since the strengthened feature learning method aims to improve strength only, the strength grows rapidly at the beginning, but the correlation also gets higher. After several iterations, this approach falls into local minima, whereas the generalized feature learning method continuously reduces the upper bound of the generalization error and reaches much higher accuracy. It is worth noticing that the strength of the generalized feature space is often much higher than that of the strengthened feature space. This can be explained by the fact that the feature learning process with the strengthened feature space is easily caught in local minima, whereas the proposed method can escape or avoid them. More importantly, the correlation of the random forests on the generalized features stays similar to that of the canonical feature space while the strength is much higher; hence the generalization error $PE^*$ of the proposed method is much smaller. It can also be elucidated via the overfitting problem of the bias-variance dilemma. The proposed positive sampling scheme reduces bias, while the negative sampling process alleviates the overfitting problem. Although the random forest reduces variance through the bagging process, it can quickly become overfitted if the positive sampling scheme reduces the bias too fast.
The negative sampling process, on the other hand, regularizes the steepest reduction of the bias; thus, the proposed method enjoys the merits of a random forest with low bias and low variance for the classification task.

Tables 1 and 2 summarize the results of canonical and proposed random forests with various deep features, numbers of trees, split functions, and classification datasets. In all cases, the proposed random forest method on the generalized feature space performs significantly better than the canonical random forest schemes. In addition, the experimental results show that the proposed method is not designed for a specific task but can be applied to numerous classification tasks. This shows the generalization ability of the proposed method, along with the experimental results on domain generalization in the next subsection.

4.2 EVALUATION WITH STATE-OF-THE-ART METHODS

We evaluate the performance of the GCFN method against previous state-of-the-art methods for domain generalization and various visual recognition tasks. Here we train the GCFN method with the generalized feature learning algorithm and the split function 'S'. We use a depth value depending on the number of training samples and run sufficient iterations of the learning stage to maximize the performance of the GCFN.

The domain generalization task is evaluated in Tables 3 and 4. We compare GCFN with recent domain generalization methods such as D-SAMs (D'Innocente & Caputo, 2018), DANN (Ganin et al., 2016), MetaReg (Balaji et al., 2018), MMD-AAE (Li et al., 2018a) and Epi-FCR (Li et al., 2019). The proposed GCFN algorithm performs favorably against state-of-the-art methods on both datasets. The results show that the proposed GCFN algorithm learns a generic distribution that classifies unseen domain data.

The results in Tables 5-9 also show that GCFN compares well with state-of-the-art methods on visual recognition tasks. We compare the classification accuracy with recent state-of-the-art results from FV-CNN (Cimpoi et al., 2015), DeepTen (Zhang et al., 2017), DEP (Xue et al., 2018) and DFT (Ryu et al., 2018) on the MIT-Indoor, 4D-Light and DTD datasets in Tables 5, 6 and 8. For these experiments, the multiscale training scheme is applied to the ResNet-50 backbone networks. The details of how the multiscale scheme and backbone networks are utilized differ slightly between methods, but we expect each to use the best settings for its method. The GCFN method is trained on the ResNet-152 backbone networks for the Scene-15 dataset, in comparison with the ResNet+weighted_layout (Weng et al., 2016) scheme in Table 7. Although they do not use the multiscale scheme, spatial pyramid pooling is applied to use spatially multi-level features. The PC (ResNet-50), PC (DenseNet-161) (Dubey et al., 2018) and MAMC (ResNet-50) (Sun et al., 2018) methods are evaluated against the GCFN (ResNet-50) algorithm in Table 9. Overall, the proposed GCFN method performs favorably against state-of-the-art methods. Since the random forest has been one of the most widely used classifiers for visual data, we show the generic performance of the proposed GCFN on the five datasets of three visual domains. These experimental results demonstrate the generalization ability of the proposed algorithm.

Table 3: Experiments on the OfficeHome dataset with the ResNet-18 backbone.
Method            Art    Clipart  Product  Real-world  Average
Deep All (feat.)  52.7   48.4     71.4     71.5        61.0
Deep All          55.6   42.4     70.3     70.9        59.8
D-SAMs            58.0   44.4     69.2     71.5        60.8
GCFN              61.9   44.8     75.2     76.8        64.7

Table 4: Experiments on the VLCS dataset with the AlexNet backbone.
Method    Pascal  Labelme  Caltech  Sun    Average
DANN      66.4    64.0     92.6     63.6   71.7
MetaReg   65.0    60.2     92.3     64.2   70.4
MMD-AAE   67.7    62.6     94.4     64.4   72.3
Epi-FCR   67.1    64.3     94.1     65.9   72.9
GCFN      73.8    61.7     93.9     67.5   74.2

Table 5: Experiments on the MIT-Indoor dataset.
Method  DeepTen  DFT   DFT+  GCFN
Acc     76.2     78.6  80.2  80.3

Table 6: Experiments on the 4D-Light dataset.
Method  FV-CNN  Deep-Ten  GCFN
Acc     77.6    81.4      82.2

Table 7: Experiments on the Scene-15 dataset.
Method  ResNet+SVM  ResNet+wl  GCFN
Acc     92.3        94.5       94.3

Table 8: Experiments on the DTD dataset.
Method  FV-CNN  Deep-Ten  DEP   GCFN
Acc     72.3    69.6      73.2  76.8

Table 9: Experiments on the Stanford-Dog dataset. The results are acquired with ResNet-50 and DenseNet-161 for PC, MAMC, and GCFN.
PC (ResNet)  PC (DenseNet)  MAMC (ResNet)  GCFN (ResNet)  GCFN (DenseNet)
73.4         83.6           84.8           86.7           86.8

5 CONCLUSIONS

In this paper, we propose the GCFN method, which iteratively learns a generalized feature space such that the discrimination strength of each tree classifier is increased while the correlation is suppressed. The proposed learning algorithm uses triplet sampling on the probability distributions of the split results of the decision trees. Data with the same label as the anchor but in different nodes are likely to be positive samples, which increases strength, and data in the same nodes as the anchor are negative samples, which suppresses correlation and diffuses the data to avoid falling into local minima. We experimentally show that the proposed method outperforms baseline random forests in various experiments. Furthermore, the proposed algorithm performs favorably against state-of-the-art methods for domain generalization and visual recognition tasks.

ACKNOWLEDGEMENTS

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2019R1A4A1029800), the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1A6A3A11031193), and NSF CAREER Grant No. 1149783.

Review Title: Official Blind Review #2
Review Rating: 6: Weak Accept
Review Text: The paper aims to improve random forest performance by iteratively constructing and feeding more powerful features to the random forest learning process, where a random subset of features is chosen from the current feature pool when making growing/stopping decisions at each split node. The idea is new and interesting, and its usefulness has been empirically shown. On the other hand, it is not clear how this additional procedure interacts with the good properties of RFs, such as being less subject to overfitting and bias. It would be very helpful if the paper could shed some light in this regard.
Review Confidence:
B1gi-TVKwB
ICLR.cc/2020/Conference
2020
Learning an off-policy predictive state representation for deep reinforcement learning for vision-based steering in autonomous driving
["Daniel Graves"]
An algorithm is introduced for learning a predictive state representation with off-policy temporal difference (TD) learning that is then used to learn to steer a vehicle with reinforcement learning. There are three components being learned simultaneously: (1) the off-policy predictions as a compact representation of state, (2) the behavior policy distribution for estimating the off-policy predictions, and (3) the deterministic policy gradient for learning to act. A behavior policy discriminator is learned and used for estimating the importance sampling ratios needed to learn the predictive representation off-policy with general value functions (GVFs). A linear deterministic policy gradient method is used to train the agent with only the predictive representations while the predictions are being learned. All three components are combined, demonstrated and evaluated on the problem of steering the vehicle from images in the TORCS racing simulator environment. Steering from only images is a challenging problem where evaluation is completed on a held-out set of tracks that were never seen during training in order to measure the generalization of the predictions and controller. Experiments show the proposed method is able to steer smoothly and navigate many but not all of the tracks available in TORCS with performance that exceeds DDPG using only images as input and approaches the performance of an ideal non-vision based kinematics model.
["Predictive representations", "general value functions", "reinforcement learning", "off-policy learning", "behavior estimation"]
ABSTRACT

An algorithm is introduced for learning a predictive state representation with off-policy temporal difference (TD) learning that is then used to learn to steer a vehicle with reinforcement learning. There are three components being learned simultaneously: (1) the off-policy predictions as a compact representation of state, (2) the behavior policy distribution for estimating the off-policy predictions, and (3) the deterministic policy gradient for learning to act. A behavior policy discriminator is learned and used for estimating the importance sampling ratios needed to learn the predictive representation off-policy with general value functions (GVFs). A linear deterministic policy gradient method is used to train the agent with only the predictive representations while the predictions are being learned. All three components are combined, demonstrated and evaluated on the problem of steering the vehicle from images in the TORCS racing simulator environment. Steering from only images is a challenging problem where evaluation is completed on a held-out set of tracks that were never seen during training in order to measure the generalization of the predictions and controller. Experiments show the proposed method is able to steer smoothly and navigate many but not all of the tracks available in TORCS with performance that exceeds DDPG using only images as input and approaches the performance of an ideal non-vision based kinematics model.

1 INTRODUCTION

Predicting the future is an important topic in machine learning and is believed to be an important part of how humans process and interact with the world, cf. Clark (2013). Study of the brain shows that it is highly predictive of future events and outcomes. Despite these advances, there is still much work needed to bridge the worlds of predictive learning and control. Most predictive control approaches learn either a forward model or a backward model (Lesort et al., 2018); however, these next-step models suffer from compounding errors (Sutton, 1988). This paper introduces a predictive control architecture using one kind of off-policy predictive learning, called general value functions (GVFs) (Sutton et al., 2011; White, 2015; Modayil et al., 2012; Daniel Graves, 2019; Schaul & Ring, 2013), that learns to predict the relevant aspects of the environment, decided by an expert, from raw sensor data such as pixel data captured from a camera. GVFs answer the predictive question, "if I follow policy $\tau$, how much total cumulant will I see in the future?" The value of the GVF framework is not yet fully understood and realized despite the connections to neuroscience; but some early work has investigated its advantages for predictive representations and found that the representations are compact and general (Schaul & Ring, 2013). An objective of this research is to better understand the value that GVFs have to offer in real-world applications. Our work is based on the hypothesis that predictive representations are good for generalization (Rafols et al., 2005; Schaul & Ring, 2013). We are motivated by the belief that GVFs, like RL, could allow for behavior that is anticipative of future consequences rather than reactive to the current state.

General value functions (GVFs) are an understudied topic of interest in AI research fields and applications. There is a considerable focus on understanding how to learn these predictions but limited effort on understanding how to use them in real applications.
This is unfortunate, as, to date, research into applications of GVFs suggests they have potential in real-world robotics and its applications (Günther et al., 2016; Pilarski et al., 2011; Pilarski et al., 2013; Sutton et al., 2011; White, 2015; Modayil et al., 2012). However, several elements have been missing to apply these predictions to a larger-scale problem such as autonomous driving: (1) how to characterize the behavior policy to achieve off-policy learning when it is unknown, (2) what predictions are useful, and (3) how to use those predictions to control the vehicle. Our objective is two-fold: (1) introduce a novel architecture combining elements of predictive learning, adversarial learning and reinforcement learning, and (2) demonstrate how this architecture can be used to steer a vehicle in a racing simulator.

1.1 RELATED WORKS

Steering a vehicle is a challenging problem where the bicycle model is the classical approach (Paden et al., 2016). However, the bicycle model requires knowing the angle of the vehicle with respect to the road direction in order to compute the desired steering angle. Steering directly from images has been a long-desired goal in autonomous driving, where approaches like Salvucci & Gray (2004) advocate for a two-point model, which inspired the multi-point predictive representation proposed in this paper.

In comparison, learning to regress an image directly to the steering angle in an end-to-end manner has been a recent hot topic (Bojarski et al., 2016; Garimella et al., 2017; Chen & Huang, 2017; Sallab et al., 2017). However, a serious challenge is ensuring robustness of the controller when learning end-to-end (Bojarski et al., 2016). In particular, the agent is not typically trained on recovery-mode scenarios, and so there are generalization and data coverage issues; for this reason, the authors of Bojarski et al. (2016) introduced augmented images in training by artificially shifting and rotating them to help the network learn to recover, with some limited success.

The approach in Chen et al. (2015) learns to predict the current road angle directly from images and then uses a classical steering controller to control the vehicle. The proposed approach is similar, except that we predict future road angles and lane centeredness at different temporal horizons, which are then passed to a controller module to choose steering angles. Policy gradient with the predictive state representation is the approach used in this paper, but this can also be replaced with other controllers. This architecture allows for a degree of interpretability in the controller that is not easily achieved with end-to-end approaches (Bojarski et al., 2016), despite work on understanding and improving their robustness.

2 PREDICTIVE LEARNING

We consider an environment described by a set of states $S$, a set of actions $A$, and Markov transition dynamics with probability $P(s'|s,a)$ of transitioning to next state $s'$ after taking action $a$ from state $s$. This setting is nearly identical to a Markov Decision Process (MDP), where the only difference is the absence of a reward signal to maximize. The goal is to learn an estimator that predicts the return $G_t$ of a cumulant $c_t$ defined by
$$G_t = \sum_{k=0}^{\infty} \Big( \prod_{j=0}^{k} \gamma_{t+j+1} \Big) c_{t+k+1}, \qquad (1)$$
where $c_t$ is the cumulant signal to be predicted and $0 \le \gamma_t < 1$ is the continuation function. The general value function is defined as
$$V^{\tau}(s) = \mathbb{E}\big[G_t \mid s_t = s,\ a_{t:\infty} \sim \tau\big], \qquad (2)$$
where $\tau(a|s)$, $\gamma(s,a,s')$ and $c(s,a,s')$ are the policy, continuation and cumulant functions, respectively, that make up the predictive question (Sutton et al., 2011), and $V^{\tau}(s)$ represents the total discounted cumulant starting from state $s$ and acting under policy $\tau$. Unfortunately, there are currently no algorithms to learn the predictive question itself through interaction with the environment; thus, $\tau$, $\gamma$ and $c$ are typically defined by an expert. Cumulants are commonly scaled by a factor of $1-\gamma$ when $\gamma$ is a constant in non-episodic predictions.

A GVF can be approximated with a function approximator, such as a neural network, parameterized by $\theta$ to predict equation 1. The agent usually collects experience under a different behavior policy $\mu(a|s)$, where off-policy policy evaluation methods are needed to learn the GVF. The parameters $\theta$ are optimized with gradient descent minimizing the following loss function
$$L(\theta) = \mathbb{E}_{s \sim d_\mu,\, a \sim \tau}[\delta^2] = \mathbb{E}_{s \sim d_\mu,\, a \sim \mu}[\rho \delta^2], \qquad (3)$$
where $\delta = \mathbb{E}[y - \hat{v}(s;\theta) \mid s, a]$ is the TD error and $\rho = \frac{\tau(a|s)}{\mu(a|s)}$ is the importance sampling ratio that corrects for the difference between the target policy distribution $\tau$ and behavior distribution $\mu$. Note that only the behavior policy distribution is corrected, rather than the state distribution $d_\mu$. The target $y$ is produced by bootstrapping a prediction (Sutton, 1988) of the value of the next state following target policy $\tau$, given by
$$y = \mathbb{E}_{s' \sim P}\big[c + \gamma \hat{v}(s';\theta) \mid s, a\big], \qquad (4)$$
where $y$ is a bootstrapped prediction using recent parameters $\theta$ that are assumed constant in the gradient computation. Some approaches use older parameters of the network to make the bootstrapped prediction in order to improve the stability of learning (Mnih et al., 2013). However, this was not found to be necessary when learning GVFs, since the target policy is fixed and the learning is simply off-policy policy evaluation. $d_\mu$ is the state distribution of the behavior policy, and the time subscripts on $c$ and $\gamma$ have been dropped to simplify notation.

The gradient of the loss function in equation 3 is given by
$$\nabla_\theta L(\theta) = -\mathbb{E}_{s \sim d_\mu,\, a \sim \mu}\big[\rho\, \delta\, \nabla_\theta \hat{v}(s;\theta)\big]. \qquad (5)$$

An alternative to using importance sampling ratios directly is to apply importance resampling (Schlegel et al., 2019). With importance resampling, a replay buffer $D$ of size $N$ is required, and the gradient is multiplied by the average importance sampling ratio of the samples in the buffer, $\bar{\rho} = \frac{1}{N}\sum_{i=1}^{N} \rho_i$. The importance resampling gradient is given by
$$\nabla_\theta L(\theta) = -\bar{\rho}\, \mathbb{E}_{s,a \sim D}\big[\delta\, \nabla_\theta \hat{v}(s;\theta)\big], \qquad (6)$$
where the transitions in the replay buffer are sampled according to $D(a_i, s_i) = \frac{\rho_i}{\sum_{j=1}^{N} \rho_j}$ for the transition with state $s_i$ and action $a_i$ in replay buffer $D$. This approach is proven to have lower variance than equation 5 with linear function approximation (Schlegel et al., 2019). An efficient data structure for the replay buffer is the SumTree used in prioritized experience replay (Schaul et al., 2016). This is a natural approach to learning predictions with deep reinforcement learning, since sampling a mini-batch from the replay buffer helps to decorrelate sample updates in deep function approximation (Mnih et al., 2013).
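To make the update in equation 6 concrete, here is a minimal numpy sketch of one resampled TD step for a linear GVF; the buffer layout, the linear features, and all names are illustrative assumptions rather than the authors' implementation.

import numpy as np

def resampled_td_step(buffer, rho, theta, alpha=0.01, batch=32):
    """One importance-resampling TD(0) update for a linear GVF v(s) = theta . x(s).

    buffer: list of (x, c, gamma, x_next) transitions (feature vectors).
    rho:    array of importance ratios tau(a|s) / mu(a|s), one per transition.
    """
    probs = rho / rho.sum()                   # resampling distribution over the buffer
    idx = np.random.choice(len(buffer), size=batch, p=probs)
    rho_bar = rho.mean()                      # bias-correction term from Eq. 6
    for i in idx:
        x, c, gamma, x_next = buffer[i]
        delta = c + gamma * theta @ x_next - theta @ x   # TD error
        theta += alpha * rho_bar * delta * x             # semi-gradient step
    return theta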
A behavior policy needs to be defined to adequately explore the environment when learning GVFs. This may be an evolving policy that is learned by RL, a random policy for exploring the environment, or a human driver collecting data safely. It is common, especially in the case of human drivers, for the behavior policy distribution $\mu(a|s)$ of the agent to be unknown. We propose an algorithm using the density ratio trick to learn the behavior policy distribution in an adversarial way. It is well suited to problems with low-dimensional action spaces, like autonomous driving.

The ratio of two probability densities can be expressed as a ratio of discriminator class probabilities that distinguish samples from the two distributions. Let us define a probability density function $q(a|s)$ for the distribution to compare against the behavior distribution $\mu(a|s)$, and class labels $y = +1$ and $y = -1$ that denote the class of the distribution from which a state-action pair was sampled: $\mu(a|s)$ or $q(a|s)$, respectively. A discriminator $g(a,s)$ is learned that distinguishes state-action pairs from these two distributions using the cross-entropy loss. The ratio of the densities can then be computed using only the discriminator $g(a,s)$:
$$\frac{\mu(a|s)}{q(a|s)} = \frac{p(a|s, y=+1)}{p(a|s, y=-1)} = \frac{p(y=+1|a,s)\, p(a|s) / p(y=+1)}{p(y=-1|a,s)\, p(a|s) / p(y=-1)} = \frac{p(y=+1|a,s)}{p(y=-1|a,s)} = \frac{g(a,s)}{1 - g(a,s)}. \qquad (7)$$
Here we assume that $p(y=+1) = p(y=-1)$. From this result, we can estimate $\mu(a|s)$ with $\hat{\mu}(a|s)$ as follows:
$$\hat{\mu}(a|s) = \frac{g(a,s)}{1 - g(a,s)}\, q(a|s), \qquad (8)$$
where $q(a|s)$ is a known distribution over actions conditioned on state. A uniform distribution over the actions is independent of state and has the advantage of being effective and easy to implement.
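Below is a small illustrative sketch of the density-ratio estimate in equation 8 for a bounded 1-D action (e.g., steering in [-1, 1]); the helper name and the clipping are assumptions made for the example. Training $g$ amounts to binary classification on pairs $(s, a)$ labeled $+1$ when $a$ came from the logged behavior and $-1$ when $a$ is redrawn from $q(a|s)$, as in steps 14-16 of Algorithm 1 below.

import numpy as np

def mu_hat(g_prob, a_low=-1.0, a_high=1.0):
    """Estimate mu(a|s) from the discriminator output via Eq. 8.

    g_prob: discriminator probability g(a, s) that the pair came from
            the behavior policy (label y = +1).
    q is uniform on [a_low, a_high], so q(a|s) = 1 / (a_high - a_low).
    """
    q = 1.0 / (a_high - a_low)
    g_prob = np.clip(g_prob, 1e-6, 1.0 - 1e-6)  # numerical safety
    return g_prob / (1.0 - g_prob) * q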
When linear policy function approximation is used, the controllerlearned is essentially equivalent to a prediction-based PID controller only where there is a deepmapping from images captured by a camera to predictions of the future used to control the vehicle.Using predictions for PID control is not new; this approach can be used to tackle problems with hightemporal delay between the error signal and the corrective actions. One can also add integral andderivative terms of the predictions to the state space representation of the agent.Action-value Q((s);a)and policy((s))networks are trained according to DPG Silver et al.(2014) where the action value approximates the expected discounted return, i.e. Q((s);a) =E[P1i=0irt+i+1]. In addition, a policy network ((s))produces an action according to thecurrent predictive state representation (s)that maximizes the expected discounted return. Because4Under review as a conference paper at ICLR 2020of the interesting connection between DPG and PID control when linear function approximation isused, the policy network is parameterized asat=((st)) = |(st) (9)where is a matrix denoting parameters to be learned by DPG. They also represent the gaincoefficients for a proportional controller which allows for interpretability of the learned parameters.The action-value network Q((s);a)is given by a small neural network that maps the predictivestate representations (s)and actionato an estimate of the action-value in the current state sifaistaken and the optimal policy followed thereafter.In autonomous driving, knowing the road curvature ahead can be informative for making decisionsto maintain lane centeredness in a tight turn. In Figure 1, it is demonstrated how future off-policypredictions of lane centeredness can be used to predict the deviation (or error) between the truecenter of lane and the projected lane centeredness along the current direction of the vehicle. Thesepredictions must be off-policy because if they were on-policy they would tell us no information toinform the agent how much adjustment is needed to make corrective actions to stay in the center ofthe lane.Figure 1: Future predictions capturing information about the shape of the road ahead(a) Lane centeredness (b) Road angleFigure 2: (a) the lane centeredness position is the distance from the center of the lane to the centerof the vehicle. (b) the road angle is the angle between the direction of the vehicle and the directionof the road.The lane centeredness and road angle are two kinds of predictions that are useful in steering thevehicle as depicted in Figure 2. We represent the road curvature as a set of predictions of future lanecenteredness and road angles at different time horizons. The predictive state representation isgiven by the feature vector (s)(st) = [V(st;=0);:::V(s;=m);V(s;=0);:::V(s;=m)] (10)whereV(st;=0)is a GVF prediction under target policy , cumulant(lane centeredness)and continuation function 0whileV(st;=0)is a GVF prediction of cumulant (road angle).There aremnumber offunctions for predicting lane centeredness and mnumber offunctionsfor predicting road angle at different temporal horizons. Because the predictions represent deviationsfrom the desired lane centeredness and road angle, the policy network of DPG can be linear.4 E XPERIMENTS IN TORCSThe predictive learning approach is applied to the challenging problem of learning to steer a vehiclein the TORCS Wymann et al. (2013) racing environment. A kinematic-based steering approach Padenet al. 
(2016) is used as a baseline for all the experiments. The reward in the TORCS environment is5Under review as a conference paper at ICLR 2020given byrt=vtcostwherevtis the speed of the vehicle in km/h, tis the angle between the roaddirection and the vehicle direction. A simple scaling factor of 0:01was applied to the reward in orderto reduce the variance of the action-value. Notice that this reward doesn’t force the agent to stay inthe center of the lane; however, this strategy is likely a good idea to achieve high total reward on allthe test tracks. The target speed of all the agents is 50 km/h where vehicle speed was controlled by aseparate manually tuned PID controller.The agents were trained on 85% of the 40 tracks available in TORCS. The rest of the trackswere used for testing (6 in total); all of the results in this section are on the testing tracks whichwere never presented to the agents during training. This was done to measure the generalizationperformance of the policies learned by the agents which can be a serious problem in RL, cf. Zhaoet al. (2019)Farebrother et al. (2018). Tracks where the road was not level were excluded since theyproved to be challenging likely because a non-zero action was required to keep the vehicle centered.4.1 T ESTRESULTSIn all the figures, blue corresponds to DDPG-Image with only image and speed as input, greencorresponds to DDPG-ImageLowDim with images, current speed, andprovided as input, orangecorresponds to the new GVF-DPG approach with image, current speed and last two actions sincethe target policy of the prediction depends on the last action taken, and red corresponds to theclassical front wheel steering model. A history of two images were provided to each method. Asupervised method of predicting the current lane centeredness and road angle directly from the imagewas attempted with negative results: the controller learned was consistently unstable. Results wererepeated over 5 runs. The total score achieved on the test tracks by the agents is plotted in Figure 3over 1M training iterations.It is clear that DDPG-ImageLowDim performs best of the learned methods on all test tracks. Thislow dimensional information provides ground truth to the agent which may not always be available inall locations; however, it makes a good baseline target for what we hope our proposed method couldachieve through generalization. With DDPG-Image, it is clear that it does not learn to steer fromimages very well.Figure 3: Test scores (accumulated reward) during trainingIt could be argued that DDPG-Image may improve with more iterations; however this is likely notthe case. The reason is shown in Figure 4 where DDPG-Image consistently converges to a solutionthat oscillates between the extreme left and right steering actions very rapidly. This oscillation is soextreme that the agent is unable to achieve the target speed of 50 km/h; instead it travels at 40 km/hon average for all the test tracks. The performance gap also suggests that DDPG-ImageLowDim maybe relying more on the low dimensional lane information rather than the image.To highlight how uncomfortable the two DDPG agents drive compared to the GVF-DPG agent, Figure4 shows the standard deviation of the change in action during 1M iterations of training. 
On most6Under review as a conference paper at ICLR 2020tracks, it is apparent that the GVF-DPG approach controls the vehicle more smoothly than the otherlearned methods.Figure 4: Standard deviation of the change in action during trainingThe performance of the individual test tracks is given in the following Figure 5. The GVF-DPGapproach does not steer successfully on all test tracks: it fails immediately on wheel-2, part-waythrough on a-wheelway, and drives well on most of alpine-2. The DDPG-Image fails to completedirt-4, wheel-2, spring, and a-speedway. Finally, DDPG-ImageLowDim successfully completes allthe test tracks; however, the agent has a strong bias to the left side of the track. The GVF-DPG agentoften follows the classical controller relatively well except on the tracks where the agent fails. Thissuggests that using a predictive representation of lane centeredness and road angle achievescloser performance to a classical controller than an end-to-end learned approach. However, morework is needed to improve the generalization abilities of the approach.The learning curves for the predictors and the policy gradient agents is given in the following Figure 6.It is interesting that the learning curves of the action-value function estimator of the GVF-DPG agentis much smaller than the other agents and quite smooth. The reason is believed to be because thepredictive state representation is constrained to values between [1;+1]acting as a sort of regularizerto the state representation of the agent. The learning curve of the DDPG-ImageLowDim howevereventually approaches the low error of the GVF-DPG agent. The predictors converge relativelyquickly as shown in Figure 6(b). The behavior estimator in Figure 6(c) stabilizes relatively quicklyduring learning as well; it is postulated that the error does not decrease further since the behaviorpolicy is changing slowly over time and the behavior estimator must track this change.5 C ONCLUSIONSA method of learning a predictive representation off-policy is presented where the behavior policydistribution is estimated via an adversarial method employing the density ratio trick. It is demonstratedthat deep off-policy predictions can be learned with a deep behavior policy estimation to predict futurelane centeredness and road angles from images. The predictive representation is learned with lineardeterministic policy gradient. All of these components are combined together in a framework calledGVF-DPG and learned simultaneously on the challenging problem of steering a vehicle in TORCSfrom only images. The results show that the GVF-DPG is able to steer smoothly with less change inaction and achieve better performance than DDPG from only images and similar performance to thekinematics model in several but not all of the test tracks. 
This work is also a demonstration that wecan learn off-policy predictions, characterize the behavior policy and learn the controller all at thesame time despite the challenges of the behavior policy evolving with the agent and the predictivestate representation changing over time.Our work demonstrates that a learned prediction-based vision-only steering controller could po-tentially be viable with more work on improving the generalizability of the off-policy predictions.7Under review as a conference paper at ICLR 2020(a) Alpine-2 (b) Evo-2-r(c) Dirt-4 (d) Wheel-2(e) Spring (f) A-SpeedwayFigure 5: The lane centeredness position on the (a) alpine-2, (b) evo-2-r, (c) dirt-4, (d) wheel-2, (e)spring, and (f) a-speedway tracks in TORCS.(a) Action-Value (b) Predictors (c) BehaviorFigure 6: Log-loss learning curves for the (a) Q-values of the DPG agents, (b) mean squared TD(temporal difference) errors of the GVF predictors, and (c) MSE of the behavior model estimatorThis work supports the predictive state representation hypothesis in Rafols et al. (2005) that deeppredictions can improve the generalization of RL to new road environments when using only im-ages as input. For future work, we hope to study how to learn the question for the predictive staterepresentation: ,, andc. Moreover, because the behavior policy is unknown and estimated, ourresults suggest that collecting real-world human driving to train predictions off-policy without theneed for a simulator could be a viable approach to steering a vehicle from images. This is potentiallyadvantageous since the human driver can explore the road safely.8Under review as a conference paper at ICLR 2020
H1eZawypYB
Official Blind Review #1
1: Reject
This paper combines predictive state representations (PSRs) with DPG and tests the overall performance on a simulated TORCS task. Specifically, the authors propose to train a generalized value function (GVF) where the cumulant is how far the car is from the center of the road, as well as the angle of the car. To train the GVF, the authors propose to perform off-policy learning with importance resampling. To estimate the ratios, the authors propose to learn a discriminator that predicts which policy the actions came from, and show how to use this discriminator to estimate the likelihood ratio. For DPG, a small neural network is used for the Q function, and a linear policy is used. The authors demonstrate that their method outperforms DDPG-from-images on held-out TORCS racing tasks, while not quite reaching the performance of DDPG-from-ground-truth state.

Using PSRs is a promising direction of work, but I found the contribution of this work rather obfuscated. It seems like there are two sources of novelty: (1) the use of a different number of continuation functions and (2) using a discriminator to estimate the importance ratio, but no details were given about these implementations. The paper would be greatly strengthened by reducing the amount of time spent summarizing prior work and more thoroughly describing these contributions. Studying one of these contributions in more detail, rather than analyzing the performance of the final policies on a simulated task, would also help make the hypothesis and insight of this paper more clear. In particular, I felt that these questions were left unanswered:
1. What parts of the algorithm are important to the good final performance?
2. How important is it to use different discount factors?
3. How does this work relate to prior work on off-policy evaluation?
4. How important was it for a target policy to *not* be used?
5. How was the discriminator trained (e.g. hyperparameters)?

The authors use ground-truth information when training. I am surprised that a supervised learning method trained with the ground-truth information was unable to recover the performance of DDPG-ImageLowDim. Can more details be given on this training procedure? Is it difficult to train a model to predict the current angle given the image? I find this a bit surprising. Also, if an OU process was used for exploration, how were the behavior policy likelihoods estimated using one-step backups? Since an OU process isn't Markovian, it seems like this would require trajectory-level likelihoods, rather than the one-step likelihoods suggested by Equation 3.

Minor comments:
- Why don't the other methods receive the last two actions?
- The authors should cite [1, 2] when referencing PSRs.
- Please include legends and axis labels in the figures.
- I don't understand the sentence, "These predictions must be off-policy because if they were on-policy they would tell us no information to inform the agent how much adjustment is needed to make corrective actions to stay in the center of the lane." In particular, why wouldn't on-policy predictions be more useful to an agent?

[1] Singh et al. Predictive state representations: A new theory for modeling dynamical systems. UAI. 2004.
[2] Littman et al. Predictive representations of state. NeurIPS. 2001.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Learning an off-policy predictive state representation for deep reinforcement learning for vision-based steering in autonomous driving ### Paper Abstract An algorithm is introduced for learning a predictive state representation with off-policy temporal difference (TD) learning that is then used to learn to steer a vehicle with reinforcement learning. There are three components being learned simultaneously: (1) the off-policy predictions as a compact representation of state, (2) the behavior policy distribution for estimating the off-policy predictions, and (3) the deterministic policy gradient for learning to act. A behavior policy discriminator is learned and used for estimating the important sampling ratios needed to learn the predictive representation off-policy with general value functions (GVFs). A linear deterministic policy gradient method is used to train the agent with only the predictive representations while the predictions are being learned. All three components are combined, demonstrated and evaluated on the problem of steering the vehicle from images in the TORCS racing simulator environment. Steering from only images is a challenging problem where evaluation is completed on a held-out set of tracks that were never seen during training in order to measure the generalization of the predictions and controller. Experiments show the proposed method is able to steer smoothly and navigate many but not all of the tracks available in TORCS with performance that exceeds DDPG using only images as input and approaches the performance of an ideal non-vision based kinematics model. ### Paper Keywords ["Predictive representations", "general value functions", "reinforcement learning", "off-policy learning", "behavior estimation"] ### Paper Content ABSTRACTAn algorithm is introduced for learning a predictive state representation withoff-policy temporal difference (TD) learning that is then used to learn to steer avehicle with reinforcement learning. There are three components being learnedsimultaneously: (1) the off-policy predictions as a compact representation of state,(2) the behavior policy distribution for estimating the off-policy predictions, and (3)the deterministic policy gradient for learning to act. A behavior policy discriminatoris learned and used for estimating the important sampling ratios needed to learnthe predictive representation off-policy with general value functions (GVFs). Alinear deterministic policy gradient method is used to train the agent with onlythe predictive representations while the predictions are being learned. All threecomponents are combined, demonstrated and evaluated on the problem of steeringthe vehicle from images in the TORCS racing simulator environment. Steeringfrom only images is a challenging problem where evaluation is completed on aheld-out set of tracks that were never seen during training in order to measure thegeneralization of the predictions and controller. 
Experiments show the proposedmethod is able to steer smoothly and navigate many but not all of the tracksavailable in TORCS with performance that exceeds DDPG using only images asinput and approaches the performance of an ideal non-vision based kinematicsmodel.1 I NTRODUCTIONPredicting the future is an important topic in machine learning and is believed to be an important partof how humans process and interact with the world, cf Clark (2013). Study of the brain shows that itis highly predictive of future events and outcomes. Despite these advances, there is still much workneeded to bridge the worlds of predictive learning and control. Most predictive control approacheslearn either a forward model or a backward model Lesort et al. (2018) however these next-stepmodels suffer from compounding errors Sutton (1988). This paper introduces a predictive controlarchitecture using one kind of off-policy predictive learning, called general value functions (GVFs)Sutton et al. (2011)White (2015)Modayil et al. (2012)Daniel Graves (2019)Schaul & Ring (2013),that learns to predict the relevant aspects of the environment, decided by an expert, from raw sensordata such as pixel data captured from a camera. GVFs answer the predictive question, "if I followpolicy, how much total cumulant will I see in the future?" The value of the GVF framework isnot yet fully understood and realized despite the connections to neuroscience; but some early workhas investigated its advantages for predictive representations and found that the representations arecompact and general Schaul & Ring (2013). An objective of this research is to better understand thevalue that GVFs have to offer in real-world applications. Our work is based on the hypothesis thatpredictive representations are good for generalization Rafols et al. (2005) Schaul & Ring (2013). Weare motivated by the belief that GVFs, like RL, could allow for behavior that is anticipative of futureconsequences rather than reactive to the current state.General value functions (GVFs) are an understudied topic of interest in AI research fields andapplications. There is a considerable focus on understanding how to learn these predictions but1Under review as a conference paper at ICLR 2020limited efforts on understanding how to use them in real applications. This is unfortunate, as to-date, research into applications of GVFs suggest they have potential in real world robotics and itsapplications Günther et al. (2016)Pilarski et al. (2011)Pilarski et al. (2013)Sutton et al. (2011)White(2015)Modayil et al. (2012). However, several elements have been missing to apply these predictionsto a larger scale problem such as autonomous driving: (1) how to characterize the behavior policy toachieve off-policy learning when it is unknown, (2) what predictions are useful, and (3) how to usethose predictions to control the vehicle. Our objective is two-fold: (1) introduce a novel architecturecombining elements of predictive learning, adversarial learning and reinforcement learning, and (2)demonstrate how this architecture can be used to steer a vehicle in a racing simulator.1.1 R ELATED WORKSSteering a vehicle is a challenging problem where the bicycle model is the classical approach Padenet al. (2016). However, the bicycle model requires knowing the angle of the vehicle with respectto the road direction in order to compute the desired steering angle. 
Steering directly from imageshas been a long desired goal in autonomous driving where approaches like Salvucci & Gray (2004)advocate for a two point model which inspired the multi-point predictive representation proposed inthis paper.In comparison, learning to regress an image directly to the steering angle in an end-to-end mannerhas been a recent hot topic Bojarski et al. (2016)Garimella et al. (2017)Chen & Huang (2017)Sallabet al. (2017). However, a serious challenge is ensuring robustness of the controller when learningend-to-end Bojarski et al. (2016). In particular, the agent is not typically trained on recovery modescenarios and so there are generalization and data coverage issues; for this reason, authors Bojarskiet al. (2016) introduced augmented images in training by artificially shifting and rotating them tohelp the network learn to recover with some limited success.The approach in Chen et al. (2015) learns to predict the current road angle directly from imagesand then uses a classical steering controller to control the vehicle. The proposed approach is similarexcept we predict future road angles and lane centeredness at different temporal horizons which isthen passed to a controller module to choose steering angles. Policy gradient with the predictive staterepresentation is the approach used in this paper but this can also be replaced with other controllers.This architecture allows for a degree of interpretability in the controller that is not easily achievedwith end-to-end approaches Bojarski et al. (2016) despite work on understanding and improving itsrobustness.2 P REDICTIVE LEARNINGWe consider an environment described by a set of states S, a set of actions A, and Markov transitiondynamics with probability P(s0js;a)of transitioning to next state s0after taking action afrom states. This setting is nearly identical to a Markov Decision Process (MDP) where the only difference isthe absence of a reward signal to maximize. The goal is to learn an estimator that predicts the returnGtof a cumulant ctdefined byGt1Xk=0(kYj=0t+j+1)ct+k+1 (1)wherectis a cumulant signal to be predicted, and 0t<1is the continuation function. Thegeneral value function is defined asV(s) =E[Gtjst=s;at=a;at+1:T1;T] (2)where(ajs),(s;a;s0), andc(s;a;s0)are the policy, continuation and cumulant functions, respec-tively, that make up the predictive question Sutton et al. (2011) where V(s)represents the totaldiscounted cumulant starting from state sand acting under policy . Unfortuantely, there are currentlyno algorithms to learn the predictive question through interaction with the environment; thus, ,,andcare typically defined by an expert. Cumulants are commonly scaled by a factor of 1whenis a constant in non-episodic predictions.A GVF can be approximated with a function approximator, such as a neural network, parameterizedbyto predict equation 1. The agent usually collects experience under a different behavior policy2Under review as a conference paper at ICLR 2020(ajs)where off-policy policy evaluation methods are needed to learn the GVF. The parameters are optimized with gradient descent minimizing the following loss functionL() =Esd;a[2] =Esd;a[2] (3)where=E[y^v(s;)js;a]is the TD error and =(ajs)(ajs)is the importance sampling ratio tocorrect for the difference between the target policy distribution and behavior distribution . Notethat only the behavior policy distribution is corrected rather than the state distribution d. 
The target y is produced by bootstrapping a prediction Sutton (1988) of the value of the next state following the target policy, given by

$y = \mathbb{E}_{s' \sim P}\big[ c + \gamma \hat{v}(s'; \theta) \mid s, a \big]$  (4)

where y is a bootstrapped prediction using recent parameters θ that are assumed constant in the gradient computation. Some approaches use older parameters of the network to make a bootstrapped prediction to improve stability in the learning Mnih et al. (2013). However, this was not found to be necessary when learning GVFs, since the target policy is fixed and the learning is simply off-policy policy evaluation. d_μ is the state distribution of the behavior policy, and the time subscript on c and γ has been dropped to simplify notation.

The gradient of the loss function (3) is given by

$\nabla_{\theta} L(\theta) = -\mathbb{E}_{s \sim d_{\mu},\, a \sim \mu}\big[ \rho\, \delta\, \nabla_{\theta} \hat{v}(s;\theta) \big]$  (5)

An alternative approach to using importance sampling ratios is to apply importance resampling Schlegel et al. (2019). With importance resampling, a replay buffer D of size N is required, and the gradient is multiplied by the average importance sampling ratio of the samples in the buffer, $\bar{\rho} = \frac{1}{N}\sum_{i=1}^{N} \rho_i$. The importance resampling gradient is given by

$\nabla_{\theta} L(\theta) = -\bar{\rho}\; \mathbb{E}_{s,a \sim D}\big[ \delta\, \nabla_{\theta} \hat{v}(s;\theta) \big]$  (6)

where the transitions in the replay buffer are sampled according to $D(a_i, s_i) = \frac{\rho_i}{\sum_{j=1}^{N} \rho_j}$ for the transition with state s_i and action a_i in replay buffer D. This approach is proven to have lower variance than Equation 5 with linear function approximation Schlegel et al. (2019). An efficient data structure for the replay buffer is the SumTree used in prioritized experience replay Schaul et al. (2016). This is a natural approach to learning predictions with deep reinforcement learning, since sampling a minibatch from the replay buffer helps to decorrelate sample updates in deep function approximation Mnih et al. (2013).

A behavior policy needs to be defined to adequately explore the environment when learning GVFs. This may be an evolving policy that is learned by RL, a random policy for exploring the environment, or a human driver collecting data safely. It is common, especially in the case of human drivers, for the behavior policy distribution μ(a|s) of the agent to be unknown. We propose an algorithm using the density ratio trick to learn the behavior policy distribution in an adversarial way. It is well suited for problems with low-dimensional action spaces like autonomous driving.

The ratio of two probability densities can be expressed as a ratio of discriminator class probabilities that distinguish samples from the two distributions. Let us define a probability density function η(a|s) for the distribution to compare to the behavior distribution μ(a|s), and class labels y = +1 and y = −1 that denote the class of the distribution that the state–action pair was sampled from: μ(a|s) or η(a|s), respectively. A discriminator g(a,s) is learned that distinguishes state–action pairs from these two distributions using the cross-entropy loss. The ratio of the densities can be computed using only the discriminator g(a,s):

$\frac{\mu(a|s)}{\eta(a|s)} = \frac{p(a|s, y=+1)}{p(a|s, y=-1)} = \frac{p(y=+1|a,s)\, p(a|s) / p(y=+1)}{p(y=-1|a,s)\, p(a|s) / p(y=-1)} = \frac{p(y=+1|a,s)}{p(y=-1|a,s)} = \frac{g(a,s)}{1 - g(a,s)}$  (7)

Here we assume that p(y=+1) = p(y=−1). From this result, we can estimate μ(a|s) with $\hat{\mu}(a|s)$ as follows:

$\hat{\mu}(a|s) = \frac{g(a,s)}{1 - g(a,s)}\, \eta(a|s)$  (8)

where η(a|s) is a known distribution over actions conditioned on state. The uniform distribution over the actions is independent of state and has the advantage of being effective and easy to implement.
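As an illustration of Equations 7 and 8, the following sketch (our own, with hypothetical names) turns a sigmoid-output discriminator into a behavior-density estimate and an importance sampling ratio; it assumes a 1-D steering action in [−1, 1], so the uniform density η(a|s) is 0.5 everywhere.

import torch

def estimate_behavior_density(g, state, action, eta_density=0.5):
    # g(state, action) outputs p(y=+1 | a, s): the probability that the pair
    # came from the behavior distribution mu rather than from eta
    p = g(state, action).clamp(1e-6, 1.0 - 1e-6)   # guard the division below
    ratio = p / (1.0 - p)                          # mu(a|s) / eta(a|s), Eq. 7
    return ratio * eta_density                     # mu_hat(a|s), Eq. 8

def importance_ratio(tau_density, state, action, g):
    # rho = tau(a|s) / mu_hat(a|s), used to reweight the off-policy updates
    return tau_density / estimate_behavior_density(g, state, action)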
The algorithm for training a GVF off-policy with an unknown behavior distribution μ(a|s) is given by Algorithm 1.

Algorithm 1 Off-policy GVF training algorithm with unknown μ(a|s)
1: Initialize v̂, g(a,s), and replay memory D
2: Observe initial state s_0
3: for t = 0, T do
4:   Sample action a_t from the unknown μ(a|s)
5:   Execute action a_t and observe state s_{t+1}
6:   Compute cumulant c_{t+1} = c(s_t, a_t, s_{t+1})
7:   Compute continuation γ_{t+1} = γ(s_t, a_t, s_{t+1})
8:   Compute behavior density value μ̂(a_t|s_t) according to Equation 8
9:   Compute importance sampling ratio ρ_t = τ(a_t|s_t) / μ̂(a_t|s_t)
10:  Store transition (s_t, a_t, c_{t+1}, γ_{t+1}, s_{t+1}, ρ_t) in D
11:  Sample random minibatch A of transitions (s_i, a_i, c_{i+1}, γ_{i+1}, s_{i+1}) from D according to probability ρ_i / Σ_{j=1}^{n} ρ_j
12:  Compute y_i = c_{i+1} + γ_{i+1} v̂(s_{i+1}; θ) for minibatch A
13:  Perform gradient descent step on (y_i − v̂(s_i; θ))² according to Equation 6 for minibatch A
14:  Sample random minibatch B of state–action pairs (s_i, a_i) from D according to a uniform probability and assign label y = +1 to each pair
15:  Randomly select half the samples in minibatch B and temporarily replace the label with y = −1 and the action with a_t ∼ η(a|s)
16:  Update behavior discriminator g(a,s) with the modified minibatch B
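Below is a minimal PyTorch sketch of one iteration of the two learning updates in Algorithm 1 (lines 11–16). It is illustrative only: v_hat and g stand for the GVF network and the sigmoid-output behavior discriminator, and replay is a hypothetical ρ-proportional (e.g., SumTree-backed) buffer; none of these names come from the paper.

import torch
import torch.nn.functional as F

def gvf_train_step(v_hat, g, opt_v, opt_g, replay):
    # Lines 11-13: minibatch drawn with probability rho_i / sum_j rho_j
    s, a, c, gamma, s_next = replay.sample_proportional()
    with torch.no_grad():
        y = c + gamma * v_hat(s_next).squeeze(-1)  # bootstrapped target, Eq. 4
        rho_bar = replay.mean_rho()                # average ratio over the buffer
    delta = y - v_hat(s).squeeze(-1)
    loss_v = rho_bar * (delta ** 2).mean()         # Eq. 6, importance resampling
    opt_v.zero_grad()
    loss_v.backward()
    opt_v.step()
    # Lines 14-16: behavior pairs get label 1 (y = +1); half are replaced with
    # actions drawn from the known uniform eta(a|s) and get label 0 (y = -1)
    s_b, a_b = replay.sample_uniform()
    half = a_b.shape[0] // 2
    a_b[half:] = torch.rand_like(a_b[half:]) * 2.0 - 1.0  # a ~ Uniform(-1, 1)
    labels = torch.ones(a_b.shape[0])
    labels[half:] = 0.0
    loss_g = F.binary_cross_entropy(g(s_b, a_b).squeeze(-1), labels)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()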
3 PREDICTIVE CONTROL

Let us consider an MDP with a predictive representation ψ(s) mapping state s to predictions ψ(s). The reward for the problem is denoted as r. The problem is to find a policy π(a|ψ(s)) that maximizes future return, or accumulated discounted reward. We hypothesize that this approach should be easier to train than learning π(a|s) directly for the following reasons:
- the target policy τ of the predictions is fixed, making for faster learning
- the compact abstraction ψ(s) allows for simple (possibly even linear) policy and action-value functions
- the cumulant signal c may only be available during training or is expensive to obtain

The last advantage is particularly important in autonomous driving, where localization techniques often require a collection of expensive sensors and high-definition map data that is not always available or easily scalable to a fleet of autonomous vehicles. In this way, one can train a neural network to map images captured by inexpensive cameras to predictions of lane centeredness and road angle captured by any number of highly accurate but expensive localization approaches, with the hope of generalizing features for lane control.

The agent learns to steer with deterministic policy gradient (DPG) Silver et al. (2014) using the predictions as the state of the agent. When linear policy function approximation is used, the controller learned is essentially equivalent to a prediction-based PID controller, only where there is a deep mapping from images captured by a camera to predictions of the future used to control the vehicle. Using predictions for PID control is not new; this approach can be used to tackle problems with high temporal delay between the error signal and the corrective actions. One can also add integral and derivative terms of the predictions to the state space representation of the agent.

Action-value Q(ψ(s), a) and policy π(ψ(s)) networks are trained according to DPG Silver et al. (2014), where the action value approximates the expected discounted return, i.e. $Q(\psi(s), a) = \mathbb{E}[\sum_{i=0}^{\infty} \gamma^{i} r_{t+i+1}]$. In addition, the policy network π(ψ(s)) produces an action according to the current predictive state representation ψ(s) that maximizes the expected discounted return. Because of the interesting connection between DPG and PID control when linear function approximation is used, the policy network is parameterized as

$a_t = \pi(\psi(s_t)) = \Omega^{\top} \psi(s_t)$  (9)

where Ω is a matrix denoting parameters to be learned by DPG. They also represent the gain coefficients for a proportional controller, which allows for interpretability of the learned parameters. The action-value network Q(ψ(s), a) is given by a small neural network that maps the predictive state representation ψ(s) and action a to an estimate of the action-value in the current state s if a is taken and the optimal policy followed thereafter.

In autonomous driving, knowing the road curvature ahead can be informative for making decisions to maintain lane centeredness in a tight turn. In Figure 1, it is demonstrated how future off-policy predictions of lane centeredness can be used to predict the deviation (or error) between the true center of the lane and the projected lane centeredness along the current direction of the vehicle. These predictions must be off-policy: if they were on-policy, they would give the agent no information about how much adjustment is needed to make corrective actions to stay in the center of the lane.

Figure 1: Future predictions capturing information about the shape of the road ahead.

Figure 2: (a) The lane centeredness position is the distance from the center of the lane to the center of the vehicle. (b) The road angle is the angle between the direction of the vehicle and the direction of the road.

The lane centeredness α and road angle β are two kinds of predictions that are useful in steering the vehicle, as depicted in Figure 2. We represent the road curvature as a set of predictions of future lane centeredness and road angles at different time horizons. The predictive state representation ψ is given by the feature vector

$\psi(s_t) = [V^{\tau}_{\alpha,\gamma_0}(s_t), \ldots, V^{\tau}_{\alpha,\gamma_m}(s_t), V^{\tau}_{\beta,\gamma_0}(s_t), \ldots, V^{\tau}_{\beta,\gamma_m}(s_t)]$  (10)

where $V^{\tau}_{\alpha,\gamma_0}(s_t)$ is a GVF prediction under target policy τ with cumulant α (lane centeredness) and continuation function γ_0, while $V^{\tau}_{\beta,\gamma_0}(s_t)$ is a GVF prediction of cumulant β (road angle). There are m continuation functions γ for predicting lane centeredness and m continuation functions γ for predicting road angle at different temporal horizons. Because the predictions represent deviations from the desired lane centeredness and road angle, the policy network of DPG can be linear.
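As a sketch of how Equations 9 and 10 fit together (our own illustrative code; the GVF prediction arrays are assumed to be given):

import numpy as np

def predictive_state(lane_preds, angle_preds):
    # psi(s_t) from Eq. 10: GVF predictions of future lane centeredness (alpha)
    # and road angle (beta) at several temporal horizons, stacked into a vector
    return np.concatenate([lane_preds, angle_preds])

def steer(Omega, psi):
    # Eq. 9: a_t = Omega^T psi(s_t). With this linear form, the entries of
    # Omega read as proportional-controller gains on each predictive feature.
    return Omega.T @ psi

# Example: 4 horizons per cumulant -> 8 features, one scalar steering output
Omega = np.zeros((8, 1))
print(steer(Omega, predictive_state(np.zeros(4), np.zeros(4))))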
4 EXPERIMENTS IN TORCS

The predictive learning approach is applied to the challenging problem of learning to steer a vehicle in the TORCS Wymann et al. (2013) racing environment. A kinematic-based steering approach Paden et al. (2016) is used as a baseline for all the experiments. The reward in the TORCS environment is given by $r_t = v_t \cos\theta_t$, where v_t is the speed of the vehicle in km/h and θ_t is the angle between the road direction and the vehicle direction. A simple scaling factor of 0.01 was applied to the reward in order to reduce the variance of the action-value. Notice that this reward doesn't force the agent to stay in the center of the lane; however, this strategy is likely a good idea to achieve high total reward on all the test tracks. The target speed of all the agents is 50 km/h, where vehicle speed was controlled by a separate manually tuned PID controller.

The agents were trained on 85% of the 40 tracks available in TORCS. The rest of the tracks were used for testing (6 in total); all of the results in this section are on the testing tracks, which were never presented to the agents during training. This was done to measure the generalization performance of the policies learned by the agents, which can be a serious problem in RL, cf. Zhao et al. (2019); Farebrother et al. (2018). Tracks where the road was not level were excluded, since they proved to be challenging, likely because a non-zero action was required to keep the vehicle centered.

4.1 TEST RESULTS

In all the figures, blue corresponds to DDPG-Image with only image and speed as input; green corresponds to DDPG-ImageLowDim with images, current speed, and the low-dimensional lane centeredness and road angle provided as input; orange corresponds to the new GVF-DPG approach with image, current speed and the last two actions (since the target policy of the prediction depends on the last action taken); and red corresponds to the classical front-wheel steering model. A history of two images was provided to each method. A supervised method of predicting the current lane centeredness and road angle directly from the image was attempted, with negative results: the controller learned was consistently unstable. Results were repeated over 5 runs. The total score achieved on the test tracks by the agents is plotted in Figure 3 over 1M training iterations.

It is clear that DDPG-ImageLowDim performs best of the learned methods on all test tracks. This low-dimensional information provides ground truth to the agent, which may not always be available in all locations; however, it makes a good baseline target for what we hope our proposed method could achieve through generalization. With DDPG-Image, it is clear that it does not learn to steer from images very well.

Figure 3: Test scores (accumulated reward) during training.

It could be argued that DDPG-Image may improve with more iterations; however, this is likely not the case. The reason is shown in Figure 4, where DDPG-Image consistently converges to a solution that oscillates between the extreme left and right steering actions very rapidly. This oscillation is so extreme that the agent is unable to achieve the target speed of 50 km/h; instead it travels at 40 km/h on average over all the test tracks. The performance gap also suggests that DDPG-ImageLowDim may be relying more on the low-dimensional lane information than on the image.

To highlight how uncomfortably the two DDPG agents drive compared to the GVF-DPG agent, Figure 4 shows the standard deviation of the change in action during 1M iterations of training. On most tracks, it is apparent that the GVF-DPG approach controls the vehicle more smoothly than the other learned methods.

Figure 4: Standard deviation of the change in action during training.

The performance on the individual test tracks is given in Figure 5. The GVF-DPG approach does not steer successfully on all test tracks: it fails immediately on wheel-2, part-way through on a-speedway, and drives well on most of alpine-2. DDPG-Image fails to complete dirt-4, wheel-2, spring, and a-speedway. Finally, DDPG-ImageLowDim successfully completes all the test tracks; however, the agent has a strong bias to the left side of the track. The GVF-DPG agent often follows the classical controller relatively well, except on the tracks where the agent fails. This suggests that using a predictive representation of lane centeredness and road angle achieves closer performance to a classical controller than an end-to-end learned approach.
However, more work is needed to improve the generalization abilities of the approach.

The learning curves for the predictors and the policy gradient agents are given in Figure 6. It is interesting that the learning curve of the action-value function estimator of the GVF-DPG agent is much lower than those of the other agents, and quite smooth. The reason is believed to be that the predictive state representation is constrained to values in [−1, +1], acting as a sort of regularizer on the state representation of the agent. The learning curve of DDPG-ImageLowDim, however, eventually approaches the low error of the GVF-DPG agent. The predictors converge relatively quickly, as shown in Figure 6(b). The behavior estimator in Figure 6(c) stabilizes relatively quickly during learning as well; it is postulated that the error does not decrease further since the behavior policy is changing slowly over time and the behavior estimator must track this change.

5 CONCLUSIONS

A method of learning a predictive representation off-policy is presented, where the behavior policy distribution is estimated via an adversarial method employing the density ratio trick. It is demonstrated that deep off-policy predictions can be learned, together with a deep behavior policy estimate, to predict future lane centeredness and road angles from images. The predictive representation is learned with linear deterministic policy gradient. All of these components are combined together in a framework called GVF-DPG and learned simultaneously on the challenging problem of steering a vehicle in TORCS from only images. The results show that GVF-DPG is able to steer smoothly with less change in action and achieve better performance than DDPG from only images, and similar performance to the kinematics model on several but not all of the test tracks. This work is also a demonstration that we can learn off-policy predictions, characterize the behavior policy and learn the controller all at the same time, despite the challenges of the behavior policy evolving with the agent and the predictive state representation changing over time.

Our work demonstrates that a learned prediction-based vision-only steering controller could potentially be viable with more work on improving the generalizability of the off-policy predictions.

Figure 5: The lane centeredness position on the (a) alpine-2, (b) evo-2-r, (c) dirt-4, (d) wheel-2, (e) spring, and (f) a-speedway tracks in TORCS.

Figure 6: Log-loss learning curves for the (a) Q-values of the DPG agents, (b) mean squared TD (temporal difference) errors of the GVF predictors, and (c) MSE of the behavior model estimator.

This work supports the predictive state representation hypothesis in Rafols et al. (2005) that deep predictions can improve the generalization of RL to new road environments when using only images as input. For future work, we hope to study how to learn the question for the predictive state representation: τ, γ, and c. Moreover, because the behavior policy is unknown and estimated, our results suggest that collecting real-world human driving data to train predictions off-policy, without the need for a simulator, could be a viable approach to steering a vehicle from images.
This is potentially advantageous since the human driver can explore the road safely.<|im_end|> <|im_start|>assistant ### Review Title Official Blind Review #1 ### Review Text This paper combines predictive state representations (PSRs) with DPG and tests the overall performance on a simulated TORCS task. Specifically, the authors propose to train a generalized value function (GVF) where the cumulant is how far the car is from the center of the road, as well as the angle of the car. To train the GVF, the authors propose to perform off-policy learning with importance resampling. To estimate the ratios, the authors propose to learn a discriminator that predicts which policy the actions came from, and show how to use this discriminator to estimate the likelihood ratio. For DPG, a small neural network is used for the Q function, and a linear policy is used. The authors demonstrate that their method outperforms DDPG-from-images on held-out TORCS racing tasks, while not quite reaching the performance of DDPG-from-ground-truth-state. Using PSRs is a promising direction of work, but I found the contribution of this work rather obfuscated. It seems like there are two sources of novelty: (1) the use of a different number of continuation functions and (2) using a discriminator to estimate the importance ratio, but no details were given about these implementations. The paper would be greatly strengthened by reducing the amount of time spent summarizing prior work, and more thoroughly describing these contributions. Studying one of these contributions in more detail, rather than analyzing the performance of the final policies on a simulated task, would also help make the hypothesis and insight of this paper more clear. In particular, I felt that these questions were unanswered: 1. What parts of the algorithm are important to the good final performance? 2. How important is it to use different discount factors? 3. How does this work relate to prior work on off-policy evaluation? 4. How important was it for a target policy to *not* be used? 5. How was the discriminator trained (e.g. hyperparameters)? The authors use ground-truth information when training. I am surprised that a supervised learning method trained with the ground-truth information was unable to recover the performance of DDPG-ImageLowDim. Can more details be given on this training procedure? Is it difficult to train a model to predict the current angle given the image? I find this a bit surprising. Also, if an OU process was used for exploration, how were the behavior policy likelihoods estimated using one-step backups? Since an OU process isn't Markovian, it seems like this would require computing trajectory-level likelihoods, rather than the one-step likelihoods suggested by Equation 3. Minor comments: - Why don't the other methods receive the last two actions? - The authors should cite [1,2] when referencing PSRs. - Please include legends and axis labels in the figures. - I don't understand the sentence, "These predictions must be off-policy because if they were on-policy they would tell us no information to inform the agent how much adjustment is needed to make corrective actions to stay in the center of the lane." In particular, why wouldn't on-policy predictions be more useful to an agent? [1] Singh et al. Predictive state representations: A new theory for modeling dynamical systems. AUAI. 2004. [2] Littman et al. Predictive representations of state. NeurIPS. 2001.
### Review Rating 1: Reject ### Review Confidence <|im_end|> <|im_end|>
o-K3HVUeEw
robot-learning.org/CoRL/2023/Conference
2023
Composable Part-Based Manipulation
["Weiyu Liu", "Jiayuan Mao", "Joy Hsu", "Tucker Hermans", "Animesh Garg", "Jiajun Wu"]
In this paper, we propose composable part-based manipulation (CPM), a novel approach that leverages object-part decomposition and part-part correspondences to improve learning and generalization of robotic manipulation skills. By considering the functional correspondences between object parts, we conceptualize functional actions, such as pouring and constrained placing, as combinations of different correspondence constraints. CPM comprises a collection of composable diffusion models, where each model captures a different inter-object correspondence. These diffusion models can generate parameters for manipulation skills based on the specific object parts. Leveraging part-based correspondences coupled with the task decomposition into distinct constraints enables strong generalization to novel objects and object categories. We validate our approach in both simulated and real-world scenarios, demonstrating its effectiveness in achieving robust and generalized manipulation capabilities.
["Manipulation", "Part Decomposition", "Diffusion Model"]
Composable Part-Based Manipulation
Weiyu Liu1, Jiayuan Mao2, Joy Hsu1, Tucker Hermans3,4, Animesh Garg3,5, Jiajun Wu1
1Stanford 2MIT 3NVIDIA 4University of Utah 5Georgia Tech

Abstract: In this paper, we propose composable part-based manipulation (CPM), a novel approach that leverages object-part decomposition and part-part correspondences to improve learning and generalization of robotic manipulation skills. By considering the functional correspondences between object parts, we conceptualize functional actions, such as pouring and constrained placing, as combinations of different correspondence constraints. CPM comprises a collection of composable diffusion models, where each model captures a different inter-object correspondence. These diffusion models can generate parameters for manipulation skills based on the specific object parts. Leveraging part-based correspondences coupled with the task decomposition into distinct constraints enables strong generalization to novel objects and object categories. We validate our approach in both simulated and real-world scenarios, demonstrating its effectiveness in achieving robust and generalized manipulation capabilities. For videos and additional results, see our website: https://cpmcorl2023.github.io/.

Keywords: Manipulation, Part Decomposition, Diffusion Model

Figure 1: CPM composes part-based diffusion models to predict target object poses directly from point clouds. In this example, we show that the "pouring" action is decomposed into three part-based correspondences, which generalize manipulation across object categories and from simulation to the real world. (Panel labels: rim, handle, body; ⟨align, rim, rim⟩, ⟨facing-up, handle, body⟩, ⟨tilt, body, body⟩; Train: pour from glass, pan, and bowl to bowl in simulation; Test: pour from mug to bowl in the real world.)

1 Introduction

Compositionality provides appealing benefits in robotic manipulation, as it enables efficient learning, reasoning, and planning. Prior works have extensively studied the decomposition of scenes into objects and their relationships [1, 2, 3], as well as the division of long-horizon plans into primitive skills [3, 4], in order to navigate complex environments and devise long-horizon plans. In this paper, we present a different view of compositionality by considering object-part decomposition based on functionality (e.g., rim, handle, body), and leverage such decomposition to improve the learning of geometric and physical relationships for robot manipulation.

In the context of language descriptions of objects, part names not only describe the geometric shapes of the parts but also capture their functional affordances. For instance, as depicted in Figure 1, for the action of "pouring", the rims define the boundary for alignment between the objects, the body of the pouring vessel should be tilted for the action, and its handle provides a constraint on the direction the object should face when pouring. Leveraging this knowledge of part affordances, we posit that a family of functional actions, such as pouring and constrained placing, can be conceptualized as a combination of functional correspondences between object parts. Modeling actions using such a decomposition yields two important generalizations. First, it enables action generalization to novel instances from the same object category. Second and more importantly, it facilitates generalization to unseen object categories.
For example, after learning part affordances for the "pouring" action, our robot trained on "pour from bowls" and "... pans" can generalize to "pour from mugs", with no additional training necessary for manipulation with the new object category.

Motivated by these insights, we present composable part-based manipulation (CPM). CPM comprises a collection of diffusion models, where each model captures the correspondence between parts of different objects. These conditional diffusion models take the geometry of the object parts as input and generate parameters for manipulation skills, such as the starting and ending poses of a bowl during the pouring action. Specifically, each model outputs a distribution of feasible trajectories that satisfy a particular correspondence. After learning a collection of composable diffusion models, we represent actions as combinations of part-part correspondences. During inference, we leverage the composition of primitive diffusion models to sample trajectories that adhere to all the part correspondences. This approach improves generalization to novel object categories over models that do not reason about both parts and composable correspondence constraints.

In summary, this paper makes two key contributions. First, we propose composable part-based manipulation, which models manipulation actions as a composition of part-part correspondences between objects. Second, we develop diffusion models trained to capture primitive functional correspondences that can be flexibly recombined during inference. CPM achieves strong generalization across various dimensions, including novel object instances and object categories. We validate the efficacy of CPM on both PyBullet-based simulations and real-robot experiments.

2 Related Work

Object representations for manipulation. Prior works use segmentations of common object parts (e.g., blades, lids, and handles) for manipulating articulated objects [5, 6, 7, 8] as well as for transfer to novel objects [9, 10]. A common approach that has been shown effective across different manipulation domains [11, 12, 13] first predicts which part of an object the robot should focus on (e.g., the handle), and then predicts an action relative to the part. Closely related is visual affordance detection [14, 15, 16], which segments objects into different functional regions, such as graspable parts and support surfaces of objects. These functional regions can be shared by more distinct objects, and can be useful for generalizing task-oriented grasping between object categories [17, 18]. Keypoints are another representation that shows robustness to large intra-category shape variation and topology changes [19]. Each keypoint set can provide essential pose information, lacking in previous segmentation approaches, to support tasks such as hanging mugs on pegs by their handles. The initial supervised approach [19] has been extended to methods that discover keypoints from interactions [20, 21] and from unlabeled videos [22]. Recently, implicit object representations have been used to provide correspondence between any point within the same object category, generalizing across 6-DoF pose changes [23, 24, 25].
Large pretrained vision models also support the development of object representations; recent works leverage these models to significantly reduce domain-specific training data, showing strong results for open-vocabulary part segmentation [26], few-shot affordance segmentation [27], and one-shot pose estimation on any novel object from the same category [28]. Despite this huge progress, we still lack object representations that support strong generalization of manipulation to new object categories. We focus on tackling this problem.

Learning interactions of objects. Works in robotics have established the importance of modeling interactions of objects. Recent approaches directly work on 3D observations, without relying on known object models. Learning spatial relations between objects enables the picking and placing of objects at specific locations [1, 29, 30, 2, 31], such as placing an object in the middle drawer, stacking objects, and setting the table. These relations can be extended to represent the logical state of the world to support planning for long-horizon tasks [3, 32, 33]. Other works focus on learning lower-level interactions between objects, such as placing an object stably on a messy tabletop and pushing an object using a tool [34, 35]. For example, O2O-afford [34] correlates feature maps extracted from two objects using a point convolution and outputs a point-wise interaction heatmap.
sha1_base64="qT+6EftEtJ4oZA+aT07HQ481bfE=">AAACy3icfVFdb9MwFHXC1ygf6+CRF0NVaUhVlUwI9jhAAl5AnVi3SU2JHNdprfkjsm8QxeSRX8Uv4afwhp1l0rohrpTo6Jx7fK6vi0pwC0nyO4pv3Lx1+87W3d69+w8ebvd3Hh1bXRvKplQLbU4LYpngik2Bg2CnlWFEFoKdFGdvg37ylRnLtTqCdcXmkiwVLzkl4Km8/2uYscpyodUXtwvPm9xlhXSfm5yP8KemN8TLPIMVA9IbBuGoa3jdhP+7ZtT2lJ4E9g0AHOgsr4iBxvOZJLCiRHjX5cMvvMH53/ALMZweRhjhNoX7FC58Qt4fJOOkLXwdpB0YoK4mef9PttC0lkwBFcTaWZpUMHd+XE4F84G1ZRWhZ2TJZh4qIpmdu3bJDR56ZoFLbfynALfsZYcj0tq1LHxnuLe9qgXyX9qshnJ/7riqamCKngeVtcCgcXgxvOCGURBrDwg13M+K6YoYQsG/60ZKITfu4EIWaC1s0/O7Sq9u5jo43hunL8d7hy8GB2+6rW2hJ+gZ2kUpeoUO0Ac0QVNEo6fR+2gSHcYfYxt/j3+ct8ZR53mMNir++Rey7uGE</latexit>✏✓,tilt<latexit sha1_base64="+fqA0FQehuFsG/PLoRhstfX2gEk=">AAAC0XicfVFdaxNBFJ1dv2r8ivroy2AIVghht4jtY1UQX5RKm6SQjcvsZDYZOh/Lzl0xDAPiq7/Kn+FP8c2Z7RaaVrywy+Gce+bcuVNUghtIkt9RfOPmrdt3du727t1/8PBR//GTqdFNTdmEaqHr04IYJrhiE+Ag2GlVMyILwWbF2bugz76y2nCtTmBTsYUkK8VLTgl4Ku//GmasMlxo9cXuwkuX26yQ9tjlfIQ/ud4Qr/IM1gxIbxiEk67hjQv/927U9pSeBPYNACzoLK9IDc7zmSSwpkR41+XDL7zB+d/wCzGcHkYY4TaFgy0J5WqV5U3lXN4fJOOkLXwdpB0YoK6O8v6fbKlpI5kCKogx8zSpYGH90JwK5mMbwypCz8iKzT1URDKzsO2qHR56ZolLXftPAW7Zyw5LpDEbWfjOcHtzVQvkv7R5A+XBwnJVNcAUPQ8qG4FB4/BueMlrRkFsPCC05n5WTNekJhT8626lFHLrDjZkgdbCuJ7fVXp1M9fBdG+cvh7vfX41OHzbbW0HPUPP0S5K0T46RB/QEZogGr2IPkbTaBYfx5v4e/zjvDWOOs9TtFXxz78VnOQL</latexit>✏✓,facingup<latexit sha1_base64="6tEI7fXhWSYCbdrI/2hVg4+2IOM=">AAACaXicbVDLbhMxFHWGVwmvFDYINhZRpCJV0UyFaJcFJMQKFdG0RZkw8jh3Eqt+jOw7iMjyt/EdfAArJFizw5POgrRcydbROefea5+ylsJhmn7vJdeu37h5a+t2/87de/cfDLYfnjjTWA4TbqSxZyVzIIWGCQqUcFZbYKqUcFqev2n10y9gnTD6GFc1zBRbaFEJzjBSxeDTiOZQOyGN/ux38HkofF4q/zEUYpe+D/0RXRQ5LgFZhK1y3DlehfZ+G3ajqYoUwldE9GjyomYWQygGw3ScroteBVkHhqSro2LwJ58b3ijQyCVzbpqlNc58HCa4hNDPGwc14+dsAdMINVPgZn4dQaCjyMxpZWw8Guma/bfDM+XcSpXRqRgu3WWtJf+nTRusDmZe6LpB0PxiUdVIioa2edK5sMBRriJg3Ir4VsqXzDKOMfWNLaXa+INvd6Ex0oV+zCq7nMxVcLI3zl6O9z68GB6+7lLbIk/JM7JDMrJPDsk7ckQmhJNv5Af5RX73fibbyePkyYU16XU9j8hGJcO/jtS9nw==</latexit>ftopart<latexit sha1_base64="+X11N6DDJgkTslNumghcvbcIAB8=">AAACzHicfVFda9swFJW9j3bZV9Y+7kU0BDoIwS5j62PXQelTabemLcSZkRU5EZUlI12PBaHX/ar9kf2UvU1yXWjasQs2h3Pu0bm6KmrBDSTJ7yh+9PjJ043NZ73nL16+et1/s3VhVKMpm1AllL4qiGGCSzYBDoJd1ZqRqhDssrj+HPTL70wbruQ5rGo2q8hC8pJTAp7K+7+GGasNF0p+s7vwzuU2Kyr71eV8hE9cb4gXeQZLBqQ3DMJ51/DJhf+RG7U9pSeB/QAACyrLa6LBeT6rCCwpEd519/Bbb3D+N/xWDKeHEUa4TeFgieAL6VzeHyTjpC38EKQdGKCuTvP+n2yuaFMxCVQQY6ZpUsPM+nk5FcwnNobVhF6TBZt6KEnFzMy2W3Z46Jk5LpX2nwTcsncdllTGrKrCd4aLm/taIP+lTRso92eWy7oBJulNUNkIDAqHJ8NzrhkFsfKAUM39rJguiSYU/MOupRTV2h1syAKlhHE9v6v0/mYegou9cfphvHf2fnBw2G1tE71FO2gXpegjOkDH6BRNEI12ouPoLPoSn8QQ29jdtMZR59lGaxX//Aug+uHc</latexit>✏✓,align<latexit sha1_base64="e/VTTwH/614p3YtdKa7KcUjTzLg=">AAACEnicbVDLSgNBEJz1GeMr6tHLYBA8hd0gKp4CXjxGMA9IljA7mSRD5rHM9AphyS94EvRbvIlXf8BP8eZssgeT2NBQVHVT3RXFglvw/W9vbX1jc2u7sFPc3ds/OCwdHTetTgxlDaqFNu2IWCa4Yg3gIFg7NozISLBWNL7L9NYTM5Zr9QiTmIWSDBUfcEogo7o2kb1S2a/4s8KrIMhBGeVV75V+un1NE8kUUEGs7QR+DGFKDHAq2LTYTSyLCR2TIes4qIhkNkxnt07xuWP6eKCNawV4xv7dSIm0diIjNykJjOyylpH/aZ0EBjdhylWcAFN0bjRIBAaNs8dxnxtGQUwcINRwdyumI2IIBRfPgkskF35IMy/QWthp0WUVLCezCprVSnBVqT5clmu3eWoFdIrO0AUK0DWqoXtURw1E0Qg9o1f05r14796H9zkfXfPynRO0UN7XL377ntQ=</latexit>XPoint Cloud TransformerDiffusion Transformer Pose Encoder<latexit 
sha1_base64="JFIyIqOEkSL96w1PNKK6ZpOrrrc=">AAACKnicbVDLSsNAFJ34rPUVdaebYBEUpCRdqMuqG5cV7AOaECbTSTt2JhNmJkoJAb/GlaAf4NIvcFfc+hPunLRd2NYDwz2cey9n7gliSqSy7aGxsLi0vLJaWCuub2xubZs7uw3JE4FwHXHKRSuAElMS4boiiuJWLDBkAcXNoH+d95sPWEjCozs1iLHHYDciIUFQack3992Apa3MT/Nam9TL7PQ+y3yzZJftEax54kxIqXoSBo8f7nvNN3/cDkcJw5FCFErZduxYeSkUiiCKs6KbSBxD1Idd3NY0ggxLLx3dkFlHWulYIRf6RcoaqX83UsikHLBATzKoenK2l4v/9dqJCi+8lERxonCExkZhQi3FrTwQq0MERooONIFIEP1XC/WggEjp2KZcAjZ1Q5p7Kc6pzIo6K2c2mXnSqJSds3LlVod2BcYogANwCI6BA85BFdyAGqgDBJ7AM3gFb8aL8WkMja/x6IIx2dkDUzC+fwHKJqxO</latexit>XPA,j<latexit sha1_base64="wrnhrhzab79ObCCMxCN6TuVT3zc=">AAACKnicbVDLSsNAFJ34rPUVdaebYBEUpCRdqMuiIC4j2AekIUymk3boTCbMTIQSAn6NK0G/wbUrd8WtP+HOSduFbT0w3MO593LmnjChRCrbHhlLyyura+uljfLm1vbOrrm335Q8FQg3EKdctEMoMSUxbiiiKG4nAkMWUtwKBzdFv/WIhSQ8flDDBPsM9mISEQSVlgLzsBOyrJ0HWVHdab3Nzwd5HpgVu2qPYS0SZ0oq9bOPd1eUPDcwfzpdjlKGY4UolNJz7ET5GRSKIIrzcieVOIFoAHvY0zSGDEs/G9+QWyda6VoRF/rFyhqrfzcyyKQcslBPMqj6cr5XiP/1vFRFV35G4iRVOEYToyilluJWEYjVJQIjRYeaQCSI/quF+lBApHRsMy4hm7khK7wU51TmZZ2VM5/MImnWqs5FtXavQ7sGE5TAETgGp8ABl6AO7oALGgCBJ/AMXsGb8WJ8GiPjazK6ZEx3DsAMjO9fO/Sr7A==</latexit>XPF,k<latexit sha1_base64="3u1nRu5P5VJ+x+NSxU2Dim9fHLs=">AAACLHicbVC7TsMwFHXKq5RXeGwwRCAkEKhKOgBjBQsDQ0EUkNpQOa7bmthxZN8gVVEWvoYJCb6FBSHExj+w4bQMFDiS5eNz79XxPUHMmQbXfbEKY+MTk1PF6dLM7Nz8gr24dKFlogitE8mlugqwppxFtA4MOL2KFcUi4PQyCI/y+uUtVZrJ6Bz6MfUF7kaswwgGI7XstWYg0vPsOt2C7ayV5q+auW/CbNfLWvaGW3YHcP4S75tsVCsr4fvJzlmtZX8225IkgkZAONa64bkx+ClWwAinWamZaBpjEuIubRgaYUG1nw62yJxNo7SdjlTmROAM1J8TKRZa90VgOgWGnv5dy8X/ao0EOgd+yqI4ARqRoVEn4Q5IJ4/EaTNFCfC+IZgoZv7qkB5WmIAJbsQlECM7pLkXSMl1VjJZeb+T+UsuKmVvr1w5NaEdoiGKaBWtoy3koX1URceohuqIoDt0jx7Rk/VgPVuv1tuwtWB9zyyjEVgfX0ooq8Y=</latexit>T(t)Pjk,1<latexit sha1_base64="30ibTJ7MVRmCwncRUkcebQ8ruUw=">AAACS3icdVDLSgMxFM3UV62v+ti5CUpBUcpMFyq4KbpxIVKlVaHWkklTjU0mQ3JHKMN8jF/jStClKz/CleLCTNuFrXoh5OScezm5xw8FN+C6r05mbHxicio7nZuZnZtfyC8unRsVacpqVAmlL31imOABqwEHwS5DzYj0BbvwO4epfnHPtOEqqEI3ZA1JbgLe5pSApZr5/QK+8mVcTa7jDdhMmnH6qtj7rpNse0nuf/EkaebX3aLbK/wbeAOwXi6tdN6Ot84qzfzHVUvRSLIAqCDG1D03hEZMNHAqmDWLDAsJ7ZAbVrcwIJKZRtxbMsEFy7RwW2l7AsA99udETKQxXenbTkng1oxqKfmXVo+gvdeIeRBGwALaN2pHAoPCaWK4xTWjILoWEKq5/Sumt0QTCjbXIRdfDu0Qp16glDBJzmbljSbzG5yXit5OsXRqQztA/cqiVbSGNpCHdlEZHaEKqiGKHtAjekYvzpPz7nw6X/3WjDOYWUZDlZn4BpW9t04=</latexit>T(t)Pjk,N<latexit sha1_base64="T37olucuzTaUf0zP+2cBB5KiAz4=">AAACaHicfVBNTxsxFHS2tNDQj6U9VIiLBUWiEop2OUCPiF56qoIgEClJV17nhbix1yv7LVJk+W/1yKW/oveeKrW3VuKGN+FAoOqTLI9n3vPYk5dSWEyS743o0dLjJ8srT5urz56/eBmvvTqzujIcOlxLbbo5syBFAR0UKKFbGmAql3CeTz7U+vklGCt0cYrTEgaKXRRiJDjDQGVxd5v2c+VO/We3g+985upTO+xfJn439c3/yZ98sw+lFTJc5Po4BmS7tX7iM+GzeCtpJbOiD0F6C7YO3/75+u1y9W87i6/7Q80rBQVyyaztpUmJA8cMCi4hWFUWSsYn7AJ6ARZMgR24WQKebgdmSEfahFUgnbF3JxxT1k5VHjoVw7G9r9Xkv7RehaP3AyeKskIo+NxoVEmKmtZx0qEwwFFOA2DciPBWysfMMI4h9AWXXC38wdVeqLW0vhmySu8n8xCc7bXS/dbecQjtiMxrhWyQTbJDUnJADslH0iYdwskV+UF+kd+Nn1EcvYnW561R43bmNVmoaPMGE4TB+w==</latexit>✏✓,Si<latexit sha1_base64="v5wU0bmUU3WRz92nFB9wmicl/wk=">AAACLHicbVDLSgMxFM3UV62vUZe6CBZBQcqMiLoU3bisaKvQ1iGTphqax5DcEcowG7/GlaDf4kbErf/gzvSxsNYDgcM593JyT5wIbiEI3r3C1PTM7FxxvrSwuLS84q+u1a1ODWU1qoU2NzGxTHDFasBBsJvEMCJjwa7j7lnfv35gxnKtrqCXsJYkd4p3OCXgpMjfbLLEcqHVbbYDu3mUNWOZXeYR38NhHvnloBIMgCdJOCJlNEI18r+bbU1TyRRQQaxthEECrYwY4FSwvNRMLUsI7ZI71nBUEclsKxtckeNtp7RxRxv3FOCB+nsjI9LanozdpCRwb/96ffE/r5FC57iVcZWkwBQdBnVSgUHjfiW4zQ2jIHqOEGq4+yum98QQCq64sZRYjt2Q9bNAa2Hzkusq/NvMJKnvV8LDyv7FQfnkdNRaEW2gLbSDQnSETtA5qqIaougRPaEX9Oo9e2/eh/c5HC14o511NAbv6wfEqKjP</latexit>✏(t)Si,1<latexit 
sha1_base64="CqQquVftToqWZTkc4IA4qIyj4ms=">AAACLHicbVDLSgMxFM3Ud31VXeoiWIQKUmaKqEvRjSupaFVo65BJ0zY0jyG5I5RhNn6NK0G/xY2IW//BneljYdUDgcM593JyTxQLbsH337zc1PTM7Nz8Qn5xaXlltbC2fm11YiirUS20uY2IZYIrVgMOgt3GhhEZCXYT9U4H/s09M5ZrdQX9mDUl6Sje5pSAk8LCVoPFlgut7tIS7GZh2ohkepmFfA+fZ2Gh6Jf9IfBfEoxJEY1RDQtfjZamiWQKqCDW1gM/hmZKDHAqWJZvJJbFhPZIh9UdVUQy20yHV2R4xykt3NbGPQV4qP7cSIm0ti8jNykJdO1vbyD+59UTaB81U67iBJiio6B2IjBoPKgEt7hhFETfEUINd3/FtEsMoeCKm0iJ5MQN6SALtBY2y7uugt/N/CXXlXJwUK5c7BePT8atzaNNtI1KKECH6BidoSqqIYoe0CN6Ri/ek/fqvXsfo9GcN97ZQBPwPr8B9SSo7A==</latexit>✏(t)Si,N<latexit sha1_base64="mhJ+AVftmU1m0Dn1ijHEV2CsyMw=">AAADE3ichVJNj9MwEHXCxy7lqwtHLhZVpUWqqmSFWI4LSIgTKmK7u1JTIsd1WlPHjuIJorL8GzghwW/hhrjyA/gp3LDTrLTdghgpyei9efPG42Sl4Bqi6FcQXrl67frO7o3OzVu379zt7t070aquKBtTJVR1lhHNBJdsDBwEOysrRopMsNNs+cLzpx9YpbmSx7Aq2bQgc8lzTgk4KN0Lwn7CSs2Fku/MPjyyqUmywry1KR/g17bTx/M0gQUD0ul74rgteGb9+6UdNDW5A4F9BAADKklLUoF1eFIQWFAinOpi83OtV/7H/Jz2/f0QA9z4cDBE8Llct9hwibZd/jHGyH3fD5bWpt1eNIyawNtJ3CY91MYo7f5OZorWBZNABdF6EkclTI07NaeCOcNas5LQJZmziUslKZiemuayLO47ZIZzVblHAm7QiwpDCq1XReYq/dz6MufBv3GTGvKnU8NlWQOTdG2U1wKDwv7m8YxXjIJYuYTQirtZMV2QilBw/8eGS1ZsnMF4L1BKaNtxu4ovb2Y7OTkYxk+GB28e946et1vbRQ/QQ7SPYnSIjtArNEJjRAMefAq+BF/Dz+G38Hv4Y10aBq3mPtqI8OcfT4T9fQ==</latexit>T(t)Pj,k<latexit sha1_base64="kOps49F//byAFz3unoCUCeIPHac=">AAAC73ichVFNb9NAEF2bj5bwFeDIZUUUqUhRZFcIOBYQiBMqomkrxcFab9bJKvvh7o4R0cq/gxviyk/ixO/gxq7rSk2DxEi2nt7Mmzc7U1SCW0iSX1F87fqNmzu7t3q379y9d7//4OGx1bWhbEK10Oa0IJYJrtgEOAh2WhlGZCHYSbF6E/InX5ixXKsjWFdsJslC8ZJTAp7K+7+HGassF1p9dnvwtMldVkj3qcn5CH9oekO8yDNYMiC9YUgcdQWvmvB/14zamtKTwL4CgAOd5RUx0Hg+kwSWlAivutz8QhuU/zG/SIf+YYgRbn04OCL4QvkWmx7JlkfeHyTjpA28DdIODFAXh3n/TzbXtJZMARXE2mmaVDBz/kmcCuYNa8sqQldkwaYeKiKZnbn2EA0eemaOS238pwC37GWFI9LatSx8ZZjbXs0F8l+5aQ3ly5njqqqBKXpuVNYCg8bhqnjODaMg1h4QarifFdMlMYSCv/2GSyE33uCCF2gtbNPzu0qvbmYbHO+P0+fj/Y/PBgevu63tosfoCdpDKXqBDtB7dIgmiEZvo1UEUR2fxd/i7/GP89I46jSP0EbEP/8Cm1LwBg==</latexit>T(0)AF“Pour”Sampled Target PosesSegmented PartsInitial SceneExecution<latexit sha1_base64="hmc/Ol2wByB0lLttonACgh+OJ/I=">AAADHXichVLfb9MwEHbCj40wWAePvFhUlYZUVcmENh4HSIgnVMS6TWpK5LhOa+rYUXxBVFb+EJ6Q4G/hDfGK9qfsDTvNpHUb4qQ4p++7u+/u7LQQXEMYnnn+rdt37m5s3gvubz14uN3ZeXSsVVVSNqJKqPI0JZoJLtkIOAh2WpSM5KlgJ+niteNPPrNScyWPYFmwSU5mkmecErBQsuNt9WJWaC6U/Gh24VmdmDjNzYc64X38rg56eJbEMGdAgp4jjtqAl7U739T9JiazILAvAGBAxUlBSqgtHucE5pQIm3W5+EWuy/yP+AXt6rsm+rjR4WCI4DO5KrGmEt6g8o8+hvb/qb+wEUU7Y9LphoOwMXzdiVqni1obJp3zeKpolTMJVBCtx1FYwMTY+TkVrA7iSrOC0AWZsbF1JcmZnpjm2mrcs8gUZ6q0nwTcoJczDMm1XuapjXQD6KucA2/ixhVkLyaGy6ICJulKKKsEBoXdG8BTXjIKYmkdQktue8V0TkpCwb6UNZU0X5vBOC1QSug6sLuKrm7munO8N4j2B3vvn3cPX7Vb20RP0FO0iyJ0gA7RWzREI0Q97X31vns//G/+T/+X/3sV6nttzmO0Zv6fv1BDAWA=</latexit>p✓<latexit sha1_base64="pTqE88oby3YNTSrDuoIS2e/D9pA=">AAADJXichVLfixMxEM6uv871V0998yVYCieUsnuI+ngqiE9S8Xp30K1LNk3b2GwSNrNiCfvH+CTo3+KbCD75d/hmst3C9e7Egc0O3zcz38wkuRbcQBz/CsJLl69cvbZzPbpx89btO53du0dGVSVlI6qEKk9yYpjgko2Ag2AnumSkyAU7zpcvPX/8kZWGK3kIK80mBZlLPuOUgIOy3eB+L2XacKHke7sHj+rMpnlh39UZ7+M3ddTD8yyFBQMS9Txx2AY8r/35qu43MTMHAvsEABZUmmlSQu3wtCCwoES4rNPFN7k+8z/iG9rX9030caPDwRLB53JdYkslvkDlH30M3f9Df+kj9GZIN61e8KzTjQdxY/i8k7ROF7U2zDp/0qmiVcEkUEGMGSexhol1e+BUsDpKK8M0oUsyZ2PnSlIwM7HN9dW455ApnqnSfRJwg57OsKQwZlXkLtIPYs5yHryIG1cwezaxXOoKmKRroVklMCjs3wKe8pJRECvnEFpy1yumC1ISCu7FbKnkxdYM1muBUsLUkdtVcnYz552j/UHyZLD/9nH34EW7tR30AD1EeyhBT9EBeo2GaIRoYIPPwdfgW/gl/B7+CH+uQ8OgzbmHtiz8/RfuEgRC</latexit>g(a) System Overview(b) Composable Part-Based Diffusion Models(c) Point-Cloud Diffusion Transformer<latexit 
sha1_base64="pqldO+U6c+9Jdc5t/BwlGU7Q3+I=">AAADb3ichVJNb9NAEF3HfJTw0RQOHJDQiihSK4XIrhBwLCAhTiiIpq0Up9Z6s06WrL0r7xgRrfzz+BH8Bk5IcODGruNC07RiJHtHM+/NmxlNogTXEATfvJZ/7fqNm1u32rfv3L233dm5f6RlWVA2olLI4iQhmgmesxFwEOxEFYxkiWDHyeKNyx9/ZoXmMj+EpWKTjMxynnJKwIbiHe+0FzGluZD5qdmFvSo2UZKZj1XM+/h91e7hWRzBnAFp91zisAG8qtz/bdWvMakNAvsCAAZkFCtSQGXjUUZgTomwrPPFz7iO+R/xs7Sr75ro41qHgyGCz/JViTWV4BKVK/oY2vdTf+EQ6u+Qdlw15xtl4Wm4SbsS8k877nSDQVAb3nTCxumixoZx53c0lbTMWA5UEK3HYaBgYuxGORXMapaaKUIXZMbG1s1JxvTE1IdQ4Z6NTHEqC/vlgOvoeYYhmdbLLLFI17q+mHPBy3LjEtKXE8NzVQLL6UooLQUGid1V4SkvGAWxtA6hBbe9YjonBaFgb29NJcnWZjBOC6QUut5VeHEzm87R/iB8Ptj/8Kx78LrZ2hZ6hJ6gXRSiF+gAvUNDNELU++p99356v1o//If+Yx+voC2v4TxAa+bv/QGPux3u</latexit>T(t1)AFFigure 2: (a) Given a task, the partial point clouds of the anchor and function objects, and their parts extractedfrom a learned segmentation model gφ, we sample a sequence of transformations from a learned distribution pθto parameterize the function object’s trajectory. (b) CPM can be generalized to novel object categories becauseit decomposes each action to a collection of functional correspondences between object parts. To sample thetarget transformations that satisfy all functional correspondences, CPM combines the noise predictions from acollection of primitive diffusion models at inference time. (c) Each primitive diffusion model learns a target posedistribution that satisfies a particular part-part correspondence, based on the point clouds of the object parts.Functionals defined on top of object-wise signed distance functions can also represent constraints oninteractions between objects such as contact and containment [ 36]. Flow-based methods can alsolearn static relations between objects [ 37] as well as tool use [ 38], directly from point clouds. A maindifference between our work and these methods is that we bridge the modeling of interactions andobject representations through object-part decomposition and learned part-part correspondences, andenjoy empirically validated improvement in generalization.Composable diffusion models. A set of recent works have investigated the potential of diffusionmodels in robotics [ 39,40,41,42,43,44,45,46,2,47]. Research demonstrates that diffusionmodels can generate multimodal distributions over actions [ 41] and can handle spatial ambiguities insymmetric objects [ 2]. In image domains, prior work has shown a connection between conditionaldiffusion models and energy-based models, and proposed techniques to generate images by combiningdiffusion noises for different language conditions [ 48]. Recent work provides a more principled wayto sample from individually trained models using MCMC [ 49]. Another approach combines diffusionmodels by using additional trained adapters for generating faces [ 50]. CPM combines both lines ofwork to propose composable diffusion models for robotic manipulation. In doing so we must addresstwo challenges of adapting diffusion models to (1) output poses instead of pixels and (2) combineactions in different part frames, while retaining generalization to different distributions.3 Composable Part-Based ManipulationIn this work, our goal is to model functional actions involving an anchor object Athat remains staticand a function object Fthat is being actively manipulated. Shown in Fig. 
Shown in Fig. 2 (a), given a task M and the partial point clouds of two objects X_A and X_F in the world frame {W}, we want to predict a sequence of SE(3) transformations, i.e., T_W = {T_{W,1}, ..., T_{W,N}}, which parameterize a trajectory of the function object F in the world frame in order to achieve the desired interaction with the anchor object A (e.g., pouring). Throughout the paper, we choose N = 2; i.e., we predict the starting pose and the ending pose of the object motion. Then, we use SE(3) interpolation between the two poses to generate the continuous motion trajectory. We define that the object frames {A} and {F} are centered at the centroids of the respective point clouds X_A and X_F, and have the same orientation as the world frame.* Each transformation T_W in the world frame can thus be computed from the relative pose between the two objects T_{AF} as $T_W = T_{WA}\, T_{AF}\, (T_{WF})^{-1}$. A key challenge we aim to address is generalizing the functional actions from training objects to unseen object instances and, more importantly, to novel object categories a robot may have never encountered during training.

*The transformation from {W} to an object frame can be computed given this definition. For example, T_{WF} = (R_{WF}, t_{WF}), where R_{WF} is set to an identity matrix and t_{WF} is set to the centroid of X_F.

3.1 Action as Part-Based Functional Correspondences

Composable part-based manipulation (CPM) models each action M as a composition of functional correspondences between object parts. We formalize the symbolic representation of each correspondence C ∈ C_M as ⟨S_i, P_{A,j}, P_{F,k}⟩, where C_M is the set of correspondences for M, S_i is a spatial relation, and P_{A,j} and P_{F,k} are two parts of the anchor and the function objects, respectively. Consider the example of pouring from a mug to a bowl, as depicted in Fig. 1. This "pour" action contains the following three correspondences: ⟨align, rim(mug), rim(bowl)⟩, ⟨tilt, body(mug), body(bowl)⟩, and ⟨facing-up, handle(mug), body(bowl)⟩.

The task of predicting robot motion can be cast as the task of finding a robot trajectory that simultaneously satisfies all the part-based functional correspondences. Instead of manually specifying these constraints given object point clouds and their poses, we propose to learn a neural network g_φ to recognize the functional parts of objects based on their point clouds, and another learned generative model p_θ to parameterize a distribution of T. Using g_φ, we can extract point clouds for a given part, for example g_φ(X_F, P_{F,k}) = X_{P_{F,k}}. Learning to recognize functional parts can be treated as a per-point part segmentation problem and has been studied extensively in prior work [14, 15, 16, 27, 51]. Therefore, we focus on the second part, which enables the robot to learn manipulation trajectories of objects based on the recognized parts.

3.2 Generative Modeling of Functional Correspondences with Diffusion Models

For each functional correspondence tuple ⟨S_i, P_{A,j}, P_{F,k}⟩, we learn a generative distribution $p_{\theta,S_i}(T_{P_{jk}} \mid X_{P_{A,j}}, X_{P_{F,k}})$. Here T_{P_{jk}} denotes the relative transformation T_{P_{A,j} P_{F,k}}.† We use a point-cloud-conditioned diffusion model to parameterize this distribution. In particular, each primitive diffusion denoising model ε_{θ,S_i} takes in the current diffusion time step t, the two part point clouds X_{P_{A,j}} and X_{P_{F,k}}, and the noisy transformations T_{P_{jk}} as input, and predicts the noise over T_{P_{jk}}. As illustrated in Fig. 2 (c), the model is based on a transformer encoder. First, we encode the point clouds for the two parts separately using a point cloud transformer [52]. Then we encode each transformation using a trained MLP. We input the point cloud and transformation encodings, together with the diffusion time step t, to the transformer encoder. The output of the transformer encoder is the predicted noise over the transformations T_{P_{jk}}. We provide details for the architecture in Appendix A.

†Similar to the definition of the object frames, the part frames {P_{A,j}} and {P_{F,k}} are centered at the centroids of the respective point clouds X_{P_{A,j}} and X_{P_{F,k}} and have the same orientation as the world frame.

During training, we optimize the following loss for a randomly sampled diffusion time step t and random Gaussian noise ε sampled from a multivariate Gaussian distribution:

$\mathcal{L}_{\mathrm{MSE}} = \Big\| \varepsilon - \varepsilon_{\theta,S_i}\Big( \sqrt{1-\beta_t}\, T^{(0)}_{P_{jk}} + \sqrt{\beta_t}\, \varepsilon \;\Big|\; X_{P_{A,j}}, X_{P_{F,k}},\, t \Big) \Big\|_2^2,$

where $T^{(0)}_{P_{jk}}$ is the target transformation to predict and β_t is the diffusion noise schedule [53]. The added noise and the predicted noise are both in the tangent space of SE(3). We build on the technique introduced for the SE(3) Denoising Score Matching (DSM) model [40], but use a Denoising Diffusion Probabilistic Model (DDPM) [53] for more stable training. In practice, we first compute the exponential map of the transformations and then apply the noise. This can be viewed as predicting the score function for an exponential energy function of SE(3) poses.
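For illustration, here is a minimal PyTorch sketch of this training step (our own simplification, not the paper's code): the transform is represented by its 6-D tangent-space coordinates so that Gaussian noise can be added directly, and eps_model is a hypothetical stand-in for the conditional denoiser ε_{θ,S_i}.

import torch

def primitive_diffusion_loss(eps_model, T0, pcd_anchor_part, pcd_func_part, betas):
    # T0: (B, 6) clean target transforms in SE(3) tangent-space coordinates
    # betas: (num_steps,) diffusion noise schedule, a 1-D tensor
    t = torch.randint(0, betas.shape[0], (T0.shape[0],))  # random diffusion step
    beta_t = betas[t].unsqueeze(-1)
    eps = torch.randn_like(T0)                       # noise the model must regress
    noisy_T = torch.sqrt(1.0 - beta_t) * T0 + torch.sqrt(beta_t) * eps
    pred = eps_model(noisy_T, pcd_anchor_part, pcd_func_part, t)
    return ((eps - pred) ** 2).mean()                # the L_MSE objective above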
3.3 Inference-Time Composition of Diffusion Models

One of the key features of diffusion models is their compositionality. That is, if we have a set of diffusion models, each trained for one specific type of functional correspondence, we can combine their predicted noises at inference time to generate a trajectory that adheres to all functional correspondences, as illustrated in Fig. 2 (b). Since each diffusion model implicitly parameterizes an energy-based model $p_{\theta,S_i}(T \mid \cdot) \propto \exp(-E_{\theta,S_i}(T \mid \cdot))$ through its noise prediction [48, 49], sampling from the composition of the diffusion models corresponds to sampling from the "intersection" of distributions for the individual functional correspondences, or formally, from $\prod_{C \in \mathcal{C}_M} p_{\theta,S_i}(T \mid \cdot)$.

In particular, at inference time, starting from $T^{(T)}_{AF}$ randomly sampled from a standard Gaussian distribution, given the set of constraints C_M, we iteratively update the pose prediction by

$T^{(t-1)}_{AF} = \frac{1}{\sqrt{\alpha_t}} \left( T^{(t)}_{AF} - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}} \sum_{C \in \mathcal{C}_M} \varepsilon_{\theta,S_i}\Big( f_{\mathrm{topart}}\big(T^{(t)}_{AF}\big) \;\Big|\; X_{P_{A,j}}, X_{P_{F,k}},\, t \Big) \right) + \sigma_t \varepsilon,$

where T is the number of diffusion steps, α_t = 1 − β_t is the denoise schedule, $\bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i$ is the cumulative denoise schedule, σ_t is a fixed sampling-time noise schedule, and ε is a randomly sampled Gaussian noise. The differentiable operation f_topart takes $T^{(t)}_{AF}$ and transforms it to the part frame P_{jk} by $(T_{A P_{A,j}})^{-1}\, T^{(t)}_{AF}\, T_{F P_{F,k}}$, which is the frame each individual diffusion model is trained in.
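A minimal sketch of this composed reverse-diffusion step follows (again our own tangent-space simplification): each entry of models is assumed to be a callable already conditioned on its pair of part point clouds, and to_part_frames[i] implements the corresponding f_topart frame change; alphas, alpha_bars, and sigmas are 1-D tensors over diffusion steps.

import torch

@torch.no_grad()
def composed_denoise_step(T_t, t, models, to_part_frames, alphas, alpha_bars, sigmas):
    # Sum the noise predictions of all primitive models, one per correspondence C
    eps_sum = torch.zeros_like(T_t)
    for model, to_part in zip(models, to_part_frames):
        eps_sum = eps_sum + model(to_part(T_t), t)   # each model in its own part frame
    coef = (1.0 - alphas[t]) / torch.sqrt(1.0 - alpha_bars[t])
    mean = (T_t - coef * eps_sum) / torch.sqrt(alphas[t])
    return mean + sigmas[t] * torch.randn_like(T_t)  # add sampling-time noise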
Figure 3: We generate task demonstrations using the PartNet and ShapeNetSem datasets for the "pouring" and "safe placing" tasks. We create demonstrations for a variety of function and anchor object combinations. (Panel labels: pour from mug to bowl; pour from bowl to mug; place knife safely in mug.)

4 Data Collection

We demonstrate CPM on the "pouring" and "safe placing" tasks. These two tasks require different functional affordances. The pouring action pours from a function object into an anchor object, and requires alignment of rims, collision avoidance of the handle and the container body, and body tilt. The safe-placing action places a sharp function object into an anchor object, and requires head containment for safety, tip touching bottom, and a body-body placement constraint. To validate our approach, we collect 4522 successful demonstrations for pouring and 2836 successful demonstrations for safe placing. To generate the demonstrations, we first source 13 categories of 3D objects from PartNet [54] and the subset of ShapeNetSem [55] objects categorized in the Acronym dataset [56]. We then extract aligned parts either from segmentations and category-level canonical poses in PartNet, or from manually labeled 3D keypoints for ShapeNetSem objects. We procedurally generate parameters of the actions from the aligned parts (as illustrated in Fig. 3), simulate the interactions by tracing the trajectories defined by the parameters, and render RGB-D images using multiple cameras set up in the simulator. Details of the dataset are presented in Appendix C.

5 Experiments

The subsequent section showcases the performance of CPM in comparison to baselines and other variants of our method in simulation. In particular, we evaluate in two important generalization settings: 1) generalization to novel object instances from seen object categories, and 2) generalization to object instances from unseen object categories. We then discuss the deployment of CPM trained in simulation on a real robot.

5.1 Experimental Setup

We evaluate all methods in the PyBullet physics simulator [57]. To isolate the problem of predicting target transformations T from other components of the system (e.g., grasp sampling and motion planning), we actuate the center of mass of the function object F. We report average task completion scores from 1500 trials, indicating failure (0) and success (100), with credit assigned for partial completion. The score is computed based on model-based classifiers designed for each task. To test generalization to novel objects from seen categories, we randomly split the data for each task M into 80% training and 20% testing. To test generalization to unseen object categories, we conduct a separate experiment for each target category of the function objects, where we withhold data involving the target category and train on the remaining data. Details of the evaluation are discussed in Appendix D. We present results with binary success as the metric in Appendix E.

Table 1: CPM demonstrates strong generalization to novel instances of objects within seen categories.
Model               | Pouring | Safe Placing
Transformer-BC      | 19.21   | 37.11
TAX-Pose            | 21.71   | 76.97
PC-DDPM             | 75.83   | 51.55
Part-Aware PC-DDPM  | 75.28   | 42.68
CPM (ours)          | 80.00   | 70.99

Table 2: CPM demonstrates strong generalization to function objects from unseen object categories.
                    | Pouring                       | Safe Placing
Model               | Bowl  | Glass | Mug   | Pan   | Fork  | Pen   | Scissors
Transformer-BC      | 10.23 | 20.91 | 6.06  | 32.04 | 26.15 | 31.93 | 26.44
TAX-Pose            | 23.32 | 3.82  | 8.64  | 46.14 | 50.90 | 67.60 | 36.80
PC-DDPM             | 63.02 | 75.95 | 71.39 | 64.39 | 40.60 | 46.63 | 32.34
Part-Aware PC-DDPM  | 58.98 | 72.11 | 67.11 | 66.17 | 39.76 | 48.04 | 28.15
CPM (ours)          | 79.32 | 81.44 | 77.57 | 62.13 | 55.94 | 59.45 | 63.35

5.2 Compared Methods

Baselines. We compare CPM with four main baselines. The first is Transformer-BC, which uses a multimodal transformer encoder-decoder from prior work [30] to condition on point clouds of the objects and autoregressively predict target transformations. The second baseline is based on TAX-Pose [37], which predicts relative poses between two objects from point-wise soft correspondences. The third is PC-DDPM; similar to recent work [40, 47], a conditional denoising diffusion probabilistic model [53] is trained to predict target transformations based on the input point clouds of both the function and the anchor objects. The fourth baseline is Part-Aware PC-DDPM, which takes in both the point clouds of the objects and per-point segmentation masks that indicate object parts.
We discuss the baseline implementations in detail in Appendix B.

CPM variants. We evaluate several variants of our model. The first is DDPM with a 6D rotation representation instead of SE(3). This variant of CPM learns different diffusion models for different parts; however, it does not compose pose predictions in different part frames. This model is directly adapted from existing composable diffusion models for image generation [48, 49]. The second is DDPM with training-time composition; this model jointly trains all primitive diffusion models by composing their noise predictions at training time. The last group is the individual primitive diffusion models, which use single DDPM models corresponding to different part-part correspondences, without any composition.

5.3 Simulation Results

Comparisons to baselines. We evaluate CPM's generalization capability in two settings. First, Table 1 shows a comparison of generalization to novel objects from seen categories. Overall, our model achieves strong performance on both the "pouring" and "safe placing" tasks. We note that TAX-Pose struggles with pouring, which requires modeling multimodal actions, because the method extracts a single relative pose estimate from a fixed set of correspondences. The autoregressive Transformer-BC is also not enough to capture the full distribution of the pouring action. We note that although Part-Aware PC-DDPM leverages the same part segmentation as CPM, it fails to achieve stronger performance compared to the PC-DDPM baseline, which only uses the object point clouds as input. We attribute this to its potential overfitting to the part segmentations within the training data. By contrast, CPM is able to effectively leverage part segmentations by learning primitive diffusion models and composing them at inference time. Our model shows substantial improvements in the "safe placing" task compared to other diffusion-based methods, largely due to each part constraint significantly restricting the target pose distribution in this task. For instance, the constraint that requires the tip of the function object to touch the bottom of the anchor object effectively constrains the target pose.

Table 3: We ablate the contributions of CPM on the ability to generalize to novel categories of objects.
Target Pose Rep    | Part Frames | Composition | Pouring | Safe Placing
6D Rot + 3D Trans  | No          | Inf-time    | 71.22   | 68.77
SE(3)              | Yes         | Train-time  | 69.89   | 48.46
SE(3)              | Yes         | Inf-time    | 75.11   | 59.58

Table 4: We explore the effect of composition, comparing to individual diffusion models, in generalization across both "pouring" and "safe placing" tasks. *We note that for the align and facing-up evaluation, a small percentage of examples were removed as they do not contain the involved parts in the partial object point clouds.
Pouring: ⟨align, rim, rim⟩ 70.05*; ⟨facing-up, handle, body⟩ 16.42*; ⟨tilt, body, body⟩ 68.69; CPM 75.11
Safe Placing: ⟨contain, head, body⟩ 41.22; ⟨touch, tip, bottom⟩ 9.34; ⟨place, body, body⟩ 39.86; CPM 59.58

Our second set of experiments assesses the model's capacity to generalize to unseen object categories, thereby highlighting the efficacy of part-based correspondences. Results can be found in Table 2. Remarkably, CPM demonstrates its capability to generalize across object categories for both tasks in a zero-shot manner. CPM's performance dips slightly for pans, as the rims of pans are significantly larger compared to the rims encountered during training (for example, those of bowls and mugs).
5.3 Simulation Results

Comparisons to baselines. We evaluate CPM's generalization capability in two settings. First, Table 1 shows a comparison of generalization to novel objects from seen categories. Overall, our model achieves strong performance on both the "pouring" and "safe placing" tasks. We note that TAX-Pose struggles with pouring, which requires modeling multimodal actions, because the method extracts a single relative pose estimate from a fixed set of correspondences. The autoregressive Transformer-BC is likewise unable to capture the full distribution of the pouring action. We also note that although Part-Aware PC-DDPM leverages the same part segmentation as CPM, it fails to achieve stronger performance than the PC-DDPM baseline, which only uses the object point clouds as input. We attribute this to its potential overfitting to the part segmentations within the training data. By contrast, CPM is able to effectively leverage part segmentations by learning primitive diffusion models and composing them at inference time. Our model shows substantial improvements in the "safe placing" task compared to other diffusion-based methods, largely because each part constraint significantly restricts the target pose distribution in this task. For instance, the constraint that requires the tip of the function object to touch the bottom of the anchor object effectively constrains the target pose.

Our second set of experiments assesses the model's capacity to generalize to unseen object categories, thereby highlighting the efficacy of part-based correspondences. Results can be found in Table 2. Remarkably, CPM demonstrates its capability to generalize across object categories for both tasks in a zero-shot manner. CPM's performance dips slightly for pans, as the rims of pans are significantly larger than the rims encountered during training (for example, those of bowls and mugs). As a comparison, all baselines fall short in consistently generalizing to new categories for both tasks. TAX-Pose is not able to maintain strong performance for safe placing when generalizing to more geometrically complicated objects, including scissors and forks. Our method is robust to changes in local geometry and overall topology by leveraging compositions of part-based correspondences.

Ablation. First, we assess the significance of our SE(3) encoding, part frame-based transformation, and inference-time composition within the context of generalizing to unseen categories of objects. As depicted in Table 3, our full CPM with part frames and inference-time composition shows superior performance compared to the model trained with training-time composition. This verifies the importance of our designs to support part-based composition and generalization. Compared to the variant based on the 6D Rotation + 3D Translation encoding, CPM yields better performance on the pouring task, a scenario where the rotation of the function object plays a pivotal role. On the safe placing task, which involves less rotation of objects, the variant performs more comparably to our model. These results highlight the importance of the SE(3) diffusion model in rotation prediction.

Table 3: We ablate the contributions of CPM on the ability to generalize to novel categories of objects.

Target Pose Rep      Part Frames   Composition   Pouring   Safe Placing
6D Rot + 3D Trans    No            Inf-time      71.22     68.77
SE(3)                Yes           Train-time    69.89     48.46
SE(3)                Yes           Inf-time      75.11     59.58

Second, we compare the performance of the composed part-based diffusion models with the performance of the primitive diffusion models. As shown in Table 4, the composed model outperforms the individual diffusion models, showing the efficacy of our composition paradigm. In addition, these results show the importance of different part-based constraints for the given tasks. In the "pouring" task, align and tilt strongly constrain the target pose of the function object, while for the "safe placing" task, the contain and place constraints are more salient. Fig. 4 provides a qualitative visualization by showcasing the part-conditioned distribution associated with each individual diffusion model for various constraints, as well as the corresponding composed distribution. The quantitative performance of the contain and place primitive models on these tasks aligns with this qualitative comparison, as they have learned distributions that are close to the composed model. The CPM paradigm allows us to train each primitive diffusion model independently, encouraging each model to concentrate on distinct functional affordances, thus enabling them to learn and generalize to diverse distributions of samples. During inference, the composition of distributions learned by individual models enables CPM to find solutions that satisfy all correspondence constraints.

Table 4: We explore the effect of composition, comparing to individual diffusion models, in generalization across both "pouring" and "safe placing" tasks. *We note that for the align and facing-up evaluation, a small percentage of examples were removed as they do not contain the involved parts in the partial object point clouds.

Pouring
⟨align, rim, rim⟩            70.05*
⟨facing-up, handle, body⟩    16.42*
⟨tilt, body, body⟩           68.69
CPM                          75.11

Safe Placing
⟨contain, head, body⟩        41.22
⟨touch, tip, bottom⟩         9.34
⟨place, body, body⟩          39.86
CPM                          59.58

Figure 4: We illustrate the learned distribution of each primitive diffusion model, which generates diverse samples conforming to the specified constraints (panels: ⟨contain, head, body⟩ and ⟨touch, tip, bottom⟩), as well as the distribution from the combined full CPM model. The highest-ranked sample is highlighted.

5.4 Real-World Transfer

Finally, we show a real-world robot manipulation experiment for the "pouring" task, highlighting the transferability of CPM to real-world manipulation. In this setting, we use the primitive diffusion models trained on simulation data with function objects of glasses, pans, and bowls, and zero-shot transfer to mugs in the real-world experiment. Our setup includes a Franka Emika robot mounted in a tabletop environment. To conduct pouring, we perform plane segmentation and k-means clustering to extract object point clouds from the scene point cloud captured by two calibrated Azure Kinect RGB-D cameras. Next, we apply a pre-trained point transformer (PT) model [58] for part segmentation; the segmentation model is trained on simulation data only. We then apply CPM trained in simulation for the pouring task. To execute the trajectory, we use Contact-GraspNet [59] to sample robot grasps on the function object, and an operational space controller with impedance from Deoxys [60] to follow a sequence of end-effector pose waypoints computed from the target transformations. Figure 5 shows our real-world setup and example trajectories predicted by CPM on unseen mugs with different shapes and sizes.

Figure 5: We show sampled frames from trajectories of CPM's policy. The model is trained only on demonstrations with pans, bowls, and wine glasses in simulation and generalizes to mugs in the real world.
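The plane-segmentation and clustering step can be sketched with off-the-shelf libraries; the snippet below uses Open3D's RANSAC plane fitting and scikit-learn's k-means. The thresholds and the cluster count are illustrative placeholders, not the tuned values of our system.

```python
import numpy as np
import open3d as o3d
from sklearn.cluster import KMeans

def extract_object_clouds(scene_points, n_objects=2):
    """Illustrative tabletop segmentation: remove the dominant plane with
    RANSAC, then split the remaining points into per-object clusters."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(scene_points)
    # Fit the table plane; the threshold here is a placeholder value.
    _, inliers = pcd.segment_plane(distance_threshold=0.01,
                                   ransac_n=3, num_iterations=1000)
    objects = pcd.select_by_index(inliers, invert=True)
    pts = np.asarray(objects.points)
    # Cluster the remaining points into the objects (anchor and function).
    labels = KMeans(n_clusters=n_objects, n_init=10).fit_predict(pts)
    return [pts[labels == i] for i in range(n_objects)]
```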
6 Limitations and Conclusion

We introduced composable part-based manipulation (CPM), an approach that leverages object-part decomposition and part-part correspondences for robotic manipulation. We show that representing actions as combinations of constraints between object parts enables strong generalization. Through the composition of primitive diffusion models, we gain generalization capabilities across novel instances of objects as well as unseen object categories, in simulation and in real-world robot experiments.

In this paper, we focus on manipulation tasks involving two objects. Extending CPM to learn skills involving more objects would be important future work, in particular for manipulating piles or stacks of objects. Second, we parameterize each manipulation action by its starting and ending poses. Extending the transformer-based diffusion model to output more waypoints that parameterize longer trajectories is important for potentially a wider range of tasks. In addition, CPM does not model temporal constraints over the trajectory. One possible extension is to learn trajectory samplers for temporal constraints and trajectories with loops. CPM also assumes external part segmentations. Although many categories can be segmented by off-the-shelf computer vision models [26], extending the system to jointly learn or finetune part segmentation is important. Finally, composing a larger number of diffusion models may require more efficient sampling techniques such as [61]. We provide an extended discussion of CPM's assumptions in Appendix F and suggest directions for future research.

Acknowledgments

We extend our gratitude to the members of the NVIDIA Seattle Robotics Lab, the RAIL research lab at Georgia Tech, and the Stanford Vision and Learning Lab for insightful discussions. This work is in part supported by NSF grants 2214177 and 2211258, AFOSR grants FA9550-22-1-0249 and FA9550-23-1-0127, ONR MURI grant N00014-22-1-2740, the Stanford Institute for Human-Centered Artificial Intelligence (HAI), the MIT-IBM Watson Lab, the MIT Quest for Intelligence, the Center for Brains, Minds, and Machines (CBMM, funded by NSF STC award CCF-1231216), and Analog Devices, JPMC, and Salesforce. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of our sponsors.

References

[1] C. Paxton, C. Xie, T. Hermans, and D. Fox. Predicting Stable Configurations for Semantic Placement of Novel Objects. In CoRL, 2021.
[2] W. Liu, Y. Du, T. Hermans, S. Chernova, and C. Paxton. StructDiffusion: Language-Guided Creation of Physically-Valid Structures using Unseen Objects. In RSS, 2023.
[3] Y. Huang, A. Conkey, and T. Hermans. Planning for Multi-Object Manipulation with Graph Neural Network Relational Classifiers. In ICRA, 2023.
[4] C. R. Garrett, R. Chitnis, R. Holladay, B. Kim, T. Silver, L. P. Kaelbling, and T. Lozano-Pérez. Integrated Task and Motion Planning. Annual Review of Control, Robotics, and Autonomous Systems, 4:265–293, 2021.
[5] K. Mo, L. J. Guibas, M. Mukadam, A. Gupta, and S. Tulsiani. Where2Act: From Pixels to Actions for Articulated 3D Objects. In CVPR, 2021.
[6] Z. Xu, Z. He, and S. Song. UMPNet: Universal Manipulation Policy Network for Articulated Objects. RA-L, 2022.
[7] R. Wu, Y. Zhao, K. Mo, Z. Guo, Y. Wang, T. Wu, Q. Fan, X. Chen, L. Guibas, and H. Dong. VAT-Mart: Learning Visual Action Trajectory Proposals for Manipulating 3D ARTiculated Objects. In ICLR, 2021.
[8] H. Geng, H. Xu, C. Zhao, C. Xu, L. Yi, S. Huang, and H. Wang. GAPartNet: Cross-Category Domain-Generalizable Object Perception and Manipulation via Generalizable and Actionable Parts. In CVPR, 2023.
[9] J. Aleotti and S. Caselli. Manipulation Planning of Similar Objects by Part Correspondence. In ICRA, 2011.
[10] N. Vahrenkamp, L. Westkamp, N. Yamanobe, E. E. Aksoy, and T. Asfour. Part-based Grasp Planning for Familiar Objects. In IEEE-RAS International Conference on Humanoid Robots (Humanoids), 2016.
[11] P. Parashar, J. Vakil, S. Powers, and C. Paxton. Spatial-Language Attention Policies for Efficient Robot Learning. arXiv:2304.11235, 2023.
[12] O. Mees, J. Borja-Diaz, and W. Burgard. Grounding Language with Visual Affordances over Unstructured Data. In ICRA, 2023.
[13] E. Valassakis, G. Papagiannis, N. Di Palo, and E. Johns. Demonstrate Once, Imitate Immediately (DOME): Learning Visual Servoing for One-Shot Imitation Learning. In IROS, 2022.
[14] A. Myers, C. L. Teo, C. Fermüller, and Y. Aloimonos. Affordance Detection of Tool Parts from Geometric Features. In ICRA, 2015.
[15] T.-T. Do, A. Nguyen, and I. Reid. AffordanceNet: An End-to-End Deep Learning Approach for Object Affordance Detection. In ICRA, 2018.
[16] S. Deng, X. Xu, C. Wu, K. Chen, and K. Jia. 3D AffordanceNet: A Benchmark for Visual Object Affordance Understanding. In CVPR, 2021.
[17] W. Liu, A. Daruna, and S. Chernova. CAGE: Context-Aware Grasping Engine. In ICRA, 2020.
[18] P. Ardón, È. Pairet, R. P. Petrick, S. Ramamoorthy, and K. S. Lohan. Learning Grasp Affordance Reasoning through Semantic Relations. In IROS, 2019.
[19] L. Manuelli, W. Gao, P. Florence, and R. Tedrake. kPAM: KeyPoint Affordances for Category-Level Robotic Manipulation. In ISRR, 2022.
[20] Z. Qin, K. Fang, Y. Zhu, L. Fei-Fei, and S. Savarese. KETO: Learning Keypoint Representations for Tool Manipulation. In ICRA, 2020.
[21] D. Turpin, L. Wang, S. Tsogkas, S. Dickinson, and A. Garg. GIFT: Generalizable Interaction-aware Functional Tool Affordances without Labels. In RSS, 2021.
[22] L. Manuelli, Y. Li, P. Florence, and R. Tedrake. Keypoints into the Future: Self-Supervised Correspondence in Model-Based Reinforcement Learning. In CoRL, 2020.
[23] A. Simeonov, Y. Du, A. Tagliasacchi, J. B. Tenenbaum, A. Rodriguez, P. Agrawal, and V. Sitzmann. Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation. In ICRA, 2022.
[24] E. Chun, Y. Du, A. Simeonov, T. Lozano-Perez, and L. Kaelbling. Local Neural Descriptor Fields: Locally Conditioned Object Representations for Manipulation. In ICRA, 2023.
[25] J.-S. Ha, D. Driess, and M. Toussaint. Deep Visual Constraints: Neural Implicit Models for Manipulation Planning from Visual Input. RA-L, 7(4):10857–10864, 2022.
[26] M. Liu, Y. Zhu, H. Cai, S. Han, Z. Ling, F. Porikli, and H. Su. PartSLIP: Low-Shot Part Segmentation for 3D Point Clouds via Pretrained Image-Language Models. In CVPR, 2023.
[27] D. Hadjivelichkov, S. Zwane, L. Agapito, M. P. Deisenroth, and D. Kanoulas. One-Shot Transfer of Affordance Regions? AffCorrs! In CoRL, 2022.
[28] W. Goodwin, I. Havoutis, and I. Posner. You Only Look at One: Category-Level Object Representations for Pose Estimation From a Single Example. In CoRL, 2022.
[29] W. Yuan, C. Paxton, K. Desingh, and D. Fox. SORNet: Spatial Object-Centric Representations for Sequential Manipulation. In CoRL, 2021.
[30] W. Liu, C. Paxton, T. Hermans, and D. Fox. StructFormer: Learning Spatial Structure for Language-Guided Semantic Rearrangement of Novel Objects. In ICRA, 2022.
[31] M. Shridhar, L. Manuelli, and D. Fox. CLIPort: What and Where Pathways for Robotic Manipulation. In CoRL, 2021.
[32] T. Silver, R. Chitnis, J. Tenenbaum, L. P. Kaelbling, and T. Lozano-Pérez. Learning Symbolic Operators for Task and Motion Planning. In IROS, 2021.
[33] K. Kase, C. Paxton, H. Mazhar, T. Ogata, and D. Fox. Transferable Task Execution from Pixels through Deep Planning Domain Learning. In ICRA, 2020.
[34] K. Mo, Y. Qin, F. Xiang, H. Su, and L. Guibas. O2O-Afford: Annotation-Free Large-Scale Object-Object Affordance Learning. In CoRL, 2022.
[35] J. Liang and A. Boularias. Learning Category-Level Manipulation Tasks from Point Clouds with Dynamic Graph CNNs. In ICRA, 2023.
[36] D. Driess, J.-S. Ha, M. Toussaint, and R. Tedrake. Learning Models as Functionals of Signed-Distance Fields for Manipulation Planning. In CoRL, 2022.
[37] C. Pan, B. Okorn, H. Zhang, B. Eisner, and D. Held. TAX-Pose: Task-Specific Cross-Pose Estimation for Robot Manipulation. In CoRL, 2022.
[38] D. Seita, Y. Wang, S. J. Shetty, E. Y. Li, Z. Erickson, and D. Held. ToolFlowNet: Robotic Manipulation with Tools via Predicting Tool Flow from Point Clouds. In CoRL, 2022.
[39] M. Janner, Y. Du, J. Tenenbaum, and S. Levine. Planning with Diffusion for Flexible Behavior Synthesis. In ICML, 2022.
[40] J. Urain, N. Funk, J. Peters, and G. Chalvatzaki. SE(3)-DiffusionFields: Learning Smooth Cost Functions for Joint Grasp and Motion Optimization through Diffusion. In ICRA, 2023.
[41] S. Huang, Z. Wang, P. Li, B. Jia, T. Liu, Y. Zhu, W. Liang, and S.-C. Zhu. Diffusion-Based Generation, Optimization, and Planning in 3D Scenes. In CVPR, 2023.
[42] I. Kapelyukh, V. Vosylius, and E. Johns. DALL-E-Bot: Introducing Web-Scale Diffusion Models to Robotics. RA-L, 2023.
[43] A. Ajay, Y. Du, A. Gupta, J. Tenenbaum, T. Jaakkola, and P. Agrawal. Is Conditional Generative Modeling all you need for Decision-Making? In ICLR, 2023.
[44] T. Yu, T. Xiao, A. Stone, J. Tompson, A. Brohan, S. Wang, J. Singh, C. Tan, J. Peralta, B. Ichter, et al. Scaling Robot Learning with Semantically Imagined Experience. arXiv:2302.11550, 2023.
[45] U. A. Mishra and Y. Chen. ReorientDiff: Diffusion Model based Reorientation for Object Manipulation. arXiv:2303.12700, 2023.
[46] C. Higuera, B. Boots, and M. Mukadam. Learning to Read Braille: Bridging the Tactile Reality Gap with Diffusion Models. arXiv:2304.01182, 2023.
[47] C. Chi, S. Feng, Y. Du, Z. Xu, E. Cousineau, B. Burchfiel, and S. Song. Diffusion Policy: Visuomotor Policy Learning via Action Diffusion. In RSS, 2023.
[48] N. Liu, S. Li, Y. Du, A. Torralba, and J. B. Tenenbaum. Compositional Visual Generation with Composable Diffusion Models. In ECCV, 2022.
[49] Y. Du, C. Durkan, R. Strudel, J. B. Tenenbaum, S. Dieleman, R. Fergus, J. Sohl-Dickstein, A. Doucet, and W. Grathwohl. Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC. In ICML, 2023.
[50] Z. Huang, K. C. Chan, Y. Jiang, and Z. Liu. Collaborative Diffusion for Multi-Modal Face Generation and Editing. In CVPR, 2023.
[51] R. Xu, F.-J. Chu, C. Tang, W. Liu, and P. A. Vela. An Affordance Keypoint Detection Network for Robot Manipulation. RA-L, 6(2):2870–2877, 2021.
[52] M.-H. Guo, J.-X. Cai, Z.-N. Liu, T.-J. Mu, R. R. Martin, and S.-M. Hu. PCT: Point Cloud Transformer. Computational Visual Media, 2021.
[53] J. Ho, A. Jain, and P. Abbeel. Denoising Diffusion Probabilistic Models. In NeurIPS, 2020.
[54] K. Mo, S. Zhu, A. X. Chang, L. Yi, S. Tripathi, L. J. Guibas, and H. Su. PartNet: A Large-scale Benchmark for Fine-grained and Hierarchical Part-level 3D Object Understanding. In CVPR, 2019.
[55] M. Savva, A. X. Chang, and P. Hanrahan. Semantically-Enriched 3D Models for Common-sense Knowledge. In CVPRW, 2015.
[56] C. Eppner, A. Mousavian, and D. Fox. ACRONYM: A Large-Scale Grasp Dataset Based on Simulation. In ICRA, 2021.
[57] E. Coumans and Y. Bai. PyBullet, A Python Module for Physics Simulation in Robotics, Games and Machine Learning, 2017.
[58] H. Zhao, L. Jiang, J. Jia, P. H. Torr, and V. Koltun. Point Transformer. In ICCV, 2021.
[59] M. Sundermeyer, A. Mousavian, R. Triebel, and D. Fox. Contact-GraspNet: Efficient 6-DoF Grasp Generation in Cluttered Scenes. In ICRA, 2021.
[60] Y. Zhu, A. Joshi, P. Stone, and Y. Zhu. VIOLA: Imitation Learning for Vision-Based Manipulation with Object Proposal Priors. In CoRL, 2022.
[61] Q. Zhang and Y. Chen. Fast Sampling of Diffusion Models with Exponential Integrator. In ICLR, 2022.

A Network Architecture

For each functional correspondence ⟨Si, P_A,j, P_F,k⟩, we aim to learn a generative distribution p_θ,Si(T_Pjk | X_PA,j, X_PF,k). Here we discuss the network architecture of the primitive diffusion model ε_θ,Si that learns to estimate this generative distribution. We leverage modality-specific encoders to convert the multimodal inputs to latent tokens that are later processed by a transformer network.

Object encoder. Given part point clouds X_PA,j and X_PF,k, we use a learned encoder h_p to encode each part separately as h_p(X_PA,j) and h_p(X_PF,k). This encoder is built on the Point Cloud Transformer (PCT) [52].

Diffusion encodings. Since the goal transformations T_Pjk = {T_Pjk,n}_{n=1..N} are iteratively refined by the diffusion model and need to be fed back to the model during inference, we use an MLP to encode each goal transformation separately as h_T(T_Pjk,n). To compute the time-dependent Gaussian posterior for reverse diffusion, we obtain a latent code for t using a sinusoidal embedding h_time(t).

Positional encoding. We use a learned position embedding h_pos(l) to indicate the position index l of the part point clouds and poses in the input sequences to the subsequent transformer.
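A minimal PyTorch sketch of these encoders follows, using the output dimensions from Table A1 (h_time dim 40, h_pos dim 16, h_T out dim 200); the hidden width of the pose MLP and the 6-D tangent-space pose input are assumptions.

```python
import math
import torch
import torch.nn as nn

class SinusoidalTimeEmbedding(nn.Module):
    """h_time(t): fixed sinusoidal features of the diffusion step t."""
    def __init__(self, dim=40):                # dim 40 per Table A1
        super().__init__()
        self.dim = dim

    def forward(self, t):                      # t: (B,) integer diffusion steps
        half = self.dim // 2
        freqs = torch.exp(-math.log(10000.0) *
                          torch.arange(half, dtype=torch.float32) / half)
        angles = t.float()[:, None] * freqs[None, :]
        return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

# h_T: an MLP encoding one (noisy) goal transformation, assumed here to be a
# 6-D tangent-space vector; output dim 200 per Table A1.
pose_encoder = nn.Sequential(nn.Linear(6, 128), nn.ReLU(), nn.Linear(128, 200))

# h_pos: learned embedding of a token's position index; dim 16 per Table A1.
position_embedding = nn.Embedding(8, 16)       # up to 8 sequence positions
```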
Diffusion Transformer. The diffusion model predicts the goal poses T_Pjk^(0) starting from the last time step of the reverse diffusion process, T_Pjk^(T) ~ N(0, I), which is sampled from a multivariate normal distribution with independent components. We use a transformer encoder as the backbone for the diffusion model ε_θ,Si({T_Pjk,n^(t)}_{n=1..N} | X_PA,j, X_PF,k, t), which predicts the time-dependent noise {ε_1^(t), ..., ε_N^(t)}. We obtain the transformer inputs for the parts χ and the target poses τ as

χ_A^(t) = [h_p(X_PA,j); h_pos(0); h_time(t)]
χ_F^(t) = [h_p(X_PF,k); h_pos(1); h_time(t)]
τ_n^(t) = [h_T(T_Pjk,n^(t)); h_pos(n+1); h_time(t)], for n = 1, ..., N,

where [·; ·] denotes concatenation along the feature dimension. The model takes in the sequence {χ_A^(t), χ_F^(t), τ_1^(t), ..., τ_N^(t)} and predicts {ε_1^(t), ..., ε_N^(t)} for the object poses. A sketch of this token construction is given at the end of this appendix.

Parameters. We provide network and training parameters in Table A1.

Table A1: Model Parameters

Parameter                              Value
Number of P_A,j and P_F,k points       512
PCT point cloud encoder h_p out dim    200
Position embedding h_pos               learned embedding
Position embedding h_pos dim           16
Time embedding h_time                  sinusoidal
Time embedding h_time dim              40
Pose encoder h_T out dim               200
Transformer number of layers           4
Transformer number of heads            4
Transformer hidden dim                 128
Transformer dropout                    0.0
Diffusion steps T                      200
Diffusion noise schedule β_t           linear
Start value β_0                        0.0001
End value β_T                          0.02
Loss                                   Huber
Epochs                                 2000
Optimizer                              Adam
Learning rate                          1e-4
Gradient clip value                    1.0
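Putting the pieces together, the token sequence {χ_A, χ_F, τ_1, ..., τ_N} can be assembled as in the sketch below. The encoder modules are the ones sketched above; the exact position-indexing convention (parts at indices 0 and 1, poses following them) and the omitted projection to the transformer width are assumptions.

```python
import torch

def build_tokens(pc_feat_A, pc_feat_F, pose_feats, h_pos, h_time, t):
    """Assemble the transformer input sequence {chi_A, chi_F, tau_1..tau_N}.

    pc_feat_A, pc_feat_F : (B, 200) part point-cloud features from h_p.
    pose_feats           : (B, N, 200) encoded noisy poses from h_T.
    h_pos, h_time        : the position and time embedding modules.
    """
    B, N, _ = pose_feats.shape
    time_tok = h_time(t)                                   # (B, 40)

    def tok(feat, idx):                                    # [feat; pos; time]
        pos = h_pos(torch.full((B,), idx, dtype=torch.long))
        return torch.cat([feat, pos, time_tok], dim=-1)    # (B, 200+16+40)

    chi_A = tok(pc_feat_A, 0)
    chi_F = tok(pc_feat_F, 1)
    taus = [tok(pose_feats[:, n], n + 2) for n in range(N)]
    # (B, N+2, 256); a linear projection to the transformer width is omitted.
    return torch.stack([chi_A, chi_F] + taus, dim=1)
```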
B Implementation Details for Baselines

We discuss the implementation of each baseline below:

• Transformer-BC: this baseline takes point clouds X_A and X_F as input, and predicts target transformations T_AF. It uses a multimodal transformer encoder-decoder from prior work [30] to condition on point clouds of the objects and autoregressively predict target transformations. The point clouds are first individually encoded with a point cloud transformer [52]. The point cloud embeddings are fed to the transformer encoder, and the transformer decoder autoregressively decodes the target poses {T_AF,1, ..., T_AF,N}.

• TAX-Pose: this baseline takes point clouds X_A and X_F as input, and predicts target transformations T_AF. We use the code and hyperparameters from the official repository (code from https://github.com/r-pad/taxpose). We use the variant that does not require pretrained object embeddings because we use different objects from the paper; as discussed in Appendix F.1.2 of the original paper, pretraining mainly helps to reduce training time. Because the TAX-Pose model only predicts one relative pose for each pair of point clouds, we learn a separate model for each transformation in T_AF. Specifically, one TAX-Pose model is trained to predict the start pose and another is trained to predict the end pose.

• PC-DDPM: this baseline takes point clouds X_A and X_F as input, and predicts target transformations T_AF. Similar to recent work [40, 47], a conditional denoising diffusion probabilistic model [53] is trained to predict target transformations based on input point clouds of both the function and the anchor objects. This model has the same architecture, including encoders, latent embeddings, and the diffusion transformer, as the primitive diffusion models discussed in Appendix A.

• Part-Aware PC-DDPM: this baseline takes point clouds X_A ∈ R^{N_X×3} and X_F ∈ R^{N_X×3} and two segmentation masks I_A ∈ R^{N_X×N_I} and I_F ∈ R^{N_X×N_I} as input, and predicts target transformations T_AF. N_X is the number of points in each point cloud and N_I is the number of known object parts. Each channel of a segmentation mask is a binary mask indicating the points of a specific part, so each mask encodes all parts that can be extracted from an object point cloud. For the simulation experiments, the segmentation masks come from ground-truth part segmentation. While CPM uses the segmentation masks to extract part point clouds, this baseline directly encodes the segmentation mask together with the object point cloud. It shares most of its network architecture with PC-DDPM, except that the point cloud encoder now encodes [X_A; I_A] ∈ R^{N_X×(3+N_I)} and [X_F; I_F] ∈ R^{N_X×(3+N_I)}.

C Dataset Details

In total, we collected 4522 successful demonstrations for pouring and 2836 successful demonstrations for safe placing. For each experiment, we use a subset of these demonstrations for training the models, and the remaining data for initializing the simulation. We provide a breakdown of the dataset in Table A2. Because the expert policies do not have a 100% success rate, the models are trained only on the successful demonstrations. Below we discuss our data collection process in detail.

Table A2: Simulation and Demonstration Data

Task          Object       Source    Number of Simulations   Number of Success Demonstrations
Safe Placing  Pen          PartNet   1000                    568
Safe Placing  Fork         PartNet   1000                    390
Safe Placing  ScrewDriver  PartNet   1000                    145
Safe Placing  Spoon        PartNet   1000                    410
Safe Placing  Knife        Acronym   1000                    496
Safe Placing  Scissors     PartNet   1000                    354
Safe Placing  Flashlight   PartNet   1000                    141
Safe Placing  CanOpener    PartNet   1000                    101
Safe Placing  Marker       PartNet   1000                    231
Pouring       Mug          PartNet   2000                    1051
Pouring       WineGlass    PartNet   2000                    1542
Pouring       Bowl         Acronym   2000                    776
Pouring       Pan          PartNet   2000                    1153

Sourcing 3D objects. We source a wide variety of 3D objects from PartNet [54] and the subset of ShapeNetSem [55] objects categorized in the Acronym dataset [56]. We use 13 object categories to investigate generalization, including mug, pan, bowl, wine glass, knife, can opener, scissors, screwdriver, fork, spoon, marker, pen, and flashlight. Some object categories are reused across tasks; for example, mug is used as an anchor for safe placing but also as a function object for pouring.

Extracting aligned parts. Our generative diffusion models use part segmentations of objects to learn primitive diffusion models. For 3D objects from PartNet, we use the segmentations provided in the dataset. For 3D objects from ShapeNetSem, we first label 3D keypoints, and then procedurally extract parts from the labeled keypoints. As ShapeNet provides canonical poses for 3D models, we can also align the extracted functional parts for each object category.

Simulating trajectories and rendering. We simulate the robot-object interactions by tracing the trajectories defined by the parameters. We first use multiple cameras to render RGB-D images, which yield realistic object point clouds. We then map the functional parts to the point clouds with the correct transformation and scaling. Finally, we obtain point cloud segments of each affordance part. Because these parts are extracted from the rendered point clouds, they can be incomplete, which increases the robustness of our method and helps transferability to real-world settings.

D Evaluation Details

In Section 5, we report task completion scores. For each experiment, we randomly draw 100 samples from the withheld testing data to initialize simulation for evaluation. This procedure ensures that the action can be successfully performed for the pair of anchor and function objects. To systematically evaluate multimodal actions (e.g., pouring from different directions), we sample from each model 5 times and simulate the predicted actions. We repeat each experiment with 3 different random seeds, resulting in a total of 1500 trials.
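Concretely, the trial budget decomposes as 100 test initializations × 5 samples per model × 3 seeds = 1500 trials. A schematic of this loop follows; all object and function names here are hypothetical.

```python
def evaluate(model, test_cases, simulate, seeds=(0, 1, 2), samples_per_case=5):
    """Schematic evaluation: 100 cases x 5 samples x 3 seeds = 1500 trials."""
    scores = []
    for seed in seeds:
        model.set_seed(seed)                      # hypothetical seeding hook
        for case in test_cases:                   # 100 withheld initializations
            for _ in range(samples_per_case):     # 5 draws per test case
                T = model.sample(case.X_A, case.X_F)
                scores.append(simulate(case, T))  # task score in [0, 100]
    return sum(scores) / len(scores)
```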
The task score indicates task completion between failure (0) and success (100), with credit assigned for partial completion. The score is computed by model-based classifiers designed for each task, as described below:

• Pouring: we first use PyBullet's collision test to check whether the function object and anchor object would ever interpenetrate during the execution of the action by rigidly transforming the function object to the predicted poses. If the objects interpenetrate, we assign a score of zero because the action cannot be realistically executed. We then simulate the pouring action, and use the percentage of particles successfully transferred from the function object to the anchor object as the partial score.

• Safe Placing: similar to pouring, we check interpenetration for the start pose of the placement action. If the objects interpenetrate, we assign a score of zero. We then simulate the placement action until contact between the anchor and function objects. If the orientation of the function object is incorrect (e.g., the blade of the knife is outside of the container), we assign a score of zero. If the orientation is correct, the percentage of the trajectory parameterized by the predicted transformations that is successfully executed is used as the partial score.

E Additional Results

Besides reporting the task completion scores, we include additional task success rates in Table A3 and Table A4. For pouring, a trial is considered successful if there is no interpenetration between objects and 70% of the particles are successfully transferred. For safe placing, a successful trial requires no interpenetration at the predicted start pose of the function object, correct orientation of the function object, and 70% of the predicted trajectory being successfully executed without collision between objects. We observe similar trends as in the results presented in Section 5.

Table A3: CPM shows strong generalization to novel instances of objects within seen categories.

Model                Pouring        Safe Placing
Transformer-BC       17.53 ± 3.13   33.27 ± 2.14
TAX-Pose             21.33 ± 0.58   74.00 ± 1.00
PC-DDPM              70.67 ± 1.27   48.73 ± 2.97
Part-Aware PC-DDPM   73.60 ± 2.60   36.53 ± 2.20
CPM (ours)           76.87 ± 1.70   68.87 ± 2.25

Table A4: CPM demonstrates strong generalization to function objects from unseen object categories.

                     Pouring                                                      Safe Placing
Model                Bowl           Glass          Mug            Pan             Fork           Pen            Scissors
Transformer-BC       10.00 ± 2.51   19.20 ± 0.92   5.80 ± 1.93    29.33 ± 2.04    24.00 ± 1.51   27.33 ± 2.34   18.47 ± 2.93
TAX-Pose             21.00 ± 1.00   3.00 ± 1.00    8.00 ± 1.00    42.67 ± 2.08    47.67 ± 3.21   62.67 ± 4.04   33.33 ± 1.15
PC-DDPM              56.53 ± 2.00   70.67 ± 3.06   68.67 ± 4.31   59.93 ± 2.80    38.00 ± 3.83   43.47 ± 1.68   28.47 ± 0.83
Part-Aware PC-DDPM   54.87 ± 2.10   68.33 ± 2.97   65.20 ± 4.61   62.00 ± 3.56    28.67 ± 1.68   42.40 ± 3.12   17.67 ± 2.70
CPM (ours)           76.40 ± 1.78   78.93 ± 3.14   76.00 ± 5.26   54.67 ± 1.50    53.93 ± 2.91   56.53 ± 2.04   62.07 ± 1.72
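As a concrete illustration, the pouring score of Appendix D and the binary success criterion above reduce to simple functions of the interpenetration check and the fraction of particles transferred; a simplified sketch:

```python
def pouring_score(interpenetrates, frac_particles_transferred):
    """Task score per Appendix D: zero on interpenetration, else partial credit."""
    if interpenetrates:
        return 0.0
    return 100.0 * frac_particles_transferred

def pouring_success(interpenetrates, frac_particles_transferred, threshold=0.7):
    """Binary success per Appendix E: at least 70% of particles transferred."""
    return (not interpenetrates) and frac_particles_transferred >= threshold
```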
F Assumptions

During training, our method assumes 1) a description of the manipulation skill as a set of part correspondences, 2) access to a dataset of successful trajectories, and 3) access to part segmentations for objects in the dataset. During testing, our method assumes part segmentations for the objects being manipulated. We contend that these assumptions align with our current focus; nonetheless, subsequent research should aim to address them.

First, the description of each manipulation skill is symbolic text; e.g., pouring from mugs to bowls contains three constraints. These descriptions can be easily annotated by humans, as there is no need to specify any continuous parameters or mathematical formulas. An interesting future direction is to leverage large language models to more efficiently extract constraints. CPM then learns the grounding of these constraints from data.

Second, we assume access to successful manipulation trajectories. That is, we do not assume any additional annotations, such as programs for generating these trajectories. A key focus of the paper is to improve the data efficiency of learning such skills, in particular for generalization across categories. An important future direction is to improve the data efficiency of this method further and to learn from noisy human demonstrations.

Finally, relying on external part segmentation is limiting, but 2D and 3D part segmentation models are generally available for many object categories [15, 16, 26]. An exciting future direction is to extend the current framework to automatically discover functional part segmentations by leveraging manipulation data.
KL7LOtlRuX
Strengths
- The authors show a real-world demonstration with impressive successful sim-to-real transfer, including generalization to unseen mugs.
- The authors' proposed method of using part-part correspondences as the basis for modelling manipulation actions, and the integration of this with diffusion models, is very interesting and novel. I believe this paper is very relevant to the CoRL community.

Weaknesses
The main weakness of the paper is that in places it is unclear and missing key details. In addition, I believe there is room for improvement in the baselines used for comparison.
- On line 135, and in the equation below line 144, the authors state that each individual model takes as input two object point-clouds, but in the equation below line 160, these are now partial point-clouds. From the rest of the text, particularly the discussions of the baselines, I am inclined to think that the authors use partial point-clouds of the parts as input, but it is unclear.
- In the section 'Inference-time combination' the authors discuss performing the diffusion in SE3. This is further ablated in the experiments with the model with a 6D rotation parameterization. However, performing diffusion on SE3 is not a contribution of this paper, since that has been proposed in [40]. The section 'Inference-time combination' should be rewritten to make clear that this is from prior work and not part of the authors' contribution.
- I found the description of the baselines and their relationship to CPM very unclear. In particular, for both PC-DDPM and part-aware-PC-DDPM, it is very unclear to me what their input/output is. Is there a model per correspondence as with CPM, or a model per task? If neither, how are the correspondences/tasks encoded? This becomes especially important for part-aware-PC-DDPM because from the text I do not understand what the difference between part-aware-PC-DDPM and CPM is.
- I cannot find anywhere in the text a definition of the performance metric. Is it a success rate? Over how many evaluations? The authors' proposed approach shows improved performance, including on unseen tasks, according to this metric, but I cannot list this as a strength of the paper until I know what this performance metric means.
- The authors' baselines are only other diffusion models, and all of the baselines are developed by the authors. I think they would be much more convincing if they were contrasted with another method from the literature, perhaps something from the offline RL literature such as CQL.
- One key limitation that is not discussed in the limitations section is the quite restrictive set of tasks that this method can be applied to, namely two-object interactions with only one object actively manipulated.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Composable Part-Based Manipulation ### Paper Abstract In this paper, we propose composable part-based manipulation (CPM), a novel approach that leverages object-part decomposition and part-part correspondences to improve learning and generalization of robotic manipulation skills. By considering the functional correspondences between object parts, we conceptualize functional actions, such as pouring and constrained placing, as combinations of different correspondence constraints. CPM comprises a collection of composable diffusion models, where each model captures a different inter-object correspondence. These diffusion models can generate parameters for manipulation skills based on the specific object parts. Leveraging part-based correspondences coupled with the task decomposition into distinct constraints enables strong generalization to novel objects and object categories. We validate our approach in both simulated and real-world scenarios, demonstrating its effectiveness in achieving robust and generalized manipulation capabilities. ### Paper Keywords ["Manipulation", "Part Decomposition", "Diffusion Model"] ### Paper Content Composable Part-Based ManipulationWeiyu Liu1, Jiayuan Mao2, Joy Hsu1, Tucker Hermans3,4, Animesh Garg3,5, Jiajun Wu11Stanford2MIT3NVIDIA4University of Utah5Georgia TechAbstract: In this paper, we propose composable part-based manipulation (CPM),a novel approach that leverages object-part decomposition and part-part correspon-dences to improve learning and generalization of robotic manipulation skills. Byconsidering the functional correspondences between object parts, we conceptualizefunctional actions, such as pouring and constrained placing, as combinations ofdifferent correspondence constraints. CPM comprises a collection of composablediffusion models, where each model captures a different inter-object correspon-dence. These diffusion models can generate parameters for manipulation skillsbased on the specific object parts. Leveraging part-based correspondences coupledwith the task decomposition into distinct constraints enables strong generalizationto novel objects and object categories. We validate our approach in both simulatedand real-world scenarios, demonstrating its effectiveness in achieving robust andgeneralized manipulation capabilities. For videos and additional results, see ourwebsite: https://cpmcorl2023.github.io/.Keywords: Manipulation, Part Decomposition, Diffusion ModelrimhandlebodyPour<align, rim, rim><facing-up, handle, body><tilt, body, body>Test: Pour from mug to bowl in real worldTrain: Pour from glass, pan, and bowl to bowl in simulationFigure 1: CPM composes part-based diffusion models to predict target object poses directly from point clouds.In this example, we show that the “pouring” action is decomposed into three part-based correspondences, whichgeneralize manipulation across object categories, and from simulation to the real world1 IntroductionCompositionality provides appealing benefits in robotic manipulation, as it enables efficient learning,reasoning, and planning. Prior works have extensively studied the decomposition of scenes intoobjects and their relationships [ 1,2,3], as well as the division of long-horizon plans into primitiveskills [ 3,4], in order to navigate complex environments and devise long-horizon plans. 
In this paper,we present a different view of compositionality by considering object-part decomposition based onfunctionality (e.g., rim, handle, body ), and leverage such decomposition to improve the learning ofgeometric and physical relationships for robot manipulation.In the context of language descriptions of objects, part names not only describe the geometric shapesof the parts but also capture their functional affordances. For instance, as depicted in Figure 1, for theaction of “pouring”, the rims define the boundary for alignment between the objects, the body of thepouring vessel should be tilted for the action, and its handle provides a constraint on the directionthe object should face when pouring. Leveraging this knowledge of part affordances, we posit thata family of functional actions, such as pouring and constrained placing, can be conceptualized asa combination of functional correspondences between object parts. Modeling actions using such a7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.decomposition yields two important generalizations. First, it enables action generalization to novelinstances from the same object category. Second and more importantly, it facilitates generalizationtounseen object categories. For example, after learning part affordances for the “pouring” action,our robot trained on “pour from bowls ” and “... pans ” can generalize to “pour from mugs ”, with noadditional training necessary for manipulation with the new object category.Motivated by these insights, we present the composable part-based manipulation (CPM). CPMcomprises a collection of diffusion models, where each model captures the correspondence betweenparts of different objects. These conditional diffusion models take the geometry of the object partsas input and generate parameters for manipulation skills, such as the starting and ending poses of abowl during the pouring action. Specifically, each model outputs a distribution of feasible trajectoriesthat satisfy a particular correspondence. After learning a collection of composable diffusion models,we represent actions as combinations of part-part correspondences. During inference, we leveragethe composition of primitive diffusion models to sample trajectories that adhere to all the partcorrespondences. This approach improves generalization to novel object categories over models thatdo not reason about both parts and composable correspondence constraints.In summary, this paper makes two key contributions. First, we propose composable part-based manip-ulation, which models manipulation actions as a composition of part-part correspondences betweenobjects. Second, we develop diffusion models trained to capture primitive functional correspondencesthat can be flexibly recombined during inference. CPM achieves strong generalization across variousdimensions, including novel object instances and object categories. We validate the efficacy of CPMon both PyBullet-based simulations and real-robot experiments.2 Related WorkObject representations for manipulation. Prior works use segmentations of common objectparts (e.g., blades, lids, and handles) for manipulating articulated objects [ 5,6,7,8] as well as fortransfer to novel objects [ 9,10]. A common approach that has been shown effective across differentmanipulation domains [ 11,12,13] first predicts which part of an object the robot should focus on(e.g., the handle), and then predicts an action relative to the part. 
Closely related is visual affordancedetection [ 14,15,16], which segments objects into different functional regions, such as graspableparts and support surfaces of objects. These functional regions can be shared by more distinctobjects, and can be useful for generalizing task-oriented grasping between object categories [ 17,18].Keypoints are another representation that shows robustness to large intra-category shape variationand topology changes [ 19]. Each keypoint set can provide essential pose information, that lacks inprevious segmentation approaches, to support tasks such as hanging mugs on pegs by their handles.The initial supervised approach [ 19] has been extended to methods that discover keypoints frominteractions [ 20,21] and from unlabeled videos [ 22]. Recently, implicit object representations havebeen used to provide correspondence between any point within the same object category generalizingacross 6-DoF pose changes [ 23,24,25]. Large pretrained vision models also support the developmentof object representations; recent works leverage these models to significantly reduce domain-specifictraining data, showing strong results for open-vocabulary part segmentation [ 26], few-shot affordancesegmentation [ 27], and one-shot pose estimation on any novel object from the same category [ 28].Despite this huge progress, we still lack object representations that support strong generalization ofmanipulation to new object categories. We focus on tackling this problem.Learning interactions of objects. Works in robotics have established the importance of modelinginteractions of objects. Recent approaches directly work on 3D observations, without relying onknown object models. Learning spatial relations between objects enables the picking and placingof objects at specific locations [ 1,29,30,2,31], such as placing an object in the middle drawer,stacking objects, and setting the table. These relations can be extended to represent the logical stateof the world to support planning for long-horizon tasks [ 3,32,33]. Other works focus on learninglower-level interactions between objects, such as placing an object stably on a messy tabletop andpushing an object using a tool [ 34,35]. 
For example, O2O-afford [ 34] correlates feature mapsextracted from two objects using a point convolution and outputs a point-wise interaction heatmap.2<latexit sha1_base64="6tEI7fXhWSYCbdrI/2hVg4+2IOM=">AAACaXicbVDLbhMxFHWGVwmvFDYINhZRpCJV0UyFaJcFJMQKFdG0RZkw8jh3Eqt+jOw7iMjyt/EdfAArJFizw5POgrRcydbROefea5+ylsJhmn7vJdeu37h5a+t2/87de/cfDLYfnjjTWA4TbqSxZyVzIIWGCQqUcFZbYKqUcFqev2n10y9gnTD6GFc1zBRbaFEJzjBSxeDTiOZQOyGN/ux38HkofF4q/zEUYpe+D/0RXRQ5LgFZhK1y3DlehfZ+G3ajqYoUwldE9GjyomYWQygGw3ScroteBVkHhqSro2LwJ58b3ijQyCVzbpqlNc58HCa4hNDPGwc14+dsAdMINVPgZn4dQaCjyMxpZWw8Guma/bfDM+XcSpXRqRgu3WWtJf+nTRusDmZe6LpB0PxiUdVIioa2edK5sMBRriJg3Ir4VsqXzDKOMfWNLaXa+INvd6Ex0oV+zCq7nMxVcLI3zl6O9z68GB6+7lLbIk/JM7JDMrJPDsk7ckQmhJNv5Af5RX73fibbyePkyYU16XU9j8hGJcO/jtS9nw==</latexit>ftopart1<latexit sha1_base64="jqQn1QL4mOITLlizeUeaD0EKzaM=">AAACi3icbVFdaxQxFM2MH62j1Wl99CW4LFQoy0wrKtKHWlF8kordtrCzDplsZjc0H0NyR1xCfqiP/Rm+mdmO0G29kHA4956c5KRqBLeQZb+j+N79Bw83Nh8lj59sPX2Wbu+cWd0aysZUC20uKmKZ4IqNgYNgF41hRFaCnVeXH7v++U9mLNfqFJYNm0oyV7zmlECgytQNccEay4VWP9wuvPKlKyrpvvuS7+GvPhnieVnAggFJhl3jtB/44Lv9s99bzdSBBPYLABzoomyIAe+TQhJYUCKC6ObZ/6S+TAfZKFsVvgvyHgxQXydl+qeYadpKpoAKYu0kzxqYuuDGqWDBsLWsIfSSzNkkQEUks1O3CsnjYWBmuNYmLAV4xd5UOCKtXcoqTHb3trd7Hfm/3qSF+t3UcdW0wBS9NqpbgUHjLnE844ZREMsACDU83BXTBTGEQviXNZdKrr3BdV6gtbA+CVnlt5O5C872R/mb0f6314Oj4z61TfQCvUS7KEdv0RH6gk7QGFF0FW1EabQdb8UH8fv48Ho0jnrNc7RW8ae/smvH5w==</latexit>T(t)AF<latexit sha1_base64="6tEI7fXhWSYCbdrI/2hVg4+2IOM=">AAACaXicbVDLbhMxFHWGVwmvFDYINhZRpCJV0UyFaJcFJMQKFdG0RZkw8jh3Eqt+jOw7iMjyt/EdfAArJFizw5POgrRcydbROefea5+ylsJhmn7vJdeu37h5a+t2/87de/cfDLYfnjjTWA4TbqSxZyVzIIWGCQqUcFZbYKqUcFqev2n10y9gnTD6GFc1zBRbaFEJzjBSxeDTiOZQOyGN/ux38HkofF4q/zEUYpe+D/0RXRQ5LgFZhK1y3DlehfZ+G3ajqYoUwldE9GjyomYWQygGw3ScroteBVkHhqSro2LwJ58b3ijQyCVzbpqlNc58HCa4hNDPGwc14+dsAdMINVPgZn4dQaCjyMxpZWw8Guma/bfDM+XcSpXRqRgu3WWtJf+nTRusDmZe6LpB0PxiUdVIioa2edK5sMBRriJg3Ir4VsqXzDKOMfWNLaXa+INvd6Ex0oV+zCq7nMxVcLI3zl6O9z68GB6+7lLbIk/JM7JDMrJPDsk7ckQmhJNv5Af5RX73fibbyePkyYU16XU9j8hGJcO/jtS9nw==</latexit>ftopart<latexit sha1_base64="qT+6EftEtJ4oZA+aT07HQ481bfE=">AAACy3icfVFdb9MwFHXC1ygf6+CRF0NVaUhVlUwI9jhAAl5AnVi3SU2JHNdprfkjsm8QxeSRX8Uv4afwhp1l0rohrpTo6Jx7fK6vi0pwC0nyO4pv3Lx1+87W3d69+w8ebvd3Hh1bXRvKplQLbU4LYpngik2Bg2CnlWFEFoKdFGdvg37ylRnLtTqCdcXmkiwVLzkl4Km8/2uYscpyodUXtwvPm9xlhXSfm5yP8KemN8TLPIMVA9IbBuGoa3jdhP+7ZtT2lJ4E9g0AHOgsr4iBxvOZJLCiRHjX5cMvvMH53/ALMZweRhjhNoX7FC58Qt4fJOOkLXwdpB0YoK4mef9PttC0lkwBFcTaWZpUMHd+XE4F84G1ZRWhZ2TJZh4qIpmdu3bJDR56ZoFLbfynALfsZYcj0tq1LHxnuLe9qgXyX9qshnJ/7riqamCKngeVtcCgcXgxvOCGURBrDwg13M+K6YoYQsG/60ZKITfu4EIWaC1s0/O7Sq9u5jo43hunL8d7hy8GB2+6rW2hJ+gZ2kUpeoUO0Ac0QVNEo6fR+2gSHcYfYxt/j3+ct8ZR53mMNir++Rey7uGE</latexit>✏✓,tilt<latexit sha1_base64="+fqA0FQehuFsG/PLoRhstfX2gEk=">AAAC0XicfVFdaxNBFJ1dv2r8ivroy2AIVghht4jtY1UQX5RKm6SQjcvsZDYZOh/Lzl0xDAPiq7/Kn+FP8c2Z7RaaVrywy+Gce+bcuVNUghtIkt9RfOPmrdt3du727t1/8PBR//GTqdFNTdmEaqHr04IYJrhiE+Ag2GlVMyILwWbF2bugz76y2nCtTmBTsYUkK8VLTgl4Ku//GmasMlxo9cXuwkuX26yQ9tjlfIQ/ud4Qr/IM1gxIbxiEk67hjQv/927U9pSeBPYNACzoLK9IDc7zmSSwpkR41+XDL7zB+d/wCzGcHkYY4TaFgy0J5WqV5U3lXN4fJOOkLXwdpB0YoK6O8v6fbKlpI5kCKogx8zSpYGH90JwK5mMbwypCz8iKzT1URDKzsO2qHR56ZolLXftPAW7Zyw5LpDEbWfjOcHtzVQvkv7R5A+XBwnJVNcAUPQ8qG4FB4/BueMlrRkFsPCC05n5WTNekJhT8626lFHLrDjZkgdbCuJ7fVXp1M9fBdG+cvh7vfX41OHzbbW0HPUPP0S5K0T46RB/QEZogGr2IPkbTaBYfx5v4e/zjvDWOOs9TtFXxz78VnOQL</latexit>✏✓,facingup<latexit 
sha1_base64="6tEI7fXhWSYCbdrI/2hVg4+2IOM=">AAACaXicbVDLbhMxFHWGVwmvFDYINhZRpCJV0UyFaJcFJMQKFdG0RZkw8jh3Eqt+jOw7iMjyt/EdfAArJFizw5POgrRcydbROefea5+ylsJhmn7vJdeu37h5a+t2/87de/cfDLYfnjjTWA4TbqSxZyVzIIWGCQqUcFZbYKqUcFqev2n10y9gnTD6GFc1zBRbaFEJzjBSxeDTiOZQOyGN/ux38HkofF4q/zEUYpe+D/0RXRQ5LgFZhK1y3DlehfZ+G3ajqYoUwldE9GjyomYWQygGw3ScroteBVkHhqSro2LwJ58b3ijQyCVzbpqlNc58HCa4hNDPGwc14+dsAdMINVPgZn4dQaCjyMxpZWw8Guma/bfDM+XcSpXRqRgu3WWtJf+nTRusDmZe6LpB0PxiUdVIioa2edK5sMBRriJg3Ir4VsqXzDKOMfWNLaXa+INvd6Ex0oV+zCq7nMxVcLI3zl6O9z68GB6+7lLbIk/JM7JDMrJPDsk7ckQmhJNv5Af5RX73fibbyePkyYU16XU9j8hGJcO/jtS9nw==</latexit>ftopart<latexit sha1_base64="+X11N6DDJgkTslNumghcvbcIAB8=">AAACzHicfVFda9swFJW9j3bZV9Y+7kU0BDoIwS5j62PXQelTabemLcSZkRU5EZUlI12PBaHX/ar9kf2UvU1yXWjasQs2h3Pu0bm6KmrBDSTJ7yh+9PjJ043NZ73nL16+et1/s3VhVKMpm1AllL4qiGGCSzYBDoJd1ZqRqhDssrj+HPTL70wbruQ5rGo2q8hC8pJTAp7K+7+GGasNF0p+s7vwzuU2Kyr71eV8hE9cb4gXeQZLBqQ3DMJ51/DJhf+RG7U9pSeB/QAACyrLa6LBeT6rCCwpEd519/Bbb3D+N/xWDKeHEUa4TeFgieAL6VzeHyTjpC38EKQdGKCuTvP+n2yuaFMxCVQQY6ZpUsPM+nk5FcwnNobVhF6TBZt6KEnFzMy2W3Z46Jk5LpX2nwTcsncdllTGrKrCd4aLm/taIP+lTRso92eWy7oBJulNUNkIDAqHJ8NzrhkFsfKAUM39rJguiSYU/MOupRTV2h1syAKlhHE9v6v0/mYegou9cfphvHf2fnBw2G1tE71FO2gXpegjOkDH6BRNEI12ouPoLPoSn8QQ29jdtMZR59lGaxX//Aug+uHc</latexit>✏✓,align<latexit sha1_base64="e/VTTwH/614p3YtdKa7KcUjTzLg=">AAACEnicbVDLSgNBEJz1GeMr6tHLYBA8hd0gKp4CXjxGMA9IljA7mSRD5rHM9AphyS94EvRbvIlXf8BP8eZssgeT2NBQVHVT3RXFglvw/W9vbX1jc2u7sFPc3ds/OCwdHTetTgxlDaqFNu2IWCa4Yg3gIFg7NozISLBWNL7L9NYTM5Zr9QiTmIWSDBUfcEogo7o2kb1S2a/4s8KrIMhBGeVV75V+un1NE8kUUEGs7QR+DGFKDHAq2LTYTSyLCR2TIes4qIhkNkxnt07xuWP6eKCNawV4xv7dSIm0diIjNykJjOyylpH/aZ0EBjdhylWcAFN0bjRIBAaNs8dxnxtGQUwcINRwdyumI2IIBRfPgkskF35IMy/QWthp0WUVLCezCprVSnBVqT5clmu3eWoFdIrO0AUK0DWqoXtURw1E0Qg9o1f05r14796H9zkfXfPynRO0UN7XL377ntQ=</latexit>XPoint Cloud TransformerDiffusion Transformer Pose Encoder<latexit sha1_base64="JFIyIqOEkSL96w1PNKK6ZpOrrrc=">AAACKnicbVDLSsNAFJ34rPUVdaebYBEUpCRdqMuqG5cV7AOaECbTSTt2JhNmJkoJAb/GlaAf4NIvcFfc+hPunLRd2NYDwz2cey9n7gliSqSy7aGxsLi0vLJaWCuub2xubZs7uw3JE4FwHXHKRSuAElMS4boiiuJWLDBkAcXNoH+d95sPWEjCozs1iLHHYDciIUFQack3992Apa3MT/Nam9TL7PQ+y3yzZJftEax54kxIqXoSBo8f7nvNN3/cDkcJw5FCFErZduxYeSkUiiCKs6KbSBxD1Idd3NY0ggxLLx3dkFlHWulYIRf6RcoaqX83UsikHLBATzKoenK2l4v/9dqJCi+8lERxonCExkZhQi3FrTwQq0MERooONIFIEP1XC/WggEjp2KZcAjZ1Q5p7Kc6pzIo6K2c2mXnSqJSds3LlVod2BcYogANwCI6BA85BFdyAGqgDBJ7AM3gFb8aL8WkMja/x6IIx2dkDUzC+fwHKJqxO</latexit>XPA,j<latexit sha1_base64="wrnhrhzab79ObCCMxCN6TuVT3zc=">AAACKnicbVDLSsNAFJ34rPUVdaebYBEUpCRdqMuiIC4j2AekIUymk3boTCbMTIQSAn6NK0G/wbUrd8WtP+HOSduFbT0w3MO593LmnjChRCrbHhlLyyura+uljfLm1vbOrrm335Q8FQg3EKdctEMoMSUxbiiiKG4nAkMWUtwKBzdFv/WIhSQ8flDDBPsM9mISEQSVlgLzsBOyrJ0HWVHdab3Nzwd5HpgVu2qPYS0SZ0oq9bOPd1eUPDcwfzpdjlKGY4UolNJz7ET5GRSKIIrzcieVOIFoAHvY0zSGDEs/G9+QWyda6VoRF/rFyhqrfzcyyKQcslBPMqj6cr5XiP/1vFRFV35G4iRVOEYToyilluJWEYjVJQIjRYeaQCSI/quF+lBApHRsMy4hm7khK7wU51TmZZ2VM5/MImnWqs5FtXavQ7sGE5TAETgGp8ABl6AO7oALGgCBJ/AMXsGb8WJ8GiPjazK6ZEx3DsAMjO9fO/Sr7A==</latexit>XPF,k<latexit 
sha1_base64="3u1nRu5P5VJ+x+NSxU2Dim9fHLs=">AAACLHicbVC7TsMwFHXKq5RXeGwwRCAkEKhKOgBjBQsDQ0EUkNpQOa7bmthxZN8gVVEWvoYJCb6FBSHExj+w4bQMFDiS5eNz79XxPUHMmQbXfbEKY+MTk1PF6dLM7Nz8gr24dKFlogitE8mlugqwppxFtA4MOL2KFcUi4PQyCI/y+uUtVZrJ6Bz6MfUF7kaswwgGI7XstWYg0vPsOt2C7ayV5q+auW/CbNfLWvaGW3YHcP4S75tsVCsr4fvJzlmtZX8225IkgkZAONa64bkx+ClWwAinWamZaBpjEuIubRgaYUG1nw62yJxNo7SdjlTmROAM1J8TKRZa90VgOgWGnv5dy8X/ao0EOgd+yqI4ARqRoVEn4Q5IJ4/EaTNFCfC+IZgoZv7qkB5WmIAJbsQlECM7pLkXSMl1VjJZeb+T+UsuKmVvr1w5NaEdoiGKaBWtoy3koX1URceohuqIoDt0jx7Rk/VgPVuv1tuwtWB9zyyjEVgfX0ooq8Y=</latexit>T(t)Pjk,1<latexit sha1_base64="30ibTJ7MVRmCwncRUkcebQ8ruUw=">AAACS3icdVDLSgMxFM3UV62v+ti5CUpBUcpMFyq4KbpxIVKlVaHWkklTjU0mQ3JHKMN8jF/jStClKz/CleLCTNuFrXoh5OScezm5xw8FN+C6r05mbHxicio7nZuZnZtfyC8unRsVacpqVAmlL31imOABqwEHwS5DzYj0BbvwO4epfnHPtOEqqEI3ZA1JbgLe5pSApZr5/QK+8mVcTa7jDdhMmnH6qtj7rpNse0nuf/EkaebX3aLbK/wbeAOwXi6tdN6Ot84qzfzHVUvRSLIAqCDG1D03hEZMNHAqmDWLDAsJ7ZAbVrcwIJKZRtxbMsEFy7RwW2l7AsA99udETKQxXenbTkng1oxqKfmXVo+gvdeIeRBGwALaN2pHAoPCaWK4xTWjILoWEKq5/Sumt0QTCjbXIRdfDu0Qp16glDBJzmbljSbzG5yXit5OsXRqQztA/cqiVbSGNpCHdlEZHaEKqiGKHtAjekYvzpPz7nw6X/3WjDOYWUZDlZn4BpW9t04=</latexit>T(t)Pjk,N<latexit sha1_base64="T37olucuzTaUf0zP+2cBB5KiAz4=">AAACaHicfVBNTxsxFHS2tNDQj6U9VIiLBUWiEop2OUCPiF56qoIgEClJV17nhbix1yv7LVJk+W/1yKW/oveeKrW3VuKGN+FAoOqTLI9n3vPYk5dSWEyS743o0dLjJ8srT5urz56/eBmvvTqzujIcOlxLbbo5syBFAR0UKKFbGmAql3CeTz7U+vklGCt0cYrTEgaKXRRiJDjDQGVxd5v2c+VO/We3g+985upTO+xfJn439c3/yZ98sw+lFTJc5Po4BmS7tX7iM+GzeCtpJbOiD0F6C7YO3/75+u1y9W87i6/7Q80rBQVyyaztpUmJA8cMCi4hWFUWSsYn7AJ6ARZMgR24WQKebgdmSEfahFUgnbF3JxxT1k5VHjoVw7G9r9Xkv7RehaP3AyeKskIo+NxoVEmKmtZx0qEwwFFOA2DciPBWysfMMI4h9AWXXC38wdVeqLW0vhmySu8n8xCc7bXS/dbecQjtiMxrhWyQTbJDUnJADslH0iYdwskV+UF+kd+Nn1EcvYnW561R43bmNVmoaPMGE4TB+w==</latexit>✏✓,Si<latexit sha1_base64="v5wU0bmUU3WRz92nFB9wmicl/wk=">AAACLHicbVDLSgMxFM3UV62vUZe6CBZBQcqMiLoU3bisaKvQ1iGTphqax5DcEcowG7/GlaDf4kbErf/gzvSxsNYDgcM593JyT5wIbiEI3r3C1PTM7FxxvrSwuLS84q+u1a1ODWU1qoU2NzGxTHDFasBBsJvEMCJjwa7j7lnfv35gxnKtrqCXsJYkd4p3OCXgpMjfbLLEcqHVbbYDu3mUNWOZXeYR38NhHvnloBIMgCdJOCJlNEI18r+bbU1TyRRQQaxthEECrYwY4FSwvNRMLUsI7ZI71nBUEclsKxtckeNtp7RxRxv3FOCB+nsjI9LanozdpCRwb/96ffE/r5FC57iVcZWkwBQdBnVSgUHjfiW4zQ2jIHqOEGq4+yum98QQCq64sZRYjt2Q9bNAa2Hzkusq/NvMJKnvV8LDyv7FQfnkdNRaEW2gLbSDQnSETtA5qqIaougRPaEX9Oo9e2/eh/c5HC14o511NAbv6wfEqKjP</latexit>✏(t)Si,1<latexit sha1_base64="CqQquVftToqWZTkc4IA4qIyj4ms=">AAACLHicbVDLSgMxFM3Ud31VXeoiWIQKUmaKqEvRjSupaFVo65BJ0zY0jyG5I5RhNn6NK0G/xY2IW//BneljYdUDgcM593JyTxQLbsH337zc1PTM7Nz8Qn5xaXlltbC2fm11YiirUS20uY2IZYIrVgMOgt3GhhEZCXYT9U4H/s09M5ZrdQX9mDUl6Sje5pSAk8LCVoPFlgut7tIS7GZh2ohkepmFfA+fZ2Gh6Jf9IfBfEoxJEY1RDQtfjZamiWQKqCDW1gM/hmZKDHAqWJZvJJbFhPZIh9UdVUQy20yHV2R4xykt3NbGPQV4qP7cSIm0ti8jNykJdO1vbyD+59UTaB81U67iBJiio6B2IjBoPKgEt7hhFETfEUINd3/FtEsMoeCKm0iJ5MQN6SALtBY2y7uugt/N/CXXlXJwUK5c7BePT8atzaNNtI1KKECH6BidoSqqIYoe0CN6Ri/ek/fqvXsfo9GcN97ZQBPwPr8B9SSo7A==</latexit>✏(t)Si,N<latexit 
sha1_base64="mhJ+AVftmU1m0Dn1ijHEV2CsyMw=">AAADE3ichVJNj9MwEHXCxy7lqwtHLhZVpUWqqmSFWI4LSIgTKmK7u1JTIsd1WlPHjuIJorL8GzghwW/hhrjyA/gp3LDTrLTdghgpyei9efPG42Sl4Bqi6FcQXrl67frO7o3OzVu379zt7t070aquKBtTJVR1lhHNBJdsDBwEOysrRopMsNNs+cLzpx9YpbmSx7Aq2bQgc8lzTgk4KN0Lwn7CSs2Fku/MPjyyqUmywry1KR/g17bTx/M0gQUD0ul74rgteGb9+6UdNDW5A4F9BAADKklLUoF1eFIQWFAinOpi83OtV/7H/Jz2/f0QA9z4cDBE8Llct9hwibZd/jHGyH3fD5bWpt1eNIyawNtJ3CY91MYo7f5OZorWBZNABdF6EkclTI07NaeCOcNas5LQJZmziUslKZiemuayLO47ZIZzVblHAm7QiwpDCq1XReYq/dz6MufBv3GTGvKnU8NlWQOTdG2U1wKDwv7m8YxXjIJYuYTQirtZMV2QilBw/8eGS1ZsnMF4L1BKaNtxu4ovb2Y7OTkYxk+GB28e946et1vbRQ/QQ7SPYnSIjtArNEJjRAMefAq+BF/Dz+G38Hv4Y10aBq3mPtqI8OcfT4T9fQ==</latexit>T(t)Pj,k<latexit sha1_base64="kOps49F//byAFz3unoCUCeIPHac=">AAAC73ichVFNb9NAEF2bj5bwFeDIZUUUqUhRZFcIOBYQiBMqomkrxcFab9bJKvvh7o4R0cq/gxviyk/ixO/gxq7rSk2DxEi2nt7Mmzc7U1SCW0iSX1F87fqNmzu7t3q379y9d7//4OGx1bWhbEK10Oa0IJYJrtgEOAh2WhlGZCHYSbF6E/InX5ixXKsjWFdsJslC8ZJTAp7K+7+HGassF1p9dnvwtMldVkj3qcn5CH9oekO8yDNYMiC9YUgcdQWvmvB/14zamtKTwL4CgAOd5RUx0Hg+kwSWlAivutz8QhuU/zG/SIf+YYgRbn04OCL4QvkWmx7JlkfeHyTjpA28DdIODFAXh3n/TzbXtJZMARXE2mmaVDBz/kmcCuYNa8sqQldkwaYeKiKZnbn2EA0eemaOS238pwC37GWFI9LatSx8ZZjbXs0F8l+5aQ3ly5njqqqBKXpuVNYCg8bhqnjODaMg1h4QarifFdMlMYSCv/2GSyE33uCCF2gtbNPzu0qvbmYbHO+P0+fj/Y/PBgevu63tosfoCdpDKXqBDtB7dIgmiEZvo1UEUR2fxd/i7/GP89I46jSP0EbEP/8Cm1LwBg==</latexit>T(0)AF“Pour”Sampled Target PosesSegmented PartsInitial SceneExecution<latexit sha1_base64="hmc/Ol2wByB0lLttonACgh+OJ/I=">AAADHXichVLfb9MwEHbCj40wWAePvFhUlYZUVcmENh4HSIgnVMS6TWpK5LhOa+rYUXxBVFb+EJ6Q4G/hDfGK9qfsDTvNpHUb4qQ4p++7u+/u7LQQXEMYnnn+rdt37m5s3gvubz14uN3ZeXSsVVVSNqJKqPI0JZoJLtkIOAh2WpSM5KlgJ+niteNPPrNScyWPYFmwSU5mkmecErBQsuNt9WJWaC6U/Gh24VmdmDjNzYc64X38rg56eJbEMGdAgp4jjtqAl7U739T9JiazILAvAGBAxUlBSqgtHucE5pQIm3W5+EWuy/yP+AXt6rsm+rjR4WCI4DO5KrGmEt6g8o8+hvb/qb+wEUU7Y9LphoOwMXzdiVqni1obJp3zeKpolTMJVBCtx1FYwMTY+TkVrA7iSrOC0AWZsbF1JcmZnpjm2mrcs8gUZ6q0nwTcoJczDMm1XuapjXQD6KucA2/ixhVkLyaGy6ICJulKKKsEBoXdG8BTXjIKYmkdQktue8V0TkpCwb6UNZU0X5vBOC1QSug6sLuKrm7munO8N4j2B3vvn3cPX7Vb20RP0FO0iyJ0gA7RWzREI0Q97X31vns//G/+T/+X/3sV6nttzmO0Zv6fv1BDAWA=</latexit>p✓<latexit sha1_base64="pTqE88oby3YNTSrDuoIS2e/D9pA=">AAADJXichVLfixMxEM6uv871V0998yVYCieUsnuI+ngqiE9S8Xp30K1LNk3b2GwSNrNiCfvH+CTo3+KbCD75d/hmst3C9e7Egc0O3zcz38wkuRbcQBz/CsJLl69cvbZzPbpx89btO53du0dGVSVlI6qEKk9yYpjgko2Ag2AnumSkyAU7zpcvPX/8kZWGK3kIK80mBZlLPuOUgIOy3eB+L2XacKHke7sHj+rMpnlh39UZ7+M3ddTD8yyFBQMS9Txx2AY8r/35qu43MTMHAvsEABZUmmlSQu3wtCCwoES4rNPFN7k+8z/iG9rX9030caPDwRLB53JdYkslvkDlH30M3f9Df+kj9GZIN61e8KzTjQdxY/i8k7ROF7U2zDp/0qmiVcEkUEGMGSexhol1e+BUsDpKK8M0oUsyZ2PnSlIwM7HN9dW455ApnqnSfRJwg57OsKQwZlXkLtIPYs5yHryIG1cwezaxXOoKmKRroVklMCjs3wKe8pJRECvnEFpy1yumC1ISCu7FbKnkxdYM1muBUsLUkdtVcnYz552j/UHyZLD/9nH34EW7tR30AD1EeyhBT9EBeo2GaIRoYIPPwdfgW/gl/B7+CH+uQ8OgzbmHtiz8/RfuEgRC</latexit>g(a) System Overview(b) Composable Part-Based Diffusion Models(c) Point-Cloud Diffusion Transformer<latexit 
sha1_base64="pqldO+U6c+9Jdc5t/BwlGU7Q3+I=">AAADb3ichVJNb9NAEF3HfJTw0RQOHJDQiihSK4XIrhBwLCAhTiiIpq0Up9Z6s06WrL0r7xgRrfzz+BH8Bk5IcODGruNC07RiJHtHM+/NmxlNogTXEATfvJZ/7fqNm1u32rfv3L233dm5f6RlWVA2olLI4iQhmgmesxFwEOxEFYxkiWDHyeKNyx9/ZoXmMj+EpWKTjMxynnJKwIbiHe+0FzGluZD5qdmFvSo2UZKZj1XM+/h91e7hWRzBnAFp91zisAG8qtz/bdWvMakNAvsCAAZkFCtSQGXjUUZgTomwrPPFz7iO+R/xs7Sr75ro41qHgyGCz/JViTWV4BKVK/oY2vdTf+EQ6u+Qdlw15xtl4Wm4SbsS8k877nSDQVAb3nTCxumixoZx53c0lbTMWA5UEK3HYaBgYuxGORXMapaaKUIXZMbG1s1JxvTE1IdQ4Z6NTHEqC/vlgOvoeYYhmdbLLLFI17q+mHPBy3LjEtKXE8NzVQLL6UooLQUGid1V4SkvGAWxtA6hBbe9YjonBaFgb29NJcnWZjBOC6QUut5VeHEzm87R/iB8Ptj/8Kx78LrZ2hZ6hJ6gXRSiF+gAvUNDNELU++p99356v1o//If+Yx+voC2v4TxAa+bv/QGPux3u</latexit>T(t1)AFFigure 2: (a) Given a task, the partial point clouds of the anchor and function objects, and their parts extractedfrom a learned segmentation model gφ, we sample a sequence of transformations from a learned distribution pθto parameterize the function object’s trajectory. (b) CPM can be generalized to novel object categories becauseit decomposes each action to a collection of functional correspondences between object parts. To sample thetarget transformations that satisfy all functional correspondences, CPM combines the noise predictions from acollection of primitive diffusion models at inference time. (c) Each primitive diffusion model learns a target posedistribution that satisfies a particular part-part correspondence, based on the point clouds of the object parts.Functionals defined on top of object-wise signed distance functions can also represent constraints oninteractions between objects such as contact and containment [ 36]. Flow-based methods can alsolearn static relations between objects [ 37] as well as tool use [ 38], directly from point clouds. A maindifference between our work and these methods is that we bridge the modeling of interactions andobject representations through object-part decomposition and learned part-part correspondences, andenjoy empirically validated improvement in generalization.Composable diffusion models. A set of recent works have investigated the potential of diffusionmodels in robotics [ 39,40,41,42,43,44,45,46,2,47]. Research demonstrates that diffusionmodels can generate multimodal distributions over actions [ 41] and can handle spatial ambiguities insymmetric objects [ 2]. In image domains, prior work has shown a connection between conditionaldiffusion models and energy-based models, and proposed techniques to generate images by combiningdiffusion noises for different language conditions [ 48]. Recent work provides a more principled wayto sample from individually trained models using MCMC [ 49]. Another approach combines diffusionmodels by using additional trained adapters for generating faces [ 50]. CPM combines both lines ofwork to propose composable diffusion models for robotic manipulation. In doing so we must addresstwo challenges of adapting diffusion models to (1) output poses instead of pixels and (2) combineactions in different part frames, while retaining generalization to different distributions.3 Composable Part-Based ManipulationIn this work, our goal is to model functional actions involving an anchor object Athat remains staticand a function object Fthat is being actively manipulated. Shown in Fig. 
2 (a), given a task Mandthe partial point clouds of two objects XAandXFin the world frame {W}, we want to predict asequence of SE(3)transformations, i.e., TW={TW,1, ..,TW,N}, which parameterized a trajectoryof the function object Fin the world frame in order to achieve the desired interaction with the anchorobjectA(e.g., pouring). Throughout the paper, we choose N= 2; i.e., we predict the starting poseand the ending poses of the object motion. Then, we use SE(3)interpolation between the two posesto generate the continuous motion trajectory. We define that the object frames of {A}and{F}arecentered at the centroids of the respective point clouds XAandXF, and have the same orientationas the world frame *. Each transformation TWin the world frame can thus be computed by therelative pose between the two objects TAFasTW=TWATAF(TWF)−1. A key challenge we aimto address is generalizing the functional actions from training objects to unseen object instances and,more importantly, novel object categories a robot may have never encountered during training.*The transformation from {W}to an object frame can be computed given this definition. For example,TWF = (RWF,tWF), where RWF is set to an identity matrix and tWF is set to the centroid of XF.33.1 Action as Part-Based Functional CorrespondencesComposable part-based manipulation (CPM) models each action Mas a composition of functionalcorrespondences between object parts. We formalize the symbolic representation of each correspon-denceC∈ CMas⟨Si,PA,j,PF,k⟩, where CMis the set of correspondences for M,Siis a spatialrelation, PA,jandPF,kare two parts of the anchor and the functional objects, respectively. Considerthe example of pouring from a mug to a bowl, as depicted in Fig. 1. This “pour” action contains thefollowing three correspondences: ⟨align,rim(mug),rim(bowl)⟩,⟨tilt,body(mug),body(bowl)⟩, and⟨facing-up ,handle (mug),body(bowl)⟩.The task of predicting robot motion can be cast as the task of finding a robot trajectory that simultane-ously satisfies all the part-based functional correspondences. Instead of manually specifying theseconstraints given object point clouds and their poses, we propose to learn a neural network gφtorecognize the functional parts of objects based on their point clouds and another learned generativemodel pθto parameterize a distribution of T. Using gφ, we can extract point clouds for a givenpart, for example gφ(XF,PF,k) =XPF,k. Learning to recognize functional parts can be treatedas predicting a per-point part segmentation problem and have been studied extensively in priorwork [ 14,15,16,27,51]. Therefore, we focus on the second part which enables the robot to learnmanipulation trajectories of objects, based on the recognized parts.3.2 Generative Modeling of Functional Correspondences with Diffusion ModelsFor each functional correspondence tuple ⟨Si,PA,j,PF,k⟩, we learn a generative distributionpθ,Si(TPjk|XPA,j,XPF,k). Here TPjkdenotes the relative transformations TPA,jPF,k†. We usea point-cloud conditioned diffusion model to parameterize this distribution. In particular, eachprimitive diffusion denoise model εθ,Sitakes in the current diffusion time step t, two part pointcloudsXPA,jandXPF,k, and the noisy transformations TPjkas input, and predicts the noise overTPjk. As illustrated in Fig. 2 (c), the model is based on a transformer encoder. First, we encodepoint clouds for the two parts separately using a point cloud transformer [ 52]. Then we encode eachtransformation using a trained MLP. 
We input the point cloud and transformation encodings, together with the diffusion time step t, to the transformer encoder. The output of the transformer encoder is the predicted noise over the transformations T_Pjk. We provide details of the architecture in Appendix A.

During training, we optimize the following loss for a randomly sampled diffusion time step t and random Gaussian noise ε sampled from a multivariate Gaussian distribution:

L_MSE = ‖ ε − ε_θ,S_i( √(1 − β_t) T^(0)_Pjk + √(β_t) ε | X_P_A,j, X_P_F,k, t ) ‖₂²,

where T^(0)_Pjk is the target transformations to predict and β_t is the diffusion noise schedule [53]. The added noise and the predicted noise are both in the tangent space of SE(3). We build on the technique introduced for the SE(3) Denoising Score Matching (DSM) model [40], but use the Denoising Diffusion Probabilistic Model (DDPM) [53] for more stable training. In practice, we first compute the exponential map of the transformations and then apply the noise. This can be viewed as predicting the score function for an exponential energy function of SE(3) poses.

3.3 Inference-Time Composition of Diffusion Models

One of the key features of diffusion models is their compositionality. That is, given a set of diffusion models, each trained for one specific type of functional correspondence, we can combine their predicted noises at inference time to generate a trajectory that adheres to all functional correspondences, as illustrated in Fig. 2(b). Since each diffusion model implicitly parameterizes an energy-based model, p_θ,S_i(T | ·) ∝ exp(−E_θ,S_i(T | ·)), through its noise prediction [48, 49], sampling from the composition of the diffusion models corresponds to sampling from the "intersection" of the distributions for the individual functional correspondences, or formally, from ∏_{C ∈ C_M} p_θ,S_i(T | ·).

In particular, during inference, starting from T^(T)_AF randomly sampled from a standard Gaussian distribution, and given the set of constraints C_M, we iteratively update the pose prediction by:

T^(t−1)_AF = (1/√α_t) ( T^(t)_AF − ((1 − α_t)/√(1 − ᾱ_t)) Σ_{C ∈ C_M} ε_θ,S_i( f_topart(T^(t)_AF) | X_P_A,j, X_P_F,k, t ) ) + σ_t ε,

where T is the number of diffusion steps, α_t = 1 − β_t is the denoising schedule, ᾱ_t = ∏_{i=1}^{t} α_i is the cumulative denoising schedule, σ_t is a fixed sampling-time noise schedule, and ε is randomly sampled Gaussian noise. The differentiable operation f_topart takes T^(t)_AF and transforms it to the part frame P_jk by (T_{A P_A,j})^(−1) T^(t)_AF T_{F P_F,k}, which is the frame each individual diffusion model is trained in.
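As a concrete illustration, the sketch below implements one composed reverse-diffusion step of the update above, summing the noise predictions from several primitive models. The model and frame-transform callables are hypothetical stand-ins, and treating the pose as a plain tensor in the tangent space is a simplifying assumption.

```python
import torch

def composed_denoise_step(T_AF_t, t, primitives, betas, sigma):
    """One reverse-diffusion step that composes primitive noise predictions.

    T_AF_t: current noisy pose, represented in the tangent space of SE(3).
    primitives: list of (eps_model, to_part, pc_A, pc_F) tuples,
                one per functional correspondence in C_M.
    """
    alphas = 1.0 - betas
    alpha_t = alphas[t]
    alpha_bar_t = torch.prod(alphas[: t + 1])  # cumulative schedule
    # Sum the noise predictions from all primitive diffusion models,
    # each evaluated in its own part frame.
    eps_sum = sum(
        eps_model(to_part(T_AF_t), pc_A, pc_F, t)
        for eps_model, to_part, pc_A, pc_F in primitives
    )
    mean = (T_AF_t - (1 - alpha_t) / torch.sqrt(1 - alpha_bar_t) * eps_sum) \
           / torch.sqrt(alpha_t)
    return mean + sigma[t] * torch.randn_like(T_AF_t)
```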
4 Data Collection

Figure 3: We generate task demonstrations using the PartNet and ShapeNetSem datasets for the "pouring" and "safe placing" tasks (the panels show pouring from a mug to a bowl, pouring from a bowl to a mug, and placing a knife safely in a mug). We create demonstrations for a variety of function and anchor object combinations.

We demonstrate CPM on the "pouring" and "safe placing" tasks. These two tasks require different functional affordances. The pouring action pours from an anchor object to a target object, and requires alignment of rims, collision avoidance of the handle and the container body, and body tilt. The safe-placing action places a sharp function object into an anchor object, and requires head containment for safety, tip touching bottom, and a body-body placement constraint. To validate our approach, we collect 4522 successful demonstrations for pouring and 2836 successful demonstrations for safe placing. To generate the demonstrations, we first source 13 categories of 3D objects from PartNet [54] and the subset of ShapeNetSem [55] objects categorized in the Acronym dataset [56]. We then extract aligned parts either from segmentations and category-level canonical poses in PartNet or from manually labeled 3D keypoints for ShapeNetSem objects. We procedurally generate parameters of the actions from the aligned parts (as illustrated in Fig. 3), simulate the interactions by tracing the trajectories defined by the parameters, and render RGB-D images using multiple cameras set up in the simulator. Details of the dataset are presented in Appendix C.

5 Experiments

The following section showcases the performance of CPM in comparison to baselines and other variants of our method in simulation. In particular, we evaluate two important generalization settings: 1) generalization to novel object instances from seen object categories, and 2) generalization to object instances from unseen object categories. We then discuss the deployment of CPM trained in simulation on a real robot.

5.1 Experimental Setup

We evaluate all methods in the PyBullet physics simulator [57]. To isolate the problem of predicting target transformations T from other components of the system (e.g., grasp sampling and motion planning), we actuate the center of mass of the function object F. We report average task completion scores from 1500 trials, ranging from failure (0) to success (100), with credit assigned for partial completion. The score is computed by model-based classifiers designed for each task. To test generalization to novel objects from seen categories, we randomly split the data for each task M into 80% training and 20% testing. To test generalization to unseen object categories, we conduct a separate experiment for each target category of the function objects, where we withhold data involving the target category and train on the remaining data. Details of the evaluation are discussed in Appendix D. We present results with binary success as the metric in Appendix E.

Table 1: CPM demonstrates strong generalization to novel instances of objects within seen categories.
Model                Pouring   Safe Placing
Transformer-BC       19.21     37.11
TAX-Pose             21.71     76.97
PC-DDPM              75.83     51.55
Part-Aware PC-DDPM   75.28     42.68
CPM (ours)           80.00     70.99

Table 2: CPM demonstrates strong generalization to function objects from unseen object categories.
Model                Pouring (Bowl / Glass / Mug / Pan)   Safe Placing (Fork / Pen / Scissors)
Transformer-BC       10.23 / 20.91 / 6.06 / 32.04         26.15 / 31.93 / 26.44
TAX-Pose             23.32 / 3.82 / 8.64 / 46.14          50.90 / 67.60 / 36.80
PC-DDPM              63.02 / 75.95 / 71.39 / 64.39        40.60 / 46.63 / 32.34
Part-Aware PC-DDPM   58.98 / 72.11 / 67.11 / 66.17        39.76 / 48.04 / 28.15
CPM (ours)           79.32 / 81.44 / 77.57 / 62.13        55.94 / 59.45 / 63.35

5.2 Compared Methods

Baselines. We compare CPM with four main baselines. The first is Transformer-BC, which uses a multimodal transformer encoder-decoder from prior work [30] to condition on point clouds of the objects and autoregressively predict target transformations. The second baseline is based on TAX-Pose [37], which predicts relative poses between two objects from point-wise soft correspondences. The third is PC-DDPM; similar to recent work [40, 47], a conditional denoising diffusion probabilistic model [53] is trained to predict target transformations based on the input point clouds of both the function and the anchor objects. The fourth baseline is Part-Aware PC-DDPM, which takes in both point clouds of the objects and per-point segmentation masks that indicate object parts. We discuss the baseline implementations in detail in Appendix B.
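To highlight the difference in input construction between the part-aware baseline and CPM, the following is a simplified sketch; the shapes and the example mask are assumptions for illustration (see Appendix B for the actual baseline description).

```python
import torch

N_X, N_I = 512, 4            # points per cloud, number of known parts (assumed)
X = torch.randn(N_X, 3)      # an object point cloud
I = torch.zeros(N_X, N_I)    # binary per-point part masks, one channel per part
I[:100, 0] = 1.0             # e.g., the first 100 points belong to part 0

# Part-Aware PC-DDPM: encode the whole cloud with mask channels appended.
part_aware_input = torch.cat([X, I], dim=-1)   # shape (N_X, 3 + N_I)

# CPM: index out the point cloud of a single part and condition a
# primitive diffusion model on it directly.
X_part0 = X[I[:, 0].bool()]                    # shape (100, 3)
```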
CPM variants. We evaluate several variants of our model. The first is a DDPM with a 6D rotation representation instead of SE(3). This variant of CPM learns different diffusion models for different parts; however, it does not compose pose predictions in different part frames. This model is directly adapted from existing composable diffusion models for image generation [48, 49]. The second is a DDPM with training-time composition; this model jointly trains all primitive diffusion models by composing their noise predictions at training time. The last group comprises the individual primitive diffusion models, i.e., single DDPMs corresponding to different part-part correspondences, without any composition.

5.3 Simulation Results

Comparisons to baselines. We evaluate CPM's generalization capability in two settings. First, Table 1 shows a comparison of generalization to novel objects from seen categories. Overall, our model achieves strong performance on both the "pouring" and "safe placing" tasks. We note that TAX-Pose struggles with pouring, which requires modeling multimodal actions, because the method extracts a single relative pose estimate from a fixed set of correspondences. The autoregressive Transformer-BC is also unable to capture the full distribution of the pouring action. We note that although Part-Aware PC-DDPM leverages the same part segmentation as CPM, it fails to achieve stronger performance than the PC-DDPM baseline, which only uses the object point clouds as input. We attribute this to its potential overfitting to the part segmentations within the training data. By contrast, CPM is able to effectively leverage part segmentations by learning primitive diffusion models and composing them at inference time. Our model shows substantial improvements in the "safe placing" task compared to other diffusion-based methods, largely due to each part constraint significantly restricting the target pose distribution in this task. For instance, the constraint that requires the tip of the function object to touch the bottom of the anchor object effectively constrains the target pose.

Our second set of experiments assesses the model's capacity to generalize to unseen object categories, thereby highlighting the efficacy of part-based correspondences. Results can be found in Table 2. Remarkably, CPM demonstrates its capability to generalize across object categories for both tasks in a zero-shot manner. CPM's performance dips slightly for pans, as the rims of pans are significantly larger than the rims encountered during training (for example, those of bowls and mugs).
As a comparison, all baselines fall short in consistently generalizing to new categories for both tasks. TAX-Pose is not able to maintain strong performance for safe placing when generalizing to more geometrically complicated objects, including scissors and forks. Our method is robust to changes in local geometry and overall topology by leveraging compositions of part-based correspondences.

Table 3: We ablate the contributions of CPM on the ability to generalize to novel categories of objects.
Target Pose Rep     Part Frames   Composition   Pouring   Safe Placing
6D Rot + 3D Trans   No            Inf-time      71.22     68.77
SE(3)               Yes           Train-time    69.89     48.46
SE(3)               Yes           Inf-time      75.11     59.58

Table 4: We explore the effect of composition, compared to individual diffusion models, on generalization across both "pouring" and "safe placing" tasks. *Note that for the align and facing-up evaluations, a small percentage of examples were removed because the involved parts are not contained in the partial object point clouds.
Pouring: ⟨align, rim, rim⟩ 70.05*; ⟨facing-up, handle, body⟩ 16.42*; ⟨tilt, body, body⟩ 68.69; CPM 75.11
Safe Placing: ⟨contain, head, body⟩ 41.22; ⟨touch, tip, bottom⟩ 9.34; ⟨place, body, body⟩ 39.86; CPM 59.58

Figure 4: We illustrate the learned distribution of each primitive diffusion model, which generates diverse samples conforming to the specified constraints, as well as the distribution from the combined full CPM model. The highest-ranked sample is highlighted. (Panels: body contains head; tip touches bottom; composed.)

Ablation. First, we assess the significance of our SE(3) encoding, part frame-based transformation, and inference-time composition in the context of generalizing to unseen categories of objects. As depicted in Table 3, our full CPM with part frames and inference-time composition shows superior performance compared to the model trained with training-time composition. This verifies the importance of our designs in supporting part-based composition and generalization. Compared to the variant based on the 6D Rotation + 3D Translation encoding, CPM yields better performance on the pouring task, a scenario where the rotation of the function object plays a pivotal role. On the safe placing task, which involves less rotation of objects, we observe performance more comparable to our model. These results highlight the importance of the SE(3) diffusion model for rotation prediction.

Second, we compare the performance of the composed part-based diffusion models with the performance of the primitive diffusion models. As shown in Table 4, the composed model outperforms individual diffusion models, showing the efficacy of our composition paradigm. In addition, these results show the importance of different part-based constraints for the given tasks. In the "pouring" task, align and tilt strongly constrain the target pose for the function object, while for the "safe placing" task, the contain and place constraints are more salient. Fig. 4 provides a qualitative visualization by showcasing the part-conditioned distribution associated with each individual diffusion model for various constraints, as well as the corresponding composed distribution. The quantitative performance of the contain and place primitive models for these tasks aligns with this qualitative comparison, as they have learned distributions that are close to the composed model. The CPM paradigm allows us to train each primitive diffusion model independently, encouraging each model to concentrate on distinct functional affordances, thus enabling them to learn and generalize to diverse distributions of samples. During inference, the composition of distributions learned by individual models enables CPM to find solutions that satisfy all correspondence constraints.

5.4 Real-World Transfer

Figure 5: We show sampled frames from trajectories of CPM's policy. The model is trained only on demonstrations with pans, bowls, and wine glasses in simulation and generalizes to mugs in the real world.

Finally, we show a real-world robot manipulation experiment for the "pouring" task, highlighting the transferability of CPM to real-world manipulation. In this setting, we use the primitive diffusion models trained on simulation data with function objects of glasses, pans, and bowls, and zero-shot transfer to mugs in the real-world experiment. Our setup includes a Franka Emika robot mounted in a tabletop environment.
To conduct pouring, we perform plane segmentation and k-meansclustering to extract object point clouds from the scene point cloud captured by two calibrated AzureKinect RGB-D cameras. Next, we apply a pre-trained point transformer (PT) model [ 58] for partsegmentation. The segmentation model is trained on simulation data only. We then apply CPMtrained in simulation for the pouring task. To execute the trajectory, we use the Contact-GraspNet [ 59]to sample robot grasps on the function object and Operational Space Controller [ 60] with impedancefrom Deoxys [ 60] to following a sequence of end-effector pose waypoints computed from the targettransformations. Figure 5 shows our real-world setup and example trajectories predicted by CPM onunseen mugs with different shapes and sizes.6 Limitations and ConclusionWe introduced composable part-based manipulation (CPM), as an approach that leverages object-partdecomposition and part-part correspondences for robotic manipulation. We show that representingactions as combinations of constraints between object parts enables strong generalization. Through thecomposition of primitive diffusion models, we gain generalization capabilities across novel instancesof objects as well as unseen object categories, in simulation and in real-world robot experiments.In this paper, we focus on manipulation tasks involving two objects. Extending CPM to learn skillsinvolving more objects would be important for future work, in particular for manipulating piles orstacks of objects. Second, we parameterize each manipulation action by the starting and endingposes. Extending the transformer-based diffusion model to output more waypoints to parameterizelonger trajectory is important for potentially a wider range of tasks. In addition, CPM does not modeltemporal constraints over the trajectory. One possible extension is to learn trajectory samplers fortemporal constraints and trajectories with loops. CPM assumes external part segmentations. Althoughmany categories can be segmented by off-the-shelf computer vision models [ 26], extending thesystem to jointly learn or finetune part segmentation is important. Finally, composing a larger numberof diffusion models may require more efficient sampling techniques such as [ 61]. We provide anextended discussion of CPM’s assumptions in Appendix F and suggest directions for future research.8AcknowledgmentsWe extend our gratitude to the members of the NVIDIA Seattle Robotics Lab, the RAIL research labat Georgia Tech, and the Stanford Vision and Learning Lab for insightful discussions. This work is inpart supported by NSF grant 2214177, 2211258, AFOSR grant FA9550-22-1-0249, FA9550-23-1-0127, ONR MURI grant N00014-22-1-2740, the Stanford Institute for Human-Centered ArtificialIntelligence (HAI), the MIT-IBM Watson Lab, the MIT Quest for Intelligence, the Center for Brain,Minds, and Machines (CBMM, funded by NSF STC award CCF-1231216), and Analog Devices,JPMC, and Salesforce. Any opinions, findings, and conclusions or recommendations expressed inthis material are those of the authors and do not necessarily reflect the views of our sponsors.References[1]C. Paxton, C. Xie, T. Hermans, and D. Fox. Predicting Stable Configurations for SemanticPlacement of Novel Objects. In CoRL , 2021. 1, 2[2]W. Liu, Y . Du, T. Hermans, S. Chernova, and C. Paxton. StructDiffusion: Language-GuidedCreation of Physically-Valid Structures using Unseen Objects. In RSS, 2023. 1, 2, 3[3]Y . Huang, A. Conkey, and T. Hermans. 
Planning for Multi-Object Manipulation with GraphNeural Network Relational Classifiers. In ICRA , 2023. 1, 2[4]C. R. Garrett, R. Chitnis, R. Holladay, B. Kim, T. Silver, L. P. Kaelbling, and T. Lozano-P ́erez.Integrated Task and Motion Planning. Annual Review of Control, Robotics, and AutonomousSystems , 4:265–293, 2021. 1[5]K. Mo, L. J. Guibas, M. Mukadam, A. Gupta, and S. Tulsiani. Where2Act: From Pixels toActions for Articulated 3D Objects. In CVPR , 2021. 2[6]Z. Xu, Z. He, and S. Song. UMPNet: Universal Manipulation Policy Network for ArticulatedObjects. RA-L , 2022. 2[7]R. Wu, Y . Zhao, K. Mo, Z. Guo, Y . Wang, T. Wu, Q. Fan, X. Chen, L. Guibas, and H. Dong.V AT-Mart: Learning Visual Action Trajectory Proposals for Manipulating 3D ARTiculatedObjects. In ICLR , 2021. 2[8]H. Geng, H. Xu, C. Zhao, C. Xu, L. Yi, S. Huang, and H. Wang. GAPartNet: Cross-CategoryDomain-Generalizable Object Perception and Manipulation via Generalizable and ActionableParts. In CVPR , 2023. 2[9]J. Aleotti and S. Caselli. Manipulation Planning of Similar Objects by Part Correspondence. InICRA , 2011. 2[10] N. Vahrenkamp, L. Westkamp, N. Yamanobe, E. E. Aksoy, and T. Asfour. Part-based graspplanning for familiar objects. In IEEE-RAS International Conference on Humanoid Robots(Humanoids) , 2016. 2[11] P. Parashar, J. Vakil, S. Powers, and C. Paxton. Spatial-Language Attention Policies for EfficientRobot Learning. arXiv:2304.11235 , 2023. 2[12] O. Mees, J. Borja-Diaz, and W. Burgard. Grounding Language with Visual Affordances overUnstructured Data. In ICRA , 2023. 2[13] E. Valassakis, G. Papagiannis, N. Di Palo, and E. Johns. Demonstrate Once, Imitate Immediately(DOME): Learning Visual Servoing for One-Shot Imitation Learning. In IROS , 2022. 2[14] A. Myers, C. L. Teo, C. Ferm ̈uller, and Y . Aloimonos. Affordance Detection of Tool Parts fromGeometric Features. In ICRA , 2015. 2, 49[15] T.-T. Do, A. Nguyen, and I. Reid. AffordanceNet: An End-to-End Deep Learning Approach forObject Affordance Detection. In ICRA , 2018. 2, 4, 16[16] S. Deng, X. Xu, C. Wu, K. Chen, and K. Jia. 3D AffordanceNet: A Benchmark for VisualObject Affordance Understanding. In CVPR , 2021. 2, 4, 16[17] W. Liu, A. Daruna, and S. Chernova. CAGE: Context-Aware Grasping Engine. In ICRA , 2020.2[18] P. Ard ́on,`E. Pairet, R. P. Petrick, S. Ramamoorthy, and K. S. Lohan. Learning Grasp AffordanceReasoning through Semantic Relations. In IROS , 2019. 2[19] L. Manuelli, W. Gao, P. Florence, and R. Tedrake. kPAM: KeyPoint Affordances for Category-Level Robotic Manipulation. In ISRR , 2022. 2[20] Z. Qin, K. Fang, Y . Zhu, L. Fei-Fei, and S. Savarese. KETO: Learning Keypoint Representationsfor Tool Manipulation. In ICRA , 2020. 2[21] D. Turpin, L. Wang, S. Tsogkas, S. Dickinson, and A. Garg. GIFT: Generalizable Interaction-aware Functional Tool Affordances without Labels. In RSS, 2021. 2[22] L. Manuelli, Y . Li, P. Florence, and R. Tedrake. Keypoints into the Future: Self-SupervisedCorrespondence in Model-Based Reinforcement Learning. In CoRL , 2020. 2[23] A. Simeonov, Y . Du, A. Tagliasacchi, J. B. Tenenbaum, A. Rodriguez, P. Agrawal, and V . Sitz-mann. Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation.InICRA , 2022. 2[24] E. Chun, Y . Du, A. Simeonov, T. Lozano-Perez, and L. Kaelbling. Local Neural DescriptorFields: Locally Conditioned Object Representations for Manipulation. In ICRA , 2023. 2[25] J.-S. Ha, D. Driess, and M. Toussaint. 
Deep Visual Constraints: Neural Implicit Models forManipulation Planning from Visual Input. RA-L , 7(4):10857–10864, 2022. 2[26] M. Liu, Y . Zhu, H. Cai, S. Han, Z. Ling, F. Porikli, and H. Su. PartSLIP: Low-Shot PartSegmentation for 3D Point Clouds via Pretrained Image-Language Models. In CVPR , 2023. 2,8, 16[27] D. Hadjivelichkov, S. Zwane, L. Agapito, M. P. Deisenroth, and D. Kanoulas. One-Shot Transferof Affordance Regions? AffCorrs! In CoRL , 2022. 2, 4[28] W. Goodwin, I. Havoutis, and I. Posner. You Only Look at One: Category-Level ObjectRepresentations for Pose Estimation From a Single Example. In CoRL , 2022. 2[29] W. Yuan, C. Paxton, K. Desingh, and D. Fox. SORNet: Spatial Object-Centric Representationsfor Sequential Manipulation. In CoRL , 2021. 2[30] W. Liu, C. Paxton, T. Hermans, and D. Fox. StructFormer: Learning Spatial Structure forLanguage-Guided Semantic Rearrangement of Novel Objects. In ICRA , 2022. 2, 6, 14[31] M. Shridhar, L. Manuelli, and D. Fox. CLIPort: What and Where Pathways for RoboticManipulation. In CoRL , 2021. 2[32] T. Silver, R. Chitnis, J. Tenenbaum, L. P. Kaelbling, and T. Lozano-P ́erez. Learning SymbolicOperators for Task and Motion Planning. In IROS , 2021. 2[33] K. Kase, C. Paxton, H. Mazhar, T. Ogata, and D. Fox. Transferable Task Execution from Pixelsthrough Deep Planning Domain Learning. In ICRA , 2020. 2[34] K. Mo, Y . Qin, F. Xiang, H. Su, and L. Guibas. O2O-Afford: Annotation-Free Large-ScaleObject-Object Affordance Learning. In CoRL , 2022. 210[35] J. Liang and A. Boularias. Learning Category-Level Manipulation Tasks from Point Cloudswith Dynamic Graph CNNs. In ICRA , 2023. 2[36] D. Driess, J.-S. Ha, M. Toussaint, and R. Tedrake. Learning Models as Functionals of Signed-Distance Fields for Manipulation Planning. In CoRL , 2022. 3[37] C. Pan, B. Okorn, H. Zhang, B. Eisner, and D. Held. TAX-Pose: Task-Specific Cross-PoseEstimation for Robot Manipulation. In CoRL , 2022. 3, 6[38] D. Seita, Y . Wang, S. J. Shetty, E. Y . Li, Z. Erickson, and D. Held. ToolFlowNet: RoboticManipulation with Tools via Predicting Tool Flow from Point Clouds. In CoRL , 2022. 3[39] M. Janner, Y . Du, J. Tenenbaum, and S. Levine. Planning with Diffusion for Flexible BehaviorSynthesis. In ICML , 2022. 3[40] J. Urain, N. Funk, J. Peters, and G. Chalvatzaki. SE(3)-DiffusionFields: Learning Smooth CostFunctions for Joint Grasp and Motion Optimization through Diffusion. In ICRA , 2023. 3, 4, 6,14[41] S. Huang, Z. Wang, P. Li, B. Jia, T. Liu, Y . Zhu, W. Liang, and S.-C. Zhu. Diffusion-BasedGeneration, Optimization, and Planning in 3D Scenes. In CVPR , 2023. 3[42] I. Kapelyukh, V . V osylius, and E. Johns. DALL-E-Bot: Introducing Web-Scale DiffusionModels to Robotics. RA-L , 2023. 3[43] A. Ajay, Y . Du, A. Gupta, J. Tenenbaum, T. Jaakkola, and P. Agrawal. Is Conditional GenerativeModeling all you need for Decision-Making? In ICLR , 2023. 3[44] T. Yu, T. Xiao, A. Stone, J. Tompson, A. Brohan, S. Wang, J. Singh, C. Tan, J. Peralta, B. Ichter,et al. Scaling Robot Learning with Semantically Imagined Experience. arXiv:2302.11550 ,2023. 3[45] U. A. Mishra and Y . Chen. ReorientDiff: Diffusion Model based Reorientation for ObjectManipulation. arXiv:2303.12700 , 2023. 3[46] C. Higuera, B. Boots, and M. Mukadam. Learning to Read Braille: Bridging the Tactile RealityGap with Diffusion Models. arXiv:2304.01182 , 2023. 3[47] C. Chi, S. Feng, Y . Du, Z. Xu, E. Cousineau, B. Burchfiel, and S. Song. Diffusion Policy:Visuomotor Policy Learning via Action Diffusion. In RSS, 2023. 3, 6, 14[48] N. 
Liu, S. Li, Y . Du, A. Torralba, and J. B. Tenenbaum. Compositional Visual Generation withComposable Diffusion Models. In ECCV , 2022. 3, 4, 6[49] Y . Du, C. Durkan, R. Strudel, J. B. Tenenbaum, S. Dieleman, R. Fergus, J. Sohl-Dickstein,A. Doucet, and W. Grathwohl. Reduce, Reuse, Recycle: Compositional Generation withEnergy-Based Diffusion Models and MCMC. In ICML , 2023. 3, 4, 6[50] Z. Huang, K. C. Chan, Y . Jiang, and Z. Liu. Collaborative Diffusion for Multi-Modal FaceGeneration and Editing. In CVPR , 2023. 3[51] R. Xu, F.-J. Chu, C. Tang, W. Liu, and P. A. Vela. An Affordance Keypoint Detection Networkfor Robot Manipulation. RA-L , 6(2):2870–2877, 2021. 4[52] M.-H. Guo, J.-X. Cai, Z.-N. Liu, T.-J. Mu, R. R. Martin, and S.-M. Hu. PCT: Point CloudTransformer. Computational Visual Media , 2021. 4, 13, 14[53] J. Ho, A. Jain, and P. Abbeel. Denoising Diffusion Probabilistic Models. In NeruIPS , 2020. 4,6, 1411[54] K. Mo, S. Zhu, A. X. Chang, L. Yi, S. Tripathi, L. J. Guibas, and H. Su. PartNet: A Large-scaleBenchmark for Fine-grained and Hierarchical Part-level 3D Object Understanding. In CVPR ,2019. 5, 14[55] M. Savva, A. X. Chang, and P. Hanrahan. Semantically-Enriched 3D Models for Common-senseKnowledge. In CVPRW , 2015. 5, 14[56] C. Eppner, A. Mousavian, and D. Fox. ACRONYM: A Large-Scale Grasp Dataset Based onSimulation. In ICRA , 2021. 5, 14[57] E. Coumans and Y . Bai. Pybullet, A Python Module for Physics Simulation in Robotics, Gamesand Machine Learning, 2017. 5[58] H. Zhao, L. Jiang, J. Jia, P. H. Torr, and V . Koltun. Point Transformer. In ICCV , 2021. 8[59] M. Sundermeyer, A. Mousavian, R. Triebel, and D. Fox. Contact-GraspNet: Efficient 6-DoFGrasp Generation in Cluttered Scenes. In ICRA , 2021. 8[60] Y . Zhu, A. Joshi, P. Stone, and Y . Zhu. VIOLA: Imitation Learning for Vision-Based Manipula-tion with Object Proposal Priors. In CoRL , 2022. 8[61] Q. Zhang and Y . Chen. Fast Sampling of Diffusion Models with Exponential Integrator. InICLR , 2022. 812A Network ArchitectureFor each functional correspondence ⟨Si,PA,j,PF,k⟩, we aim to learn a generative distributionpθ,Si(TPjk|XPA,j,XPF,k). Here we discuss the network architecture for primitive diffusion modelεθ,Sithat learns to estimate the generative distribution. We leverage modality-specific encoders toconvert the multimodal inputs to latent tokens that are later processed by a transformer network.Object encoder. Given part point clouds XPA,jandXPF,k, we use a learned encoder hptoencode each part separately as hp(XPA,j)andhp(XPF,k). This encoder is built on the Point CloudTransformer (PCT) [52].Diffusion encodings. Since the goal transformations TPjk={TPjk,n}Nn=1are iteratively refined bythe diffusion model and need to feed back to the model during inference, we use a MLP to encodeeach goal transformation separately hT(TPjk,n). To compute the time-dependent Gaussian posteriorfor reverse diffusion, we obtain a latent code for tusing a Sinusoidal embedding htime(t).Positional encoding. We use a learned position embedding hpos(l)to indicate the position index lofthe part point clouds and poses in input sequences to the subsequent transformer.Diffusion Transformer. The diffusion model predicts the goal poses T(0)Pjkstarting from the last timestep of the reverse diffusion process T(T)Pjk∼ N(0,I), which is sampled from a multivariate normaldistribution with independent components. 
We use a transformer encoder as the backbone for thediffusion model εθ,Si{T(t)Pjk,n}Nn=1|XPA,j,XPF,k, t, which predicts the time-dependent noise{ε(t)1, ..., ε(t)N}. We obtain the transformer input for the parts χand the target poses τasχ(t)A= [hp(XPA,j);hpos(0);htime(t)]χ(t)F= [hp(XPF,k);hpos(1);htime(t)]τ(t)n= [hT(T(t)Pjk,n);hpos(n−2);htime(t)]where [; ]is the concatenation at the feature dimension. The model takes in the sequence{χ(t)A, χ(t)F, τ(t)1, ..., τ(t)N}and predicts {ε(t)1, ..., ε(t)N}for the object poses.Parameters. We provide network and training parameters in Table A1.Table A1: Model ParametersParameter ValueNumber of PA,jandPF,kpoints 512PCT point cloud encoder hpout dim 200Position embedding hpos learned embeddingPosition embedding hposdim 16Time embedding htime SinusoidalTime embedding htime dim 40Pose encoder hTout dim 200Transformer number of layers 4Transformer number of heads 4Transformer hidden dim 128Transformer dropout 0.0Diffusion steps T 200Diffusion noise schedule βt LinearStart value β0 0.0001End value βT 0.02Loss HuberEpochs 2000Optimizer AdamLearning rate 1e-4Gradient clip value 1.013B Implementation Details for BaselinesWe discuss the implementation of each baseline below:•Transformer-BC: this baseline takes point clouds XAandXFas input, and predicts targettransformations TAF. This baseline uses a multimodal transformer encoder-decoder fromprior work [ 30] to condition on point clouds of the objects and autoregressively predicttarget transformations. The point clouds are first individually encoded with a point cloudtransformer [ 52]. The point cloud embeddings are fed to the transformer encoder. Thetransformer decoder autoregressively decodes the target poses {TAF,1, ..,TAF,N}.•TAX-Pose: this baseline takes point clouds XAandXFas input, and predicts targettransformations TAF. We use the code and hyperparameters from the official repository‡. We use the variant that does not require pretrained object embeddings because we usedifferent objects from the paper. As discussed in Appendix F.1.2 of the original paper,pretraining mainly helps to reduce training time. Because the TAX-Pose model onlypredicts one relative pose for each pair of point clouds, we learn a separate model for eachtransformation in TAF. Specifically, one TAX-Pose is trained to predict start pose andanother TAX-Pose is trained to predict end pose.•PC-DDPM: this baseline takes point clouds XAandXFas input, and predicts targettransformations TAF. Similar to recent work [ 40,47], a conditional denoising diffusionprobabilistic model [ 53] is trained to predict target transformations based on input pointclouds of both the function and the anchor objects. This model has the same architecture,including encoders, latent embeddings, and the diffusion transformer, as the primitivediffusion models, which is discussed in Appendix A.•Part-Aware PC-DDPM: this baseline takes point clouds XA∈RNX×3andXF∈RNX×3and two segmentation masks IA∈RNX×NIandIF∈RNX×NIas input, and predictstarget transformations TAF.NXis the number of points for each point cloud and NIisthe number of known object parts. Each channel of the segmentation mask is a binarymask indicating points for a specific part. Each segmentation mask encodes all parts thatcan be extracted from an object point cloud. For simulation experiment, the segmentationmasks come from groundtruth part segmentation. While CPM use the segmentation masksto extract part point clouds, this baseline directly encode the segmentation mask togetherwith the object point cloud. 
This baseline shares most of the network architecture asPC-DDPM except that point cloud encoder now encodes [XA;IA]∈RNX×(3+NI)and[XF;IF]∈RNX×(3+NI).C Dataset DetailsIn total, we collected 4522 successful demonstrations for pouring and 2836 successful demonstrationsfor safe placing. For each experiment, we use a subset of these demonstrations for training the models,and the remaining data for initializing the simulation. We provide a breakdown of the dataset inTable A2. Because the expert policies do not have 100% success rate, the models will only be trainedon the successful demonstrations. Below we discuss our data collection process in details.Sourcing 3D objects. We source a wide variety of 3D objects from PartNet [ 54] and the subset ofShapeNetSem [ 55] objects categorized in the Acronym dataset [ 56]. We use 13 object categoriesto investigate generalization, including mug,pan,bowl ,wine glass ,knife ,can opener ,scissors ,screwdriver ,fork,spoon ,marker ,pen, and flashlight . Some object categories are reused for differenttasks; for example, mug is used as an anchor for safe placing but also as an object for pouring.Extracting aligned parts. Our generative diffusion models uses part segmentations of objects tolearn primitive diffusion models. For 3D objects from PartNet, we use the segmentations provided in‡Code from https://github.com/r-pad/taxpose.14Table A2: Simulation and Demonstration DataTask Object Source Number of Simulations Number of Success DemonstrationsSafe PlacingPen PartNet 1000 568Fork PartNet 1000 390ScrewDriver PartNet 1000 145Spoon PartNet 1000 410Knife Acronym 1000 496Scissors PartNet 1000 354Flashlight PartNet 1000 141CanOpener PartNet 1000 101Marker PartNet 1000 231PouringMug PartNet 2000 1051WineGlass PartNet 2000 1542Bowl Acronym 2000 776Pan PartNet 2000 1153the dataset. For 3D objects from ShapeNetSem, we first label 3D keypoints, then from the labeledkeypoints, we procedurally extract parts. As ShapeNet provides canonical poses for 3D models, wecan also align the extracted functional parts for each object category.Simulating trajectories and rendering. We simulate the robot-object interactions by tracing thetrajectories defined by the parameters. We first use multiple cameras to render RGB-D images, whichyield realistic object point clouds. We then map the functional parts to the point clouds with thecorrect transformation and scaling. Finally, we obtain point cloud segments of each affordance part.Because these parts are extracted from the rendered point clouds, they can be incomplete, whichincreases the robustness of our method and helps transferability to real-world settings.D Evaluation DetailsIn Section 5, we report task completion scores. For each experiment, we randomly draw 100 samplesfrom the withheld testing data to initialize simulation for evaluation. This procedure ensures that theaction can be successfully performed for the pair of anchor and function objects. To systematicallyevaluate multimodal actions (e.g, pouring from different directions), we sample from each model 5times and simulate the predicted actions. We repeat each experiment with 3 different random seeds,resulting in a total of 1500 trials.The task score indicates task completion between failure (0) and success (100), with credits assignedfor partial completion. The score is computed based on model-based classifiers designed for eachtask. 
Now we describe how the score is computed in more detail:•Pouring: we first use PyBullet’s collision test to check whether the function object and anchorobject will ever interpenetrate during the execution of the action by rigidly transformingthe function object to the predicted poses. If the objects interpenetrate, we assign a scoreof zero because the action cannot be realistically executed. Then we simulate the pouringaction, and use the percentage of particles successfully transferred from the function objectto the anchor object as the partial score.•Safe Placing: similar to pouring, we check interpenetration for the start pose of the placementaction. If the objects interpenetrate, we assign a score of zero. Then we simulate theplacement action until contact between the anchor and function object. If the orientationof the function object is incorrect (e.g., the blade of the knife is outside of the container),we assign a score of zero. If the orientation is correct, the percentage of the trajectoryparameterized by the predicted transformations that is successfully executed is used as thepartial score.15E Additional ResultsBesides reporting the task completion scores, we include additional task success rates in Table A3 andTable A4. For pouring, a trial is considered successful if there is no interpenetration between objectsand 70% of particles are successfully transferred. For safe placing, a successful trial requires nointerpenetration at the predicted start pose for the function object, correct orientation of the functionobject, and 70% of the predicting trajectory being successfully executed without collision betweenobjects. We observe similar trends as the results presented in Section 5.Table A3: CPM shows strong generalization to novel instances of objects within seen categories.Model Pouring Safe PlacingTransformer-BC 17.53 ±3.13 33.27 ±2.14TAX-Pose 21.33 ±0.58 74.00±1.00PC-DDPM 70.67 ±1.27 48.73 ±2.97Part-Aware PC-DDPM 73.60 ±2.60 36.53 ±2.20CPM (ours) 76.87±1.70 68.87±2.25Table A4: CPM demonstrates strong generalization to function objects from unseen object categories.ModelPouring Safe PlacingBowl Glass Mug Pan Fork Pen ScissorsTransformer-BC 10.00 ±2.51 19.20 ±0.92 5.80 ±1.93 29.33 ±2.04 24.00 ±1.51 27.33 ±2.34 18.47 ±2.93TAX-Pose 21.00 ±1.00 3.00 ±1.00 8.00 ±1.00 42.67 ±2.08 47.67 ±3.21 62.67±4.04 33.33±1.15PC-DDPM 56.53 ±2.00 70.67 ±3.06 68.67 ±4.31 59.93 ±2.80 38.00 ±3.83 43.47 ±1.68 28.47 ±0.83Part-Aware PC-DDPM 54.87 ±2.10 68.33 ±2.97 65.20 ±4.61 62.00±3.56 28.67±1.68 42.40 ±3.12 17.67 ±2.70CPM (ours) 76.40±1.78 78.93 ±3.14 76.00 ±5.26 54.67±1.50 53.93±2.91 56.53±2.04 62.07±1.72F AssumptionsDuring training, our method assumes 1) a description of the manipulation skill as a set of partcorrespondences, 2) access to the dataset of successful trajectories, and 3) access to part segmentationsfor objects in the dataset. During testing, our method assumes the part segmentations for objectsbeing manipulated. We contend that these assumptions align with our current focus. Nonetheless,subsequent research should aim to address them.First, the description of manipulation skills is in symbolic text, e.g., pouring from mugs to bowlscontains three constraints. They can be easily annotated by humans as there is no need to specifyany continuous parameters or mathematical formulas. An interesting future direction is to leveragelarge language models to more efficiently extract constraints. 
CPM then learns the grounding of theseconstraints from data.Second, we assume access to successful manipulation trajectories. That is, we do not assume anyadditional annotations, such as programs for generating these trajectories. The key focus of thepaper is to improve the data efficiency of learning such skills, in particular for generalization acrosscategories. An important future direction is to improve the data efficiency of this method and learnfrom noisy human demonstrations.Finally, relying on external part segmentation is limiting, but 2D or 3D part segmentation modelsare generally available for many object categories [ 15,16,26]. An exciting future direction is toextend the current framework to automatically discover functional part segmentations leveragingmanipulation data.16<|im_end|> <|im_start|>assistant ### Review Title ### Review Text Strengths - Authors show real-world demonstration showing impressive successful sim-to-real transfer, including generalization to unseen mugs - The authors proposed method of using part-part correspondences as the basis of modelling manipulation actions, and the integration of this with diffusion models is very interesting and novel. I believe this paper is very relevant to the CoRL community. Weaknesses The main weakness of the paper is that in places it is unclear and missing key details. In addition I believe there is room for improvement in the baselines used for comparison. - On line 135, and in the equation below line 144, the authors state that each individual takes as input two object point-clouds, but in the equation below line 160, these are now partial point-clouds. From the rest of the text, particularly the discussions of the baselines, I am inclined to think that the authors use partial point-clouds of the parts as input but it is unclear. - In the section 'Inference-time combination' the authors discuss performing the diffusion in SE3. This is further ablated in the experiments with the model with a 6D rotation parameterization. However, performing diffusion on SE3 is not a contribution of this paper, since that has been proposed in [40]. The section 'Inference-time combination' should be rewritten to make clear that this is from prior work and not part of the authors contribution. - I found the description of the baselines and their relationship to CPM very unclear. In particular, for both PC-DDPM and part-aware-PC-DDPM, it is very unclear to me what their input/output is. Is there a model per correspondence as with CPM, or a model per task? If neither, how are the correspondences/tasks encoded? This becomes especially important for part-aware-PC-DDPM because from the text I do not understand what the difference between part-aware-PC-DDPM and CPM is. - I cannot find anywhere in the text a definition of the performance metric. Is it a success rate? Over how many evaluations? The authors proposed approach shows improved performance including on unseen tasks according to this metric, but I cannot list this as a strength of the paper until I know what this performance metric means. - The authors baselines are only other diffusion models, and all of the baselines are developed by the authors. I think they would be much more convincing if they were contrasted from another method from the literature. Perhaps something from the offline RL literature such as CQL. 
- One key limitation that is not discussed in the limitations section is the quite restrictive set of tasks that this method can be applied to, namely consisting of two-object interactions with only one object actively manipulated. ### Review Rating ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
HyeJmlrFvH
ICLR.cc/2020/Conference
2020
Provably Communication-efficient Data-parallel SGD via Nonuniform Quantization
["Ali Ramezani-Kebrya", "Fartash Faghri", "Ilya Markov", "Vitalii Aksenov", "Dan Alistarh", "Daniel M. Roy"]
As the size and complexity of models and datasets grow, so does the need for communication-efficient variants of stochastic gradient descent that can be deployed on clusters to perform model fitting in parallel. Alistarh et al. (2017) describe two variants of data-parallel SGD that quantize and encode gradients to lessen communication costs. For the first variant, QSGD, they provide strong theoretical guarantees. For the second variant, which we call QSGDinf, they demonstrate impressive empirical gains for distributed training of large neural networks. Building on their work, we propose an alternative scheme for quantizing gradients and show that it yields stronger theoretical guarantees than exist for QSGD while matching the empirical performance of QSGDinf.
["sgd", "nonuniform quantization", "variants", "qsgd", "size", "complexity", "models", "datasets", "need", "stochastic gradient descent"]
ABSTRACTAs the size and complexity of models and datasets grow, so does the need forcommunication-efficient variants of stochastic gradient descent that can be de-ployed on clusters to perform parallel model training. Alistarh et al. (2017) describetwo variants of data-parallel SGD that quantize and encode gradients to lessencommunication costs. For the first variant, QSGD, they provide strong theoreticalguarantees. For the second variant, which we call QSGDinf, they demonstrate im-pressive empirical gains for distributed training of large neural networks. Buildingon their work, we propose an alternative scheme for quantizing gradients and showthat it yields stronger theoretical guarantees than exist for QSGD while matchingthe empirical performance of QSGDinf.1 I NTRODUCTIONDeep learning is booming thanks to enormous datasets and very large models, leading to the fact thatthe largest datasets and models can no longer be trained on a single machine. One common solutionto this problem is to use distributed systems for training. The most common algorithms underlyingdeep learning are stochastic gradient descent (SGD) and its variants, which led to a significant amountof research on building and understanding distributed versions of SGD.Implementations of SGD on distributed systems and data-parallel versions of SGD are scalable andtake advantage of multi-GPU systems. Data-parallel SGD, in particular, has received significantattention due to its excellent scalability properties (Zinkevich et al., 2010; Bekkerman et al., 2011;Recht et al., 2011; Dean et al., 2012; Coates et al., 2013; Chilimbi et al., 2014; Li et al., 2014; Duchiet al., 2015; Xing et al., 2015; Zhang et al., 2015; Alistarh et al., 2017). In data-parallel SGD, a largedataset is partitioned among Kprocessors. These processors work together to minimize an objectivefunction. Each processor has access to the current parameter vector of the model. At each SGDiteration, each processor computes an updated stochastic gradient using its own local data. It thenshares the gradient update with its peers. The processors collect and aggregate stochastic gradients tocompute the updated parameter vector.Increasing the number of processing machines reduces the computational costs significantly. However,the communication costs to share and synchronize huge gradient vectors and parameters increasesdramatically as the size of the distributed systems grows. Communication costs may thwart theanticipated benefits of reducing computational costs. Indeed, in practical scenarios, the communica-tion time required to share stochastic gradients and parameters is the main performance bottleneck(Recht et al., 2011; Li et al., 2014; Seide et al., 2014; Strom, 2015; Alistarh et al., 2017). Reducingcommunication costs in data-parallel SGD is an important problem.One promising solution to the problem of reducing communication costs of data-parallel SGD isgradient compression, e.g., through gradient quantization (Dean et al., 2012; Seide et al., 2014; Saet al., 2015; Gupta et al., 2015; Abadi et al., 2016; Zhou et al., 2016; Alistarh et al., 2017; Wen et al.,2017; Bernstein et al., 2018). (This should not be confused with weight quantization/sparsification,as studied by Wen et al. (2016); Hubara et al. (2016); Park et al. (2017); Wen et al. (2017), which wedo not discuss here.) 
Unlike full-precision data-parallel SGD, where each processor is required to broadcast its local gradient in full precision, i.e., transmit and receive huge full-precision vectors at each iteration, quantization requires each processor to transmit only a few communication bits per iteration for each component of the stochastic gradient.

One popular such proposal for communication compression is quantized SGD (QSGD), due to Alistarh et al. (2017). In QSGD, stochastic gradient vectors are normalized to have unit L2 norm, and then compressed by quantizing each element to a uniform grid of quantization levels using a randomized method. While most lossy compression schemes do not provide convergence guarantees, QSGD's quantization scheme is designed to be unbiased, which implies that the quantized stochastic gradient is itself a stochastic gradient, only with higher variance determined by the dimension and the number of quantization levels. As a result, Alistarh et al. (2017) are able to establish a number of theoretical guarantees for QSGD, including that it converges under standard assumptions. By changing the number of quantization levels, QSGD allows the user to trade off communication bandwidth and convergence time.

Despite their theoretical guarantees based on quantizing after L2 normalization, Alistarh et al. opt to present empirical results using L∞ normalization. We call this variation QSGDinf. While the empirical performance of QSGDinf is strong, their theoretical guarantees on the number of bits transmitted no longer apply. Indeed, in our own empirical evaluation of QSGD, we find the variance induced by quantization is substantial, and the performance is far from that of SGD and QSGDinf.

Given the popularity of this scheme, it is natural to ask whether one can obtain guarantees as strong as those of QSGD while matching the practical performance of the QSGDinf heuristic. In this work, we answer this question in the affirmative by providing a new quantization scheme which fits into QSGD in a way that allows us to establish stronger theoretical guarantees on the variance, bandwidth, and cost to achieve a prescribed gap. Instead of QSGD's uniform quantization scheme, we use an unbiased nonuniform logarithmic scheme, similar to those introduced in telephony systems for audio compression (Cattermole, 1969). We call the resulting algorithm nonuniformly quantized stochastic gradient descent (NUQSGD). Like QSGD, NUQSGD is a quantized data-parallel SGD algorithm with strong theoretical guarantees that allows the user to trade off communication costs with convergence speed. Unlike QSGD, NUQSGD has strong empirical performance on deep models and large datasets, matching that of QSGDinf. In particular, we provide a new efficient implementation of these schemes in a modern computational framework (PyTorch), and benchmark it on classic large-scale image classification tasks.

The intuition behind the nonuniform quantization scheme underlying NUQSGD is that, after L2 normalization, many elements of the normalized stochastic gradient will be near zero. By concentrating quantization levels near zero, we are able to establish stronger bounds on the excess variance. In the overparametrized regime of interest, these bounds decrease rapidly as the number of quantization levels increases. Combined with a bound on the expected code-length, we obtain a bound on the total communication costs of achieving an expected suboptimality gap.
The resulting bound is slightly stronger than the one provided by QSGD.

To study how quantization affects convergence on state-of-the-art deep models, we compare NUQSGD, QSGD, and QSGDinf, focusing on training loss, variance, and test accuracy on standard deep models and large datasets. Using the same number of bits per iteration, experimental results show that NUQSGD has smaller variance than QSGD, as expected by our theoretical results. This smaller variance also translates to improved optimization performance, in terms of both training loss and test accuracy. We also observe that NUQSGD matches the performance of QSGDinf in terms of variance and loss/accuracy. Further, our distributed implementation shows that the resulting algorithm considerably reduces the communication cost of distributed training, without adversely impacting accuracy. Our empirical results show that NUQSGD can provide faster end-to-end parallel training relative to data-parallel SGD, QSGD, and Error-Feedback SignSGD (Karimireddy et al., 2019) on the ImageNet dataset.

Summary of Contributions.
• We establish stronger theoretical guarantees for the excess variance and communication costs of our gradient quantization method than those available for QSGD's uniform quantization method.
• We then establish stronger convergence guarantees for the resulting algorithm, NUQSGD, under standard assumptions.
• We demonstrate that NUQSGD has strong empirical performance on deep models and large datasets, both in terms of accuracy and scalability. Thus, NUQSGD closes the gap between the theoretical guarantees of QSGD and the empirical performance of QSGDinf.

1.1 RELATED WORK

Seide et al. (2014) proposed signSGD, an efficient heuristic scheme that reduces communication costs drastically by quantizing each gradient component to two values. Bernstein et al. (2018) later provided convergence guarantees for signSGD. Note that the quantization employed by signSGD is not unbiased, and so a new analysis was required. As the number of levels is fixed, signSGD does not provide any trade-off between communication costs and convergence speed.

Sa et al. (2015) introduced Buckwild!, a lossy compressed SGD with convergence guarantees. The authors provided bounds on the error probability of SGD, assuming convexity and gradient sparsity. Wen et al. (2017) proposed TernGrad, a stochastic quantization scheme with three levels. TernGrad also significantly reduces communication costs and obtains reasonable accuracy with a small degradation in performance compared to full-precision SGD. Convergence guarantees for TernGrad rely on a nonstandard gradient norm assumption. As discussed, Alistarh et al. (2017) proposed QSGD, a more general stochastic quantization scheme, for which they provide both theoretical guarantees and experimental validation (although for different variants of the same algorithm). We note that their implementation was only provided in Microsoft CNTK; by contrast, here we provide a more generic implementation in Horovod (Sergeev and Del Balso, 2018), a communication back-end which can support a range of modern frameworks such as TensorFlow, Keras, PyTorch, and MXNet.

NUQSGD uses a logarithmic quantization scheme. Such schemes have long been used in telephony systems for audio compression (Cattermole, 1969). Logarithmic quantization schemes have appeared in other contexts recently: Hou and Kwok (2018) studied weight distributions of long short-term memory networks and proposed to use logarithmic quantization for network compression.
Zhang et al. (2017) proposed a gradient compression scheme and introduced an optimal quantization scheme, but for the setting where the points to be quantized are known in advance. As a result, their scheme is not applicable to the communication setting of quantized data-parallel SGD.

2 PRELIMINARIES: DATA-PARALLEL SGD AND CONVERGENCE

We consider a high-dimensional machine learning model, parametrized by a vector w ∈ R^d. Let W ⊆ R^d denote a closed and convex set. Our objective is to minimize f: W → R, which is an unknown, differentiable, convex, and β-smooth function. The following summary is based on (Alistarh et al., 2017).

Recall that a function f is β-smooth if, for all u, v ∈ W, we have ‖∇f(u) − ∇f(v)‖ ≤ β‖u − v‖, where ‖·‖ denotes the Euclidean norm. Let (S, Σ, μ) be a probability space (and let E denote expectation). Assume we have access to stochastic gradients of f, i.e., we have access to a function g: W × S → R^d such that, if s ∼ μ, then E[g(w, s)] = ∇f(w) for all w ∈ W. In the rest of the paper, we let g(w) denote the stochastic gradient for notational simplicity. The update rule for conventional full-precision projected SGD is w_{t+1} = P_W(w_t − α g(w_t)), where w_t is the current parameter input, α is the learning rate, and P_W is the Euclidean projection onto W.

We say the stochastic gradient has a second-moment upper bound B when E[‖g(w)‖²] ≤ B for all w ∈ W. Similarly, the stochastic gradient has a variance upper bound σ² when E[‖g(w) − ∇f(w)‖²] ≤ σ² for all w ∈ W. Note that a second-moment upper bound implies a variance upper bound, because the stochastic gradient is unbiased.

We have classical convergence guarantees for conventional full-precision SGD given access to stochastic gradients at each iteration:

Theorem 1 (Bubeck 2015, Theorem 6.3). Let f: W → R denote a convex and β-smooth function and let R² ≜ sup_{w ∈ W} ‖w − w_0‖². Suppose that the projected SGD update is executed for T iterations with α = 1/(β + 1/γ), where γ = (R/σ)√(2/T). Given repeated and independent access to stochastic gradients with a variance upper bound σ², projected SGD satisfies

E[ f( (1/T) Σ_{t=0}^{T} w_t ) ] − min_{w ∈ W} f(w) ≤ R √(2σ²/T) + βR²/T.    (1)

Minibatched (with larger batch sizes) and data-parallel SGD are two common SGD variants used in practice to reduce variance and improve the computational efficiency of conventional SGD.

Input: local data, local copy of the parameter vector w_t, learning rate α, and K
1:  for t = 1 to T do
2:    for i = 1 to K do                      // each transmitter processor (in parallel)
3:      Compute g_i(w_t);                    // stochastic gradient
4:      Encode c_{i,t} ← ENCODE(g_i(w_t));
5:      Broadcast c_{i,t} to all processors;
6:    for l = 1 to K do                      // each receiver processor (in parallel)
7:      for i = 1 to K do                    // each transmitter processor
8:        Receive c_{i,t} from processor i;
9:        Decode ĝ_i(w_t) ← DECODE(c_{i,t});
10:   Aggregate w_{t+1} ← P_W( w_t − (α/K) Σ_{i=1}^{K} ĝ_i(w_t) );
Algorithm 1: Data-parallel (synchronized) SGD.
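To make the communication pattern concrete, here is a minimal single-process simulation of Algorithm 1. It is a sketch under simplifying assumptions: the projection P_W is omitted (unconstrained case) and ENCODE/DECODE are pluggable stand-ins that default to the identity; it is not the paper's distributed implementation.

```python
import numpy as np

def data_parallel_sgd(w0, grad_fns, lr, T,
                      encode=lambda g: g, decode=lambda c: c):
    """Simulate synchronized data-parallel SGD (Algorithm 1) on one machine.

    grad_fns: one stochastic-gradient oracle per simulated processor.
    encode/decode: gradient compression hooks (identity = full precision).
    """
    w = np.asarray(w0, dtype=float).copy()
    K = len(grad_fns)
    for _ in range(T):
        # Each processor computes, encodes, and "broadcasts" its gradient.
        codes = [encode(g(w)) for g in grad_fns]
        # Each processor decodes all received gradients and aggregates.
        g_hat = sum(decode(c) for c in codes) / K
        w -= lr * g_hat  # projection P_W omitted (unconstrained assumption)
    return w
```

Plugging a quantizer into encode (and its inverse mapping into decode) recovers the quantized variants discussed below.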
Provided that $g_l(w_t)$ is a stochastic gradient with a variance upper bound $\sigma^2$ for all $l$, then $\frac{1}{K}\sum_{l=1}^{K} g_l(w_t)$ is a stochastic gradient with a variance upper bound $\sigma^2/K$. Thus, aggregation improves convergence of SGD by reducing the first term of the upper bound in (1). Assume each processor computes a minibatch gradient of size $B$. Then, this update rule is essentially a minibatched update with size $BK$.

Data-parallel SGD is described in Algorithm 1. Full-precision data-parallel SGD is a special case of Algorithm 1 with identity encoding and decoding mappings. Otherwise, the decoded stochastic gradient $\hat{g}_i(w_t)$ is likely to be different from the original local stochastic gradient $g_i(w_t)$.

By Theorem 1, we have the following convergence guarantees for full-precision data-parallel SGD:

Corollary 1 (Alistarh et al. 2017, Corollary 2.2). Let $f$, $R$, and $\gamma$ be as defined in Theorem 1 and let $\epsilon > 0$. Suppose that the projected SGD update is executed for $T$ iterations with $\alpha = 1/(\beta + \sqrt{K}/\gamma)$ on $K$ processors, each with access to independent stochastic gradients of $f$ with a second-moment bound $B$. The smallest $T$ for full-precision data-parallel SGD that guarantees $\mathbb{E}[f(\frac{1}{T}\sum_{t=0}^{T} w_t)] - \min_{w \in \mathcal{W}} f(w) \leq \epsilon$ is $T_\epsilon = O\big(R^2 \max\big(\frac{2B}{K\epsilon^2}, \frac{\beta}{\epsilon}\big)\big)$.

3 Nonuniformly Quantized Stochastic Gradient Descent

Data-parallel SGD reduces computational costs significantly. However, the communication cost of broadcasting stochastic gradients is the main performance bottleneck in large-scale distributed systems. In order to reduce communication costs and accelerate training, Alistarh et al. (2017) introduced a compression scheme that produces a compressed and unbiased stochastic gradient, suitable for use in SGD.

At each iteration of QSGD, each processor broadcasts an encoding of its own compressed stochastic gradient, decodes the stochastic gradients received from other processors, and sums all the quantized vectors to produce a stochastic gradient. In order to compress the gradients, every coordinate (with respect to the standard basis) of the stochastic gradient is normalized by the Euclidean norm of the gradient and then stochastically quantized to one of a small number of quantization levels distributed uniformly in the unit interval. The stochasticity of the quantization is necessary to avoid introducing bias.

Alistarh et al. (2017) give a simple argument that provides a lower bound on the number of coordinates that are quantized to zero in expectation. Encoding these zeros efficiently provides communication savings at each iteration. However, the cost of their scheme is greatly increased variance in the gradient, and thus slower overall convergence. In order to optimize overall performance, we must balance communication savings with variance.

Figure 1: An example of nonuniform stochastic quantization with $s = 3$. The point between the arrows represents the value of the normalized coordinate.

Figure 2: Variance upper bounds for QSGD and NUQSGD with $s \in \{4, 6, 8\}$, as a function of the dimension $d$.

By simple counting arguments, the distribution of the (normalized) coordinates cannot be uniform. Indeed, this is the basis of the lower bound on the number of zeros. These arguments make no assumptions on the data distribution, and rely entirely on the fact that the quantities being quantized are the coordinates of a unit-norm vector.
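For reference, here is a minimal NumPy sketch of the per-coordinate QSGD-style quantization just described: normalization by the Euclidean norm followed by unbiased stochastic rounding onto a uniform grid with $s$ intervals. The bucketing and zero-run encoding of the full scheme are omitted, so this is an illustration of the randomized rounding only.

    import numpy as np

    def qsgd_quantize(v, s, rng=None):
        # Uniform-grid stochastic quantization: normalize each coordinate by
        # ||v||_2, then round onto {0, 1/s, 2/s, ..., 1} so that the quantized
        # vector is an unbiased estimate of v.
        rng = np.random.default_rng() if rng is None else rng
        norm = np.linalg.norm(v)
        if norm == 0.0:
            return np.zeros_like(v, dtype=float)
        r = np.abs(v) / norm                   # normalized coordinates in [0, 1]
        k = np.floor(r * s)                    # grid point just below r
        p = r * s - k                          # round-up probability, E[h] = r
        h = (k + (rng.random(v.shape) < p)) / s
        return norm * np.sign(v) * h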
Uniform quantization does not capture the properties of such vectors, leading to substantial gradient variance.

3.1 Nonuniform Quantization

In this paper, we propose and study a new scheme to quantize normalized gradient vectors. Instead of uniformly distributed quantization levels, as proposed by Alistarh et al. (2017), we consider quantization levels that are nonuniformly distributed in the unit interval, as depicted in Figure 1.

In order to obtain a quantized gradient that is suitable for SGD, we need the quantized gradient to remain unbiased. Alistarh et al. (2017) achieve this via a randomized quantization scheme, which can be easily generalized to the case of nonuniform quantization levels.

Using a carefully parametrized generalization of the unbiased quantization scheme introduced by Alistarh et al., we can control both the cost of communication and the variance of the gradient. Compared to a uniform quantization scheme, our scheme reduces quantization error and variance by better matching the properties of normalized vectors. In particular, by increasing the number of quantization levels near zero, we obtain a stronger variance bound. Empirically, our scheme also better matches the distribution of normalized coordinates observed on real datasets and networks.

We now describe the nonuniform quantization scheme: Let $s \in \{1, 2, \ldots\}$ be the number of internal quantization levels, and let $L = (l_0, l_1, \ldots, l_{s+1})$ denote the sequence of quantization levels, where $l_0 = 0 < l_1 < \cdots < l_{s+1} = 1$. For $r \in [0, 1]$, let $\tilde{s}(r)$ and $p(r)$ satisfy $l_{\tilde{s}(r)} \leq r \leq l_{\tilde{s}(r)+1}$ and $r = (1 - p(r))\, l_{\tilde{s}(r)} + p(r)\, l_{\tilde{s}(r)+1}$, respectively. Define $\tau(r) = l_{\tilde{s}(r)+1} - l_{\tilde{s}(r)}$. Note that $\tilde{s}(r) \in \{0, 1, \ldots, s\}$.

Definition 1. The nonuniform quantization of a vector $v \in \mathbb{R}^d$ is

$$Q_s(v) \triangleq [Q_s(v_1), \ldots, Q_s(v_d)]^T \quad \text{where} \quad Q_s(v_i) = \|v\| \cdot \mathrm{sign}(v_i) \cdot h_i(v, s) \qquad (2)$$

where, letting $r_i = |v_i|/\|v\|$, the $h_i(v, s)$'s are independent random variables such that $h_i(v, s) = l_{\tilde{s}(r_i)}$ with probability $1 - p(r_i)$ and $h_i(v, s) = l_{\tilde{s}(r_i)+1}$ otherwise.

We note that the distribution of $h_i(v, s)$ satisfies $\mathbb{E}[h_i(v, s)] = r_i$ and achieves the minimum variance over all distributions that satisfy $\mathbb{E}[h_i(v, s)] = r_i$ with support $L$. In the following, we focus on a special case of nonuniform quantization with $\hat{L} = (0, 1/2^s, \ldots, 2^{s-1}/2^s, 1)$ as the quantization levels.

The intuition behind this quantization scheme is that it is very unlikely to observe large values of $r_i$ in the stochastic gradient vectors of machine learning models. Stochastic gradients are observed to be dense vectors (Bernstein et al., 2018). Hence, it is natural to use fine intervals for small $r_i$ values to reduce quantization error and control the variance.

After quantizing the stochastic gradient with a small number of discrete levels, each processor must encode its local gradient into a binary string for broadcasting. We describe this encoding in Appendix A.
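Definition 1 translates directly into code. Below is a minimal NumPy sketch using the logarithmic levels $\hat{L} = (0, 1/2^s, \ldots, 1/2, 1)$; it covers only the quantization step, not the Appendix A encoding, and is an illustration rather than the paper's implementation.

    import numpy as np

    def nuq_quantize(v, s, rng=None):
        # Q_s(v) from Definition 1 with the logarithmic level set
        # L_hat = (0, 1/2^s, ..., 1/2, 1); returns an unbiased estimate of v.
        rng = np.random.default_rng() if rng is None else rng
        levels = np.concatenate(([0.0], 2.0 ** np.arange(-s, 0), [1.0]))
        norm = np.linalg.norm(v)
        if norm == 0.0:
            return np.zeros_like(v, dtype=float)
        r = np.abs(v) / norm                                   # r_i = |v_i| / ||v||
        lo = np.clip(np.searchsorted(levels, r, side='right') - 1,
                     0, len(levels) - 2)                       # index of l_{s~(r_i)}
        l_lo, l_hi = levels[lo], levels[lo + 1]
        p = (r - l_lo) / (l_hi - l_lo)                         # P(round up), so E[h_i] = r_i
        h = np.where(rng.random(v.shape) < p, l_hi, l_lo)
        return norm * np.sign(v) * h

Averaging many independent draws of nuq_quantize(v, s) recovers v, matching the unbiasedness claim $\mathbb{E}[Q_s(v)] = v$ of Theorem 2 below.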
4 Theoretical Guarantees

In this section, we provide theoretical guarantees for NUQSGD, giving variance and code-length bounds, and using these in turn to compare NUQSGD and QSGD. Please note that the proofs of Theorems 2, 3, 4, and 5 are provided in Appendices B, C, D, and E, respectively.

Theorem 2 (Variance bound). Let $v \in \mathbb{R}^d$. The nonuniform quantization of $v$ satisfies $\mathbb{E}[Q_s(v)] = v$. Furthermore, provided that $s \leq \log(d)/2$, we have

$$\mathbb{E}[\|Q_s(v) - v\|^2] \leq \epsilon_Q \|v\|^2 \qquad (3)$$

where $\epsilon_Q = \min\{\min\{2^{-2s}(d - 2^{2s}),\, 2^{-s}\sqrt{d - 2^{2s}}\} + O(s),\; (d/3)(2^{-2s+1} + 1)\}$.

The result in Theorem 2 implies that if $g(w)$ is a stochastic gradient with a second-moment bound $\eta$, then $Q_s(g(w))$ is a stochastic gradient with a variance upper bound $\epsilon_Q \eta$. In the range of interest where $d$ is sufficiently large, i.e., $s = o(\log(d))$, the variance upper bound decreases with the number of quantization levels. To obtain this data-independent bound, we establish upper bounds on the number of coordinates of $v$ falling into intervals defined by $\hat{L}$.

Theorem 3 (Code-length bound). Let $v \in \mathbb{R}^d$. Provided $d$ is large enough to ensure $2^{2s} + \sqrt{d}\,2^s \leq d/e$, the expectation $\mathbb{E}[|\mathrm{ENCODE}(v)|]$ of the number of communication bits needed to transmit $Q_s(v)$ is bounded above by

$$N_Q = C + 3n_{s,d} + (1 + o(1))\, n_{s,d} \log\frac{d}{n_{s,d}} + (1 + o(1))\, n_{s,d} \log\log\frac{8(2^{2s} + d)}{n_{s,d}} \qquad (4)$$

where $C = b(1 + o(1))$ and $n_{s,d} = 2^{2s} + 2^s\sqrt{d}$.

Theorem 3 provides a bound on the expected number of communication bits to encode the quantized stochastic gradient. Note that $2^{2s} + \sqrt{d}\,2^s \leq d/e$ is a mild assumption in practice. As one would expect, the bound (4) increases monotonically in $d$ and $s$. In the sparse case, if we choose $s = o(\log d)$ levels, then the upper bound on the expected code-length is $O\big(2^s\sqrt{d}\log\frac{\sqrt{d}}{2^s}\big)$.

Combining the upper bounds above on the variance and code-length, Corollary 1 implies the following guarantees for NUQSGD:

Theorem 4 (NUQSGD for smooth convex optimization). Let $f$ and $R$ be defined as in Theorem 1, let $\epsilon_Q$ be defined as in Theorem 2, let $\epsilon > 0$, $\hat{B} = (1 + \epsilon_Q)B$, and let $\gamma > 0$ be given by $\gamma^2 = 2R^2/(\hat{B}T)$. With ENCODE and DECODE defined as in Appendix A, suppose that Algorithm 1 is executed for $T$ iterations with a learning rate $\alpha = 1/(\beta + \sqrt{K}/\gamma)$ on $K$ processors, each with access to independent stochastic gradients of $f$ with a second-moment bound $B$. Then $T_\epsilon = O\big(\max\big(\frac{2\hat{B}}{K\epsilon^2}, \frac{\beta}{\epsilon}\big)R^2\big)$ iterations suffice to guarantee $\mathbb{E}[f(\frac{1}{T}\sum_{t=0}^{T} w_t)] - \min_{w \in \mathcal{W}} f(w) \leq \epsilon$. In addition, NUQSGD requires at most $N_Q$ communication bits per iteration in expectation.

On nonconvex problems, (weaker) convergence guarantees can be established along the lines of, e.g., (Ghadimi and Lan, 2013, Theorem 2.1).

NUQSGD vs QSGD. How do QSGD and NUQSGD compare in terms of bounds on the expected number of communication bits required to achieve a given suboptimality gap $\epsilon$? The quantity that controls our guarantee on the convergence speed in both algorithms is the variance upper bound, which in turn is controlled by the quantization schemes. Note that the number of quantization levels, $s$, is usually a small number in practice. On the other hand, the dimension, $d$, can be very large, especially in overparameterized networks. In Figure 2, we show that the quantization scheme underlying NUQSGD results in substantially smaller variance upper bounds for plausible ranges of $s$ and $d$. Note that these bounds do not make any assumptions about the dataset or the structure of the network.

For any (nonrandom) number of iterations $T$, an upper bound, $N_A$, holding uniformly over iterations $k \leq T$ on the expected number of bits used by an algorithm $A$ to communicate the gradient on iteration $k$, yields an upper bound $TN_A$ on the expected number of bits communicated over $T$ iterations by algorithm $A$. Taking $T = T_{A,\epsilon}$ to be the (minimum) number of iterations needed to guarantee an expected suboptimality gap of $\epsilon$ based on the properties of $A$, we obtain an upper bound, $\zeta_{A,\epsilon} = T_{A,\epsilon} N_A$, on the expected number of bits communicated on a run expected to achieve a suboptimality gap of at most $\epsilon$.
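The variance bounds discussed above (and plotted in Figure 2) are analytical. As a small empirical companion, the following sketch computes the exact relative quantization variance of unbiased stochastic rounding for a uniform grid versus the logarithmic grid $\hat{L}$ on a Gaussian test vector. The Gaussian choice, and the closed-form per-coordinate variance $\tau(r_i)^2\, p(r_i)(1 - p(r_i))$, are our own additions for illustration, not results quoted from the paper.

    import numpy as np

    def exact_relative_variance(v, levels):
        # Exact E||Q(v) - v||^2 / ||v||^2 for unbiased stochastic rounding onto
        # a sorted level set containing 0 and 1: each coordinate contributes
        # tau(r_i)^2 * p(r_i) * (1 - p(r_i)).
        r = np.abs(v) / np.linalg.norm(v)
        lo = np.clip(np.searchsorted(levels, r, side='right') - 1,
                     0, len(levels) - 2)
        tau = levels[lo + 1] - levels[lo]
        p = (r - levels[lo]) / tau
        return float(np.sum(tau ** 2 * p * (1 - p)))

    rng = np.random.default_rng(0)
    v = rng.standard_normal(10**6)      # Gaussian test vector (an assumption)
    s = 4
    log_levels = np.concatenate(([0.0], 2.0 ** np.arange(-s, 0), [1.0]))  # NUQSGD-style
    uni_levels = np.linspace(0.0, 1.0, s + 2)                             # uniform grid
    print(exact_relative_variance(v, log_levels),
          exact_relative_variance(v, uni_levels))

On such vectors, where most normalized coordinates are small, the logarithmic levels concentrate resolution near zero and yield a noticeably smaller realized variance than the equispaced grid, consistent with the comparison of the bounds.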
Figure 3: Training loss on CIFAR10 (left) and ImageNet (middle) for ResNet models. QSGD, QSGDinf, and NUQSGD are trained by simulating the quantization and dequantization of the gradients from 8 GPUs. On CIFAR10, SGD refers to single-GPU training, whereas on ImageNet it refers to the 2-GPU setup in the original ResNet paper. SGD is shown to highlight the significance of the gap between QSGD and QSGDinf. SuperSGD refers to simulating full-precision distributed training without quantization. SuperSGD is impractical in scenarios with limited bandwidth. (Right) Estimated normalized variance on CIFAR10 on the trajectory of single-GPU SGD. Variance is measured for fixed model snapshots during training. Notice that the variance for NUQSGD and QSGDinf is lower than SGD for almost all of training and that it decreases after the learning rate drops.

Theorem 5 (Expected number of communication bits). Provided that $s = o(\log(d))$ and $\frac{2\hat{B}}{K\epsilon^2} > \frac{\beta}{\epsilon}$, we have $\zeta_{\mathrm{NUQSGD},\epsilon} = O\big(\frac{1}{\epsilon^2}\sqrt{d(d - 2^{2s})}\log\frac{\sqrt{d}}{2^s}\big)$ and $\zeta_{\mathrm{QSGD},\epsilon} = O\big(\frac{1}{\epsilon^2}\, d\log\sqrt{d}\big)$.

Focusing on the dominant terms in the expressions for the overall number of communication bits required to guarantee a suboptimality gap of $\epsilon$, we observe that NUQSGD provides slightly stronger guarantees. Note that our stronger guarantees come without any assumption about the data.

5 Experimental Evaluation

In this section, we examine the practical performance of NUQSGD in terms of both convergence (accuracy) and speedup. The goal is to empirically show that NUQSGD can provide the same performance and accuracy as the QSGDinf heuristic, which has no theoretical compression guarantees. For this, we implement and test these three methods (NUQSGD, QSGD, and QSGDinf), together with the distributed full-precision SGD baseline, which we call SuperSGD. We split our study across two axes: first, we examine the convergence of the methods and their induced variance. Second, we provide an efficient implementation of all four methods in Pytorch using the Horovod communication back-end (Sergeev and Del Balso, 2018), adapted to efficiently support quantization, and examine speedup relative to the full-precision baseline. We investigate the impact of quantization on training performance by measuring loss, variance, accuracy, and speedup for ResNet models (He et al., 2016) applied to ImageNet (Deng et al., 2009) and CIFAR10 (Krizhevsky).

We evaluate these methods on two image classification datasets: ImageNet and CIFAR10. We train ResNet110 on CIFAR10 and ResNet18 on ImageNet with mini-batch size 128 and base learning rate 0.1. In all experiments, momentum and weight decay are set to 0.9 and $10^{-4}$, respectively. The bucket size and the number of quantization bits are set to 8192 and 4, respectively. We observe similar results in experiments with various bucket sizes and numbers of bits. We simulate a scenario with $k$ GPUs for all three quantization methods by estimating the gradient from $k$ independent mini-batches and aggregating them after quantization and dequantization.
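The following is a sketch of the simulation protocol just described, written against the PyTorch API. `model`, `loss_fn`, `batches`, and `quantize` are hypothetical placeholders, and momentum and weight decay from the actual experimental setup are omitted for brevity; this is not the paper's Horovod implementation.

    import torch

    def simulated_multi_gpu_step(model, loss_fn, batches, quantize, lr=0.1):
        # Emulates k-GPU quantized data-parallel SGD on a single device: one
        # gradient per mini-batch, quantized and dequantized before averaging.
        # `quantize` is a tensor -> tensor map (e.g., an NUQSGD-style quantizer
        # composed with its dequantizer).
        params = [p for p in model.parameters() if p.requires_grad]
        agg = [torch.zeros_like(p) for p in params]
        for x, y in batches:                   # one (x, y) pair per simulated GPU
            model.zero_grad()
            loss_fn(model(x), y).backward()
            for a, p in zip(agg, params):
                a += quantize(p.grad.detach())
        with torch.no_grad():
            for p, a in zip(params, agg):
                p -= lr * a / len(batches)     # average of dequantized gradients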
In Figure 3 (left and middle), we show the training loss with 8 GPUs. We observe that NUQSGD and QSGDinf improve training loss compared to QSGD on ImageNet. We observe a significant gap in training loss on CIFAR10, where the gap grows as training proceeds. We also observe similar performance gaps in test accuracy (provided in Appendix F). In particular, unlike NUQSGD, QSGD does not achieve the test accuracy of full-precision SGD. Figure 3 (right) shows the mean normalized variance of the gradient (defined in Appendix F) versus training iteration on the trajectory of single-GPU SGD on CIFAR10. These observations validate our theoretical results that NUQSGD has smaller variance for large models with a small number of quantization bits.

Efficient Implementation and Speedup. To examine speedup behavior, we implemented all quantization methods in Horovod (Sergeev and Del Balso, 2018), a communication back-end supporting Pytorch, Tensorflow and MXNet. Doing so efficiently requires non-trivial refactoring of this framework, since it does not support communication compression; our framework will be open-sourced upon publication. Our implementation diverges slightly from the theoretical analysis. First, Horovod applies "tensor fusion" to multiple layers, merging the resulting gradient tensors for more efficient transmission. This causes the gradients for different layers to be quantized together, which can lead to a loss of accuracy (due to, e.g., different normalization factors across the layers). We addressed this by tuning the way in which tensor fusion is applied to the layers so that it minimizes the accuracy loss. Second, we noticed that quantizing the gradients corresponding to the biases has a significant adverse effect on accuracy; since the communication impact of biases is negligible, we transmit them at full precision. We apply this for all methods considered. Finally, for efficiency reasons, we directly pack the quantized values into 32-bit numbers, without additional encoding. We implemented compression and de-compression via efficient CUDA kernels.
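For illustration, here is one plain NumPy way to realize the "pack the quantized values into 32-bit numbers" step for the 4-bit setting. The paper's actual implementation uses CUDA kernels, so this is only a sketch of the bit layout, with eight 4-bit level indices per 32-bit word.

    import numpy as np

    def pack_4bit(codes):
        # Pack 4-bit quantization-level indices (ints in [0, 15]) into 32-bit
        # words, eight per word, without any additional entropy coding.
        codes = np.asarray(codes, dtype=np.uint32)
        codes = np.pad(codes, (0, (-len(codes)) % 8))    # pad to a multiple of 8
        shifts = (np.arange(8) * 4).astype(np.uint32)
        return (codes.reshape(-1, 8) << shifts).sum(axis=1, dtype=np.uint32)

    def unpack_4bit(words, n):
        # Inverse of pack_4bit; n is the original number of 4-bit indices.
        words = np.asarray(words, dtype=np.uint32)[:, None]
        shifts = (np.arange(8) * 4).astype(np.uint32)
        return ((words >> shifts) & np.uint32(0xF)).reshape(-1)[:n]

Round-tripping unpack_4bit(pack_4bit(c), len(c)) recovers c, and the payload shrinks by a factor of eight relative to sending one 32-bit value per coordinate.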
Figure 4: Scalability behavior for NUQSGD versus the full-precision baseline when training ResNet34 and ResNet50 on ImageNet. The ResNet34 graph examines strong scaling (left), splitting a global batch of size 256 onto the available GPUs, whereas the ResNet50 graph examines strong scaling (middle) keeping a fixed per-GPU batch size of 16. Each time bar is split into computation (bottom), encoding cost (middle), and transmission cost (top). Notice the significant negative scalability of the SGD baseline in both scenarios. By contrast, the 4-bit communication-compressed implementation achieves positive scaling, while the 8-bit variant stops scaling between 4 and 8 nodes due to the higher communication and encoding costs. End-to-end training time for ResNet50/ImageNet for NUQSGD and EF-SignSGD versus the SuperSGD baseline (right).

Our baselines are full-precision SGD (SuperSGD), Error-Feedback SignSGD (Karimireddy et al., 2019), and the QSGDinf heuristic, which we compare against the 4-bit and 8-bit NUQSGD variants executing the same pattern. The implementation of the QSGDinf heuristic provides almost identical convergence numbers, and is sometimes omitted for visibility. (QSGD yields inferior convergence on this dataset and is therefore omitted.) All variants are implemented using a standard all-to-all reduction pattern. Figure 4 (left, middle) shows the execution time per epoch for ResNet34 and ResNet50 models on ImageNet, on a cluster machine with 8 NVIDIA 2080 Ti GPUs, for the hyperparameter values quoted above. The results confirm the efficiency and scalability of the compressed variant, mainly due to the reduced communication volume. We note that the overhead of compression and decompression is less than 1% of the batch computation time for NUQSGD.

Figure 4 (right) presents end-to-end speedup numbers (time versus accuracy) for ResNet50/ImageNet, executed on 4 GPUs, under the same hyperparameter settings as the full-precision baseline, with bucket size 512. First, notice that the NUQSGD variants match the target accuracy of the 32-bit model, with non-trivial speedup over the standard data-parallel variant, directly proportional to the per-epoch speedup. The QSGDinf heuristic yields similar accuracy and performance, and is therefore omitted. Second, we found that, unfortunately, EF-SignSGD does not converge under these standard hyperparameter settings. To address this issue, we performed a non-trivial amount of hyperparameter tuning for this algorithm: in particular, we found that the scaling factors and the bucket size must be carefully adjusted for convergence on ImageNet. We were able to recover full accuracy with EF-SignSGD on ResNet50, but at the cost of quantizing into buckets of size 64. Unfortunately, in this setting the algorithm transmits a non-trivial amount of scaling data, and the GPU implementation becomes less efficient due to error computation and reduced parallelism. The end-to-end speedup of this tuned variant is inferior to NUQSGD-4bit, and only slightly superior to that of NUQSGD-8bit. Please see Figure 9 in the Appendix and the accompanying text for details.

6 Conclusions

We study a data-parallel and communication-efficient version of stochastic gradient descent. Building on QSGD (Alistarh et al., 2017), we study a nonuniform quantization scheme. We establish upper bounds on the variance of nonuniform quantization and the expected code-length. In the overparametrized regime of interest, the former decreases as the number of quantization levels increases, while the latter increases with the number of quantization levels. Thus, this scheme provides a trade-off between communication efficiency and convergence speed. We compare NUQSGD and QSGD in terms of their variance bounds and the expected number of communication bits required to meet a certain convergence error, and show that NUQSGD provides stronger guarantees. Experimental results are consistent with our theoretical results and confirm that NUQSGD matches the performance of QSGDinf when applied to practical deep models and datasets including ImageNet. Thus, NUQSGD closes the gap between the theoretical guarantees of QSGD and the empirical performance of QSGDinf. One limitation of our study, which we aim to address in future work, is that we focus on all-to-all reduction patterns, which interact easily with communication compression. In particular, we aim to examine the interaction with more complex reduction patterns, such as ring-based reductions (Hannun et al., 2014), which may yield superior performance in bandwidth-bottlenecked settings but interact with communication compression in non-trivial ways, since they may lead a gradient to be quantized at each reduction step.
SJxLkbwatH
Official Blind Review #1
3: Weak Reject
In this paper, the authors propose a new gradient compression method, called nonuniform quantization. The algorithm is a reasonable variant of SGD with uniform quantization. The paper is well written. The experiments show good performance. However, there are several weaknesses in this paper:

1. A very important reference and baseline is missing, namely error-feedback SGD [1]. Although the title of [1] focuses on SignSGD, it provides a general algorithm for an arbitrary compressor with an error/variance bound similar to Theorem 2 in this paper, regardless of whether the compressor is unbiased or not. Since [1] provides the SOTA results for quantized SGD, the proposed algorithm should be compared to it in the experiments.

2. This paper claims to have strong theoretical guarantees. However, the theoretical analysis only works for convex functions. Note that the theoretical analysis in [1] also works for non-convex functions.

3. Regardless of the convergence guarantees (which are weak considering the existing theorems in [1]), the proposed algorithm, NUQSGD, does not show improvement in convergence compared to the baseline QSGDinf.

4. In Figure 3, the experiments only show loss vs. number of iterations, which does not show the actual training time. In Figure 4, training time is only shown for NUQSGD, which ignores the other baselines including QSGD and QSGDinf. What I really want to see is training loss (or testing accuracy) vs. training time (or communication overhead, such as number of bits), so that we can evaluate the trade-off between communication overhead and convergence relative to the baselines.

Minor issue (I hope the authors can consider the following suggestions in a revised version; since the issue is minor, it doesn't affect the score): In Definition 1, in some cases $s$ is a constant integer, while in other cases $s$ becomes a function, which is very confusing and not friendly to the readers. I also hope the authors can highlight the definitions of $r$ and $p$, which are essential for understanding the nonuniform quantization mechanism.

--------------
Reference
[1] Karimireddy, Sai Praneeth et al. "Error Feedback Fixes SignSGD and other Gradient Compression Schemes." ICML (2019).
S1emOTNKvS
ICLR.cc/2020/Conference
2020
Robust Graph Representation Learning via Neural Sparsification
["Cheng Zheng", "Bo Zong", "Wei Cheng", "Dongjin Song", "Jingchao Ni", "Wenchao Yu", "Haifeng Chen", "Wei Wang"]
Graph representation learning serves as the core of many important prediction tasks, ranging from product recommendation in online marketing to fraud detection in the financial domain. Real-life graphs are usually large with complex local neighborhoods, where each node is described by a rich set of features and easily connects to dozens or even hundreds of neighbors. Most existing graph learning techniques rely on neighborhood aggregation; however, the complexity of real-life graphs is usually high, posing a non-trivial overfitting risk during model training. In this paper, we present Neural Sparsification (NeuralSparse), a supervised graph sparsification technique that mitigates the overfitting risk by reducing the complexity of input graphs. Our method takes both structural and non-structural information as input, utilizes deep neural networks to parameterize the sparsification process, and optimizes the parameters by feedback signals from downstream tasks. Under the NeuralSparse framework, supervised graph sparsification can seamlessly connect with existing graph neural networks for more robust performance on testing data. Experimental results on both benchmark and private datasets show that NeuralSparse can effectively improve testing accuracy and bring up to 7.4% improvement when working with existing graph neural networks on node classification tasks.
["graphs", "complexity", "risk", "neuralsparse", "graph neural networks", "robust graph representation", "serves", "core"]
ABSTRACT

Graph representation learning serves as the core of many important prediction tasks, ranging from product recommendation in online marketing to fraud detection in the financial domain. Real-life graphs are usually large with complex local neighborhoods, where each node is described by a rich set of features and easily connects to dozens or even hundreds of neighbors. Most existing graph learning techniques rely on neighborhood aggregation; however, the complexity of real-life graphs is usually high, posing a non-trivial overfitting risk during model training. In this paper, we present Neural Sparsification (NeuralSparse), a supervised graph sparsification technique that mitigates the overfitting risk by reducing the complexity of input graphs. Our method takes both structural and non-structural information as input, utilizes deep neural networks to parameterize the sparsification process, and optimizes the parameters by feedback signals from downstream tasks. Under the NeuralSparse framework, supervised graph sparsification can seamlessly connect with existing graph neural networks for more robust performance on testing data. Experimental results on both benchmark and private datasets show that NeuralSparse can effectively improve testing accuracy and bring up to 7.4% improvement when working with existing graph neural networks on node classification tasks.

1 INTRODUCTION

Representation learning has been at the center of many machine learning tasks on graphs, such as name disambiguation in citation networks (Zhang et al., 2018c), spam detection in social networks (Akoglu et al., 2015), recommendations in online marketing (Ying et al., 2018a), and many others (Hamilton et al., 2017; Li et al., 2018). As a class of models that can simultaneously utilize non-structural (e.g., node and edge features) and structural information in graphs, Graph Neural Networks (GNNs) (Kipf & Welling, 2017; Hamilton et al., 2017; Li et al., 2016) construct effective representations for downstream tasks by iteratively aggregating neighborhood information (Kipf & Welling, 2017; Hamilton et al., 2017). Such methods have demonstrated state-of-the-art performance in classification and prediction tasks on graph data (Veličković et al., 2018; Chen et al., 2018; Xu et al., 2019; Veličković et al., 2019).

Meanwhile, graphs from real-life applications are usually large with complex local neighborhoods, where each node has rich features and dozens or even hundreds of neighbors. As shown in Figure 1(a), a subgraph from the Transaction dataset (detailed in Section 5.1) consists of 38 nodes (i.e., promising organizations and other organizations) with average node degree 15 and node feature dimension 120. The GNNs are expected to grasp useful patterns from neighboring nodes; however, as representative patterns are diluted by overwhelming information in the local neighborhood, graph learning algorithms could be misled by neighborhood aggregation. Such complexity in input graphs poses a non-trivial overfitting risk to existing GNN-based learning techniques.

While it is straightforward yet expensive (sometimes even impractical) to address this overfitting problem by increasing the number of labeled samples, in this work we investigate a cheaper alternative: reducing input graph complexity by graph sparsification. Graph sparsification (Liu et al., 2018; Zhang & Patone, 2017) aims to find smaller subgraphs from input large graphs that best preserve desired properties.
Existing sparsification methods could lead to suboptimal performance for downstream prediction tasks: (1) these methods are unsupervised, such that the resulting sparsified graphs may not favor downstream tasks; and (2) they only consider structural information for the sparsification decision, while non-structural information in graphs, such as node/edge features, could have a non-trivial impact on the quality of sparsification. Recently, there have been GNN models attempting to sample subgraphs from predefined distributions (Leskovec & Faloutsos, 2006; Adhikari et al., 2018; Hamilton et al., 2017; Chen et al., 2018). As the predefined distributions could be irrelevant to subsequent tasks, the sparsified graphs may miss important information for downstream tasks, leading to suboptimal prediction performance.

[Figure 1: A subgraph of 38 organizations from the Transaction dataset: (a) the original subgraph, where nodes and edges represent organizations and their transactions, respectively; (b) the subgraph sparsified by NeuralSparse; (c) testing AUC on identifying promising organizations — 0.64 on the original graph, 0.79 on the NeuralSparse sparsified graph, 0.63 on the spectral-sparsifier graph.]

Present work. We propose Neural Sparsification (NeuralSparse), a general framework that simultaneously learns graph sparsification and graph representation by feedback signals from downstream tasks. NeuralSparse consists of two major components: a sparsification network and a GNN. For the sparsification network, we utilize a deep neural network to parameterize the sparsification process: how to select edges from the one-hop neighborhood given a fixed budget. In the training phase, the network learns to optimize a sparsification strategy that favors downstream tasks. In the testing phase, the network sparsifies input graphs following the learned strategy, instead of sampling subgraphs from a predefined distribution. Unlike conventional sparsification techniques, our technique takes both structural and non-structural information as input and optimizes the sparsification strategy by feedback from downstream tasks, instead of using (possibly irrelevant) heuristics. For the GNN component, NeuralSparse feeds the sparsified graphs to a GNN and learns a graph representation for subsequent prediction tasks.

Under the framework of NeuralSparse, we are able to leverage standard stochastic gradient descent and backpropagation techniques to simultaneously optimize graph sparsification and representation. As shown in Figure 1(b), the graph sparsified by NeuralSparse has lower complexity, with average node degree around 5. As a result (illustrated in Figure 1(c)), the testing classification accuracy on the sparsified graph is improved by 15% compared with its counterpart on the original input graph, while conventional techniques could not offer competitive sparsification for the classification task.

Experimental results on both public and private datasets show that NeuralSparse is able to consistently provide improved performance for existing GNNs on node classification tasks, bringing up to 7% improvement.

2 RELATED WORK

Our work is related to two lines of research: graph sparsification and graph representation learning.

Graph sparsification.
The goal of graph sparsification is to find small subgraphs from input large graphs that best preserve desired properties. Existing techniques are mainly unsupervised and deal with simple graphs without node/edge features, preserving predefined graph metrics (Hübler et al., 2008), information propagation traces (Mathioudakis et al., 2011), graph spectrum (Calandriello et al., 2018; Chakeri et al., 2016; Adhikari et al., 2018), node degree distribution (Eden et al., 2018; Voudigari et al., 2016), node distance distribution (Leskovec & Faloutsos, 2006), or clustering coefficient (Maiya & Berger-Wolf, 2010). Importance-based edge sampling has also been studied in scenarios where edge importance can be predefined (Zhao, 2015; Chen et al., 2018). Unlike existing methods that mainly work with simple graphs without node/edge features in an unsupervised manner, our method takes node/edge features as part of the input and optimizes graph sparsification by supervision signals from errors made in downstream tasks.

[Figure 2: The overview of NeuralSparse — the sparsification network $Q_\phi(g \mid G)$ draws a sparsified graph $g$ from the input graph $G$, the GNN $Q_\theta(Y \mid g)$ produces classification results $\hat{Y}$, and the gradients of the loss $L$ flow back to both $\theta$ and $\phi$.]

Graph representation learning. Graph neural networks (GNNs) are the most popular techniques that enable vector representation learning for large graphs with complex node/edge features. All existing GNNs share a common spirit: extracting local structural features by neighborhood aggregation. Scarselli et al. (2009) explore how to extract multi-hop features by iterative neighborhood aggregation. Inspired by the success of convolutional neural networks, multiple studies (Defferrard et al., 2016; Bruna et al., 2014) investigate how to learn convolutional filters in the graph spectral domain under transductive settings (Zhang et al., 2018b; Zhuang & Ma, 2018). To enable inductive learning, convolutional filters in the graph domain are proposed (Simonovsky & Komodakis, 2017; Niepert et al., 2016; Kipf & Welling, 2017; Veličković et al., 2018; Xu et al., 2018), and a few studies (Hamilton et al., 2017; Lee et al., 2018) explore how to differentiate neighborhood filtering by sequential models. In addition, multiple recent works (Ying et al., 2018b; Xu et al., 2019; Abu-El-Haija et al., 2019) investigate the expressive power of GNNs. Recently, Franceschi et al. (2019) study how to sample high-quality subgraphs from the space of all possible subgraphs of a complete graph so that the sampled graphs enhance the prediction power in downstream learning tasks; in particular, the proposed method only focuses on transductive tasks. Our work contributes from a unique angle: by reducing the noise from input graphs, our technique can further boost the testing performance of existing GNNs.

3 PROPOSED METHOD: NEURALSPARSE

In this section, we introduce the core idea of our method. We start with the notations that are frequently used in this paper. We then describe the theoretical justification behind NeuralSparse and our architecture to tackle the supervised node classification problem.

Notations. In this paper, we represent an input graph of $n$ nodes as $G = (V, E, A)$: (1) $V \in \mathbb{R}^{n \times d_n}$ includes node features with dimensionality $d_n$; (2) $E \in \mathbb{R}^{n \times n}$ is a binary matrix where $E(u, v) = 1$ if there is an edge between node $u$ and node $v$; (3) $A \in \mathbb{R}^{n \times n \times d_e}$ encodes input edge features of dimensionality $d_e$. In addition, we use $Y$ to denote the prediction target in downstream tasks (e.g., $Y \in \mathbb{R}^{n \times d_l}$ if we are dealing with a node classification problem with $d_l$ classes).
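To make the notation concrete, here is a minimal sketch of how a graph in this representation could be held in memory; the class name, the PyTorch choice, and the random placeholder values are illustrative assumptions, not part of the paper.

```python
# A minimal container for G = (V, E, A) and the target Y, following the
# shapes defined in the Notations paragraph above. Names are illustrative.
from dataclasses import dataclass
import torch

@dataclass
class Graph:
    V: torch.Tensor  # node features, shape (n, d_n)
    E: torch.Tensor  # binary adjacency matrix, shape (n, n)
    A: torch.Tensor  # edge features, shape (n, n, d_e)

n, d_n, d_e, d_l = 38, 120, 4, 2  # n, d_n echo the Figure 1 subgraph; d_e is assumed
G = Graph(
    V=torch.randn(n, d_n),
    E=(torch.rand(n, n) < 0.4).float(),
    A=torch.randn(n, n, d_e),
)
Y = torch.randint(0, d_l, (n,))  # one of d_l classes per node
```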
Theoretical justification. From the perspective of statistical learning, the key of a defined prediction task is to learn $P(Y \mid G)$, where $Y$ is the prediction target and $G$ is an input graph. Instead of directly working with original graphs, we would like to leverage sparsified subgraphs to mitigate overfitting risks. In other words, we are interested in the following variant,

$$P(Y \mid G) \approx \sum_{g \in S_G} P(Y \mid g)\, P(g \mid G), \qquad (1)$$

where $g$ is a sparsified subgraph, and $S_G$ is a class of sparsified subgraphs of $G$.

In general, because of the combinatorial complexity in graphs, it is intractable to enumerate all possible $g$ as well as to estimate the exact values of $P(Y \mid g)$ and $P(g \mid G)$. Therefore, we approximate the distributions by tractable functions,

$$\sum_{g \in S_G} P(Y \mid g)\, P(g \mid G) \approx \sum_{g \in S_G} Q_\theta(Y \mid g)\, Q_\phi(g \mid G), \qquad (2)$$

where $Q_\theta$ and $Q_\phi$ are approximation functions for $P(Y \mid g)$ and $P(g \mid G)$, parameterized by $\theta$ and $\phi$, respectively.

Moreover, to make the above graph sparsification process differentiable, we employ reparameterization tricks (Jang et al., 2017) to make $Q_\phi(g \mid G)$ directly generate differentiable samples, such that

$$\sum_{g \in S_G} Q_\theta(Y \mid g)\, Q_\phi(g \mid G) \propto \sum_{g' \sim Q_\phi(g \mid G)} Q_\theta(Y \mid g'), \qquad (3)$$

where $g' \sim Q_\phi(g \mid G)$ means $g'$ is a random sample drawn from $Q_\phi(g \mid G)$. To this end, the key is how to find appropriate approximation functions $Q_\phi(g \mid G)$ and $Q_\theta(Y \mid g)$.

Architecture. In this paper, we propose Neural Sparsification (NeuralSparse) to implement the theoretical framework discussed in Equation 3. As shown in Figure 2, NeuralSparse consists of two major components: the sparsification network and GNNs. The sparsification network is a multi-layer neural network that implements $Q_\phi(g \mid G)$: taking $G$ as input, it generates a random sparsified subgraph of $G$ drawn from a learned distribution. GNNs implement $Q_\theta(Y \mid g)$: they take a sparsified subgraph as input, extract node representations, and make predictions for downstream tasks.

Algorithm 1 Training algorithm for NeuralSparse
Input: graph G = (V, E, A), integer l, and training labels Y.
1: while stop criterion is not met do
2:   Generate sparsified subgraphs {g_1, g_2, ..., g_l} by the sparsification network (Section 4);
3:   Produce predictions {Ŷ_1, Ŷ_2, ..., Ŷ_l} by feeding {g_1, g_2, ..., g_l} into GNNs;
4:   Calculate the loss function J;
5:   Update θ and φ by descending J;
6: end while

As the sparsified subgraph samples are differentiable, the two components can be jointly trained using gradient-descent-based backpropagation techniques from a supervised loss function, as illustrated in Algorithm 1. While the GNNs have been widely investigated in recent works (Kipf & Welling, 2017; Hamilton et al., 2017; Veličković et al., 2018), we focus on the practical implementation of the sparsification network in the remainder of this paper.
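As a reading aid, the following is one plausible PyTorch rendering of Algorithm 1; `sparsification_net` and `gnn` stand for the two components above, while the cross-entropy loss, the Adam optimizer, and all names are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of Algorithm 1: jointly optimize theta (GNN) and phi (sparsification
# network) by backpropagating a supervised loss through the sampled subgraphs.
import torch
import torch.nn.functional as F

def train_neural_sparse(G, Y, sparsification_net, gnn, l=1, epochs=200, lr=1e-3):
    params = list(sparsification_net.parameters()) + list(gnn.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):                                    # line 1: stop criterion
        subgraphs = [sparsification_net(G) for _ in range(l)]  # line 2: g_1..g_l
        preds = [gnn(G, g) for g in subgraphs]                 # line 3: predictions
        loss = torch.stack([F.cross_entropy(p, Y) for p in preds]).mean()  # line 4: J
        opt.zero_grad()
        loss.backward()                                        # line 5: descend J
        opt.step()
    return sparsification_net, gnn
```

Because the subgraph samples are differentiable, `loss.backward()` propagates gradients through both modules, which is exactly the joint-training property claimed above.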
4 SPARSIFICATION NETWORK

Following the theory discussed above, the goal of the sparsification network is to generate sparsified subgraphs for input graphs, serving as the approximation function $Q_\phi(g \mid G)$. Therefore, we need to answer the following three questions in the sparsification network: i) What is $S_G$ in Equation 1, the class of subgraphs we focus on? ii) How to sample sparsified subgraphs? iii) How to make the sparsified-subgraph sampling process differentiable for end-to-end training? In the following, we address the questions one by one.

k-neighbor subgraphs. We focus on k-neighbor subgraphs for $S_G$ (Sadhanala et al., 2016): given an input graph, a k-neighbor subgraph shares the same set of nodes with the input graph, and each node in the subgraph can select no more than $k$ edges from its one-hop neighborhood. Although the concept of the sparsification network is not limited to a specific class of subgraphs, we choose k-neighbor subgraphs for the following reasons.

- We are able to adjust the estimate of the amount of task-relevant graph data by tuning the hyper-parameter $k$. Intuitively, when $k$ is an under-estimate, the amount of task-relevant graph data accessed by GNNs could be inadequate, leading to inferior performance. When $k$ is an over-estimate, the downstream GNNs may overfit the introduced noise or irrelevant graph data, resulting in sub-optimal performance. It could be difficult to set a golden hyper-parameter that works all the time, but one has the freedom to choose the $k$ that best fits a specific task.

- k-neighbor subgraphs are friendly to parallel computation. As each node selects its edges independently from its neighborhood, we can utilize tensor operations in existing deep learning frameworks, such as tensorflow (Abadi et al., 2016), to speed up the sparsification process.

Sampling k-neighbor subgraphs. Given $k$ and an input graph $G = (V, E, A)$, we obtain a k-neighbor subgraph by repeatedly sampling edges for each node in the original graph. Without loss of generality, we sketch this sampling process by focusing on a specific node $u$ in graph $G$. Let $N_u$ be the set of one-hop neighbors of node $u$.

1. $v \sim f_\phi(V(u), V(N_u), A(u))$, where $f_\phi(\cdot)$ is a function that generates a one-hop neighbor $v$ from the learned distribution based on node $u$'s attributes, the node attributes of $u$'s neighbors $V(N_u)$, and their edge attributes $A(u)$. In particular, the learned distribution is encoded by parameters $\phi$.
2. Edge $E(u, v)$ is selected for node $u$.
3. The above two steps are repeated $k$ times.

Note that the above process performs sampling without replacement: given a node $u$, each of its adjacent edges is selected at most once. Moreover, the sampling function $f_\phi(\cdot)$ is shared among nodes; therefore, the number of parameters is independent of the input graph size.

Making samples differentiable. While conventional methods are able to generate discrete samples (Sadhanala et al., 2016), these samples are not differentiable, so it is difficult to utilize them to optimize sample generation. To make samples differentiable, we propose a Gumbel-Softmax based multi-layer neural network to implement the sampling function $f_\phi(\cdot)$ discussed above. To make the discussion self-contained, we briefly discuss the idea of Gumbel-Softmax. Gumbel-Softmax is a reparameterization trick used to generate differentiable discrete samples (Jang et al., 2017; Maddison et al., 2017). Under appropriate hyper-parameter settings, Gumbel-Softmax is able to generate continuous vectors that are as "sharp" as the one-hot vectors widely used to encode discrete data.

Without loss of generality, we focus on a specific node $u$ in a graph $G = (V, E, A)$. Let $N_u$ be the set of one-hop neighbors of node $u$. We implement $f_\phi(\cdot)$ as follows.

1. $\forall v \in N_u$,

$$z_{u,v} = \mathrm{MLP}_\phi(V(u), V(v), A(u, v)), \qquad (4)$$

where $\mathrm{MLP}_\phi$ is a multi-layer neural network with parameters $\phi$.

2. $\forall v \in N_u$, we employ a softmax function to compute the probability to sample the edge,

$$\pi_{u,v} = \frac{\exp(z_{u,v})}{\sum_{w \in N_u} \exp(z_{u,w})}. \qquad (5)$$
3. Using Gumbel-Softmax, we generate differentiable samples

$$x_{u,v} = \frac{\exp((\log(\pi_{u,v}) + \epsilon_v)/\tau)}{\sum_{w \in N_u} \exp((\log(\pi_{u,w}) + \epsilon_w)/\tau)}, \qquad (6)$$

where $x_{u,v}$ is a scalar, $\epsilon_v = -\log(-\log(s))$ with $s$ randomly drawn from $\mathrm{Uniform}(0, 1)$, and $\tau$ is a hyper-parameter called temperature, which controls the interpolation between the discrete distribution and continuous categorical densities.

Note that when we sample $k$ edges, the computation of $z_{u,v}$ and $\pi_{u,v}$ only needs to be performed once. For the hyper-parameter $\tau$, we discuss how to tune it as follows.

Discussion on temperature tuning. The behavior of Gumbel-Softmax is governed by the hyper-parameter $\tau$ called temperature. In general, when $\tau$ is small, the Gumbel-Softmax distribution resembles the discrete distribution, which induces strong sparsity; however, a small $\tau$ also introduces high-variance gradients that block effective backpropagation. A high value of $\tau$ cannot produce the expected sparsification effect. Following the practice in Jang et al. (2017), we adopt the strategy of starting the training with a high temperature and annealing to a small value with a guided schedule.

Sparsification algorithm and its complexity. As shown in Algorithm 2, given the hyper-parameter $k$, the sparsification network visits each node's one-hop neighbors $k$ times. Let $m$ be the total number of edges in the graph. The complexity of sampling subgraphs by the sparsification network is $O(km)$. When $k$ is small in practice, the overall complexity is $O(m)$.

Algorithm 2 Sampling subgraphs by the sparsification network
Input: graph G = (V, E, A) and integer k.
1: Edge set H = ∅
2: for u ∈ V do
3:   for v ∈ N_u do
4:     z_{u,v} ← MLP_φ(V(u), V(v), A(u, v))
5:   end for
6:   for v ∈ N_u do
7:     π_{u,v} ← exp(z_{u,v}) / Σ_{w∈N_u} exp(z_{u,w})
8:   end for
9:   for j = 1, ..., k do
10:    for v ∈ N_u do
11:      x_{u,v} ← exp((log(π_{u,v}) + ε_v)/τ) / Σ_{w∈N_u} exp((log(π_{u,w}) + ε_w)/τ)
12:    end for
13:    Add the edge represented by the vector [x_{u,v}] into H
14:  end for
15: end for
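To illustrate how Equations 4-6 and Algorithm 2 fit together, below is a hedged PyTorch sketch of the per-node edge sampler; the two-layer MLP, the module name, and the dense tensor layout are our assumptions, not the authors' code.

```python
# Sketch of the sparsification step for one node u: score each one-hop
# neighbor (Eq. 4), softmax-normalize (Eq. 5), and draw k differentiable
# Gumbel-Softmax samples (Eq. 6, lines 9-14 of Algorithm 2).
import torch
import torch.nn as nn

class EdgeSampler(nn.Module):
    def __init__(self, d_n, d_e, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * d_n + d_e, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, v_u, v_nbrs, a_nbrs, k, tau):
        # v_u: (d_n,), v_nbrs: (|N_u|, d_n), a_nbrs: (|N_u|, d_e)
        feats = torch.cat([v_u.expand(v_nbrs.size(0), -1), v_nbrs, a_nbrs], dim=-1)
        z = self.mlp(feats).squeeze(-1)          # Eq. 4: z_{u,v}, computed once
        log_pi = torch.log_softmax(z, dim=-1)    # log of Eq. 5: pi_{u,v}
        samples = []
        for _ in range(k):                       # lines 9-14 of Algorithm 2
            s = torch.rand_like(log_pi).clamp(1e-10, 1.0 - 1e-10)
            eps = -torch.log(-torch.log(s))      # Gumbel noise eps_v
            samples.append(torch.softmax((log_pi + eps) / tau, dim=-1))  # Eq. 6
        return torch.stack(samples)              # (k, |N_u|): k soft edge picks
```

Each row of the output is a near-one-hot vector over $N_u$ when $\tau$ is small, matching the edge vector $[x_{u,v}]$ added to $H$ on line 13.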
Comparison with multiple related methods. Unlike GraphSAGE (Hamilton et al., 2017), FastGCN (Chen et al., 2018), and AS-GCN (Huang et al., 2018), which incorporate layer-wise node samplers to reduce the complexity of GNNs, NeuralSparse samples subgraphs before applying GNNs. As for computation complexity, the sparsification in NeuralSparse is more friendly to parallel computation than the layer-conditioned approach in AS-GCN. Compared with GAT (Veličković et al., 2018; Zhang et al., 2018a), NeuralSparse can produce sparser neighborhoods, which effectively mitigates overfitting risks. Unlike LDS (Franceschi et al., 2019), NeuralSparse learns inductive graph sparsification, and its graph sampling is constrained by the input graph topology.

5 EXPERIMENTAL STUDY

In this section, we evaluate our proposed NeuralSparse on the node classification task, including inductive and transductive settings. We demonstrate that NeuralSparse achieves superior classification performance over state-of-the-art GNN models. Moreover, we provide a case study to demonstrate how sparsified subgraphs generated by NeuralSparse could improve classification. The supplementary material contains more detailed experimental information.

5.1 DATASETS

We employ five datasets from various domains and conduct the node classification task following the settings described in Hamilton et al. (2017); Kipf & Welling (2017). The dataset statistics are summarized in Table 1.

Inductive datasets. We utilize the Reddit and PPI datasets and follow the same setting as in Hamilton et al. (2017). The Reddit dataset contains a post-to-post graph with word vectors as node features. The node labels represent which community Reddit posts belong to. The protein-protein interaction (PPI) dataset contains graphs corresponding to different human tissues. The node features are positional gene sets, motif gene sets, and immunological signatures. The nodes are multi-labeled by gene ontology. The graph in the Transaction dataset contains real transactions between organizations in two years, with the first year for training and the second year for validation/testing. Each node represents an organization and each edge indicates a transaction between two organizations. Node attributes are side information about the organizations such as account balance, cash reserve, etc. On this dataset, we aim to classify organizations into two categories: promising or others, for investment in the near future. The class distribution in the Transaction dataset is highly imbalanced. During the training under the inductive setting, algorithms have only access to training nodes' attributes and edges. In the PPI and Transaction datasets, the models have to generalize to completely unseen graphs.

Table 1: Dataset statistics

                   Reddit      PPI        Transaction  Cora          Citeseer
Task               Inductive   Inductive  Inductive    Transductive  Transductive
Nodes              232,965     56,944     95,544       2,708         3,327
Edges              11,606,919  818,716    963,468      5,429         4,732
Features           602         50         120          1,433         3,703
Classes            41          121        2            7             6
Training Nodes     152,410     44,906     47,772       140           120
Validation Nodes   23,699      6,514      9,554        500           500
Testing Nodes      55,334      5,524      38,218       1,000         1,000

Transductive datasets. We use two citation benchmark datasets with the transductive experimental setting in Yang et al. (2016); Kipf & Welling (2017). The citation graphs contain nodes corresponding to documents and edges as citations. Node features are the sparse bag-of-words representations of documents, and node labels indicate the topic class of the documents. In transductive learning, the training methods have access to all node features and edges, with a limited subset of node labels.

5.2 EXPERIMENTAL SETUP

Baseline models. We incorporate four state-of-the-art methods as the base GNN components, including GCN (Kipf & Welling, 2017), GraphSAGE (Hamilton et al., 2017), GAT (Veličković et al., 2018), and GIN (Xu et al., 2019). We evaluate our proposed NeuralSparse with the sparsification network and each of the four GNNs. Besides, we also implement variants of NeuralSparse by replacing the sparsification network with either the spectral sparsifier (SS, Sadhanala et al., 2016) or the Rank Degree (RD, Voudigari et al., 2016) method.

Temperature tuning. We anneal the temperature with the schedule $\tau = \max(0.05, \exp(-rp))$, where $p$ is the training epoch and $r \in 10^{\{-5, -4, -3, -2, -1\}}$. $\tau$ is updated every $N$ steps, with $N \in \{50, 100, \ldots, 500\}$. Compared with the MNIST VAE model in Jang et al. (2017), a smaller hyper-parameter $r$ fits NeuralSparse better in practice.
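Written out as code, the schedule above reduces to a one-line helper; the function name and the per-epoch granularity are our assumptions.

```python
# tau = max(0.05, exp(-r * p)): start near 1 and decay toward the 0.05 floor.
import math

def annealed_tau(p, r=1e-3, floor=0.05):
    """Temperature for training epoch p, with decay rate r in 10^{-5..-1}."""
    return max(floor, math.exp(-r * p))
```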
Metrics. We evaluate the performance on the transductive datasets with accuracy (Kipf & Welling, 2017). For inductive tasks on the Reddit and PPI datasets, we report micro-averaged F1 scores (Hamilton et al., 2017). Due to the highly imbalanced classes in the Transaction dataset, models are evaluated with the AUC value (Huang & Ling, 2005). The results show the average of 10 runs.

5.3 CLASSIFICATION PERFORMANCE

Table 2 summarizes the classification performance of NeuralSparse and the baseline methods on all datasets. For Reddit, PPI, Transaction, Cora, and Citeseer, the hyper-parameter k is set as 30, 15, 10, 5, and 3, respectively. The hyper-parameter l is set as 1 in this experiment. Note that the result of GAT on Reddit is missing due to the out-of-memory error.

Overall, NeuralSparse is able to help GNN techniques achieve competitive generalization performance with sparsified graph data. We make the following observations. (1) Compared with basic GNN models, NeuralSparse can enhance the generalization performance on node classification tasks by utilizing the sparsified subgraphs from the sparsification network, especially in the inductive setting. Indeed, large neighborhood size in the original graphs could bring an increased chance of introducing noise into the convolutional operations, leading to sub-optimal performance. (2) With different GNN models, the NeuralSparse can consistently achieve comparable or superior performance, which demonstrates NeuralSparse is general and can be applied to multiple classification models.

Table 2: Node classification performance

Sparsifier    Method     Reddit        PPI           Transaction   Cora          Citeseer
                         Micro-F1      Micro-F1      AUC           Accuracy      Accuracy
N/A           GCN        0.922±0.041   0.532±0.024   0.564±0.018   0.810±0.027   0.694±0.020
N/A           GraphSAGE  0.938±0.029   0.600±0.027   0.574±0.029   0.825±0.033   0.710±0.020
N/A           GAT        -             0.917±0.030   0.616±0.022   0.821±0.043   0.721±0.037
N/A           GIN        0.928±0.022   0.703±0.028   0.607±0.031   0.816±0.020   0.709±0.037
SS/RD*        GCN        0.912±0.022   0.521±0.024   0.562±0.035   0.780±0.045   0.684±0.033
SS/RD*        GraphSAGE  0.907±0.018   0.576±0.022   0.565±0.042   0.806±0.032   0.701±0.027
SS/RD*        GAT        -             0.889±0.034   0.614±0.044   0.807±0.047   0.686±0.034
SS/RD*        GIN        0.901±0.021   0.693±0.019   0.593±0.038   0.785±0.041   0.706±0.043
NeuralSparse  GCN        0.946±0.020   0.600±0.014   0.610±0.022   0.821±0.014   0.715±0.014
NeuralSparse  GraphSAGE  0.951±0.015   0.626±0.023   0.649±0.018   0.832±0.024   0.720±0.013
NeuralSparse  GAT        -             0.921±0.015   0.671±0.018   0.834±0.015   0.724±0.026
NeuralSparse  GIN        0.937±0.027   0.744±0.015   0.634±0.023   0.824±0.027   0.719±0.015
(* Report the better performance with SS or RD)

[Figure 3: Sparsified subgraphs and performance vs. hyper-parameters: (a) subgraph sparsified by the spectral sparsifier; (b) subgraph sparsified by the RD sparsifier; (c) AUC vs. hyper-parameter k (5-15) for NeuralSparse-GAT and NeuralSparse-GraphSAGE; (d) AUC vs. hyper-parameter l (1-5).]

(3) In comparison with the two NeuralSparse variants SS-GraphSAGE and RD-GraphSAGE, NeuralSparse outperforms because of the automatically learned graph sparsification with both structural and non-structural information as input.

5.4 SENSITIVITY TO HYPER-PARAMETERS AND SPARSIFIED SUBGRAPHS

Figure 3(c) demonstrates how classification performance responds when k increases on the Transaction dataset. There exists an optimal k that delivers the best classification AUC score. When k is small, NeuralSparse can only make use of little relevant structural information in feature aggregation, which leads to inferior performance. When k increases, the aggregation convolution involves more complex neighborhood aggregation with a higher chance of overfitting noise data, which negatively impacts the classification performance for unseen testing data. Figure 3(d) shows how the hyper-parameter l impacts classification performance on the Transaction dataset. When l increases from 1 to 5, we observe a relatively small improvement in the classification AUC score. As the parameters in the sparsification network are shared by all edges in the graph, the estimation variance from random sampling could already be mitigated to some extent by a number of sampled edges in a sparsified subgraph.
Thus, when we increase the number of sparsified subgraphs, the incremental gain could be small.

In Figures 3(a, b), we present the sparsified graphs output by two baseline methods, SS and RD. By comparing the two plots with Figure 1(b), we make the following observations. First, the NeuralSparse sparsified graph tends to select edges that connect nodes of identical labels, which favors the downstream classification task; the observed clustering effect could further boost the confidence of decision making. Second, instead of exploring all the neighbors, we can focus on the selected connections/edges in sparsified graphs, which could make it easier for human experts to perform model interpretation and result visualization.

6 CONCLUSION

In this paper, we propose Neural Sparsification (NeuralSparse) to address the overfitting issues brought by the complexity of real-life large graphs. NeuralSparse consists of two major components: (1) the sparsification network sparsifies input graphs by sampling edges following a learned distribution; (2) GNNs take the sparsified subgraphs as input and extract node representations for downstream tasks. The two components in NeuralSparse can be jointly trained with a supervised loss, gradient descent, and backpropagation techniques. The experimental study on real-life datasets shows that NeuralSparse consistently renders more robust graph representations, and brings up to 7% improvement in accuracy over the state-of-the-art GNN models.
r1xkiEt2Fr
Official Blind Review #2
1: Reject
The authors argue that existing GCN-based approaches may pose a non-trivial overfitting risk during the training phase, especially when high-dimensional features and high-degree entities are observed in the graphs. To address the issue, the authors integrate graph sparsification with conventional graph neural nets. Experimental results show the efficacy of the proposed model on a series of benchmark datasets. In general, the paper is easy to follow and well-organized. My main concern is that some insightful discussion is lacking regarding the problem motivation and the proposed algorithm. In particular, (1) It is unclear why existing GCN-based approaches cannot handle the cases shown in Fig. 1. Is there any evidence (either theoretical or empirical) or reference to support this argument? (2) The motivating example shown in Fig. 1 is confusing. Conventionally, graph sparsification aims to find smaller subgraphs from the input graphs that preserve the key structures. However, in Fig. 1(b), the sparsified subgraph seems to only downsample the edges while preserving all the nodes of the original graph. The authors may want to clarify whether the sparsified subgraph has the identical size as the input graph. (3) Some notations are not formally defined before using them. In Eq. 2, what do Q_\theta and Q_\phi denote? (4) The statement of "trade-off between model accuracy and graph complexity by tuning the hyperparameter k" is vulnerable. If the overfitting exists, a larger k may result in lower accuracy in the testing phase. (5) What is the complexity of f_\phi()? (6) The complexity (i.e., O(km)) of the proposed model is problematic. As stated at the beginning of this paper, the paper targets graphs with "complex local neighborhoods", where each node is described by rich features and neighbors. In other words, the target graph is not sparse. In this case, the complexity of the proposed algorithm can be intractable, especially when k is large and m is close to n^2.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Robust Graph Representation Learning via Neural Sparsification ### Paper Abstract Graph representation learning serves as the core of many important prediction tasks, ranging from product recommendation in online marketing to fraud detection in the financial domain. Real-life graphs are usually large with complex local neighborhoods, where each node is described by a rich set of features and easily connects to dozens or even hundreds of neighbors. Most existing graph learning techniques rely on neighborhood aggregation; however, the complexity of real-life graphs is usually high, posing a non-trivial overfitting risk during model training. In this paper, we present Neural Sparsification (NeuralSparse), a supervised graph sparsification technique that mitigates the overfitting risk by reducing the complexity of input graphs. Our method takes both structural and non-structural information as input, utilizes deep neural networks to parameterize the sparsification process, and optimizes the parameters by feedback signals from downstream tasks. Under the NeuralSparse framework, supervised graph sparsification can seamlessly connect with existing graph neural networks for more robust performance on testing data. Experimental results on both benchmark and private datasets show that NeuralSparse can effectively improve testing accuracy and bring up to 7.4% improvement when working with existing graph neural networks on node classification tasks. ### Paper Keywords ["graphs", "complexity", "risk", "neuralsparse", "graph neural networks", "robust graph representation", "serves", "core"] ### Paper Content ABSTRACT

Graph representation learning serves as the core of many important prediction tasks, ranging from product recommendation in online marketing to fraud detection in the financial domain. Real-life graphs are usually large with complex local neighborhoods, where each node is described by a rich set of features and easily connects to dozens or even hundreds of neighbors. Most existing graph learning techniques rely on neighborhood aggregation; however, the complexity of real-life graphs is usually high, posing a non-trivial overfitting risk during model training. In this paper, we present Neural Sparsification (NeuralSparse), a supervised graph sparsification technique that mitigates the overfitting risk by reducing the complexity of input graphs. Our method takes both structural and non-structural information as input, utilizes deep neural networks to parameterize the sparsification process, and optimizes the parameters by feedback signals from downstream tasks. Under the NeuralSparse framework, supervised graph sparsification can seamlessly connect with existing graph neural networks for more robust performance on testing data. Experimental results on both benchmark and private datasets show that NeuralSparse can effectively improve testing accuracy and bring up to 7.4% improvement when working with existing graph neural networks on node classification tasks.

1 INTRODUCTION

Representation learning has been at the center of many machine learning tasks on graphs, such as name disambiguation in citation networks (Zhang et al., 2018c), spam detection in social networks (Akoglu et al., 2015), recommendations in online marketing (Ying et al., 2018a), and many others (Hamilton et al., 2017; Li et al., 2018).
As a class of models that can simultaneously utilize non-structural (e.g., node and edge features) and structural information in graphs, Graph Neural Networks (GNNs) (Kipf & Welling, 2017; Hamilton et al., 2017; Li et al., 2016) construct effective representations for downstream tasks by iteratively aggregating neighborhood information (Kipf & Welling, 2017; Hamilton et al., 2017). Such methods have demonstrated state-of-the-art performance in classification and prediction tasks on graph data (Veličković et al., 2018; Chen et al., 2018; Xu et al., 2019; Veličković et al., 2019).

Meanwhile, graphs from real-life applications are usually large with complex local neighborhoods, where each node has rich features and dozens or even hundreds of neighbors. As shown in Figure 1(a), a subgraph from the Transaction dataset (detailed in Section 5.1) consists of 38 nodes (i.e., promising organizations and other organizations) with average node degree 15 and node feature dimension 120. The GNNs are expected to grasp useful patterns from neighboring nodes; however, as representative patterns are diluted by overwhelming information in the local neighborhood, graph learning algorithms could be misled by neighborhood aggregation. Such complexity in input graphs poses a non-trivial overfitting risk to existing GNN-based learning techniques.

While it is straightforward yet expensive (sometimes even impractical) to address this overfitting problem by increasing the number of labeled samples, in this work we investigate a cheaper alternative: reducing input graph complexity by graph sparsification. Graph sparsification (Liu et al., 2018; Zhang & Patone, 2017) aims to find smaller subgraphs from input large graphs that best preserve desired properties. Existing sparsification methods could lead to suboptimal performance for downstream prediction tasks: (1) these methods are unsupervised, such that the resulting sparsified graphs may not favor downstream tasks; and (2) they only consider structural information for the sparsification decision, while non-structural information in graphs, such as node/edge features, could have a non-trivial impact on the quality of sparsification. Recently, there have been GNN models attempting to sample subgraphs from predefined distributions (Leskovec & Faloutsos, 2006; Adhikari et al., 2018; Hamilton et al., 2017; Chen et al., 2018). As the predefined distributions could be irrelevant to subsequent tasks, the sparsified graphs may miss important information for downstream tasks, leading to suboptimal prediction performance.

[Figure 1: A subgraph of 38 organizations from the Transaction dataset: (a) the original subgraph, where nodes and edges represent organizations and their transactions, respectively; (b) the subgraph sparsified by NeuralSparse; (c) testing AUC on identifying promising organizations — 0.64 on the original graph, 0.79 on the NeuralSparse sparsified graph, 0.63 on the spectral-sparsifier graph.]

Present work. We propose Neural Sparsification (NeuralSparse), a general framework that simultaneously learns graph sparsification and graph representation by feedback signals from downstream tasks. NeuralSparse consists of two major components: a sparsification network and a GNN.
For the sparsification network, we utilize a deep neural network to parameterize the sparsification process: how to select edges from the one-hop neighborhood given a fixed budget. In the training phase, the network learns to optimize a sparsification strategy that favors downstream tasks. In the testing phase, the network sparsifies input graphs following the learned strategy, instead of sampling subgraphs from a predefined distribution. Unlike conventional sparsification techniques, our technique takes both structural and non-structural information as input and optimizes the sparsification strategy by feedback from downstream tasks, instead of using (possibly irrelevant) heuristics. For the GNN component, NeuralSparse feeds the sparsified graphs to a GNN and learns a graph representation for subsequent prediction tasks.

Under the framework of NeuralSparse, we are able to leverage standard stochastic gradient descent and backpropagation techniques to simultaneously optimize graph sparsification and representation. As shown in Figure 1(b), the graph sparsified by NeuralSparse has lower complexity, with average node degree around 5. As a result (illustrated in Figure 1(c)), the testing classification accuracy on the sparsified graph is improved by 15% compared with its counterpart on the original input graph, while conventional techniques could not offer competitive sparsification for the classification task.

Experimental results on both public and private datasets show that NeuralSparse is able to consistently provide improved performance for existing GNNs on node classification tasks, bringing up to 7% improvement.

2 RELATED WORK

Our work is related to two lines of research: graph sparsification and graph representation learning.

Graph sparsification. The goal of graph sparsification is to find small subgraphs from input large graphs that best preserve desired properties. Existing techniques are mainly unsupervised and deal with simple graphs without node/edge features, preserving predefined graph metrics (Hübler et al., 2008), information propagation traces (Mathioudakis et al., 2011), graph spectrum (Calandriello et al., 2018; Chakeri et al., 2016; Adhikari et al., 2018), node degree distribution (Eden et al., 2018; Voudigari et al., 2016), node distance distribution (Leskovec & Faloutsos, 2006), or clustering coefficient (Maiya & Berger-Wolf, 2010). Importance-based edge sampling has also been studied in scenarios where edge importance can be predefined (Zhao, 2015; Chen et al., 2018). Unlike existing methods that mainly work with simple graphs without node/edge features in an unsupervised manner, our method takes node/edge features as part of the input and optimizes graph sparsification by supervision signals from errors made in downstream tasks.

[Figure 2: The overview of NeuralSparse — the sparsification network $Q_\phi(g \mid G)$ draws a sparsified graph $g$ from the input graph $G$, the GNN $Q_\theta(Y \mid g)$ produces classification results $\hat{Y}$, and the gradients of the loss $L$ flow back to both $\theta$ and $\phi$.]

Graph representation learning. Graph neural networks (GNNs) are the most popular techniques that enable vector representation learning for large graphs with complex node/edge features. All existing GNNs share a common spirit: extracting local structural features by neighborhood aggregation. Scarselli et al. (2009) explore how to extract multi-hop features by iterative neighborhood aggregation.
Inspired by the success of convolutional neural networks, multiple studies (Defferrard et al., 2016; Bruna et al., 2014) investigate how to learn convolutional filters in the graph spectral domain under transductive settings (Zhang et al., 2018b; Zhuang & Ma, 2018). To enable inductive learning, convolutional filters in the graph domain are proposed (Simonovsky & Komodakis, 2017; Niepert et al., 2016; Kipf & Welling, 2017; Veličković et al., 2018; Xu et al., 2018), and a few studies (Hamilton et al., 2017; Lee et al., 2018) explore how to differentiate neighborhood filtering by sequential models. In addition, multiple recent works (Ying et al., 2018b; Xu et al., 2019; Abu-El-Haija et al., 2019) investigate the expressive power of GNNs. Recently, Franceschi et al. (2019) study how to sample high-quality subgraphs from the space of all possible subgraphs of a complete graph so that the sampled graphs enhance the prediction power in downstream learning tasks; in particular, the proposed method only focuses on transductive tasks. Our work contributes from a unique angle: by reducing the noise from input graphs, our technique can further boost the testing performance of existing GNNs.

3 PROPOSED METHOD: NEURALSPARSE

In this section, we introduce the core idea of our method. We start with the notations that are frequently used in this paper. We then describe the theoretical justification behind NeuralSparse and our architecture to tackle the supervised node classification problem.

Notations. In this paper, we represent an input graph of $n$ nodes as $G = (V, E, A)$: (1) $V \in \mathbb{R}^{n \times d_n}$ includes node features with dimensionality $d_n$; (2) $E \in \mathbb{R}^{n \times n}$ is a binary matrix where $E(u, v) = 1$ if there is an edge between node $u$ and node $v$; (3) $A \in \mathbb{R}^{n \times n \times d_e}$ encodes input edge features of dimensionality $d_e$. In addition, we use $Y$ to denote the prediction target in downstream tasks (e.g., $Y \in \mathbb{R}^{n \times d_l}$ if we are dealing with a node classification problem with $d_l$ classes).

Theoretical justification. From the perspective of statistical learning, the key of a defined prediction task is to learn $P(Y \mid G)$, where $Y$ is the prediction target and $G$ is an input graph. Instead of directly working with original graphs, we would like to leverage sparsified subgraphs to mitigate overfitting risks. In other words, we are interested in the following variant,

$$P(Y \mid G) \approx \sum_{g \in S_G} P(Y \mid g)\, P(g \mid G), \qquad (1)$$

where $g$ is a sparsified subgraph, and $S_G$ is a class of sparsified subgraphs of $G$.

In general, because of the combinatorial complexity in graphs, it is intractable to enumerate all possible $g$ as well as to estimate the exact values of $P(Y \mid g)$ and $P(g \mid G)$. Therefore, we approximate the distributions by tractable functions,

$$\sum_{g \in S_G} P(Y \mid g)\, P(g \mid G) \approx \sum_{g \in S_G} Q_\theta(Y \mid g)\, Q_\phi(g \mid G), \qquad (2)$$

where $Q_\theta$ and $Q_\phi$ are approximation functions for $P(Y \mid g)$ and $P(g \mid G)$, parameterized by $\theta$ and $\phi$, respectively.

Moreover, to make the above graph sparsification process differentiable, we employ reparameterization tricks (Jang et al., 2017) to make $Q_\phi(g \mid G)$ directly generate differentiable samples, such that

$$\sum_{g \in S_G} Q_\theta(Y \mid g)\, Q_\phi(g \mid G) \propto \sum_{g' \sim Q_\phi(g \mid G)} Q_\theta(Y \mid g'), \qquad (3)$$

where $g' \sim Q_\phi(g \mid G)$ means $g'$ is a random sample drawn from $Q_\phi(g \mid G)$. To this end, the key is how to find appropriate approximation functions $Q_\phi(g \mid G)$ and $Q_\theta(Y \mid g)$.

Architecture. In this paper, we propose Neural Sparsification (NeuralSparse) to implement the theoretical framework discussed in Equation 3.
As shown in Figure 2, NeuralSparse consists of two major components: the sparsification network and GNNs. The sparsification network is a multi-layer neural network that implements $Q_\phi(g \mid G)$: taking $G$ as input, it generates a random sparsified subgraph of $G$ drawn from a learned distribution. GNNs implement $Q_\theta(Y \mid g)$: they take a sparsified subgraph as input, extract node representations, and make predictions for downstream tasks.

Algorithm 1 Training algorithm for NeuralSparse
Input: graph G = (V, E, A), integer l, and training labels Y.
1: while stop criterion is not met do
2:   Generate sparsified subgraphs {g_1, g_2, ..., g_l} by the sparsification network (Section 4);
3:   Produce predictions {Ŷ_1, Ŷ_2, ..., Ŷ_l} by feeding {g_1, g_2, ..., g_l} into GNNs;
4:   Calculate the loss function J;
5:   Update θ and φ by descending J;
6: end while

As the sparsified subgraph samples are differentiable, the two components can be jointly trained using gradient-descent-based backpropagation techniques from a supervised loss function, as illustrated in Algorithm 1. While the GNNs have been widely investigated in recent works (Kipf & Welling, 2017; Hamilton et al., 2017; Veličković et al., 2018), we focus on the practical implementation of the sparsification network in the remainder of this paper.

4 SPARSIFICATION NETWORK

Following the theory discussed above, the goal of the sparsification network is to generate sparsified subgraphs for input graphs, serving as the approximation function $Q_\phi(g \mid G)$. Therefore, we need to answer the following three questions in the sparsification network: i) What is $S_G$ in Equation 1, the class of subgraphs we focus on? ii) How to sample sparsified subgraphs? iii) How to make the sparsified-subgraph sampling process differentiable for end-to-end training? In the following, we address the questions one by one.

k-neighbor subgraphs. We focus on k-neighbor subgraphs for $S_G$ (Sadhanala et al., 2016): given an input graph, a k-neighbor subgraph shares the same set of nodes with the input graph, and each node in the subgraph can select no more than $k$ edges from its one-hop neighborhood. Although the concept of the sparsification network is not limited to a specific class of subgraphs, we choose k-neighbor subgraphs for the following reasons.

- We are able to adjust the estimate of the amount of task-relevant graph data by tuning the hyper-parameter $k$. Intuitively, when $k$ is an under-estimate, the amount of task-relevant graph data accessed by GNNs could be inadequate, leading to inferior performance. When $k$ is an over-estimate, the downstream GNNs may overfit the introduced noise or irrelevant graph data, resulting in sub-optimal performance. It could be difficult to set a golden hyper-parameter that works all the time, but one has the freedom to choose the $k$ that best fits a specific task.

- k-neighbor subgraphs are friendly to parallel computation. As each node selects its edges independently from its neighborhood, we can utilize tensor operations in existing deep learning frameworks, such as tensorflow (Abadi et al., 2016), to speed up the sparsification process.

Sampling k-neighbor subgraphs. Given $k$ and an input graph $G = (V, E, A)$, we obtain a k-neighbor subgraph by repeatedly sampling edges for each node in the original graph. Without loss of generality, we sketch this sampling process by focusing on a specific node $u$ in graph $G$.
Let $N_u$ be the set of one-hop neighbors of node $u$.

1. $v \sim f_\phi(V(u), V(N_u), A(u))$, where $f_\phi(\cdot)$ is a function that generates a one-hop neighbor $v$ from the learned distribution based on node $u$'s attributes, the node attributes of $u$'s neighbors $V(N_u)$, and their edge attributes $A(u)$. In particular, the learned distribution is encoded by parameters $\phi$.
2. Edge $E(u, v)$ is selected for node $u$.
3. The above two steps are repeated $k$ times.

Note that the above process performs sampling without replacement: given a node $u$, each of its adjacent edges is selected at most once. Moreover, the sampling function $f_\phi(\cdot)$ is shared among nodes; therefore, the number of parameters is independent of the input graph size.

Making samples differentiable. While conventional methods are able to generate discrete samples (Sadhanala et al., 2016), these samples are not differentiable, so it is difficult to utilize them to optimize sample generation. To make samples differentiable, we propose a Gumbel-Softmax based multi-layer neural network to implement the sampling function $f_\phi(\cdot)$ discussed above. To make the discussion self-contained, we briefly discuss the idea of Gumbel-Softmax. Gumbel-Softmax is a reparameterization trick used to generate differentiable discrete samples (Jang et al., 2017; Maddison et al., 2017). Under appropriate hyper-parameter settings, Gumbel-Softmax is able to generate continuous vectors that are as "sharp" as the one-hot vectors widely used to encode discrete data.

Without loss of generality, we focus on a specific node $u$ in a graph $G = (V, E, A)$. Let $N_u$ be the set of one-hop neighbors of node $u$. We implement $f_\phi(\cdot)$ as follows.

1. $\forall v \in N_u$,

$$z_{u,v} = \mathrm{MLP}_\phi(V(u), V(v), A(u, v)), \qquad (4)$$

where $\mathrm{MLP}_\phi$ is a multi-layer neural network with parameters $\phi$.

2. $\forall v \in N_u$, we employ a softmax function to compute the probability to sample the edge,

$$\pi_{u,v} = \frac{\exp(z_{u,v})}{\sum_{w \in N_u} \exp(z_{u,w})}. \qquad (5)$$

3. Using Gumbel-Softmax, we generate differentiable samples

$$x_{u,v} = \frac{\exp((\log(\pi_{u,v}) + \epsilon_v)/\tau)}{\sum_{w \in N_u} \exp((\log(\pi_{u,w}) + \epsilon_w)/\tau)}, \qquad (6)$$

where $x_{u,v}$ is a scalar, $\epsilon_v = -\log(-\log(s))$ with $s$ randomly drawn from $\mathrm{Uniform}(0, 1)$, and $\tau$ is a hyper-parameter called temperature, which controls the interpolation between the discrete distribution and continuous categorical densities.

Note that when we sample $k$ edges, the computation of $z_{u,v}$ and $\pi_{u,v}$ only needs to be performed once. For the hyper-parameter $\tau$, we discuss how to tune it as follows.

Discussion on temperature tuning. The behavior of Gumbel-Softmax is governed by the hyper-parameter $\tau$ called temperature. In general, when $\tau$ is small, the Gumbel-Softmax distribution resembles the discrete distribution, which induces strong sparsity; however, a small $\tau$ also introduces high-variance gradients that block effective backpropagation. A high value of $\tau$ cannot produce the expected sparsification effect. Following the practice in Jang et al. (2017), we adopt the strategy of starting the training with a high temperature and annealing to a small value with a guided schedule.

Sparsification algorithm and its complexity. As shown in Algorithm 2, given the hyper-parameter $k$, the sparsification network visits each node's one-hop neighbors $k$ times. Let $m$ be the total number of edges in the graph. The complexity of sampling subgraphs by the sparsification network is $O(km)$.
When $k$ is small in practice, the overall complexity is $O(m)$.

Algorithm 2 Sampling subgraphs by the sparsification network
Input: graph G = (V, E, A) and integer k.
1: Edge set H = ∅
2: for u ∈ V do
3:   for v ∈ N_u do
4:     z_{u,v} ← MLP_φ(V(u), V(v), A(u, v))
5:   end for
6:   for v ∈ N_u do
7:     π_{u,v} ← exp(z_{u,v}) / Σ_{w∈N_u} exp(z_{u,w})
8:   end for
9:   for j = 1, ..., k do
10:    for v ∈ N_u do
11:      x_{u,v} ← exp((log(π_{u,v}) + ε_v)/τ) / Σ_{w∈N_u} exp((log(π_{u,w}) + ε_w)/τ)
12:    end for
13:    Add the edge represented by the vector [x_{u,v}] into H
14:  end for
15: end for

Comparison with multiple related methods. Unlike GraphSAGE (Hamilton et al., 2017), FastGCN (Chen et al., 2018), and AS-GCN (Huang et al., 2018), which incorporate layer-wise node samplers to reduce the complexity of GNNs, NeuralSparse samples subgraphs before applying GNNs. As for computation complexity, the sparsification in NeuralSparse is more friendly to parallel computation than the layer-conditioned approach in AS-GCN. Compared with GAT (Veličković et al., 2018; Zhang et al., 2018a), NeuralSparse can produce sparser neighborhoods, which effectively mitigates overfitting risks. Unlike LDS (Franceschi et al., 2019), NeuralSparse learns inductive graph sparsification, and its graph sampling is constrained by the input graph topology.

5 EXPERIMENTAL STUDY

In this section, we evaluate our proposed NeuralSparse on the node classification task, including inductive and transductive settings. We demonstrate that NeuralSparse achieves superior classification performance over state-of-the-art GNN models. Moreover, we provide a case study to demonstrate how sparsified subgraphs generated by NeuralSparse could improve classification. The supplementary material contains more detailed experimental information.

5.1 DATASETS

We employ five datasets from various domains and conduct the node classification task following the settings described in Hamilton et al. (2017); Kipf & Welling (2017). The dataset statistics are summarized in Table 1.

Inductive datasets. We utilize the Reddit and PPI datasets and follow the same setting as in Hamilton et al. (2017). The Reddit dataset contains a post-to-post graph with word vectors as node features. The node labels represent which community Reddit posts belong to. The protein-protein interaction (PPI) dataset contains graphs corresponding to different human tissues. The node features are positional gene sets, motif gene sets, and immunological signatures. The nodes are multi-labeled by gene ontology. The graph in the Transaction dataset contains real transactions between organizations in two years, with the first year for training and the second year for validation/testing. Each node represents an organization and each edge indicates a transaction between two organizations. Node attributes are side information about the organizations such as account balance, cash reserve, etc. On this dataset, we aim to classify organizations into two categories: promising or others, for investment in the near future. The class distribution in the Transaction dataset is highly imbalanced.

Table 1: Dataset statistics

                   Reddit      PPI        Transaction  Cora          Citeseer
Task               Inductive   Inductive  Inductive    Transductive  Transductive
Nodes              232,965     56,944     95,544       2,708         3,327
Edges              11,606,919  818,716    963,468      5,429         4,732
Features           602         50         120          1,433         3,703
Classes            41          121        2            7             6
Training Nodes     152,410     44,906     47,772       140           120
Validation Nodes   23,699      6,514      9,554        500           500
Testing Nodes      55,334      5,524      38,218       1,000         1,000
During the training under the inductive setting, algorithms have only access to training nodes' attributes and edges. In the PPI and Transaction datasets, the models have to generalize to completely unseen graphs.

Transductive datasets. We use two citation benchmark datasets with the transductive experimental setting in Yang et al. (2016); Kipf & Welling (2017). The citation graphs contain nodes corresponding to documents and edges as citations. Node features are the sparse bag-of-words representations of documents, and node labels indicate the topic class of the documents. In transductive learning, the training methods have access to all node features and edges, with a limited subset of node labels.

5.2 EXPERIMENTAL SETUP

Baseline models. We incorporate four state-of-the-art methods as the base GNN components, including GCN (Kipf & Welling, 2017), GraphSAGE (Hamilton et al., 2017), GAT (Veličković et al., 2018), and GIN (Xu et al., 2019). We evaluate our proposed NeuralSparse with the sparsification network and each of the four GNNs. Besides, we also implement variants of NeuralSparse by replacing the sparsification network with either the spectral sparsifier (SS, Sadhanala et al., 2016) or the Rank Degree (RD, Voudigari et al., 2016) method.

Temperature tuning. We anneal the temperature with the schedule $\tau = \max(0.05, \exp(-rp))$, where $p$ is the training epoch and $r \in 10^{\{-5, -4, -3, -2, -1\}}$. $\tau$ is updated every $N$ steps, with $N \in \{50, 100, \ldots, 500\}$. Compared with the MNIST VAE model in Jang et al. (2017), a smaller hyper-parameter $r$ fits NeuralSparse better in practice.

Metrics. We evaluate the performance on the transductive datasets with accuracy (Kipf & Welling, 2017). For inductive tasks on the Reddit and PPI datasets, we report micro-averaged F1 scores (Hamilton et al., 2017). Due to the highly imbalanced classes in the Transaction dataset, models are evaluated with the AUC value (Huang & Ling, 2005). The results show the average of 10 runs.

5.3 CLASSIFICATION PERFORMANCE

Table 2 summarizes the classification performance of NeuralSparse and the baseline methods on all datasets. For Reddit, PPI, Transaction, Cora, and Citeseer, the hyper-parameter k is set as 30, 15, 10, 5, and 3, respectively. The hyper-parameter l is set as 1 in this experiment. Note that the result of GAT on Reddit is missing due to the out-of-memory error.

Overall, NeuralSparse is able to help GNN techniques achieve competitive generalization performance with sparsified graph data. We make the following observations. (1) Compared with basic GNN models, NeuralSparse can enhance the generalization performance on node classification tasks by utilizing the sparsified subgraphs from the sparsification network, especially in the inductive setting. Indeed, large neighborhood size in the original graphs could bring an increased chance of introducing noise into the convolutional operations, leading to sub-optimal performance.
(2) With different GNN models, NeuralSparse can consistently achieve comparable or superior performance, which demonstrates that NeuralSparse is general and can be applied to multiple classification models.

Table 2: Node classification performance

Sparsifier    Method     Reddit        PPI           Transaction   Cora          Citeseer
                         (Micro-F1)    (Micro-F1)    (AUC)         (Accuracy)    (Accuracy)
N/A           GCN        0.922±0.041   0.532±0.024   0.564±0.018   0.810±0.027   0.694±0.020
N/A           GraphSAGE  0.938±0.029   0.600±0.027   0.574±0.029   0.825±0.033   0.710±0.020
N/A           GAT        -             0.917±0.030   0.616±0.022   0.821±0.043   0.721±0.037
N/A           GIN        0.928±0.022   0.703±0.028   0.607±0.031   0.816±0.020   0.709±0.037
SS/RD*        GCN        0.912±0.022   0.521±0.024   0.562±0.035   0.780±0.045   0.684±0.033
SS/RD*        GraphSAGE  0.907±0.018   0.576±0.022   0.565±0.042   0.806±0.032   0.701±0.027
SS/RD*        GAT        -             0.889±0.034   0.614±0.044   0.807±0.047   0.686±0.034
SS/RD*        GIN        0.901±0.021   0.693±0.019   0.593±0.038   0.785±0.041   0.706±0.043
NeuralSparse  GCN        0.946±0.020   0.600±0.014   0.610±0.022   0.821±0.014   0.715±0.014
NeuralSparse  GraphSAGE  0.951±0.015   0.626±0.023   0.649±0.018   0.832±0.024   0.720±0.013
NeuralSparse  GAT        -             0.921±0.015   0.671±0.018   0.834±0.015   0.724±0.026
NeuralSparse  GIN        0.937±0.027   0.744±0.015   0.634±0.023   0.824±0.027   0.719±0.015
(* The better of the SS and RD results is reported.)

[Figure 3: Sparsified subgraphs and performance vs. hyper-parameters — (a) subgraph produced by the spectral sparsifier and (b) subgraph produced by the RD sparsifier, with nodes marked as promising or other organizations; (c) AUC as a function of hyper-parameter k and (d) AUC as a function of hyper-parameter l, for NeuralSparse-GAT and NeuralSparse-GraphSAGE.]

(3) In comparison with the two NeuralSparse variants SS-GraphSAGE and RD-GraphSAGE, NeuralSparse outperforms because of its automatically learned graph sparsification, which takes both structural and non-structural information as input.

5.4 SENSITIVITY TO HYPER-PARAMETERS AND SPARSIFIED SUBGRAPHS

Figure 3(c) shows how classification performance responds as k increases on the Transaction dataset. There exists an optimal k that delivers the best classification AUC score. When k is small, NeuralSparse can make use of only a little relevant structural information in feature aggregation, which leads to inferior performance. When k increases, the aggregation convolution involves more complex neighborhoods with a higher chance of overfitting noisy data, which negatively impacts classification performance on unseen testing data. Figure 3(d) shows how the hyper-parameter l impacts classification performance on the Transaction dataset. When l increases from 1 to 5, we observe a relatively small improvement in the classification AUC score. As the parameters in the sparsification network are shared by all edges in the graph, the estimation variance from random sampling can already be mitigated to some extent by the number of sampled edges in a sparsified subgraph. Thus, when we increase the number of sparsified subgraphs, the incremental gain is small.

In Figure 3(a, b), we present the sparsified graphs output by the two baseline methods, SS and RD. Comparing the two plots with Figure 1(b), we make the following observations. First, the NeuralSparse sparsified graph tends to select edges that connect nodes with identical labels, which favors the downstream classification task. The observed clustering effect can further boost the confidence of decision making.
Second, instead of exploring all the neighbors, we can focus on the selected connections/edges in sparsified graphs, which makes it easier for human experts to perform model interpretation and result visualization.

6 CONCLUSION

In this paper, we propose Neural Sparsification (NeuralSparse) to address the overfitting issues brought by the complexity of real-life large graphs. NeuralSparse consists of two major components: (1) the sparsification network sparsifies input graphs by sampling edges following a learned distribution; (2) GNNs take the sparsified subgraphs as input and extract node representations for downstream tasks. The two components of NeuralSparse can be jointly trained with a supervised loss, gradient descent, and backpropagation techniques. The experimental study on real-life datasets shows that NeuralSparse consistently renders more robust graph representations and brings up to 7% improvement in accuracy over state-of-the-art GNN models.<|im_end|> <|im_start|>assistant ### Review Title Official Blind Review #2 ### Review Text The authors argue that existing GCN-based approaches may pose a non-trivial overfitting risk during the training phase, especially when high-dimensional features and high-degree entities are observed in the graphs. To address the issue, the authors integrate graph sparsification with conventional graph neural nets. Experimental results show the efficacy of the proposed model on a series of benchmark datasets. In general, the paper is easy to follow and well organized. My main concern is that there is little insightful discussion of the problem motivation and the proposed algorithm. In particular, (1) It is unclear why existing GCN-based approaches cannot handle the cases shown in Fig. 1. Is there any evidence (either theoretical or empirical) or reference to support this argument? (2) The motivating example shown in Fig. 1 is confusing. Conventionally, graph sparsification aims to find smaller subgraphs of the input graphs that preserve the key structures. However, in Fig. 1(b), the sparsified subgraph seems to only downsample the edges while preserving all the nodes of the original graph. The authors may want to clarify whether the sparsified subgraph has the same size as the input graph. (3) Some notations are not formally defined before they are used. In Eq. 2, what do Q_\theta and Q_\phi denote? (4) The statement of a "trade-off between model accuracy and graph complexity by tuning the hyperparameter k" is not well supported. If overfitting exists, a larger k may result in lower accuracy in the testing phase. (5) What is the complexity of f_\Phi()? (6) The stated complexity (i.e., f(km)) of the proposed model is problematic. As stated at the beginning of the paper, the paper targets graphs with "complex local neighborhoods", where each node is described by rich features and many neighbors. In other words, the target graph is not sparse. In this case, the complexity of the proposed algorithm can be intractable, especially when k is large and m is close to n^2. ### Review Rating 1: Reject ### Review Confidence <|im_end|> <|im_end|>
6Tm1mposlrM
ICLR.cc/2021/Conference
2021
Sharpness-aware Minimization for Efficiently Improving Generalization
["Pierre Foret", "Ariel Kleiner", "Hossein Mobahi", "Behnam Neyshabur"]
In today's heavily overparameterized models, the value of the training loss provides few guarantees on model generalization ability. Indeed, optimizing only the training loss value, as is commonly done, can easily lead to suboptimal model quality. Motivated by the connection between geometry of the loss landscape and generalization---including a generalization bound that we prove here---we introduce a novel, effective procedure for instead simultaneously minimizing loss value and loss sharpness. In particular, our procedure, Sharpness-Aware Minimization (SAM), seeks parameters that lie in neighborhoods having uniformly low loss; this formulation results in a min-max optimization problem on which gradient descent can be performed efficiently. We present empirical results showing that SAM improves model generalization across a variety of benchmark datasets (e.g., CIFAR-{10, 100}, ImageNet, finetuning tasks) and models, yielding novel state-of-the-art performance for several. Additionally, we find that SAM natively provides robustness to label noise on par with that provided by state-of-the-art procedures that specifically target learning with noisy labels.
["Sharpness Minimization", "Generalization", "Regularization", "Training Method", "Deep Learning"]
ABSTRACT

In today's heavily overparameterized models, the value of the training loss provides few guarantees on model generalization ability. Indeed, optimizing only the training loss value, as is commonly done, can easily lead to suboptimal model quality. Motivated by prior work connecting the geometry of the loss landscape and generalization, we introduce a novel, effective procedure for instead simultaneously minimizing loss value and loss sharpness. In particular, our procedure, Sharpness-Aware Minimization (SAM), seeks parameters that lie in neighborhoods having uniformly low loss; this formulation results in a min-max optimization problem on which gradient descent can be performed efficiently. We present empirical results showing that SAM improves model generalization across a variety of benchmark datasets (e.g., CIFAR-{10, 100}, ImageNet, finetuning tasks) and models, yielding novel state-of-the-art performance for several. Additionally, we find that SAM natively provides robustness to label noise on par with that provided by state-of-the-art procedures that specifically target learning with noisy labels. We open source our code at https://github.com/google-research/sam .

(* Work done as part of the Google AI Residency program.)

1 INTRODUCTION

Modern machine learning's success in achieving ever better performance on a wide range of tasks has relied in significant part on ever heavier overparameterization, in conjunction with developing ever more effective training algorithms that are able to find parameters that generalize well. Indeed, many modern neural networks can easily memorize the training data and have the capacity to readily overfit (Zhang et al., 2016). Such heavy overparameterization is currently required to achieve state-of-the-art results in a variety of domains (Tan & Le, 2019; Kolesnikov et al., 2020; Huang et al., 2018). In turn, it is essential that such models be trained using procedures that ensure that the parameters actually selected do in fact generalize beyond the training set.

Unfortunately, simply minimizing commonly used loss functions (e.g., cross-entropy) on the training set is typically not sufficient to achieve satisfactory generalization. The training loss landscapes of today's models are commonly complex and non-convex, with a multiplicity of local and global minima, and with different global minima yielding models with different generalization abilities (Shirish Keskar et al., 2016). As a result, the choice of optimizer (and associated optimizer settings) from among the many available (e.g., stochastic gradient descent (Nesterov, 1983), Adam (Kingma & Ba, 2014), RMSProp (Hinton et al.), and others (Duchi et al., 2011; Dozat, 2016; Martens & Grosse, 2015)) has become an important design choice, though understanding of its relationship to model generalization remains nascent (Shirish Keskar et al., 2016; Wilson et al., 2017; Shirish Keskar & Socher, 2017; Agarwal et al., 2020; Jacot et al., 2018). Relatedly, a panoply of methods for modifying the training process have been proposed, including dropout (Srivastava et al., 2014),
batch normalization (Ioffe & Szegedy, 2015), stochastic depth (Huang et al., 2016), data augmentation (Cubuk et al., 2018), and mixed sample augmentations (Zhang et al., 2017; Harris et al., 2020).

[Figure 1: (left) Error rate reduction (%) obtained by switching to SAM; each point is a different dataset / model / data augmentation (ImageNet, CIFAR-10, CIFAR-100, finetuning, SVHN, F-MNIST, noisy CIFAR). (middle) A sharp minimum to which a ResNet trained with SGD converged. (right) A wide minimum to which the same ResNet trained with SAM converged.]

The connection between the geometry of the loss landscape—in particular, the flatness of minima—and generalization has been studied extensively from both theoretical and empirical perspectives (Shirish Keskar et al., 2016; Dziugaite & Roy, 2017; Jiang et al., 2019). While this connection has held the promise of enabling new approaches to model training that yield better generalization, practical, efficient algorithms that specifically seek out flatter minima and furthermore effectively improve generalization on a range of state-of-the-art models have thus far been elusive (e.g., see Chaudhari et al., 2016; Izmailov et al., 2018; we include a more detailed discussion of prior work in Section 5).

We present here a new efficient, scalable, and effective approach to improving model generalization ability that directly leverages the geometry of the loss landscape and its connection to generalization, and is powerfully complementary to existing techniques. In particular, we make the following contributions:

• We introduce Sharpness-Aware Minimization (SAM), a novel procedure that improves model generalization by simultaneously minimizing loss value and loss sharpness. SAM functions by seeking parameters that lie in neighborhoods having uniformly low loss value (rather than parameters that only themselves have low loss value, as illustrated in the middle and righthand images of Figure 1), and can be implemented efficiently and easily.

• We show via a rigorous empirical study that using SAM improves model generalization ability across a range of widely studied computer vision tasks (e.g., CIFAR-{10, 100}, ImageNet, finetuning tasks) and models, as summarized in the lefthand plot of Figure 1. For example, applying SAM yields novel state-of-the-art performance for a number of already-intensely-studied tasks, such as ImageNet, CIFAR-{10, 100}, SVHN, Fashion-MNIST, and the standard set of image classification finetuning tasks (e.g., Flowers, Stanford Cars, Oxford Pets, etc.).

• We show that SAM furthermore provides robustness to label noise on par with that provided by state-of-the-art procedures that specifically target learning with noisy labels.

• Through the lens provided by SAM, we further elucidate the connection between loss sharpness and generalization by surfacing a promising new notion of sharpness, which we term m-sharpness.

Section 2 below derives the SAM procedure and presents the resulting algorithm in full detail. Section 3 evaluates SAM empirically, and Section 4 further analyzes the connection between loss sharpness and generalization through the lens of SAM. Finally, we conclude with an overview of related work and a discussion of conclusions and future work in Sections 5 and 6, respectively.

2 SHARPNESS-AWARE MINIMIZATION (SAM)

Throughout the paper, we denote scalars as $a$, vectors as $\boldsymbol{a}$, matrices as $\boldsymbol{A}$, sets as $\mathcal{A}$, and equality by definition as $\triangleq$. Given a training dataset $S \triangleq \cup_{i=1}^{n} \{(\boldsymbol{x}_i, \boldsymbol{y}_i)\}$ drawn i.i.d. from distribution $\mathscr{D}$, we seek to learn a model that generalizes well. In particular, consider a family of models parameterized by $\boldsymbol{w} \in \mathcal{W} \subseteq \mathbb{R}^d$; given a per-data-point loss function $l: \mathcal{W} \times \mathcal{X} \times \mathcal{Y} \to \mathbb{R}_+$, we define the training set loss $L_S(\boldsymbol{w}) \triangleq \frac{1}{n}\sum_{i=1}^{n} l(\boldsymbol{w}, \boldsymbol{x}_i, \boldsymbol{y}_i)$ and the population loss $L_{\mathscr{D}}(\boldsymbol{w}) \triangleq \mathbb{E}_{(\boldsymbol{x},\boldsymbol{y})\sim\mathscr{D}}[l(\boldsymbol{w}, \boldsymbol{x}, \boldsymbol{y})]$. Having observed only $S$, the goal of model training is to select model parameters $\boldsymbol{w}$ having low population loss $L_{\mathscr{D}}(\boldsymbol{w})$.

Utilizing $L_S(\boldsymbol{w})$ as an estimate of $L_{\mathscr{D}}(\boldsymbol{w})$ motivates the standard approach of selecting parameters $\boldsymbol{w}$ by solving $\min_{\boldsymbol{w}} L_S(\boldsymbol{w})$ (possibly in conjunction with a regularizer on $\boldsymbol{w}$) using an optimization procedure such as SGD or Adam. Unfortunately, however, for modern overparameterized models such as deep neural networks, typical optimization approaches can easily result in suboptimal performance at test time. In particular, for modern models, $L_S(\boldsymbol{w})$ is typically non-convex in $\boldsymbol{w}$, with multiple local and even global minima that may yield similar values of $L_S(\boldsymbol{w})$ while having significantly different generalization performance (i.e., significantly different values of $L_{\mathscr{D}}(\boldsymbol{w})$).

Motivated by the connection between sharpness of the loss landscape and generalization, we propose a different approach: rather than seeking out parameter values $\boldsymbol{w}$ that simply have low training loss value $L_S(\boldsymbol{w})$, we seek out parameter values whose entire neighborhoods have uniformly low training loss value (equivalently, neighborhoods having both low loss and low curvature). The following theorem illustrates the motivation for this approach by bounding generalization ability in terms of neighborhood-wise training loss (full theorem statement and proof in Appendix A):

Theorem (stated informally) 1. For any $\rho > 0$, with high probability over training set $S$ generated from distribution $\mathscr{D}$,
$$L_{\mathscr{D}}(\boldsymbol{w}) \leq \max_{\|\boldsymbol{\epsilon}\|_2 \leq \rho} L_S(\boldsymbol{w}+\boldsymbol{\epsilon}) + h(\|\boldsymbol{w}\|_2^2/\rho^2),$$
where $h: \mathbb{R}_+ \to \mathbb{R}_+$ is a strictly increasing function (under some technical conditions on $L_{\mathscr{D}}(\boldsymbol{w})$).

To make explicit our sharpness term, we can rewrite the right hand side of the inequality above as
$$\left[\max_{\|\boldsymbol{\epsilon}\|_2 \leq \rho} L_S(\boldsymbol{w}+\boldsymbol{\epsilon}) - L_S(\boldsymbol{w})\right] + L_S(\boldsymbol{w}) + h(\|\boldsymbol{w}\|_2^2/\rho^2).$$
The term in square brackets captures the sharpness of $L_S$ at $\boldsymbol{w}$ by measuring how quickly the training loss can be increased by moving from $\boldsymbol{w}$ to a nearby parameter value; this sharpness term is then summed with the training loss value itself and a regularizer on the magnitude of $\boldsymbol{w}$. Given that the specific function $h$ is heavily influenced by the details of the proof, we substitute the second term with $\lambda\|\boldsymbol{w}\|_2^2$ for a hyperparameter $\lambda$, yielding a standard L2 regularization term. Thus, inspired by the terms from the bound, we propose to select parameter values by solving the following Sharpness-Aware Minimization (SAM) problem:
$$\min_{\boldsymbol{w}} L_S^{SAM}(\boldsymbol{w}) + \lambda\|\boldsymbol{w}\|_2^2 \quad \text{where} \quad L_S^{SAM}(\boldsymbol{w}) \triangleq \max_{\|\boldsymbol{\epsilon}\|_p \leq \rho} L_S(\boldsymbol{w}+\boldsymbol{\epsilon}), \tag{1}$$
where $\rho \geq 0$ is a hyperparameter and $p \in [1, \infty]$ (we have generalized slightly from an L2-norm to a p-norm in the maximization over $\boldsymbol{\epsilon}$, though we show empirically in appendix C.5 that $p = 2$ is typically optimal). Figure 1 shows¹ the loss landscape for a model that converged to minima found by minimizing either $L_S(\boldsymbol{w})$ or $L_S^{SAM}(\boldsymbol{w})$, illustrating that the sharpness-aware loss prevents the model from converging to a sharp minimum.

In order to minimize $L_S^{SAM}(\boldsymbol{w})$, we derive an efficient and effective approximation to $\nabla_{\boldsymbol{w}} L_S^{SAM}(\boldsymbol{w})$ by differentiating through the inner maximization, which in turn enables us to apply stochastic gradient descent directly to the SAM objective. Proceeding down this path, we first approximate the inner maximization problem via a first-order Taylor expansion of $L_S(\boldsymbol{w}+\boldsymbol{\epsilon})$ w.r.t. $\boldsymbol{\epsilon}$ around 0, obtaining
$$\boldsymbol{\epsilon}^*(\boldsymbol{w}) \triangleq \arg\max_{\|\boldsymbol{\epsilon}\|_p \leq \rho} L_S(\boldsymbol{w}+\boldsymbol{\epsilon}) \approx \arg\max_{\|\boldsymbol{\epsilon}\|_p \leq \rho} L_S(\boldsymbol{w}) + \boldsymbol{\epsilon}^T \nabla_{\boldsymbol{w}} L_S(\boldsymbol{w}) = \arg\max_{\|\boldsymbol{\epsilon}\|_p \leq \rho} \boldsymbol{\epsilon}^T \nabla_{\boldsymbol{w}} L_S(\boldsymbol{w}).$$

(¹ Figure 1 was generated following Li et al. (2017) with the provided ResNet56 (no residual connections) checkpoint, and training the same model with SAM.)

In turn, the value $\hat{\boldsymbol{\epsilon}}(\boldsymbol{w})$ that solves this approximation is given by the solution to a classical dual norm problem ($|\cdot|^{q-1}$ denotes elementwise absolute value and power)²:
$$\hat{\boldsymbol{\epsilon}}(\boldsymbol{w}) = \rho \, \mathrm{sign}\left(\nabla_{\boldsymbol{w}} L_S(\boldsymbol{w})\right) \left|\nabla_{\boldsymbol{w}} L_S(\boldsymbol{w})\right|^{q-1} \Big/ \left(\|\nabla_{\boldsymbol{w}} L_S(\boldsymbol{w})\|_q^q\right)^{1/p} \tag{2}$$
where $1/p + 1/q = 1$. Substituting back into equation (1) and differentiating, we then have
$$\nabla_{\boldsymbol{w}} L_S^{SAM}(\boldsymbol{w}) \approx \nabla_{\boldsymbol{w}} L_S(\boldsymbol{w}+\hat{\boldsymbol{\epsilon}}(\boldsymbol{w})) = \frac{d(\boldsymbol{w}+\hat{\boldsymbol{\epsilon}}(\boldsymbol{w}))}{d\boldsymbol{w}} \nabla_{\boldsymbol{w}} L_S(\boldsymbol{w})\Big|_{\boldsymbol{w}+\hat{\boldsymbol{\epsilon}}(\boldsymbol{w})} = \nabla_{\boldsymbol{w}} L_S(\boldsymbol{w})\Big|_{\boldsymbol{w}+\hat{\boldsymbol{\epsilon}}(\boldsymbol{w})} + \frac{d\hat{\boldsymbol{\epsilon}}(\boldsymbol{w})}{d\boldsymbol{w}} \nabla_{\boldsymbol{w}} L_S(\boldsymbol{w})\Big|_{\boldsymbol{w}+\hat{\boldsymbol{\epsilon}}(\boldsymbol{w})}.$$

This approximation to $\nabla_{\boldsymbol{w}} L_S^{SAM}(\boldsymbol{w})$ can be straightforwardly computed via automatic differentiation, as implemented in common libraries such as JAX, TensorFlow, and PyTorch. Though this computation implicitly depends on the Hessian of $L_S(\boldsymbol{w})$ because $\hat{\boldsymbol{\epsilon}}(\boldsymbol{w})$ is itself a function of $\nabla_{\boldsymbol{w}} L_S(\boldsymbol{w})$, the Hessian enters only via Hessian-vector products, which can be computed tractably without materializing the Hessian matrix. Nonetheless, to further accelerate the computation, we drop the second-order terms, obtaining our final gradient approximation:
$$\nabla_{\boldsymbol{w}} L_S^{SAM}(\boldsymbol{w}) \approx \nabla_{\boldsymbol{w}} L_S(\boldsymbol{w})\Big|_{\boldsymbol{w}+\hat{\boldsymbol{\epsilon}}(\boldsymbol{w})}. \tag{3}$$

As shown by the results in Section 3, this approximation (without the second-order terms) yields an effective algorithm. In Appendix C.4, we additionally investigate the effect of instead including the second-order terms; in that initial experiment, including them surprisingly degrades performance, and further investigating these terms' effect should be a priority in future work.

We obtain the final SAM algorithm by applying a standard numerical optimizer such as stochastic gradient descent (SGD) to the SAM objective $L_S^{SAM}(\boldsymbol{w})$, using equation 3 to compute the requisite objective function gradients. Algorithm 1 gives pseudo-code for the full SAM algorithm, using SGD as the base optimizer (a minimal code sketch of the same update appears early in Section 3.1 below), and Figure 2 schematically illustrates a single SAM parameter update.

Algorithm 1: SAM algorithm
Input: Training set $S \triangleq \cup_{i=1}^{n} \{(\boldsymbol{x}_i, \boldsymbol{y}_i)\}$, loss function $l: \mathcal{W} \times \mathcal{X} \times \mathcal{Y} \to \mathbb{R}_+$, batch size $b$, step size $\eta > 0$, neighborhood size $\rho > 0$.
Output: Model trained with SAM
Initialize weights $\boldsymbol{w}_0$, $t = 0$;
while not converged do
  Sample batch $\mathcal{B} = \{(\boldsymbol{x}_1, \boldsymbol{y}_1), \ldots, (\boldsymbol{x}_b, \boldsymbol{y}_b)\}$;
  Compute gradient $\nabla_{\boldsymbol{w}} L_{\mathcal{B}}(\boldsymbol{w})$ of the batch's training loss;
  Compute $\hat{\boldsymbol{\epsilon}}(\boldsymbol{w})$ per equation 2;
  Compute gradient approximation for the SAM objective (equation 3): $\boldsymbol{g} = \nabla_{\boldsymbol{w}} L_{\mathcal{B}}(\boldsymbol{w})\big|_{\boldsymbol{w}+\hat{\boldsymbol{\epsilon}}(\boldsymbol{w})}$;
  Update weights: $\boldsymbol{w}_{t+1} = \boldsymbol{w}_t - \eta\boldsymbol{g}$;
  $t = t + 1$;
end
return $\boldsymbol{w}_t$

[Figure 2: Schematic of the SAM parameter update, showing the current iterate $\boldsymbol{w}_t$, the ascent point $\boldsymbol{w}_{adv}$ reached by moving $\rho\,\nabla L(\boldsymbol{w}_t)/\|\nabla L(\boldsymbol{w}_t)\|_2$ from $\boldsymbol{w}_t$, the ordinary SGD iterate $\boldsymbol{w}_{t+1}$ obtained from $\nabla L(\boldsymbol{w}_t)$, and the SAM iterate $\boldsymbol{w}_{t+1}^{SAM}$ obtained from $\nabla L(\boldsymbol{w}_{adv})$.]

3 EMPIRICAL EVALUATION

In order to assess SAM's efficacy, we apply it to a range of different tasks, including image classification from scratch (including on CIFAR-10, CIFAR-100, and ImageNet), finetuning pretrained models, and learning with noisy labels. In all cases, we measure the benefit of using SAM by simply replacing the optimization procedure used to train existing models with SAM, and computing the resulting effect on model generalization. As seen below, SAM materially improves generalization performance in the vast majority of these cases.

(² In the case of interest $p = 2$, this boils down to simply rescaling the gradient such that its norm is $\rho$.)

3.1 IMAGE CLASSIFICATION FROM SCRATCH

We first evaluate SAM's impact on generalization for today's state-of-the-art models on CIFAR-10 and CIFAR-100 (without pretraining): WideResNets with Shake-Shake regularization (Zagoruyko & Komodakis, 2016; Gastaldi, 2017) and PyramidNet with ShakeDrop regularization (Han et al., 2016; Yamada et al., 2018).
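Before turning to the results, here is a minimal JAX sketch of a single SAM update (Algorithm 1 with $p = 2$, where equation 2 reduces to rescaling the gradient to norm $\rho$), using plain SGD as the base optimizer. The `loss_fn`, the flat weight vector, and the toy regression data are illustrative placeholders rather than the paper's actual training setup, which operates on parameter pytrees and includes weight decay.

```python
# A minimal JAX sketch of one SAM step (Algorithm 1, p = 2): perturb the
# weights by rho * grad / ||grad||, re-evaluate the gradient there, and take
# a plain SGD step on that gradient. `loss_fn` and the data are placeholders.
import jax
import jax.numpy as jnp

def loss_fn(w, x, y):
    return jnp.mean((x @ w - y) ** 2)   # stand-in least-squares loss

def sam_step(w, x, y, lr=0.1, rho=0.05):
    g = jax.grad(loss_fn)(w, x, y)                 # gradient at w
    eps = rho * g / (jnp.linalg.norm(g) + 1e-12)   # epsilon-hat(w), eq. (2), p = 2
    g_sam = jax.grad(loss_fn)(w + eps, x, y)       # gradient at w + eps, eq. (3)
    return w - lr * g_sam                          # SGD step on the SAM gradient

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (32, 4))
w_true = jnp.array([1.0, -2.0, 0.5, 3.0])
y = x @ w_true
w = jnp.zeros(4)
for _ in range(100):
    w = sam_step(w, x, y)
print(jnp.round(w, 2))  # approaches w_true
```

Note the two backpropagation passes per step (one for ε̂ and one for the final gradient), which is the source of the roughly doubled per-step cost discussed below.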
Note that some of these models have already been heavily tuned in prior work and include carefully chosen regularization schemes to prevent overfitting; therefore, significantly improving their generalization is quite non-trivial. We have ensured that our implementations' generalization performance in the absence of SAM matches or exceeds that reported in prior work (Cubuk et al., 2018; Lim et al., 2019).

All results use basic data augmentations (horizontal flip, padding by four pixels, and random crop). We also evaluate in the setting of more advanced data augmentation methods such as cutout regularization (Devries & Taylor, 2017) and AutoAugment (Cubuk et al., 2018), which are utilized by prior work to achieve state-of-the-art results.

SAM has a single hyperparameter ρ (the neighborhood size), which we tune via a grid search over {0.01, 0.02, 0.05, 0.1, 0.2, 0.5} using 10% of the training set as a validation set³. Please see appendix C.1 for the values of all hyperparameters and additional training details. As each SAM weight update requires two backpropagation operations (one to compute ε̂(w) and another to compute the final gradient), we allow each non-SAM training run to execute twice as many epochs as each SAM training run, and we report the best score achieved by each non-SAM training run across either the standard epoch count or the doubled epoch count⁴. We run five independent replicas of each experimental condition for which we report results (each with independent weight initialization and data shuffling), reporting the resulting mean error (or accuracy) on the test set, and the associated 95% confidence interval. Our implementations utilize JAX (Bradbury et al., 2018), and we train all models on a single host having 8 Nvidia V100 GPUs⁵. To compute the SAM update when parallelizing across multiple accelerators, we divide each data batch evenly among the accelerators, independently compute the SAM gradient on each accelerator, and average the resulting sub-batch SAM gradients to obtain the final SAM update (a code sketch of this per-subset scheme is given in Section 4.1 below).

As seen in Table 1, SAM improves generalization across all settings evaluated for CIFAR-10 and CIFAR-100. For example, SAM enables a simple WideResNet to attain 1.6% test error, versus 2.2% error without SAM. Such gains have previously been attainable only by using more complex model architectures (e.g., PyramidNet) and regularization schemes (e.g., Shake-Shake, ShakeDrop); SAM provides an easily-implemented, model-independent alternative. Furthermore, SAM delivers improvements even when applied atop complex architectures that already use sophisticated regularization: for instance, applying SAM to a PyramidNet with ShakeDrop regularization yields 10.3% error on CIFAR-100, which is, to our knowledge, a new state-of-the-art on this dataset without the use of additional data.

Beyond CIFAR-{10, 100}, we have also evaluated SAM on the SVHN (Netzer et al., 2011) and Fashion-MNIST (Xiao et al., 2017) datasets. Once again, SAM enables a simple WideResNet to achieve accuracy at or above the state-of-the-art for these datasets: 0.99% error for SVHN, and 3.59% for Fashion-MNIST. Details are available in appendix B.1.

To assess SAM's performance at larger scale, we apply it to ResNets (He et al., 2015) of different depths (50, 101, 152) trained on ImageNet (Deng et al., 2009).
In this setting, following prior work (He et al., 2015; Szegedy et al., 2015), we resize and crop images to 224-pixel resolution, normalize them, and use batch size 4096, initial learning rate 1.0, a cosine learning rate schedule, the SGD optimizer with momentum 0.9, label smoothing of 0.1, and weight decay 0.0001. When applying SAM, we use ρ = 0.05 (determined via a grid search on ResNet-50 trained for 100 epochs). We train all models on ImageNet for up to 400 epochs using a Google Cloud TPUv3 and report top-1 and top-5 test error rates for each experimental condition (mean and 95% confidence interval across 5 independent runs).

(³ We found ρ = 0.05 to be a solid default value, and we report in appendix C.3 the scores for all our experiments, obtained with ρ = 0.05 without further tuning.)
(⁴ Training for longer generally did not improve accuracy significantly, except for the models previously trained for only 200 epochs and for the largest, most regularized model (PyramidNet + ShakeDrop).)
(⁵ Because SAM's performance is amplified by not syncing the perturbations, data parallelism is highly recommended to leverage SAM's full potential (see Section 4 for more details).)

Table 1: Results for SAM on state-of-the-art models on CIFAR-{10, 100} (WRN = WideResNet; AA = AutoAugment; SGD is the standard non-SAM procedure used to train these models). Entries are test error (%).

Model                    Augmentation  CIFAR-10 SAM  CIFAR-10 SGD  CIFAR-100 SAM  CIFAR-100 SGD
WRN-28-10 (200 epochs)   Basic         2.7±0.1       3.5±0.1       16.5±0.2       18.8±0.2
WRN-28-10 (200 epochs)   Cutout        2.3±0.1       2.6±0.1       14.9±0.2       16.9±0.1
WRN-28-10 (200 epochs)   AA            2.1±<0.1      2.3±0.1       13.6±0.2       15.8±0.2
WRN-28-10 (1800 epochs)  Basic         2.4±0.1       3.5±0.1       16.3±0.2       19.1±0.1
WRN-28-10 (1800 epochs)  Cutout        2.1±0.1       2.7±0.1       14.0±0.1       17.4±0.1
WRN-28-10 (1800 epochs)  AA            1.6±0.1       2.2±<0.1      12.8±0.2       16.1±0.2
Shake-Shake (26 2x96d)   Basic         2.3±<0.1      2.7±0.1       15.1±0.1       17.0±0.1
Shake-Shake (26 2x96d)   Cutout        2.0±<0.1      2.3±0.1       14.2±0.2       15.7±0.2
Shake-Shake (26 2x96d)   AA            1.6±<0.1      1.9±0.1       12.8±0.1       14.1±0.2
PyramidNet               Basic         2.7±0.1       4.0±0.1       14.6±0.4       19.7±0.3
PyramidNet               Cutout        1.9±0.1       2.5±0.1       12.6±0.2       16.4±0.1
PyramidNet               AA            1.6±0.1       1.9±0.1       11.6±0.1       14.6±0.1
PyramidNet+ShakeDrop     Basic         2.1±0.1       2.5±0.1       13.3±0.2       14.5±0.1
PyramidNet+ShakeDrop     Cutout        1.6±<0.1      1.9±0.1       11.3±0.1       11.8±0.2
PyramidNet+ShakeDrop     AA            1.4±<0.1      1.6±<0.1      10.3±0.1       10.6±0.1

As seen in Table 2, SAM again consistently improves performance, for example improving the ImageNet top-1 error rate of ResNet-152 from 20.3% to 18.4%. Furthermore, note that SAM enables increasing the number of training epochs while continuing to improve accuracy without overfitting. In contrast, the standard training procedure (without SAM) generally significantly overfits as training extends from 200 to 400 epochs.

Table 2: Test error rates (%) for ResNets trained on ImageNet, with and without SAM.

Model       Epoch  SAM Top-1   SAM Top-5   Standard Top-1  Standard Top-5
ResNet-50   100    22.5±0.1    6.28±0.08   22.9±0.1        6.62±0.11
ResNet-50   200    21.4±0.1    5.82±0.03   22.3±0.1        6.37±0.04
ResNet-50   400    20.9±0.1    5.51±0.03   22.3±0.1        6.40±0.06
ResNet-101  100    20.2±0.1    5.12±0.03   21.2±0.1        5.66±0.05
ResNet-101  200    19.4±0.1    4.76±0.03   20.9±0.1        5.66±0.04
ResNet-101  400    19.0±<0.01  4.65±0.05   22.3±0.1        6.41±0.06
ResNet-152  100    19.2±<0.01  4.69±0.04   20.4±<0.05      5.39±0.06
ResNet-152  200    18.5±0.1    4.37±0.03   20.3±0.2        5.39±0.07
ResNet-152  400    18.4±<0.01  4.35±0.04   20.9±<0.05      5.84±0.07

3.2 FINETUNING

Transfer learning by pretraining a model on a large related dataset and then finetuning on a smaller target dataset of interest has emerged as a powerful and widely used technique for producing high-quality models for a variety of different tasks.
We show here that SAM once again offers considerable benefits in this setting, even when finetuning extremely large, state-of-the-art, already high-performing models.

In particular, we apply SAM to finetuning EfficientNet-b7 (pretrained on ImageNet) and EfficientNet-L2 (pretrained on ImageNet plus unlabeled JFT; input resolution 475) (Tan & Le, 2019; Kornblith et al., 2018; Huang et al., 2018). We initialize these models to publicly available checkpoints⁶ trained with RandAugment (84.7% accuracy on ImageNet) and NoisyStudent (88.2% accuracy on ImageNet), respectively. We finetune these models on each of several target datasets by training each model starting from the aforementioned checkpoint; please see the appendix for details of the hyperparameters used. We report the mean and 95% confidence interval of top-1 test error over 5 independent runs for each dataset.

(⁶ https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet)

As seen in Table 3, SAM uniformly improves performance relative to finetuning without SAM. Furthermore, in many cases, SAM yields novel state-of-the-art performance, including 0.30% error on CIFAR-10, 3.92% error on CIFAR-100, and 11.39% error on ImageNet.

Table 3: Top-1 error rates (%) for finetuning EfficientNet-b7 (ImageNet pretraining only) and EfficientNet-L2 (pretraining on ImageNet plus additional data, such as JFT) on various downstream tasks. Previous state-of-the-art (SOTA) includes EfficientNet (EffNet) (Tan & Le, 2019), Gpipe (Huang et al., 2018), DAT (Ngiam et al., 2018), BiT-M/L (Kolesnikov et al., 2020), KDforAA (Wei et al., 2020), TBMSL-Net (Zhang et al., 2020), and ViT (Dosovitskiy et al., 2020).

Dataset           EffNet-b7 + SAM  EffNet-b7   Prev. SOTA (ImageNet only)  EffNet-L2 + SAM  EffNet-L2   Prev. SOTA
FGVC Aircraft     6.80±0.06        8.15±0.08   5.3 (TBMSL-Net)             4.82±0.08        5.80±0.1    5.3 (TBMSL-Net)
Flowers           0.63±0.02        1.16±0.05   0.7 (BiT-M)                 0.35±0.01        0.40±0.02   0.37 (EffNet)
Oxford IIIT Pets  3.97±0.04        4.24±0.09   4.1 (Gpipe)                 2.90±0.04        3.08±0.04   4.1 (Gpipe)
Stanford Cars     5.18±0.02        5.94±0.06   5.0 (TBMSL-Net)             4.04±0.03        4.93±0.04   3.8 (DAT)
CIFAR-10          0.88±0.02        0.95±0.03   1 (Gpipe)                   0.30±0.01        0.34±0.02   0.63 (BiT-L)
CIFAR-100         7.44±0.06        7.68±0.06   7.83 (BiT-M)                3.92±0.06        4.07±0.08   6.49 (BiT-L)
Birdsnap          13.64±0.15       14.30±0.18  15.7 (EffNet)               9.93±0.15        10.31±0.15  14.5 (DAT)
Food101           7.02±0.02        7.17±0.03   7.0 (Gpipe)                 3.82±0.01        3.97±0.03   4.7 (DAT)
ImageNet          15.14±0.03       15.3        14.2 (KDforAA)              11.39±0.02       11.8        11.45 (ViT)

3.3 ROBUSTNESS TO LABEL NOISE

Table 4: Test accuracy (%) on the clean test set for models trained on CIFAR-10 with noisy labels. The lower block is our implementation; the upper block gives scores from the literature, per Jiang et al. (2019).

Method                  20%   40%   60%   80%   (noise rate)
Sanchez et al. (2019)   94.0  92.8  90.3  74.1
Zhang & Sabuncu (2018)  89.7  87.6  82.7  67.9
Lee et al. (2019)       87.1  81.8  75.4  -
Chen et al. (2019)      89.7  -     -     52.3
Huang et al. (2019)     92.6  90.3  43.4  -
MentorNet (2017)        92.0  91.2  74.2  60.0
Mixup (2017)            94.0  91.5  86.8  76.9
MentorMix (2019)        95.6  94.2  91.3  81.0
SGD                     84.8  68.8  48.2  26.2
Mixup                   93.0  90.0  83.8  70.2
Bootstrap + Mixup       93.3  92.0  87.6  72.0
SAM                     95.1  93.4  90.5  77.9
Bootstrap + SAM         95.4  94.2  91.8  79.9
The fact that SAM seeks out model parameters that are robust to perturbations suggests SAM's potential to provide robustness to noise in the training set (which would perturb the training loss landscape). Thus, we assess here the degree of robustness that SAM provides to label noise.

In particular, we measure the effect of applying SAM in the classical noisy-label setting for CIFAR-10, in which a fraction of the training set's labels are randomly flipped; the test set remains unmodified (i.e., clean). To ensure valid comparison to prior work, which often utilizes architectures specialized to the noisy-label setting, we train a simple model of similar size (ResNet-32) for 200 epochs, following Jiang et al. (2019). We evaluate five variants of model training: standard SGD, SGD with Mixup (Zhang et al., 2017), SAM, and "bootstrapped" variants of SGD with Mixup and SAM (wherein the model is first trained as usual and then retrained from scratch on the labels predicted by the initially trained model). When applying SAM, we use ρ = 0.1 for all noise levels except 80%, for which we use ρ = 0.05 for more stable convergence. For the Mixup baselines, we tried all values of α ∈ {1, 8, 16, 32} and conservatively report the best score for each noise level.

As seen in Table 4, SAM provides a high degree of robustness to label noise, on par with that provided by state-of-the-art procedures that specifically target learning with noisy labels. Indeed, simply training a model with SAM outperforms all prior methods specifically targeting label noise robustness, with the exception of MentorMix (Jiang et al., 2019). However, simply bootstrapping SAM yields performance comparable to that of MentorMix (which is substantially more complex).

[Figure 3: (left) Evolution of the spectrum of the Hessian during training of a model with standard SGD (lefthand column) or SAM (righthand column), at epochs 1, 50, and 300; for SGD, λ_max = 62.9, 12.5, and 24.2 (λ_max/λ_5 = 2.5, 1.7, and 11.4), and for SAM, λ_max = 18.6, 8.9, and 1.0 (λ_max/λ_5 = 3.6, 1.9, and 2.6). (middle) Test error as a function of ρ for different values of m. (right) Predictive power of m-sharpness for the generalization gap, for different values of m (higher means the sharpness measure is more correlated with the actual generalization gap).]

4 SHARPNESS AND GENERALIZATION THROUGH THE LENS OF SAM

4.1 m-SHARPNESS

Though our derivation of SAM defines the SAM objective over the entire training set, when utilizing SAM in practice, we compute the SAM update per-batch (as described in Algorithm 1) or even by averaging SAM updates computed independently per-accelerator (where each accelerator receives a subset of size m of a batch, as described in Section 3). This latter setting is equivalent to modifying the SAM objective (equation 1) to sum over a set of independent maximizations, each performed on a sum of per-data-point losses on a disjoint subset of m data points, rather than performing the maximization over a global sum over the training set (which would be equivalent to setting m to the total training set size). We term the associated measure of sharpness of the loss landscape m-sharpness.

To better understand the effect of m on SAM, we train a small ResNet on CIFAR-10 using SAM with a range of values of m.
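As a rough illustration of this per-subset computation, the following JAX sketch splits a batch into disjoint subsets of size m, computes an independent SAM gradient (with its own ε̂) on each, and averages the results, mirroring the per-accelerator scheme described in Section 3. The least-squares `loss_fn` and all names are illustrative placeholders, not the paper's implementation.

```python
# A minimal JAX sketch of the m-sharpness gradient: each disjoint subset of
# size m gets its own SAM perturbation, and the resulting per-subset SAM
# gradients are averaged (as when each accelerator handles one sub-batch).
import jax
import jax.numpy as jnp

def loss_fn(w, x, y):
    return jnp.mean((x @ w - y) ** 2)   # stand-in loss, as in the earlier sketch

def sam_grad(w, x, y, rho):
    g = jax.grad(loss_fn)(w, x, y)
    eps = rho * g / (jnp.linalg.norm(g) + 1e-12)   # per-subset epsilon-hat
    return jax.grad(loss_fn)(w + eps, x, y)

def m_sharpness_grad(w, x, y, m, rho=0.05):
    n = x.shape[0]                                 # assumes n is divisible by m
    xs = x.reshape(n // m, m, x.shape[1])          # disjoint subsets of size m
    ys = y.reshape(n // m, m)
    grads = jax.vmap(lambda xi, yi: sam_grad(w, xi, yi, rho))(xs, ys)
    return grads.mean(axis=0)                      # average of per-subset grads

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (64, 4))
y = x @ jnp.array([1.0, -2.0, 0.5, 3.0])
w = jnp.zeros(4)
g_m = m_sharpness_grad(w, x, y, m=16)              # four subsets of size 16
print(g_m.shape)  # (4,)
```

Setting m to the full batch size recovers the per-batch update of Algorithm 1; smaller m corresponds to the sharded, unsynced computation used across multiple accelerators.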
As seen in Figure 3 (middle), smaller values of m tend to yield models having better generalization ability. This relationship fortuitously aligns with the need to parallelize across multiple accelerators in order to scale training for many of today's models.

Intriguingly, the m-sharpness measure described above furthermore exhibits better correlation with models' actual generalization gaps as m decreases, as demonstrated by Figure 3 (right)⁷. In particular, this implies that m-sharpness with m < n yields a better predictor of generalization than the full-training-set measure suggested by Theorem 1 in Section 2 above, suggesting an interesting new avenue of future work for understanding generalization.

(⁷ We follow the rigorous framework of Jiang et al. (2019), reporting the mutual information between the m-sharpness measure and generalization on the two publicly available tasks from the Predicting Generalization in Deep Learning NeurIPS 2020 competition: https://competitions.codalab.org/competitions/25301)

4.2 HESSIAN SPECTRA

Motivated by the connection between geometry of the loss landscape and generalization, we constructed SAM to seek out minima of the training loss landscape having both low loss value and low curvature (i.e., low sharpness). To further confirm that SAM does in fact find minima having low curvature, we compute the spectrum of the Hessian for a WideResNet40-10 trained on CIFAR-10 for 300 steps both with and without SAM (without batch norm, which tends to obscure interpretation of the Hessian), at different epochs during training. Due to the parameter space's dimensionality, we approximate the Hessian spectrum using the Lanczos algorithm of Ghorbani et al. (2019).

Figure 3 (left) reports the resulting Hessian spectra. As expected, the models trained with SAM converge to minima having lower curvature, as seen in the overall distribution of eigenvalues, the maximum eigenvalue (λ_max) at convergence (approximately 24 without SAM, 1.0 with SAM), and the bulk of the spectrum (the ratio λ_max/λ_5, commonly used as a proxy for sharpness (Jastrzebski et al., 2020); up to 11.4 without SAM, and 2.6 with SAM).

5 RELATED WORK

The idea of searching for "flat" minima can be traced back to Hochreiter & Schmidhuber (1995), and its connection to generalization has seen significant study (Shirish Keskar et al., 2016; Dziugaite & Roy, 2017; Neyshabur et al., 2017; Dinh et al., 2017). In a recent large-scale empirical study, Jiang et al. (2019) studied 40 complexity measures and showed that a sharpness-based measure has the highest correlation with generalization, which motivates penalizing sharpness. Hochreiter & Schmidhuber (1997) was perhaps the first paper on penalizing sharpness, regularizing a notion related to Minimum Description Length (MDL). Other ideas which also penalize sharp minima include operating on a diffused loss landscape (Mobahi, 2016) and regularizing local entropy (Chaudhari et al., 2016). Another direction is to not penalize the sharpness explicitly, but rather to average weights during training; Izmailov et al. (2018) showed that doing so can yield flatter minima that can also generalize better. However, the measures of sharpness proposed previously are difficult to compute and differentiate through. In contrast, SAM is highly scalable as it only needs two gradient computations per iteration. The concurrent work of Sun et al. (2020) focuses on resilience to random and adversarial corruption to expose a model's vulnerabilities; this work is perhaps closest to ours.
Our work has a different basis: we develop SAM motivated by a principled starting point in generalization, clearly demonstrate SAM's efficacy via rigorous large-scale empirical evaluation, and surface important practical and theoretical facets of the procedure (e.g., m-sharpness). The notion of all-layer margin introduced by Wei & Ma (2020) is closely related to this work; one is an adversarial perturbation over the activations of a network and the other over its weights, and there is some coupling between these two quantities.

6 DISCUSSION AND FUTURE WORK

In this work, we have introduced SAM, a novel algorithm that improves generalization by simultaneously minimizing loss value and loss sharpness; we have demonstrated SAM's efficacy through a rigorous large-scale empirical evaluation. We have surfaced a number of interesting avenues for future work. On the theoretical side, the notion of per-data-point sharpness yielded by m-sharpness (in contrast to global sharpness computed over the entire training set, as has typically been studied in the past) suggests an interesting new lens through which to study generalization. Methodologically, our results suggest that SAM could potentially be used in place of Mixup in robust or semi-supervised methods that currently rely on Mixup (giving, for instance, MentorSAM). We leave to future work a more in-depth investigation of these possibilities.

7 ACKNOWLEDGMENTS

We thank our colleagues at Google — Atish Agarwala, Xavier Garcia, Dustin Tran, Yiding Jiang, Basil Mustafa, and Samy Bengio — for their feedback and insightful discussions. We also thank the JAX and FLAX teams for going above and beyond to support our implementation. We are grateful to Sven Gowal for his help in replicating EfficientNet using JAX, and to Justin Gilmer for his implementation of the Lanczos algorithm used to generate the Hessian spectra. We thank Niru Maheswaranathan for his matplotlib mastery. We also thank David Samuel for providing a PyTorch implementation of SAM.
SMQLOQzD-T
Interesting work with good results; the concern is about selecting the right $\rho$
7: Good paper, accept
Motivated by the connection between the flatness of minima and generalization ability, the authors propose Sharpness-Aware Minimization (SAM), which explicitly minimizes both loss value and loss sharpness when training deep neural networks. They find SAM improves generalization for a range of image classification tasks and provides robustness to label noise as well. They also introduce a new notion of sharpness named m-sharpness. Strengths: * The paper is overall well written with clear motivation. * The experiments are comprehensive and the results show clear improvement over non-SAM approaches or previous SOTA. Weaknesses: * There is no clear definition of the "sharpness" that the algorithm tries to optimize. Given the many existing definitions of sharpness (e.g., [1]), it is not clear how the proposed measurement connects with or differs from previous work. * My major concern is about the usage of the hyperparameter $\rho$: a) The introduction of the dataset- and model-dependent hyperparameter $\rho$ and the need for a grid search before training makes the algorithm trickier to get working and sensitive to other hyperparameters and to the scale of $w$. For example, when weight decay is applied, the norm of $w$ usually shrinks during training, and the same radius $\rho$ could be too large for the smaller-scaled $w$ at the end of training compared with the $w$ at the beginning. This discrepancy would become larger as the number of training epochs gets larger. b) The details of how to obtain the optimal $\rho$ are not quite clear, e.g., the smaller $\rho$ in Sec. 3.3. An ablation study on the sensitivity of $\rho$ across different datasets, models, and noise levels would be useful. c) The wall-clock training time of the SAM method is not discussed. A comprehensive account of the cost (including the hyperparameter search for $\rho$) would be helpful for evaluating the complexity of the method. * The message conveyed in Section 4.1 is not quite clear. Does each accelerator perform independent $\epsilon$ estimation? Is the $\epsilon$ obtained on each accelerator synchronized after estimation? Does it indicate that SAM training is better done model-parallel with small batches rather than data-parallel with large batches? Suggestions: 1) To avoid the scaling issue of $\rho$, one suggestion would be to optimize the sharpness metric on the normalized loss landscape as described in [2]. In Figure 1, the authors adopt [2] for comparing the landscape of minima obtained with and without SAM, so it might be natural to optimize this normalized sharpness directly, in which case $\rho$ can be fixed and a random direction is sufficient? 2) The benefit of flatness for robustness to label noise is not well discussed. What is the performance when the label noise is over 90%, or even 100%? Eventually no model should generalize given 100% corruption, but it would be interesting to know where the limit of SAM is. Minor: * Some figures are not well described; e.g., the meaning of Figure 1 (left) is not quite clear. Figure 2 is not intuitive, as the loss contour values are not shown, and it is not straightforward to see why w_{t+1}^{SAM} is a better or "flatter" move. The notation $w_{adv}$ is also not defined anywhere. [1] Keskar et al., On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, ICLR 2017. [2] Li et al., Visualizing the Loss Landscape of Neural Nets, NeurIPS 2018. ====== After Rebuttal Thanks for the detailed reply and additional experiments.
I increased my score accordingly and I hope the authors can further address the following issues: - While the results in C.3 show that the default $\rho$ improves over SGD on most experiments (SVHN and Fashion-MNIST could also be added), I can still see its sensitivity to datasets, architectures, noise levels, and the number of accelerators, as shown in Tables 6, 7, and 8 and Fig. 3. For example, 0.05 is not close to optimal with label noise of 20%-60% in Table 8. It is unclear whether $\rho$ is robust to other hyperparameter changes (e.g., weight decay, which controls weight scales). So an ablation study on the sensitivity of $\rho$, with further explanation, would be necessary and very valuable for practitioners. - It would also be helpful if the authors could provide more details about how to obtain the flat minima of Fig. 1 (right) when optimizing deep non-residual networks, such as $\rho$ and other hyperparameters. - Minor: Table 8 should report validation errors rather than accuracy.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Sharpness-aware Minimization for Efficiently Improving Generalization ### Paper Abstract In today's heavily overparameterized models, the value of the training loss provides few guarantees on model generalization ability. Indeed, optimizing only the training loss value, as is commonly done, can easily lead to suboptimal model quality. Motivated by the connection between geometry of the loss landscape and generalization---including a generalization bound that we prove here---we introduce a novel, effective procedure for instead simultaneously minimizing loss value and loss sharpness. In particular, our procedure, Sharpness-Aware Minimization (SAM), seeks parameters that lie in neighborhoods having uniformly low loss; this formulation results in a min-max optimization problem on which gradient descent can be performed efficiently. We present empirical results showing that SAM improves model generalization across a variety of benchmark datasets (e.g., CIFAR-{10, 100}, ImageNet, finetuning tasks) and models, yielding novel state-of-the-art performance for several. Additionally, we find that SAM natively provides robustness to label noise on par with that provided by state-of-the-art procedures that specifically target learning with noisy labels. ### Paper Keywords ["Sharpness Minimization", "Generalization", "Regularization", "Training Method", "Deep Learning"] ### Paper Content ABSTRACTIn today’s heavily overparameterized models, the value of the training loss pro-vides few guarantees on model generalization ability. Indeed, optimizing onlythe training loss value, as is commonly done, can easily lead to suboptimalmodel quality. Motivated by prior work connecting the geometry of the losslandscape and generalization, we introduce a novel, effective procedure for in-stead simultaneously minimizing loss value and loss sharpness. In particular,our procedure, Sharpness-Aware Minimization (SAM), seeks parameters that liein neighborhoods having uniformly low loss; this formulation results in a min-max optimization problem on which gradient descent can be performed effi-ciently. We present empirical results showing that SAM improves model gen-eralization across a variety of benchmark datasets (e.g., CIFAR- f10, 100g, Ima-geNet, finetuning tasks) and models, yielding novel state-of-the-art performancefor several. Additionally, we find that SAM natively provides robustness to la-bel noise on par with that provided by state-of-the-art procedures that specifi-cally target learning with noisy labels. We open source our code at https://github.com/google-research/sam .1 I NTRODUCTIONModern machine learning’s success in achieving ever better performance on a wide range of taskshas relied in significant part on ever heavier overparameterization, in conjunction with developingever more effective training algorithms that are able to find parameters that generalize well. Indeed,many modern neural networks can easily memorize the training data and have the capacity to readilyoverfit (Zhang et al., 2016). Such heavy overparameterization is currently required to achieve state-of-the-art results in a variety of domains (Tan & Le, 2019; Kolesnikov et al., 2020; Huang et al.,2018). 
In turn, it is essential that such models be trained using procedures that ensure that theparameters actually selected do in fact generalize beyond the training set.Unfortunately, simply minimizing commonly used loss functions (e.g., cross-entropy) on the train-ing set is typically not sufficient to achieve satisfactory generalization. The training loss landscapesof today’s models are commonly complex and non-convex, with a multiplicity of local and globalminima, and with different global minima yielding models with different generalization abilities(Shirish Keskar et al., 2016). As a result, the choice of optimizer (and associated optimizer settings)from among the many available (e.g., stochastic gradient descent (Nesterov, 1983), Adam (Kingma& Ba, 2014), RMSProp (Hinton et al.), and others (Duchi et al., 2011; Dozat, 2016; Martens &Grosse, 2015)) has become an important design choice, though understanding of its relationshipto model generalization remains nascent (Shirish Keskar et al., 2016; Wilson et al., 2017; ShirishKeskar & Socher, 2017; Agarwal et al., 2020; Jacot et al., 2018). Relatedly, a panoply of methodsfor modifying the training process have been proposed, including dropout (Srivastava et al., 2014),Work done as part of the Google AI Residency program.1Published as a conference paper at ICLR 20210 20 40Error reduction (%)ImagenetCifar10Cifar100FinetuningSVHN F-MNISTNoisy CifarFigure 1: (left) Error rate reduction obtained by switching to SAM. Each point is a different dataset/ model / data augmentation. (middle) A sharp minimum to which a ResNet trained with SGDconverged. (right) A wide minimum to which the same ResNet trained with SAM converged.batch normalization (Ioffe & Szegedy, 2015), stochastic depth (Huang et al., 2016), data augmenta-tion (Cubuk et al., 2018), and mixed sample augmentations (Zhang et al., 2017; Harris et al., 2020).The connection between the geometry of the loss landscape—in particular, the flatness of minima—and generalization has been studied extensively from both theoretical and empirical perspectives(Shirish Keskar et al., 2016; Dziugaite & Roy, 2017; Jiang et al., 2019). While this connectionhas held the promise of enabling new approaches to model training that yield better generalization,practical efficient algorithms that specifically seek out flatter minima and furthermore effectivelyimprove generalization on a range of state-of-the-art models have thus far been elusive (e.g., see(Chaudhari et al., 2016; Izmailov et al., 2018); we include a more detailed discussion of prior workin Section 5).We present here a new efficient, scalable, and effective approach to improving model generalizationability that directly leverages the geometry of the loss landscape and its connection to generaliza-tion, and is powerfully complementary to existing techniques. In particular, we make the followingcontributions:• We introduce Sharpness-Aware Minimization (SAM), a novel procedure that improvesmodel generalization by simultaneously minimizing loss value and loss sharpness. 
SAMfunctions by seeking parameters that lie in neighborhoods having uniformly low loss value(rather than parameters that only themselves have low loss value, as illustrated in the middleand righthand images of Figure 1), and can be implemented efficiently and easily.• We show via a rigorous empirical study that using SAM improves model generalizationability across a range of widely studied computer vision tasks (e.g., CIFAR- f10, 100g,ImageNet, finetuning tasks) and models, as summarized in the lefthand plot of Figure 1. Forexample, applying SAM yields novel state-of-the-art performance for a number of already-intensely-studied tasks, such as ImageNet, CIFAR- f10, 100g, SVHN, Fashion-MNIST,and the standard set of image classification finetuning tasks (e.g., Flowers, Stanford Cars,Oxford Pets, etc).• We show that SAM furthermore provides robustness to label noise on par with that providedby state-of-the-art procedures that specifically target learning with noisy labels.• Through the lens provided by SAM, we further elucidate the connection between losssharpness and generalization by surfacing a promising new notion of sharpness, whichwe term m-sharpness .Section 2 below derives the SAM procedure and presents the resulting algorithm in full detail. Sec-tion 3 evaluates SAM empirically, and Section 4 further analyzes the connection between loss sharp-ness and generalization through the lens of SAM. Finally, we conclude with an overview of relatedwork and a discussion of conclusions and future work in Sections 5 and 6, respectively.2Published as a conference paper at ICLR 20212 S HARPNESS -AWARE MINIMIZATION (SAM)Throughout the paper, we denote scalars as a, vectors asa, matrices asA, sets asA, and equality bydefinition as ,. Given a training dataset S,[ni=1f(xi;yi)gdrawn i.i.d. from distribution D, weseek to learn a model that generalizes well. In particular, consider a family of models parameterizedbyw2W Rd; given a per-data-point loss function l:WXY! R+, we define the trainingset lossLS(w),1nPni=1l(w;xi;yi)and the population loss LD(w),E(x;y)D[l(w;x;y)].Having observed only S, the goal of model training is to select model parameters whaving lowpopulation loss LD(w).UtilizingLS(w)as an estimate of LD(w)motivates the standard approach of selecting parameterswby solving minwLS(w)(possibly in conjunction with a regularizer on w) using an optimizationprocedure such as SGD or Adam. Unfortunately, however, for modern overparameterized mod-els such as deep neural networks, typical optimization approaches can easily result in suboptimalperformance at test time. In particular, for modern models, LS(w)is typically non-convex in w,with multiple local and even global minima that may yield similar values of LS(w)while havingsignificantly different generalization performance (i.e., significantly different values of LD(w)).Motivated by the connection between sharpness of the loss landscape and generalization, we proposea different approach: rather than seeking out parameter values wthat simply have low training lossvalueLS(w), we seek out parameter values whose entire neighborhoods have uniformly low trainingloss value (equivalently, neighborhoods having both low loss and low curvature). The followingtheorem illustrates the motivation for this approach by bounding generalization ability in terms ofneighborhood-wise training loss (full theorem statement and proof in Appendix A):Theorem (stated informally) 1. 
For any>0, with high probability over training set Sgeneratedfrom distribution D,LD(w)maxkk2LS(w+) +h(kwk22=2);whereh:R+!R+is a strictly increasing function (under some technical conditions on LD(w)).To make explicit our sharpness term, we can rewrite the right hand side of the inequality above as[ maxkk2LS(w+)LS(w)] +LS(w) +h(kwk22=2):The term in square brackets captures the sharpness of LSatwby measuring how quickly the trainingloss can be increased by moving from wto a nearby parameter value; this sharpness term is thensummed with the training loss value itself and a regularizer on the magnitude of w. Given that thespecific function his heavily influenced by the details of the proof, we substitute the second termwithjjwjj22for a hyperparameter , yielding a standard L2 regularization term. Thus, inspired bythe terms from the bound, we propose to select parameter values by solving the following Sharpness-Aware Minimization (SAM) problem:minwLSAMS(w) +jjwjj22 whereLSAMS(w),maxjjjjpLS(w+); (1)where0is a hyperparameter and p2[1;1](we have generalized slightly from an L2-normto ap-norm in the maximization over , though we show empirically in appendix C.5 that p= 2istypically optimal). Figure 1 shows1the loss landscape for a model that converged to minima foundby minimizing either LS(w)orLSAMS(w), illustrating that the sharpness-aware loss prevents themodel from converging to a sharp minimum.In order to minimize LSAMS(w), we derive an efficient and effective approximation torwLSAMS(w)by differentiating through the inner maximization, which in turn enables us to applystochastic gradient descent directly to the SAM objective. Proceeding down this path, we first ap-proximate the inner maximization problem via a first-order Taylor expansion of LS(w+)w.r.t.around 0, obtaining(w),arg maxkkpLS(w+)arg maxkkpLS(w) +TrwLS(w) = arg maxkkpTrwLS(w):1Figure 1 was generated following Li et al. (2017) with the provided ResNet56 (no residual connections)checkpoint, and training the same model with SAM.3Published as a conference paper at ICLR 2021In turn, the value ^(w)that solves this approximation is given by the solution to a classical dualnorm problem (jjq1denotes elementwise absolute value and power)2:^(w) =sign(rwLS(w))jrwLS(w)jq1=krwLS(w)kqq1=p(2)where 1=p+ 1=q= 1. Substituting back into equation (1) and differentiating, we then haverwLSAMS(w)rwLS(w+^(w)) =d(w+^(w))dwrwLS(w)jw+^(w)=rwLS(w)jw+^(w)+d^(w)dwrwLS(w)jw+^(w):This approximation to rwLSAMS(w)can be straightforwardly computed via automatic differentia-tion, as implemented in common libraries such as JAX, TensorFlow, and PyTorch. Though this com-putation implicitly depends on the Hessian of LS(w)because ^(w)is itself a function of rwLS(w),the Hessian enters only via Hessian-vector products, which can be computed tractably without ma-terializing the Hessian matrix. Nonetheless, to further accelerate the computation, we drop thesecond-order terms. obtaining our final gradient approximation:rwLSAMS(w)rwLS(w)jw+^(w): (3)As shown by the results in Section 3, this approximation (without the second-order terms) yields aneffective algorithm. 
In Appendix C.4, we additionally investigate the effect of instead including the second-order terms; in that initial experiment, including them surprisingly degrades performance, and further investigating these terms' effect should be a priority in future work.
We obtain the final SAM algorithm by applying a standard numerical optimizer such as stochastic gradient descent (SGD) to the SAM objective $L_S^{SAM}(w)$, using equation 3 to compute the requisite objective function gradients. Algorithm 1 gives pseudo-code for the full SAM algorithm, using SGD as the base optimizer, and Figure 2 schematically illustrates a single SAM parameter update.
Algorithm 1: SAM algorithm
  Input: Training set $S \triangleq \cup_{i=1}^{n}\{(x_i, y_i)\}$, loss function $l : \mathcal{W} \times \mathcal{X} \times \mathcal{Y} \to \mathbb{R}_+$, batch size $b$, step size $\eta > 0$, neighborhood size $\rho > 0$.
  Output: Model trained with SAM
  Initialize weights $w_0$, $t = 0$;
  while not converged do
    Sample batch $B = \{(x_1, y_1), \ldots, (x_b, y_b)\}$;
    Compute gradient $\nabla_w L_B(w)$ of the batch's training loss;
    Compute $\hat{\epsilon}(w)$ per equation 2;
    Compute the gradient approximation for the SAM objective (equation 3): $g = \nabla_w L_B(w)|_{w + \hat{\epsilon}(w)}$;
    Update weights: $w_{t+1} = w_t - \eta g$; $t = t + 1$;
  end
  return $w_t$
Figure 2: Schematic of the SAM parameter update.
²In the case of interest $p = 2$, this boils down to simply rescaling the gradient such that its norm is $\rho$.
3 EMPIRICAL EVALUATION
In order to assess SAM's efficacy, we apply it to a range of different tasks, including image classification from scratch (including on CIFAR-10, CIFAR-100, and ImageNet), finetuning pretrained models, and learning with noisy labels. In all cases, we measure the benefit of using SAM by simply replacing the optimization procedure used to train existing models with SAM, and computing the resulting effect on model generalization. As seen below, SAM materially improves generalization performance in the vast majority of these cases.
3.1 IMAGE CLASSIFICATION FROM SCRATCH
We first evaluate SAM's impact on generalization for today's state-of-the-art models on CIFAR-10 and CIFAR-100 (without pretraining): WideResNets with Shake-Shake regularization (Zagoruyko & Komodakis, 2016; Gastaldi, 2017) and PyramidNet with ShakeDrop regularization (Han et al., 2016; Yamada et al., 2018). Note that some of these models have already been heavily tuned in prior work and include carefully chosen regularization schemes to prevent overfitting; therefore, significantly improving their generalization is quite non-trivial. We have ensured that our implementations' generalization performance in the absence of SAM matches or exceeds that reported in prior work (Cubuk et al., 2018; Lim et al., 2019).
All results use basic data augmentations (horizontal flip, padding by four pixels, and random crop). We also evaluate in the setting of more advanced data augmentation methods such as cutout regularization (Devries & Taylor, 2017) and AutoAugment (Cubuk et al., 2018), which are utilized by prior work to achieve state-of-the-art results.
SAM has a single hyperparameter $\rho$ (the neighborhood size), which we tune via a grid search over $\{0.01, 0.02, 0.05, 0.1, 0.2, 0.5\}$ using 10% of the training set as a validation set³. Please see appendix C.1 for the values of all hyperparameters and additional training details.
As each SAM weight update requires two backpropagation operations (one to compute $\hat{\epsilon}(w)$ and another to compute the final gradient), we allow each non-SAM training run to execute twice as many epochs as each SAM training run, and we report the best score achieved by each non-SAM training run across either the standard epoch count or the doubled epoch count⁴. We run five independent replicas of each experimental condition for which we report results (each with independent weight initialization and data shuffling), reporting the resulting mean error (or accuracy) on the test set, and the associated 95% confidence interval. Our implementations utilize JAX (Bradbury et al., 2018), and we train all models on a single host having 8 Nvidia V100 GPUs⁵. To compute the SAM update when parallelizing across multiple accelerators, we divide each data batch evenly among the accelerators, independently compute the SAM gradient on each accelerator, and average the resulting sub-batch SAM gradients to obtain the final SAM update.
As seen in Table 1, SAM improves generalization across all settings evaluated for CIFAR-10 and CIFAR-100. For example, SAM enables a simple WideResNet to attain 1.6% test error, versus 2.2% error without SAM. Such gains have previously been attainable only by using more complex model architectures (e.g., PyramidNet) and regularization schemes (e.g., Shake-Shake, ShakeDrop); SAM provides an easily-implemented, model-independent alternative. Furthermore, SAM delivers improvements even when applied atop complex architectures that already use sophisticated regularization: for instance, applying SAM to a PyramidNet with ShakeDrop regularization yields 10.3% error on CIFAR-100, which is, to our knowledge, a new state-of-the-art on this dataset without the use of additional data.
Beyond CIFAR-{10, 100}, we have also evaluated SAM on the SVHN (Netzer et al., 2011) and Fashion-MNIST datasets (Xiao et al., 2017). Once again, SAM enables a simple WideResNet to achieve accuracy at or above the state-of-the-art for these datasets: 0.99% error for SVHN, and 3.59% for Fashion-MNIST. Details are available in appendix B.1.
To assess SAM's performance at larger scale, we apply it to ResNets (He et al., 2015) of different depths (50, 101, 152) trained on ImageNet (Deng et al., 2009). In this setting, following prior work (He et al., 2015; Szegedy et al., 2015), we resize and crop images to 224-pixel resolution, normalize them, and use batch size 4096, initial learning rate 1.0, cosine learning rate schedule, SGD optimizer with momentum 0.9, label smoothing of 0.1, and weight decay 0.0001. When applying SAM, we use $\rho = 0.05$ (determined via a grid search on ResNet-50 trained for 100 epochs).
We train all models on ImageNet for up to 400 epochs using a Google Cloud TPUv3 and report top-1 and top-5 test error rates for each experimental condition (mean and 95% confidence interval across 5 independent runs).
³We found $\rho = 0.05$ to be a solid default value, and we report in appendix C.3 the scores for all our experiments, obtained with $\rho = 0.05$ without further tuning.
⁴Training for longer generally did not improve accuracy significantly, except for the models previously trained for only 200 epochs and for the largest, most regularized model (PyramidNet + ShakeDrop).
⁵Because SAM's performance is amplified by not syncing the perturbations, data parallelism is highly recommended to leverage SAM's full potential (see Section 4 for more details).
Table 1: Results for SAM on state-of-the-art models on CIFAR-{10, 100} (WRN = WideResNet; AA = AutoAugment; SGD is the standard non-SAM procedure used to train these models). Test error (%).
Model | Augmentation | CIFAR-10 SAM | CIFAR-10 SGD | CIFAR-100 SAM | CIFAR-100 SGD
WRN-28-10 (200 epochs) | Basic | 2.7±0.1 | 3.5±0.1 | 16.5±0.2 | 18.8±0.2
WRN-28-10 (200 epochs) | Cutout | 2.3±0.1 | 2.6±0.1 | 14.9±0.2 | 16.9±0.1
WRN-28-10 (200 epochs) | AA | 2.1±<0.1 | 2.3±0.1 | 13.6±0.2 | 15.8±0.2
WRN-28-10 (1800 epochs) | Basic | 2.4±0.1 | 3.5±0.1 | 16.3±0.2 | 19.1±0.1
WRN-28-10 (1800 epochs) | Cutout | 2.1±0.1 | 2.7±0.1 | 14.0±0.1 | 17.4±0.1
WRN-28-10 (1800 epochs) | AA | 1.6±0.1 | 2.2±<0.1 | 12.8±0.2 | 16.1±0.2
Shake-Shake (26 2x96d) | Basic | 2.3±<0.1 | 2.7±0.1 | 15.1±0.1 | 17.0±0.1
Shake-Shake (26 2x96d) | Cutout | 2.0±<0.1 | 2.3±0.1 | 14.2±0.2 | 15.7±0.2
Shake-Shake (26 2x96d) | AA | 1.6±<0.1 | 1.9±0.1 | 12.8±0.1 | 14.1±0.2
PyramidNet | Basic | 2.7±0.1 | 4.0±0.1 | 14.6±0.4 | 19.7±0.3
PyramidNet | Cutout | 1.9±0.1 | 2.5±0.1 | 12.6±0.2 | 16.4±0.1
PyramidNet | AA | 1.6±0.1 | 1.9±0.1 | 11.6±0.1 | 14.6±0.1
PyramidNet+ShakeDrop | Basic | 2.1±0.1 | 2.5±0.1 | 13.3±0.2 | 14.5±0.1
PyramidNet+ShakeDrop | Cutout | 1.6±<0.1 | 1.9±0.1 | 11.3±0.1 | 11.8±0.2
PyramidNet+ShakeDrop | AA | 1.4±<0.1 | 1.6±<0.1 | 10.3±0.1 | 10.6±0.1
As seen in Table 2, SAM again consistently improves performance, for example improving the ImageNet top-1 error rate of ResNet-152 from 20.3% to 18.4%. Furthermore, note that SAM enables increasing the number of training epochs while continuing to improve accuracy without overfitting. In contrast, the standard training procedure (without SAM) generally significantly overfits as training extends from 200 to 400 epochs.
Table 2: Test error rates (%) for ResNets trained on ImageNet, with and without SAM.
Model | Epoch | SAM Top-1 | SAM Top-5 | Standard Top-1 | Standard Top-5
ResNet-50 | 100 | 22.5±0.1 | 6.28±0.08 | 22.9±0.1 | 6.62±0.11
ResNet-50 | 200 | 21.4±0.1 | 5.82±0.03 | 22.3±0.1 | 6.37±0.04
ResNet-50 | 400 | 20.9±0.1 | 5.51±0.03 | 22.3±0.1 | 6.40±0.06
ResNet-101 | 100 | 20.2±0.1 | 5.12±0.03 | 21.2±0.1 | 5.66±0.05
ResNet-101 | 200 | 19.4±0.1 | 4.76±0.03 | 20.9±0.1 | 5.66±0.04
ResNet-101 | 400 | 19.0±<0.01 | 4.65±0.05 | 22.3±0.1 | 6.41±0.06
ResNet-152 | 100 | 19.2±<0.01 | 4.69±0.04 | 20.4±<0.0 | 5.39±0.06
ResNet-152 | 200 | 18.5±0.1 | 4.37±0.03 | 20.3±0.2 | 5.39±0.07
ResNet-152 | 400 | 18.4±<0.01 | 4.35±0.04 | 20.9±<0.0 | 5.84±0.07
3.2 FINETUNING
Transfer learning by pretraining a model on a large related dataset and then finetuning on a smaller target dataset of interest has emerged as a powerful and widely used technique for producing high-quality models for a variety of different tasks. We show here that SAM once again offers considerable benefits in this setting, even when finetuning extremely large, state-of-the-art, already high-performing models.
In particular, we apply SAM to finetuning EfficientNet-b7 (pretrained on ImageNet) and EfficientNet-L2 (pretrained on ImageNet plus unlabeled JFT; input resolution 475) (Tan & Le, 2019; Kornblith et al., 2018; Huang et al., 2018).
We initialize these models to publicly available checkpoints⁶ trained with RandAugment (84.7% accuracy on ImageNet) and NoisyStudent (88.2% accuracy on ImageNet), respectively. We finetune these models on each of several target datasets by training each model starting from the aforementioned checkpoint; please see the appendix for details of the hyperparameters used. We report the mean and 95% confidence interval of top-1 test error over 5 independent runs for each dataset.
⁶https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet
As seen in Table 3, SAM uniformly improves performance relative to finetuning without SAM. Furthermore, in many cases, SAM yields novel state-of-the-art performance, including 0.30% error on CIFAR-10, 3.92% error on CIFAR-100, and 11.39% error on ImageNet.
Table 3: Top-1 error rates (%) for finetuning EfficientNet-b7 (left; ImageNet pretraining only) and EfficientNet-L2 (right; pretraining on ImageNet plus additional data, such as JFT) on various downstream tasks. Previous state-of-the-art (SOTA) includes EfficientNet (EffNet) (Tan & Le, 2019), Gpipe (Huang et al., 2018), DAT (Ngiam et al., 2018), BiT-M/L (Kolesnikov et al., 2020), KDforAA (Wei et al., 2020), TBMSL-Net (Zhang et al., 2020), and ViT (Dosovitskiy et al., 2020).
Dataset | EffNet-b7 + SAM | EffNet-b7 | Prev. SOTA (ImageNet only) | EffNet-L2 + SAM | EffNet-L2 | Prev. SOTA
FGVC Aircraft | 6.80±0.06 | 8.15±0.08 | 5.3 (TBMSL-Net) | 4.82±0.08 | 5.80±0.1 | 5.3 (TBMSL-Net)
Flowers | 0.63±0.02 | 1.16±0.05 | 0.7 (BiT-M) | 0.35±0.01 | 0.40±0.02 | 0.37 (EffNet)
Oxford IIIT Pets | 3.97±0.04 | 4.24±0.09 | 4.1 (Gpipe) | 2.90±0.04 | 3.08±0.04 | 4.1 (Gpipe)
Stanford Cars | 5.18±0.02 | 5.94±0.06 | 5.0 (TBMSL-Net) | 4.04±0.03 | 4.93±0.04 | 3.8 (DAT)
CIFAR-10 | 0.88±0.02 | 0.95±0.03 | 1 (Gpipe) | 0.30±0.01 | 0.34±0.02 | 0.63 (BiT-L)
CIFAR-100 | 7.44±0.06 | 7.68±0.06 | 7.83 (BiT-M) | 3.92±0.06 | 4.07±0.08 | 6.49 (BiT-L)
Birdsnap | 13.64±0.15 | 14.30±0.18 | 15.7 (EffNet) | 9.93±0.15 | 10.31±0.15 | 14.5 (DAT)
Food101 | 7.02±0.02 | 7.17±0.03 | 7.0 (Gpipe) | 3.82±0.01 | 3.97±0.03 | 4.7 (DAT)
ImageNet | 15.14±0.03 | 15.3 | 14.2 (KDforAA) | 11.39±0.02 | 11.8 | 11.45 (ViT)
3.3 ROBUSTNESS TO LABEL NOISE
Table 4: Test accuracy (%) on the clean test set for models trained on CIFAR-10 with noisy labels. The lower block is our implementation; the upper block gives scores from the literature, per Jiang et al. (2019).
Method | 20% noise | 40% | 60% | 80%
Sanchez et al. (2019) | 94.0 | 92.8 | 90.3 | 74.1
Zhang & Sabuncu (2018) | 89.7 | 87.6 | 82.7 | 67.9
Lee et al. (2019) | 87.1 | 81.8 | 75.4 | -
Chen et al. (2019) | 89.7 | - | - | 52.3
Huang et al. (2019) | 92.6 | 90.3 | 43.4 | -
MentorNet (2017) | 92.0 | 91.2 | 74.2 | 60.0
Mixup (2017) | 94.0 | 91.5 | 86.8 | 76.9
MentorMix (2019) | 95.6 | 94.2 | 91.3 | 81.0
SGD | 84.8 | 68.8 | 48.2 | 26.2
Mixup | 93.0 | 90.0 | 83.8 | 70.2
Bootstrap + Mixup | 93.3 | 92.0 | 87.6 | 72.0
SAM | 95.1 | 93.4 | 90.5 | 77.9
Bootstrap + SAM | 95.4 | 94.2 | 91.8 | 79.9
The fact that SAM seeks out model parameters that are robust to perturbations suggests SAM's potential to provide robustness to noise in the training set (which would perturb the training loss landscape). Thus, we assess here the degree of robustness that SAM provides to label noise.
In particular, we measure the effect of applying SAM in the classical noisy-label setting for CIFAR-10, in which a fraction of the training set's labels are randomly flipped; the test set remains unmodified (i.e., clean). To ensure valid comparison to prior work, which often utilizes architectures specialized to the noisy-label setting, we train a simple model of similar size (ResNet-32) for 200 epochs, following Jiang et al. (2019).
We evaluate five variants of model training: standard SGD, SGD with Mixup (Zhang et al., 2017), SAM, and "bootstrapped" variants of SGD with Mixup and SAM (wherein the model is first trained as usual and then retrained from scratch on the labels predicted by the initially trained model). When applying SAM, we use $\rho = 0.1$ for all noise levels except 80%, for which we use $\rho = 0.05$ for more stable convergence. For the Mixup baselines, we tried all values of $\alpha \in \{1, 8, 16, 32\}$ and conservatively report the best score for each noise level.
As seen in Table 4, SAM provides a high degree of robustness to label noise, on par with that provided by state-of-the-art procedures that specifically target learning with noisy labels. Indeed, simply training a model with SAM outperforms all prior methods specifically targeting label noise robustness, with the exception of MentorMix (Jiang et al., 2019). However, simply bootstrapping SAM yields performance comparable to that of MentorMix (which is substantially more complex).
Figure 3: (left) Evolution of the spectrum of the Hessian during training of a model with standard SGD (lefthand column) or SAM (righthand column). (middle) Test error as a function of $\rho$ for different values of $m$. (right) Predictive power of m-sharpness for the generalization gap, for different values of $m$ (higher means the sharpness measure is more correlated with actual generalization gap).
4 SHARPNESS AND GENERALIZATION THROUGH THE LENS OF SAM
4.1 m-SHARPNESS
Though our derivation of SAM defines the SAM objective over the entire training set, when utilizing SAM in practice, we compute the SAM update per-batch (as described in Algorithm 1) or even by averaging SAM updates computed independently per-accelerator (where each accelerator receives a subset of size $m$ of a batch, as described in Section 3). This latter setting is equivalent to modifying the SAM objective (equation 1) to sum over a set of independent maximizations, each performed on a sum of per-data-point losses on a disjoint subset of $m$ data points, rather than performing the maximization over a global sum over the training set (which would be equivalent to setting $m$ to the total training set size). We term the associated measure of sharpness of the loss landscape m-sharpness.
To better understand the effect of $m$ on SAM, we train a small ResNet on CIFAR-10 using SAM with a range of values of $m$. As seen in Figure 3 (middle), smaller values of $m$ tend to yield models having better generalization ability. This relationship fortuitously aligns with the need to parallelize across multiple accelerators in order to scale training for many of today's models.
Intriguingly, the m-sharpness measure described above furthermore exhibits better correlation with models' actual generalization gaps as $m$ decreases, as demonstrated by Figure 3 (right)⁷. In particular, this implies that m-sharpness with $m < n$ yields a better predictor of generalization than the full-training-set measure suggested by Theorem 1 in Section 2 above, suggesting an interesting new avenue of future work for understanding generalization.
⁷We follow the rigorous framework of Jiang et al. (2019), reporting the mutual information between the m-sharpness measure and generalization on the two publicly available tasks from the Predicting Generalization in Deep Learning NeurIPS 2020 competition. https://competitions.codalab.org/competitions/25301
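To illustrate what the m-sharpness variant of the objective computes, here is a small sketch that estimates the m-sharpness of a model at its current weights by splitting a batch into disjoint subsets of size m and performing the inner maximization (one ascent step, p = 2) independently on each. This is our own illustrative reading of the definition above, not code from the paper.

```python
import torch

def m_sharpness(model, loss_fn, x, y, m, rho=0.05):
    """Average over disjoint size-m subsets of max_{||eps||_2 <= rho} L(w+eps) - L(w),
    with the inner max approximated by a single gradient-ascent step (p = 2)."""
    params = [p for p in model.parameters() if p.requires_grad]
    total, n_chunks = 0.0, 0
    for xs, ys in zip(x.split(m), y.split(m)):
        base = loss_fn(model(xs), ys)
        grads = torch.autograd.grad(base, params)
        norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
        with torch.no_grad():
            for p, g in zip(params, grads):
                p.add_(rho * g / norm)          # ascend on this subset only
            perturbed = loss_fn(model(xs), ys)
            for p, g in zip(params, grads):
                p.sub_(rho * g / norm)          # restore w
        total += (perturbed - base).item()
        n_chunks += 1
    return total / max(n_chunks, 1)
```

Smaller m makes the inner adversary act on fewer data points at a time, which matches the observation that m-sharpness with small m correlates better with the generalization gap.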
4.2 HESSIAN SPECTRA
Motivated by the connection between geometry of the loss landscape and generalization, we constructed SAM to seek out minima of the training loss landscape having both low loss value and low curvature (i.e., low sharpness). To further confirm that SAM does in fact find minima having low curvature, we compute the spectrum of the Hessian for a WideResNet40-10 trained on CIFAR-10 for 300 steps both with and without SAM (without batch norm, which tends to obscure interpretation of the Hessian), at different epochs during training. Due to the parameter space's dimensionality, we approximate the Hessian spectrum using the Lanczos algorithm of Ghorbani et al. (2019).
Figure 3 (left) reports the resulting Hessian spectra. As expected, the models trained with SAM converge to minima having lower curvature, as seen in the overall distribution of eigenvalues, the maximum eigenvalue ($\lambda_{max}$) at convergence (approximately 24 without SAM, 1.0 with SAM), and the bulk of the spectrum (the ratio $\lambda_{max}/\lambda_5$, commonly used as a proxy for sharpness (Jastrzebski et al., 2020); up to 11.4 without SAM, and 2.6 with SAM).
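As a concrete illustration of this kind of spectral estimate, the following sketch computes the top Hessian eigenvalues of a loss via Hessian-vector products fed to SciPy's Lanczos-based symmetric eigensolver. It is a generic matrix-free recipe under our own assumptions (a small model evaluated on CPU), not the Ghorbani et al. (2019) implementation used in the paper.

```python
import numpy as np
import torch
from scipy.sparse.linalg import LinearOperator, eigsh

def top_hessian_eigenvalues(model, loss_fn, x, y, k=5):
    """Largest-k eigenvalues of the loss Hessian, via matrix-free Lanczos."""
    params = [p for p in model.parameters() if p.requires_grad]
    n = sum(p.numel() for p in params)
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])

    def hvp(v):
        # Hessian-vector product: differentiate (grad . v) w.r.t. the parameters.
        v_t = torch.as_tensor(v, dtype=flat_grad.dtype)
        hv = torch.autograd.grad(flat_grad @ v_t, params, retain_graph=True)
        return torch.cat([h.reshape(-1) for h in hv]).detach().numpy().astype(np.float64)

    op = LinearOperator((n, n), matvec=hvp, dtype=np.float64)
    # eigsh runs implicitly restarted Lanczos; "LA" asks for the largest algebraic eigenvalues.
    return eigsh(op, k=k, which="LA", return_eigenvectors=False)
```

The ratio of the largest returned eigenvalue to the fifth largest then gives the $\lambda_{max}/\lambda_5$ bulk measure quoted above.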
5 RELATED WORK
The idea of searching for "flat" minima can be traced back to Hochreiter & Schmidhuber (1995), and its connection to generalization has seen significant study (Shirish Keskar et al., 2016; Dziugaite & Roy, 2017; Neyshabur et al., 2017; Dinh et al., 2017). In a recent large scale empirical study, Jiang et al. (2019) studied 40 complexity measures and showed that a sharpness-based measure has highest correlation with generalization, which motivates penalizing sharpness. Hochreiter & Schmidhuber (1997) was perhaps the first paper on penalizing the sharpness, regularizing a notion related to Minimum Description Length (MDL). Other ideas which also penalize sharp minima include operating on a diffused loss landscape (Mobahi, 2016) and regularizing local entropy (Chaudhari et al., 2016). Another direction is to not penalize the sharpness explicitly, but rather average weights during training; Izmailov et al. (2018) showed that doing so can yield flatter minima that can also generalize better. However, the measures of sharpness proposed previously are difficult to compute and differentiate through. In contrast, SAM is highly scalable as it only needs two gradient computations per iteration. The concurrent work of Sun et al. (2020) focuses on resilience to random and adversarial corruption to expose a model's vulnerabilities; this work is perhaps closest to ours. Our work has a different basis: we develop SAM motivated by a principled starting point in generalization, clearly demonstrate SAM's efficacy via rigorous large-scale empirical evaluation, and surface important practical and theoretical facets of the procedure (e.g., m-sharpness). The notion of all-layer margin introduced by Wei & Ma (2020) is closely related to this work; one is adversarial perturbation over the activations of a network and the other over its weights, and there is some coupling between these two quantities.
6 DISCUSSION AND FUTURE WORK
In this work, we have introduced SAM, a novel algorithm that improves generalization by simultaneously minimizing loss value and loss sharpness; we have demonstrated SAM's efficacy through a rigorous large-scale empirical evaluation. We have surfaced a number of interesting avenues for future work. On the theoretical side, the notion of per-data-point sharpness yielded by m-sharpness (in contrast to global sharpness computed over the entire training set, as has typically been studied in the past) suggests an interesting new lens through which to study generalization. Methodologically, our results suggest that SAM could potentially be used in place of Mixup in robust or semi-supervised methods that currently rely on Mixup (giving, for instance, MentorSAM). We leave to future work a more in-depth investigation of these possibilities.
7 ACKNOWLEDGMENTS
We thank our colleagues at Google — Atish Agarwala, Xavier Garcia, Dustin Tran, Yiding Jiang, Basil Mustafa, Samy Bengio — for their feedback and insightful discussions. We also thank the JAX and FLAX teams for going above and beyond to support our implementation. We are grateful to Sven Gowal for his help in replicating EfficientNet using JAX, and Justin Gilmer for his implementation of the Lanczos algorithm used to generate the Hessian spectra. We thank Niru Maheswaranathan for his matplotlib mastery. We also thank David Samuel for providing a PyTorch implementation of SAM.<|im_end|> <|im_start|>assistant ### Review Title Interesting work with good results, concern is on selecting the right $\rho$ ### Review Rating 7: Good paper, accept ### Review Text Motivated by the connection between the flatness of minima and generalization ability, the authors propose Sharpness-Aware Minimization (SAM), which explicitly minimizes both loss value and loss sharpness while training deep neural networks. They find SAM improves generalization for a range of image classification tasks and provides robustness to label noise as well. They also introduce a new notion of sharpness named m-sharpness. Strength: * The paper is overall well written with clear motivation. * The experiments are comprehensive and the results show clear improvement over non-SAM approaches or previous SOTA. Weakness: * There is no clear definition of the "sharpness" that the algorithm tries to optimize. Given the many existing definitions of sharpness (e.g., [1]), it is not clear how the proposed measurement connects with or differs from previous works. * My major concern is about the usage of the hyperparameter $\rho$: a) The introduction of the dataset- and model-dependent hyperparameter $\rho$ and the need for a grid search before training makes the algorithm trickier to make work and sensitive to other hyperparameters and the scale of $w$; e.g., when weight decay is applied, the norm of $w$ usually shrinks during training, and the same radius $\rho$ could be too large for a small-scaled $w$ at the end of training in comparison with the $w$ at the beginning. This discrepancy would become larger as the number of training epochs gets larger. b) The details of how to obtain the optimal $\rho$ are not quite clear, e.g., the smaller $\rho$ in Sec 3.3. An ablation study on the sensitivity of $\rho$ with regard to different datasets, models, and noise levels would be useful.
c) The wall-clock training time of the SAM method is not discussed. A comprehensive analysis of the cost (including the hyperparameter search for $\rho$) would be helpful for evaluating the complexity of the method. * The message conveyed in section 4.1 is not quite clear. Does each accelerator perform independent $\epsilon$ estimation? Is the $\epsilon$ obtained on each accelerator synchronized after estimation? Does it indicate that SAM training is done better model-parallel in small batches rather than data-parallel with large batches? Suggestions: 1) To avoid the scaling issue of $\rho$, one suggestion would be to consider optimizing the sharpness metric on the normalized loss landscape as described in [2]. In Figure 1, the authors adopt [2] for comparing the landscape of minima obtained by non-SAM and SAM, so it might be intuitive to optimize this normalized sharpness directly, in which case $\rho$ can be fixed and a random direction is sufficient? 2) The benefit of flatness for robustness to label noise is not well discussed. What is the performance when the label noise is over 90% or even 100%? Eventually all models should fail to generalize given 100% corruption, but it would be interesting to know where the limit of SAM is. Minor: * Some figures are not well described, e.g., the meaning of Figure 1 left is not quite clear. Figure 2 is not intuitive, as the loss contour value is not clear. It is not straightforward to see why $w_{t+1}^{SAM}$ is a better or "flatter" move. The notion $w_{adv}$ is also not defined anywhere. [1] Keskar et al., On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, ICLR 2017. [2] Li et al., Visualizing the Loss Landscape of Neural Nets, NIPS 2018. ====== After Rebuttal Thanks for the detailed reply and additional experiments. I increased my score accordingly, and I hope the authors can further address the following issues: - While the results in C.3 show that the default $\rho$ improves over SGD on most experiments (may also add SVHN and Fashion), I can still see its sensitivity to datasets, architecture, noise level, and number of accelerators, as shown in Tables 6, 7, 8 and Fig. 3. For example, 0.05 is not close to optimal with label noise 20%~60% in Table 8. It is unclear whether $\rho$ is robust to other hyperparameter changes (e.g., weight decay, which controls weight scales). So an ablation study on the sensitivity of $\rho$ and further explanation would be necessary and very valuable for practitioners. - It would also be helpful if the authors could provide more details about how to get the flat minima of Fig. 1 (right) when optimizing deep non-residual networks, such as $\rho$ and other hyperparameters. - Minor: Table 8 should be validation errors rather than accuracy. ### Review Rating 7: Good paper, accept ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
rCztBS_Uqxc
ICLR.cc/2022/Workshop/OSC
2022
Learning Two-Step Hybrid Policy for Graph-Based Interpretable Reinforcement Learning
["Tongzhou Mu", "Kaixiang Lin", "Feiyang Niu", "Govind Thattai"]
We present a two-step hybrid reinforcement learning (RL) policy that is designed to generate interpretable and robust hierarchical policies on the RL problem with graph-based input. Unlike prior deep reinforcement learning policies parameterized by an end-to-end black-box graph neural network, our approach disentangles the decision-making process into two steps. The first step is a simplified classification problem that maps the graph input to an action group where all actions share a similar semantic meaning. The second step implements a sophisticated rule-miner that conducts explicit one-hop reasoning over the graph and identifies decisive edges in the graph input without the necessity of heavy domain knowledge. This two-step hybrid policy presents human-friendly interpretations and achieves better performance in terms of generalization and robustness. Extensive experimental studies on four levels of complex text-based games have demonstrated the superiority of the proposed method compared to the state-of-the-art.
["Graph-based Reinforcement Learning", "Interpretable Reinforcement Learning", "Generalization", "Robustness"]
ABSTRACT
We present a two-step hybrid reinforcement learning (RL) policy that is designed to generate interpretable and robust hierarchical policies on the RL problem with graph-based input. Unlike prior deep reinforcement learning policies parameterized by an end-to-end black-box graph neural network, our approach disentangles the decision-making process into two steps. The first step is a simplified classification problem that maps the graph input to an action group where all actions share a similar semantic meaning. The second step implements a sophisticated rule-miner that conducts explicit one-hop reasoning over the graph and identifies decisive edges in the graph input without the necessity of heavy domain knowledge. This two-step hybrid policy presents human-friendly interpretations and achieves better performance in terms of generalization and robustness. Extensive experimental studies on four levels of complex text-based games have demonstrated the superiority of the proposed method compared to the state-of-the-art.
1 INTRODUCTION
Recent years have witnessed the rapid development of deep reinforcement learning across various domains, such as mastering board games (Silver et al., 2016) and playing video games (Mnih et al., 2015). The large and complicated architectures of deep RL models have empowered the capability of resolving challenging tasks, while bringing in significant challenges for interpreting the decision-making process of those complex policies (Puiutta & Veith, 2020). This trade-off between performance and interpretability becomes an inevitable issue when DRL is applied to high-stakes applications such as health care (Rudin, 2019). In this work, we focus on graph-based interpretable reinforcement learning (Zambaldi et al., 2018; Waradpande et al., 2020), as the graph representation is expressive in various domains including drug discovery (Patel et al., 2020), visual question answering (Hildebrandt et al., 2020), and embodied AI (Chaplot et al., 2020; Huang et al., 2019). Another benefit of studying graph-based RL is that the graph structure can provide natural explanations of the decision-making process without the necessity of introducing new programs (Verma et al., 2018) or heavy domain knowledge (Bastani et al., 2018) for interpretation. Prior works in interpretable RL (Verma et al., 2018; Madumal et al., 2020; Liu et al., 2018) either work on a restricted policy class (Liu et al., 2018), which leads to downgraded performance, or provide only limited interpretability (Shu et al., 2017; Zambaldi et al., 2018). Another common issue of interpretable RL is that the provided explanation is generally difficult for non-experts to comprehend (Du et al., 2019).
To resolve the challenges mentioned above, we propose a novel two-step hybrid decision-making process for general deep RL methods with graph input. Our approach is inspired by the observation of human decision-making. When confronting complicated tasks that involve expertise from multiple domains, we typically first identify which domain expert we would like to consult and then search for specific knowledge to solve the problem. Recognizing the scope of the problem significantly reduces the search space of downstream tasks, which leads to a more simplified problem compared to finding the solution in all domains directly.
As an analogy to this procedure, we disentangle a complicated deep RL policy into a classification problem for problem-type selection and a rule miner. The classification establishes a mapping from the complex graph input to an action type, handling high-order logical interactions among node and edge representations with a graph neural network. The rule miner conducts explicit one-hop reasoning over the graph and provides user-friendly selective explanations (Du et al., 2019) by mining several decisive edges. This two-step decision making is essential not only for providing interpretability, but also for generalization and robustness. It is intuitive to see that the simplified classification can achieve better generalization and robustness more easily than the original complicated RL policy. Furthermore, the rule miner identifying key edges in the graph is much more robust to noisy perturbations on the irrelevant graph components.
In summary, our contributions are threefold: 1) We formalize an interpretable deep RL problem based on graph input; 2) We propose a two-step decision-making framework that achieves far better performance in terms of generalization and robustness, and provides human-friendly interpretations; 3) Experiments on several text-based games (Côté et al., 2018) demonstrate that the proposed approach achieves a new state-of-the-art performance for both generalization and robustness.
2 A GENERAL FRAMEWORK FOR TWO-STEP HYBRID DECISION MAKING
In this section, we first describe our problem setting, including key assumptions, and a general framework that formulates the decision-making process in a two-step manner.
We consider a discrete-time Markov Decision Process (MDP) with a discrete state space $\mathcal{S}$ and a finite action space $\mathcal{A}$. The environment dynamics are denoted as $P = \{p(s'|s, a), \forall s, s' \in \mathcal{S}, a \in \mathcal{A}\}$. Given an action set $\mathcal{A}$ and the current state $s_t \in \mathcal{S}$, our goal is to learn a policy $\pi$ that selects an action $a_t \in \mathcal{A}$ to maximize the long-term reward $\mathbb{E}_\pi[\sum_{i=1}^T r(s_i, a_i)]$. Assume we are able to group the actions into several mutually exclusive action types $A_k$ according to their semantic meanings. More concretely, the $k$-th action type $A_k = \{a_k^1, ..., a_k^{n_k}\}$ denotes a subset of actions in the original action space, $A_k \subseteq \mathcal{A}$. Then we have $A_1, A_2, ..., A_K \subseteq \mathcal{A}$, $A_i \cap A_j = \emptyset$ $(i \neq j)$, and $\cup_{i=1}^K A_i = \mathcal{A}$. It is worth noting that the number of action types $K$ is usually much smaller than the number of original actions ($K \ll |\mathcal{A}|$).
Let the policy $\pi = \langle f_p, f_s \rangle$ be represented by a hybrid model that consists of an action pruner $f_p$ and an action selector $f_s$. The action pruner is used to prune all available action candidates to a single action type, i.e., $f_p(s_i) = k$, where $k$ is the index of the chosen action type and $A_k \in \{A_1, A_2, ..., A_K\}$. Then the action selector is used to select a specific action given the action type chosen by the action pruner, i.e., $f_s(s_i, A_k) = a_i$, where $a_i \in A_k$ and $k = f_p(s_i)$.
Intuitively, this design intends to disentangle different phases of the decision-making process into two different modules. On one hand, determining the action type typically involves high-order relationships and needs a model with strong expressive power, so a neural network is a good candidate in this regard. On the other hand, selecting an action within a specific action type can resort to rule-based approaches, which is essential in providing strong interpretability, generalizability and robustness.
Figure 1 shows the overall pipeline of our two-step hybrid decision making. The agent receives a state $s_i$ at each time step $i$. We first call the action pruner $f_p(s_i)$ to select the action type $A_i$. Then the rule-based action selector $f_s(s_i, A_i)$ takes as inputs the current state $s_i$ and the action type $A_i$ given by the action pruner, and selects the specific action to be executed in this step. A minimal code sketch of this composition is given below.
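The following sketch shows the $\langle f_p, f_s \rangle$ composition as plain Python. The class and method names are our own illustrative choices, and the pruner/selector internals are stubbed out since they are detailed in Section 3.

```python
from typing import Callable, Dict, List

class TwoStepHybridPolicy:
    """pi = <f_p, f_s>: prune to one action type, then select within it."""

    def __init__(self,
                 action_pruner: Callable[[object], int],               # f_p: state -> action-type index k
                 action_selector: Callable[[object, List[str]], str],  # f_s: (state, A_k) -> action
                 action_types: Dict[int, List[str]]):                  # k -> A_k, disjoint groups covering A
        self.f_p = action_pruner
        self.f_s = action_selector
        self.action_types = action_types

    def act(self, state) -> str:
        k = self.f_p(state)                 # step 1: pick the action type A_k
        candidates = self.action_types[k]   # search space shrinks from |A| to |A_k|
        return self.f_s(state, candidates)  # step 2: pick a concrete action within A_k
```

At each environment step, `act(state)` realizes $f_s(s_i, A_k)$ with $k = f_p(s_i)$; note the selector never has to consider actions outside the pruned type.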
3 TWO-STEP HYBRID POLICY FOR TEXT-BASED GAMES WITH GRAPH INPUT
In this section, we instantiate our proposed framework in the setting of text-based games (Côté et al., 2018) with graph input. In text-based games, the agent receives a knowledge graph (as shown in Figure 3) that describes the current state of the environment and the task to be completed; e.g., the task can be represented by several edges like ("potato", "needs", "diced"). Our goal is to learn a policy that maps the input knowledge graph to an action from the provided action set $\mathcal{A}$. Each action $a_j \in \mathcal{A}$ is a short sentence, e.g., "take apple from fridge".
In this setting, we use a graph neural network (GNN) as the action pruner $f_p$, and a rule-based model as the action selector $f_s$. We elaborate the details of the action pruner and the action selector in Sec 3.1 and Sec 3.2, respectively.
Training the GNN-based action pruner and the rule-based action selector by reinforcement learning is nontrivial since the whole pipeline is not end-to-end differentiable. Therefore, we propose to learn both models separately from a demonstration dataset, which can be obtained from a trained reinforcement learning agent. This process is inspired by existing works like (Bastani et al., 2018; Sun et al., 2018; Mu et al., 2020), where policies trained by reinforcement learning are refactorized into other forms. The demonstration dataset is denoted as $\mathcal{D} = \{(s_i, \pi(s_i), k_i)\}_{i=1}^N$, where $k_i$ is the index of the action type. In addition, we split the demonstration dataset based on the action types to get $K$ subsets of the original demonstration dataset, $\{\mathcal{D}_1, \mathcal{D}_2, ..., \mathcal{D}_K\}$, where $\mathcal{D}_k = \{(s_i, \pi(s_i), k_i) \,|\, k_i = k\}$. We elaborate the details of demonstration preparation in Sec B.1.
3.1 GNN-BASED ACTION PRUNER
The action pruner needs to output an action type based on the input knowledge graph, so it is essentially a classifier. Given the demonstration dataset $\mathcal{D} = \{(s_i, k_i)\}_{i=1}^N$ obtained in the last step, we want to train a classifier $f_p(s; \theta) = k$, where $k \in \{1, 2, ..., K\}$ is the index of the action type. This is a conventional classification problem which can be solved by minimizing the cross-entropy loss: $\theta^* = \arg\min_\theta -\sum_i \sum_{j=1}^K k_i^j \log(f_\theta^j(s_i))$, where $f_\theta(s_i)$ outputs a probability distribution over the $K$ action types, $f_\theta^j(s_i)$ denotes the $j$-th action type's probability, and $k_i^j \in \{0, 1\}$ denotes whether action type $j$ was chosen in the demonstration dataset at state $s_i$. A minimal training sketch follows.
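To make the cross-entropy objective above concrete, here is a minimal PyTorch-style training sketch. The graph encoder is left abstract (any GNN producing a graph-level embedding would do); `GraphEncoder`-style modules and the demonstration loader are hypothetical placeholders, not the authors' code.

```python
import torch
import torch.nn as nn

class ActionPruner(nn.Module):
    """f_p: graph-level embedding -> logits over the K action types."""
    def __init__(self, graph_encoder: nn.Module, embed_dim: int, num_action_types: int):
        super().__init__()
        self.encoder = graph_encoder          # any GNN mapping a graph to a vector
        self.head = nn.Linear(embed_dim, num_action_types)

    def forward(self, graph):
        return self.head(self.encoder(graph))  # unnormalized logits

def train_pruner(pruner, demo_loader, epochs=10, lr=1e-3):
    """Minimize -sum_i sum_j k_i^j log f_theta^j(s_i) over demonstrations (s_i, k_i)."""
    opt = torch.optim.Adam(pruner.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()                 # applies log-softmax internally
    for _ in range(epochs):
        for graph, k in demo_loader:           # k: index of the demonstrated action type
            opt.zero_grad()
            loss = ce(pruner(graph), k)
            loss.backward()
            opt.step()
```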
3.2 RULE-BASED ACTION SELECTOR
3.2.1 ABSTRACT SUPPORTING EDGE SETS
When the input is a knowledge graph, the action is naturally strongly correlated with some critical edges. For example, ("potato", "needs", "diced") and ("potato", "in", "player") can lead to the action "dice potato". We refer to those decisive edges correlated with an action as the supporting edge set of this action. Since we have grouped actions by their semantic meanings, actions within each action type are actually supported by similar edges. For example, "dice potato" is supported by ("potato", "in", "player"), and "dice apple" is supported by ("apple", "in", "player"). As mentioned in Sec B.2, each action type comes with an action template like "dice object". Based on the action template, we can perform some sort of abstraction. For example, given an input knowledge graph labeled with the action "dice apple", we can replace every "apple" appearing in the graph edges and the action with an abstract name "object". Then we can say the action "dice object" is essentially supported by ("object", "in", "player"), where the two "object"s should be instantiated by the same word. Under this kind of abstraction, different actions within the same action type can share the same abstract supporting edge set, which contains edges with abstract names.
The abstract supporting edge set indicates the decisive edges for an action type, and it can be instantiated for each specific action. For example, to check whether the action "dice apple" should be executed, the abstract edge ("object", "in", "player") will be instantiated to the edge ("apple", "in", "player"). Then, the existence of ("apple", "in", "player") in the input knowledge graph becomes evidence for selecting the action "dice apple". We aim at finding an abstract supporting edge set for each action type, and it will be used during inference.
3.2.2 MINE ABSTRACT SUPPORTING EDGE SETS FROM DEMONSTRATIONS
Finding the abstract supporting edge set for each action type is essentially a rule mining process. There are several off-the-shelf rule miners like FP-Growth (Han et al., 2004), Apriori (Agrawal et al., 1994), and Eclat (Zaki, 2000), but they are not designed for knowledge graphs. Thus, we propose a simple yet effective rule miner for our setting to discover the supporting edge sets.
To find the abstract supporting edge set $ASE(A_k)$ for each action type $A_k$, we designed a numerical statistic that is intended to reflect the importance of an edge when taking an action $a \in A$, and this numerical statistic is inspired by tf-idf (Rajaraman & Ullman, 2011). Formally, for action type $A_k$, we have a subset of the demonstration dataset $\mathcal{D}_k = \{s_i\}$ (ignoring $\pi(s_i)$ here). Under the abstraction mentioned above, we can count the edge frequency for every (abstract) edge $e$ in $\mathcal{D}_k$: $freq_k(e) = \frac{|\{s_i \,|\, s_i \in \mathcal{D}_k, e \in s_i\}|}{|\mathcal{D}_k|}$. Similarly, we can also count the edge frequency over the entire demonstration dataset: $freq(e) = \frac{|\{s_i \,|\, s_i \in \mathcal{D}, e \in s_i\}|}{|\mathcal{D}|}$. Finally, we can define an importance score of an edge w.r.t. the action type $A_k$: $I_k(e) = freq_k(e) \cdot \log(\frac{1}{freq(e)})$, where the term $freq_k(e)$ is similar to the term frequency (tf), and the term $\log(\frac{1}{freq(e)})$ is similar to the inverse document frequency (idf). Then we can get $ASE(A_k)$ by selecting the edges with importance higher than a threshold, i.e., $ASE(A_k) = \{e \,|\, I_k(e) > \tau\}$, where $\tau$ is a hyperparameter shared across all action types.
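Here is a small self-contained sketch of this miner, with each state represented as a set of abstracted (subject, relation, object) triples. The function name is our own; the formulas are exactly the $freq_k$, $freq$, and $I_k$ definitions above.

```python
import math
from collections import Counter

def mine_supporting_edges(demos_by_type, tau):
    """demos_by_type: {action type k: [set of abstracted edge triples per state]}.
    Returns {k: abstract supporting edge set ASE(A_k)} via the tf-idf-style score."""
    all_states = [s for demos in demos_by_type.values() for s in demos]
    n_total = len(all_states)
    # freq(e): fraction of all demonstration states containing edge e.
    global_counts = Counter(e for s in all_states for e in s)

    ase = {}
    for k, demos in demos_by_type.items():
        counts_k = Counter(e for s in demos for e in s)
        ase[k] = set()
        for e, c in counts_k.items():
            freq_k = c / len(demos)
            freq = global_counts[e] / n_total
            importance = freq_k * math.log(1.0 / freq)  # I_k(e) = freq_k(e) * log(1 / freq(e))
            if importance > tau:
                ase[k].add(e)
    return ase
```

An edge that appears in most states of $\mathcal{D}_k$ but rarely elsewhere gets a high score, mirroring how tf-idf surfaces terms that are distinctive to one document.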
3.2.3 INFERENCE BASED ON SUPPORTING EDGES
During inference, we use the supporting edge sets to score each action within the action type provided by the action pruner, and select the action with the highest score. Fig. 2 shows a concrete example of using a supporting edge set to score an action. Firstly, the action pruner outputs an action type based on the input KG (knowledge graph), and we retrieve the abstract supporting edge set of this action type. Secondly, given an action within the action type, we instantiate the abstract supporting edge set to a specific supporting edge set by replacing the abstract names with the concrete words from the action; e.g., ("object", "in", "player") will be instantiated to ("potato", "in", "player") if the action is "cook potato with oven". Finally, we compare the input KG with the supporting edge set of each action to find which supporting edge set is best covered by the input KG. The number of overlapping edges between the supporting edge set and the input KG is regarded as the score of the action.
Formally, the inference process of our rule-based action selector can be described as follows: $f_s(s, A) = \arg\max_{a \in A} |s \cap SE(a)|$, where $SE(a)$ is the supporting edge set associated with action $a$, which is obtained by instantiating the abstract supporting edge set of action type $A$. A code sketch of this scoring rule is given below.
Figure 1: General framework of the two-step hybrid decision making (State → NN-based Action Pruner → Action Type → Rule-based Action Selector → Action).
Figure 2: An example of using a supporting edge set to score an action. Based on the knowledge graph, the action pruner first predicts the action type (e.g., cook) that needs to be taken at the current state, then instantiates the abstract supporting edge set to a concrete supporting edge set. Comparing the input knowledge graph with the supporting edge set, we can compute an action score for each action and select the action with the highest score accordingly.
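A minimal sketch of the selector follows, assuming actions carry a single "object" placeholder; this is our own simplification of the template machinery described in Sec B.2 (the paper's templates also abstract other slots, such as the cooking method).

```python
def instantiate(abstract_edges, obj):
    """Replace the 'object' placeholder in abstract edges with a concrete word."""
    return {tuple(obj if tok == "object" else tok for tok in e) for e in abstract_edges}

def select_action(kg_edges, candidate_actions, ase):
    """f_s(s, A) = argmax_{a in A} |s ∩ SE(a)|.
    kg_edges: set of triples in the current state; candidate_actions: list of
    (action string, object word) pairs within the pruned type; ase: ASE(A)."""
    def score(action_obj):
        _, obj = action_obj
        return len(kg_edges & instantiate(ase, obj))
    return max(candidate_actions, key=score)[0]

# Example with edges as in Fig. 2 (hypothetical values):
kg = {("player", "at", "kitchen"), ("potato", "needs", "roasted"),
      ("fridge", "is", "open"), ("potato", "in", "player")}
ase_cook = {("object", "needs", "roasted"), ("object", "in", "player")}
print(select_action(kg, [("cook potato with oven", "potato"),
                         ("cook apple with oven", "apple")], ase_cook))
# -> "cook potato with oven" (score 2 versus 0 for the apple action)
```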
4 EXPERIMENTS
4.1 DATASET SETUP
We evaluate our method on TextWorld (Côté et al., 2018), which is a framework for designing text-based interactive games. More specifically, we use the TextWorld games generated by GATA (Adhikari et al., 2020). In these games, the agent is asked to cook a meal according to given recipes. It requires the agent to navigate among different rooms to locate and collect the food ingredients specified in the recipe, process the food ingredients appropriately, and finally cook and eat a meal.
The state received by the agent is a knowledge graph describing all the necessary information about the game. All the nodes and edges in the knowledge graph are represented in text. Figure 3 shows a partial example of an input knowledge graph. The actions are also represented in text. Note that the number of available actions varies from state to state, so most existing network architectures used in deep reinforcement learning cannot be directly applied here.
The games have four difficulty levels, and each difficulty level contains 20 training, 20 validation, and 20 test environments, which are sampled from a distribution based on the difficulty level. The higher the difficulty level is, the more complicated the recipe will be and the more rooms the food ingredients will be distributed among. Statistics of the games are shown in Table 2. For evaluating model generalizability, we select the top-performing agent on the validation sets and report its test scores; all validation and test games are unseen in the training set.
4.2 RESULTS
We demonstrate the advantages of our method from three different perspectives: interpretability, generalization, and robustness.
4.2.1 INTERPRETABILITY
The interpretability of our two-step policy is two-fold: 1) the transparent two-step decision-making process; 2) the rule-based model makes decisions in a way which is easy for humans to interpret.
Table 3 shows some representative rules discovered by our rule miners. We observed that all four discovered abstract supporting edges are indeed prerequisites of the "cut" actions. In particular, the abstract supporting edge ("object", "needs", "verb passive") is a crucial prerequisite for the agent to select the correct verb in the "cut" actions. For example, if the input knowledge graph contains the edge ("potato", "needs", "sliced"), then the action "slice potato with knife" will get one more score point than the others, because this edge is in the supporting edge set of the action "slice potato with knife".
Since we can clearly see that the rule-based model makes decisions based on these rules, it is not hard to check whether the model works in a correct way. In this way, the agent can reliably select the correct "cut" action even in unseen test environments.
4.2.2 GENERALIZATION
Table 1: Evaluation results on both training environments and test environments in TextWorld. The numbers show the agent's normalized scores.
Method | Train (Diff. 1) | Train (2) | Train (3) | Train (4) | Test (1) | Test (2) | Test (3) | Test (4)
GATA-GTF | 98.6 | 58.4 | 95.6 | 36.1 | 83.8 | 53 | 33.3 | 23.6
Vanilla RL | 100 | 100 | 98.3 | 100 | 83.8 | 68 | 50 | 30.9
Ours | 100 | 100 | 100 | 65.5 | 100 | 100 | 51.7 | 49.7
We compare with two baselines: GATA-GTF and vanilla RL. GATA-GTF is a variant from GATA (Adhikari et al., 2020), and it uses the same ground-truth graph input as us. GATA-GTF processes the input graphs with relational graph convolutional networks (R-GCNs) (Schlichtkrull et al., 2018), and the whole pipeline is trained by DQN (Mnih et al., 2015). Vanilla RL is essentially the same method as GATA-GTF, but implemented by ourselves. With a better implementation and hyperparameter tuning, it performs better than the implementation released by the GATA authors. Vanilla RL also serves as the teacher policy used to generate demonstrations for our method.
Table 1 shows the normalized scores of the different methods on both training and test environments in TextWorld. The result of GATA-GTF is obtained from its paper. In all the environments, our agent achieves better generalization performance than the vanilla RL (which is our RL teacher) and GATA-GTF baselines. Our agent generalizes quite well to the unseen test environments in all difficulty levels, while GATA-GTF and vanilla RL perform poorly in unseen test environments.
4.2.3 ROBUSTNESS
As mentioned above, our method also aims at robustness. To evaluate the robustness of our models, we add different levels of noise to the input knowledge graphs and examine the performance of the agents under noisy inputs. In this paper, we define the noise on a knowledge graph as adding additional edges to the graph or dropping existing edges from the graph. The formal definition can be found in Sec F.1. A minimal sketch of this perturbation is given below.
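Since the formal definition lives in the paper's Appendix F.1 (not included here), the following sketch only illustrates the two perturbations named above, dropping existing edges and adding spurious ones, under our own parameterization of the noise level.

```python
import random

def perturb_graph(edges, entities, relations, noise_level, rng=random):
    """Drop each existing edge with probability noise_level, then add the same
    expected number of random spurious edges. This parameterization is our own
    illustration; the paper's formal definition is in its Appendix F.1."""
    kept = {e for e in edges if rng.random() > noise_level}
    n_added = round(len(edges) * noise_level)
    for _ in range(n_added):
        kept.add((rng.choice(entities), rng.choice(relations), rng.choice(entities)))
    return kept
```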
Table 4 shows the performance of vanilla RL and our method under noisy input graphs generated in the above-mentioned way. Note that we test robustness in training environments instead of test environments, because the performance of both agents in test environments is not good enough, and examining the robustness of poorly-performing agents does not make much sense.
We observed that in difficulty level 1, both agents are very robust under the noisy inputs. However, in difficulty levels 2, 3, and 4, the performance of the RL agents is hurt a lot by the input noise, especially in difficulty level 4. In contrast, the agents obtained by our method still perform quite well, and their performance drops significantly less than that of the RL agents.
H5eT1_2Cbc
Review
2: Good workshop paper, accept
The paper is well-written and the main approach could be understood. Pros: * The approach of separating out a policy into a graph-based classification problem and then a rule-mining-based approach seems novel. * The experiment results are promising Cons: * How such an approach could translate to standard RL benchmarks could have been discussed. * How the approach could extend to non-text-based domains could also have been discussed.
2: The reviewer is somewhat confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Learning Two-Step Hybrid Policy for Graph-Based Interpretable Reinforcement Learning ### Paper Abstract We present a two-step hybrid reinforcement learning (RL) policy that is designed to generate interpretable and robust hierarchical policies on the RL problem with graph-based input. Unlike prior deep reinforcement learning policies parameterized by an end-to-end black-box graph neural network, our approach disentangles the decision-making process into two steps. The first step is a simplified classification problem that maps the graph input to an action group where all actions share a similar semantic meaning. The second step implements a sophisticated rule-miner that conducts explicit one-hop reasoning over the graph and identifies decisive edges in the graph input without the necessity of heavy domain knowledge. This two-step hybrid policy presents human-friendly interpretations and achieves better performance in terms of generalization and robustness. Extensive experimental studies on four levels of complex text-based games have demonstrated the superiority of the proposed method compared to the state-of-the-art. ### Paper Keywords ["Graph-based Reinforcement Learning", "Interpretable Reinforcement Learning", "Generalization", "Robustness"] ### Paper Content ABSTRACT
We present a two-step hybrid reinforcement learning (RL) policy that is designed to generate interpretable and robust hierarchical policies on the RL problem with graph-based input. Unlike prior deep reinforcement learning policies parameterized by an end-to-end black-box graph neural network, our approach disentangles the decision-making process into two steps. The first step is a simplified classification problem that maps the graph input to an action group where all actions share a similar semantic meaning. The second step implements a sophisticated rule-miner that conducts explicit one-hop reasoning over the graph and identifies decisive edges in the graph input without the necessity of heavy domain knowledge. This two-step hybrid policy presents human-friendly interpretations and achieves better performance in terms of generalization and robustness. Extensive experimental studies on four levels of complex text-based games have demonstrated the superiority of the proposed method compared to the state-of-the-art.
1 INTRODUCTION
Recent years have witnessed the rapid development of deep reinforcement learning across various domains, such as mastering board games (Silver et al., 2016) and playing video games (Mnih et al., 2015). The large and complicated architectures of deep RL models have empowered the capability of resolving challenging tasks, while bringing in significant challenges for interpreting the decision-making process of those complex policies (Puiutta & Veith, 2020). This trade-off between performance and interpretability becomes an inevitable issue when DRL is applied to high-stakes applications such as health care (Rudin, 2019). In this work, we focus on graph-based interpretable reinforcement learning (Zambaldi et al., 2018; Waradpande et al., 2020), as the graph representation is expressive in various domains including drug discovery (Patel et al., 2020), visual question answering (Hildebrandt et al., 2020), and embodied AI (Chaplot et al., 2020; Huang et al., 2019).
Another benefit of studying graph-based RL is that the graph structure can provide natural explanations of the decision-making process without the necessity of introducing new programs (Verma et al., 2018) or heavy domain knowledge (Bastani et al., 2018) for interpretation. Prior works in interpretable RL (Verma et al., 2018; Madumal et al., 2020; Liu et al., 2018) either work on a restricted policy class (Liu et al., 2018), which leads to downgraded performance, or provide only limited interpretability (Shu et al., 2017; Zambaldi et al., 2018). Another common issue of interpretable RL is that the provided explanation is generally difficult for non-experts to comprehend (Du et al., 2019).
To resolve the challenges mentioned above, we propose a novel two-step hybrid decision-making process for general deep RL methods with graph input. Our approach is inspired by the observation of human decision-making. When confronting complicated tasks that involve expertise from multiple domains, we typically first identify which domain expert we would like to consult and then search for specific knowledge to solve the problem. Recognizing the scope of the problem significantly reduces the search space of downstream tasks, which leads to a more simplified problem compared to finding the solution in all domains directly. As an analogy to this procedure, we disentangle a complicated deep RL policy into a classification problem for problem-type selection and a rule miner. The classification establishes a mapping from the complex graph input to an action type, handling high-order logical interactions among node and edge representations with a graph neural network. The rule miner conducts explicit one-hop reasoning over the graph and provides user-friendly selective explanations (Du et al., 2019) by mining several decisive edges. This two-step decision making is essential not only for providing interpretability, but also for generalization and robustness. It is intuitive to see that the simplified classification can achieve better generalization and robustness more easily than the original complicated RL policy. Furthermore, the rule miner identifying key edges in the graph is much more robust to noisy perturbations on the irrelevant graph components.
In summary, our contributions are threefold: 1) We formalize an interpretable deep RL problem based on graph input; 2) We propose a two-step decision-making framework that achieves far better performance in terms of generalization and robustness, and provides human-friendly interpretations; 3) Experiments on several text-based games (Côté et al., 2018) demonstrate that the proposed approach achieves a new state-of-the-art performance for both generalization and robustness.
2 A GENERAL FRAMEWORK FOR TWO-STEP HYBRID DECISION MAKING
In this section, we first describe our problem setting, including key assumptions, and a general framework that formulates the decision-making process in a two-step manner.
We consider a discrete-time Markov Decision Process (MDP) with a discrete state space $\mathcal{S}$ and a finite action space $\mathcal{A}$. The environment dynamics are denoted as $P = \{p(s'|s, a), \forall s, s' \in \mathcal{S}, a \in \mathcal{A}\}$. Given an action set $\mathcal{A}$ and the current state $s_t \in \mathcal{S}$, our goal is to learn a policy $\pi$ that selects an action $a_t \in \mathcal{A}$ to maximize the long-term reward $\mathbb{E}_\pi[\sum_{i=1}^T r(s_i, a_i)]$. Assume we are able to group the actions into several mutually exclusive action types $A_k$ according to their semantic meanings.
More concretely, the $k$-th action type $A_k = \{a_k^1, ..., a_k^{n_k}\}$ denotes a subset of actions in the original action space, $A_k \subseteq \mathcal{A}$. Then we have $A_1, A_2, ..., A_K \subseteq \mathcal{A}$, $A_i \cap A_j = \emptyset$ $(i \neq j)$, and $\cup_{i=1}^K A_i = \mathcal{A}$. It is worth noting that the number of action types $K$ is usually much smaller than the number of original actions ($K \ll |\mathcal{A}|$).
Let the policy $\pi = \langle f_p, f_s \rangle$ be represented by a hybrid model that consists of an action pruner $f_p$ and an action selector $f_s$. The action pruner is used to prune all available action candidates to a single action type, i.e., $f_p(s_i) = k$, where $k$ is the index of the chosen action type and $A_k \in \{A_1, A_2, ..., A_K\}$. Then the action selector is used to select a specific action given the action type chosen by the action pruner, i.e., $f_s(s_i, A_k) = a_i$, where $a_i \in A_k$ and $k = f_p(s_i)$.
Intuitively, this design intends to disentangle different phases of the decision-making process into two different modules. On one hand, determining the action type typically involves high-order relationships and needs a model with strong expressive power, so a neural network is a good candidate in this regard. On the other hand, selecting an action within a specific action type can resort to rule-based approaches, which is essential in providing strong interpretability, generalizability and robustness.
Figure 1 shows the overall pipeline of our two-step hybrid decision making. The agent receives a state $s_i$ at each time step $i$. We first call the action pruner $f_p(s_i)$ to select the action type $A_i$. Then the rule-based action selector $f_s(s_i, A_i)$ takes as inputs the current state $s_i$ and the action type $A_i$ given by the action pruner, and selects the specific action to be executed in this step.
3 TWO-STEP HYBRID POLICY FOR TEXT-BASED GAMES WITH GRAPH INPUT
In this section, we instantiate our proposed framework in the setting of text-based games (Côté et al., 2018) with graph input. In text-based games, the agent receives a knowledge graph (as shown in Figure 3) that describes the current state of the environment and the task to be completed; e.g., the task can be represented by several edges like ("potato", "needs", "diced"). Our goal is to learn a policy that maps the input knowledge graph to an action from the provided action set $\mathcal{A}$. Each action $a_j \in \mathcal{A}$ is a short sentence, e.g., "take apple from fridge".
In this setting, we use a graph neural network (GNN) as the action pruner $f_p$, and a rule-based model as the action selector $f_s$. We elaborate the details of the action pruner and the action selector in Sec 3.1 and Sec 3.2, respectively.
Training the GNN-based action pruner and the rule-based action selector by reinforcement learning is nontrivial since the whole pipeline is not end-to-end differentiable. Therefore, we propose to learn both models separately from a demonstration dataset, which can be obtained from a trained reinforcement learning agent. This process is inspired by existing works like (Bastani et al., 2018; Sun et al., 2018; Mu et al., 2020), where policies trained by reinforcement learning are refactorized into other forms. The demonstration dataset is denoted as $\mathcal{D} = \{(s_i, \pi(s_i), k_i)\}_{i=1}^N$, where $k_i$ is the index of the action type. In addition, we split the demonstration dataset based on the action types to get $K$ subsets of the original demonstration dataset, $\{\mathcal{D}_1, \mathcal{D}_2, ..., \mathcal{D}_K\}$, where $\mathcal{D}_k = \{(s_i, \pi(s_i), k_i) \,|\, k_i = k\}$.
3 TWO-STEP HYBRID POLICY FOR TEXT-BASED GAMES WITH GRAPH INPUT

In this section, we instantiate our proposed framework in the setting of text-based games (Côté et al., 2018) with graph input. In text-based games, the agent receives a knowledge graph (as shown in Figure 3) that describes the current state of the environment and the task to be completed; e.g., the task can be represented by several edges like ("potato", "needs", "diced"). Our goal is to learn a policy that maps the input knowledge graph to an action from the provided action set A. Each action a_j ∈ A is a short sentence, e.g., "take apple from fridge".

In this setting, we use graph neural networks (GNNs) as the action pruner f_p, and a rule-based model as the action selector f_s. We elaborate the details of the action pruner and the action selector in Sec 3.1 and Sec 3.2, respectively.

Training the GNN-based action pruner and the rule-based action selector by reinforcement learning is nontrivial, since the whole pipeline is not end-to-end differentiable. Therefore, we propose to learn both models separately from a demonstration dataset, where the demonstration dataset can be obtained from a trained reinforcement learning agent. This process is inspired by existing works like (Bastani et al., 2018; Sun et al., 2018; Mu et al., 2020), where policies trained by reinforcement learning are refactored into other forms. The demonstration dataset is denoted as D = {(s_i, π(s_i), k_i)}_{i=1}^N, where k_i is the index of the action type. In addition, we split the demonstration dataset based on the action types to get K subsets of the original demonstration dataset, {D_1, D_2, ..., D_K}, where D_k = {(s_i, π(s_i), k_i) | k_i = k}. We elaborate the details of demonstration preparation in Sec B.1.

3.1 GNN-BASED ACTION PRUNER

The action pruner needs to output an action type based on the input knowledge graph, so it is essentially a classifier. Given the demonstration dataset D = {(s_i, k_i)}_{i=1}^N obtained in the last step, we want to train a classifier f_p(s; θ) = k, where k ∈ {1, 2, ..., K} is the index of the action type. This is a conventional classification problem, which can be solved by minimizing the cross-entropy loss: θ* = argmin_θ −Σ_i Σ_{j=1}^K k_i^j log(f_θ^j(s_i)), where f_θ(s_i) outputs a probability distribution over the K action types, f_θ^j(s_i) denotes the j-th action type's probability, and k_i^j ∈ {0, 1} denotes whether action type j was chosen in the demonstration dataset at state i.
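A minimal sketch of this classification step is shown below, assuming PyTorch and some graph encoder that maps a knowledge graph to a fixed-size embedding. The encoder and data loader are placeholders (the paper uses R-GCN-style networks; any graph encoder with this interface would fit the sketch).

```python
import torch
import torch.nn as nn

class ActionPruner(nn.Module):
    """f_p(s; theta): knowledge graph -> distribution over K action types."""

    def __init__(self, graph_encoder: nn.Module, embed_dim: int, num_types: int):
        super().__init__()
        self.graph_encoder = graph_encoder       # e.g., an R-GCN; assumed given
        self.head = nn.Linear(embed_dim, num_types)

    def forward(self, graph) -> torch.Tensor:
        h = self.graph_encoder(graph)            # graph-level embedding
        return self.head(h)                      # logits over the K action types

def train_pruner(pruner: ActionPruner, demo_loader, epochs: int = 10, lr: float = 1e-3):
    """Minimize the cross-entropy loss over the demonstration dataset D."""
    opt = torch.optim.Adam(pruner.parameters(), lr=lr)
    # CrossEntropyLoss applies softmax internally, matching
    # -sum_j k_i^j log f_theta^j(s_i) for one-hot targets k_i.
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for graph, type_idx in demo_loader:      # (s_i, k_i) pairs
            loss = loss_fn(pruner(graph), type_idx)
            opt.zero_grad()
            loss.backward()
            opt.step()
```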
3.2 RULE-BASED ACTION SELECTOR

3.2.1 ABSTRACT SUPPORTING EDGE SETS

When the input is a knowledge graph, the action is naturally strongly correlated with some critical edges. For example, ("potato", "needs", "diced") and ("potato", "in", "player") can lead to the action "dice potato". We refer to those decisive edges correlated with an action as the supporting edge set of this action. Since we have grouped actions by their semantic meanings, actions within each action type are actually supported by similar edges. For example, "dice potato" is supported by ("potato", "in", "player"), and "dice apple" is supported by ("apple", "in", "player"). As mentioned in Sec B.2, each action type comes with an action template like "dice object". Based on the action template, we can perform a form of abstraction. For example, given an input knowledge graph labeled with the action "dice apple", we can replace every occurrence of "apple" in the graph edges and the action with an abstract name "object". Then we can say the action "dice object" is essentially supported by ("object", "in", "player"), where the two "object" placeholders should be instantiated by the same word. Under this kind of abstraction, different actions within the same action type can share the same abstract supporting edge set, which contains edges with abstract names.

The abstract supporting edge set indicates the decisive edges for an action type, and it can be instantiated for each specific action. For example, to check whether the action "dice apple" should be executed, the abstract edge ("object", "in", "player") will be instantiated to the edge ("apple", "in", "player"). Then, the existence of ("apple", "in", "player") in the input knowledge graph becomes evidence for selecting the action "dice apple". We aim at finding an abstract supporting edge set for each action type, which will be used during inference.

3.2.2 MINE ABSTRACT SUPPORTING EDGE SETS FROM DEMONSTRATIONS

Finding the abstract supporting edge set for each action type is essentially a rule mining process. There are several off-the-shelf rule miners like FP-Growth (Han et al., 2004), Apriori (Agrawal et al., 1994), and Eclat (Zaki, 2000), but they are not designed for knowledge graphs. Thus, we propose a simple yet effective rule miner for our setting to discover the supporting edge sets.

To find the abstract supporting edge set ASE(A_k) for each action type A_k, we design a numerical statistic intended to reflect the importance of an edge when taking an action a ∈ A; this statistic is inspired by tf-idf (Rajaraman & Ullman, 2011). Formally, for action type A_k, we have a subset of the demonstration dataset D_k = {s_i} (ignoring π_i here). Under the abstraction mentioned above, we can count the edge frequency for every (abstract) edge e in D_k: freq_k(e) = |{s_i | s_i ∈ D_k, e ∈ s_i}| / |D_k|. Similarly, we can also count the edge frequency over the entire demonstration dataset: freq(e) = |{s_i | s_i ∈ D, e ∈ s_i}| / |D|. Finally, we can define an importance score of an edge with respect to the action type A_k: I_k(e) = freq_k(e) · log(1 / freq(e)), where the term freq_k(e) is similar to the term frequency (tf), and the term log(1 / freq(e)) is similar to the inverse document frequency (idf). We then obtain ASE(A_k) by selecting the edges with importance higher than a threshold, i.e., ASE(A_k) = {e | I_k(e) > τ}, where τ is a hyperparameter shared across all action types.

3.2.3 INFERENCE BASED ON SUPPORTING EDGES

During inference, we use the supporting edge sets to score each action within the action type provided by the action pruner, and select the action with the highest score. Fig. 2 shows a concrete example of using a supporting edge set to score an action. Firstly, the action pruner outputs an action type based on the input KG (knowledge graph), and we retrieve the abstract supporting edge set of this action type. Secondly, given an action within the action type, we instantiate the abstract supporting edge set to a specific supporting edge set by replacing the abstract names with concrete words based on the action; e.g., ("object", "in", "player") will be instantiated to ("potato", "in", "player") if the action is "cook potato with oven". Finally, we compare the input KG with the supporting edge set of each action to find which supporting edge set is best covered by the input KG. The number of overlapping edges between the supporting edge set and the input KG is regarded as the score of the action. Formally, the inference process of our rule-based action selector can be described as f_s(s, A) = argmax_{a ∈ A} |s ∩ SE(a)|, where SE(a) is the supporting edge set associated with action a, obtained by instantiating the abstract supporting edge set of action type A.

Figure 1: General framework of the two-step hybrid decision making (state → NN-based action pruner → action type → rule-based action selector → action).

Figure 2: An example of using a supporting edge set to score an action. Based on the input knowledge graph (e.g., [player, at, kitchen], [potato, needs, roasted], [fridge, is, open], [potato, in, player]), the action pruner first predicts the action type (e.g., cook) that needs to be taken at the current state, then the abstract supporting edge set (e.g., [object, needs, cooking_method], [object, in, player]) is instantiated to a concrete supporting edge set (e.g., [potato, needs, roasted], [potato, in, player]). Comparing the input knowledge graph with the supporting edge set, we can compute an action score for each action (here, 2 for "cook potato with oven") and select the action with the highest score accordingly.
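The following is a small, self-contained sketch of the rule miner (Sec 3.2.2) and the inference-time scoring (Sec 3.2.3). It is our own illustration under stated assumptions: edges are plain (head, relation, tail) tuples, states are sets of such tuples, the abstraction step is assumed to have been applied when building demo_by_type, and `instantiate` is a hypothetical helper that grounds abstract edges for a given action.

```python
import math
from collections import Counter
from typing import Dict, FrozenSet, List, Set, Tuple

Edge = Tuple[str, str, str]          # (head, relation, tail), possibly abstracted
State = FrozenSet[Edge]              # a knowledge graph, viewed as a set of edges

def mine_ase(demo_by_type: Dict[int, List[State]], tau: float) -> Dict[int, Set[Edge]]:
    """Mine the abstract supporting edge set ASE(A_k) for every action type k."""
    all_states = [s for states in demo_by_type.values() for s in states]
    # freq(e): fraction of all demonstration states containing edge e
    global_counts = Counter(e for s in all_states for e in s)
    freq = {e: c / len(all_states) for e, c in global_counts.items()}

    ase = {}
    for k, states in demo_by_type.items():
        counts = Counter(e for s in states for e in s)
        freq_k = {e: c / len(states) for e, c in counts.items()}
        # I_k(e) = freq_k(e) * log(1 / freq(e)): a tf-idf-like importance score
        ase[k] = {e for e in freq_k
                  if freq_k[e] * math.log(1.0 / freq[e]) > tau}
    return ase

def select_action(state: State, actions: List[str], ase_k: Set[Edge],
                  instantiate) -> str:
    """f_s(s, A_k) = argmax_a |s ∩ SE(a)|; `instantiate` grounds abstract edges."""
    return max(actions, key=lambda a: len(state & instantiate(ase_k, a)))
```

Note that an edge appearing in every demonstration state gets importance 0 (log 1 = 0), which is exactly the idf-style discounting of uninformative edges.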
4 EXPERIMENTS

4.1 DATASET SETUP

We evaluate our method on TextWorld (Côté et al., 2018), which is a framework for designing text-based interactive games. More specifically, we use the TextWorld games generated by GATA (Adhikari et al., 2020). In these games, the agent is asked to cook a meal according to given recipes. This requires the agent to navigate among different rooms to locate and collect the food ingredients specified in the recipe, process the food ingredients appropriately, and finally cook and eat a meal.

The state received by the agent is a knowledge graph describing all the necessary information about the game. All the nodes and edges in the knowledge graph are represented in text. Figure 3 shows a partial example of an input knowledge graph. The actions are also represented in text. Note that the number of available actions varies from state to state, so most existing network architectures used in deep reinforcement learning cannot be directly applied here.

The games have four different difficulty levels, and each difficulty level contains 20 training, 20 validation, and 20 test environments, which are sampled from a distribution based on the difficulty level. The higher the difficulty level, the more complicated the recipe and the more rooms the food ingredients are distributed among. Statistics of the games are shown in Table 2. For evaluating model generalizability, we select the top-performing agent on the validation sets and report its test scores; all validation and test games are unseen in the training set.

4.2 RESULTS

We demonstrate the advantages of our method from three different perspectives: interpretability, generalization, and robustness.

4.2.1 INTERPRETABILITY

The interpretability of our two-step policy is two-fold: 1) the decision-making process itself is a transparent two-step procedure; 2) the rule-based model makes decisions in a way that is easy for humans to interpret. Table 3 shows some representative rules discovered by our rule miner. We observed that all four discovered abstract supporting edges are indeed prerequisites of the "cut" actions. In particular, the abstract supporting edge ("object", "needs", "verb passive") is a crucial prerequisite for the agent to select the correct verb in the "cut" actions. For example, if the input knowledge graph contains the edge ("potato", "needs", "sliced"), then the action "slice potato with knife" will receive one more point than the alternatives, because this edge is in the supporting edge set of the action "slice potato with knife". Since we can clearly see that the rule-based model makes decisions based on these rules, it is not hard to check whether the model works correctly. In this way, the agent can reliably select the correct "cut" action even in unseen test environments.

4.2.2 GENERALIZATION

| Method     | Train D1 | Train D2 | Train D3 | Train D4 | Test D1 | Test D2 | Test D3 | Test D4 |
|------------|----------|----------|----------|----------|---------|---------|---------|---------|
| GATA-GTF   | 98.6     | 58.4     | 95.6     | 36.1     | 83.8    | 53      | 33.3    | 23.6    |
| Vanilla RL | 100      | 100      | 98.3     | 100      | 83.8    | 68      | 50      | 30.9    |
| Ours       | 100      | 100      | 100      | 65.5     | 100     | 100     | 51.7    | 49.7    |

Table 1: Evaluation results on both training environments and test environments in TextWorld (Dk = difficulty level k). The numbers show the agent's normalized scores.

We compare with two baselines: GATA-GTF and vanilla RL. GATA-GTF is a variant of GATA (Adhikari et al., 2020), and it uses the same ground-truth graph input as our method. GATA-GTF processes the input graphs with relational graph convolutional networks (R-GCNs) (Schlichtkrull et al., 2018), and the whole pipeline is trained by DQN (Mnih et al., 2015). Vanilla RL is essentially the same method as GATA-GTF, but implemented by ourselves.
With better implementation and hyperparameter tuning, it performs better than the implementation released by the GATA authors. Vanilla RL also serves as the teacher policy used to generate demonstrations for our method.

Table 1 shows the normalized scores of the different methods on both training and test environments in TextWorld. The result of GATA-GTF is obtained from its paper. In all environments, our agent achieves better generalization performance than both the vanilla RL baseline (which is our RL teacher) and GATA-GTF. Our agent generalizes well to the unseen test environments at all difficulty levels, while GATA-GTF and vanilla RL perform poorly in unseen test environments.

4.2.3 ROBUSTNESS

As mentioned above, our method also aims at robustness. To evaluate the robustness of our models, we add different levels of noise to the input knowledge graphs and measure the performance of the agents under the noisy inputs. In this paper, we define noise on a knowledge graph as adding additional edges to the graph or dropping existing edges from the graph. The formal definition can be found in F.1.

Table 4 shows the performance of vanilla RL and our method under noisy input graphs generated in the above-mentioned way. Note that we test robustness in training environments instead of test environments, because the performance of both agents in test environments is not good enough, and examining the robustness of poorly performing agents does not make much sense.

We observed that at difficulty level 1, both agents are very robust under the noisy inputs. However, at difficulty levels 2, 3 and 4, the performance of the RL agent is hurt considerably by the input noise, especially at difficulty level 4. In contrast, the agents obtained by our method still perform well, and their performance drops significantly less than that of the RL agents.

### Review Title
Review

### Review Text
The paper is well-written and the main approach could be understood.

Pros:
* The approach of separating out a policy into a graph-based classification problem and then a rule-mining-based approach seems novel.
* The experiment results are promising.

Cons:
* A discussion of how such an approach could translate to standard RL benchmarks would have been valuable.
* A discussion of how the approach could extend to non-text-based domains would also have been valuable.

### Review Rating
2: Good workshop paper, accept

### Review Confidence
2: The reviewer is somewhat confident that the evaluation is correct
ryxWIgBFPS
ICLR.cc/2020/Conference
2020
A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms
["Yoshua Bengio", "Tristan Deleu", "Nasim Rahaman", "Nan Rosemary Ke", "Sebastien Lachapelle", "Olexa Bilaniuk", "Anirudh Goyal", "Christopher Pal"]
We propose to use a meta-learning objective that maximizes the speed of transfer on a modified distribution to learn how to modularize acquired knowledge. In particular, we focus on how to factor a joint distribution into appropriate conditionals, consistent with the causal directions. We explain when this can work, using the assumption that the changes in distributions are localized (e.g. to one of the marginals, for example due to an intervention on one of the variables). We prove that under this assumption of localized changes in causal mechanisms, the correct causal graph will tend to have only a few of its parameters with non-zero gradient, i.e. that need to be adapted (those of the modified variables). We argue and observe experimentally that this leads to faster adaptation, and use this property to define a meta-learning surrogate score which, in addition to a continuous parametrization of graphs, would favour correct causal graphs. Finally, motivated by the AI agent point of view (e.g. of a robot discovering its environment autonomously), we consider how the same objective can discover the causal variables themselves, as a transformation of observed low-level variables with no causal meaning. Experiments in the two-variable case validate the proposed ideas and theoretical results.
["meta-learning", "transfer learning", "structure learning", "modularity", "causality"]
ABSTRACT

We propose to use a meta-learning objective that maximizes the speed of transfer on a modified distribution to learn how to modularize acquired knowledge and discover causal dependencies. In particular, we focus on how to factor a joint distribution into appropriate conditionals, consistent with the causal directions. To replace the assumption that the test cases are of the same distribution as the training examples, this method exploits the assumption that the changes in distributions are localized (e.g. to one of the marginals, for example due to an intervention on a cause). We prove that under this assumption of localized changes in causal mechanisms, the correct causal graph will tend to have only a few of its parameters with non-zero gradient, i.e. that need to be adapted (those of the modified variables). We argue and observe experimentally that this leads to faster adaptation, and use this property to define a meta-learning surrogate score which, in addition to a continuous parametrization of graphs, would favour correct causal graphs, making it possible to discover causal structure by gradient-based methods. Finally, motivated by the AI agent point of view (e.g. of a robot discovering its environment autonomously), we consider how the same objective can discover the causal variables themselves, as a transformation of observed low-level variables with no causal meaning. Experiments in the two-variable case validate the proposed ideas and theoretical results.

1 INTRODUCTION

The data used to train our models is often assumed to be independent and identically distributed (iid.), according to some unknown distribution. Likewise, the performance of a machine learning model is typically evaluated using test samples from the same distribution, assumed to be representative of the learned system's usage. While these assumptions are well analyzed from a statistical point of view, they are rarely satisfied in many real-world applications. For example, an accident on a major highway could completely perturb the trajectories of cars, and a driving policy trained in a static way might not be robust to such changes. Ideally, we would like our models to generalize well and adapt quickly to out-of-distribution data.

However, this comes at a price – in order to successfully transfer to a novel distribution, one might need additional information about these distributions. In this paper, we are not considering assumptions on the data distribution itself, but rather on how it changes (e.g., when going from a training distribution to a transfer distribution, possibly resulting from some agent's actions). We focus on the assumption that the changes are sparse when the knowledge is represented in an appropriately modularized way, with only one or a few of the modules having changed. This is especially relevant when the distributional change is due to actions by one or more agents, because agents intervene at a particular place and time, and this is reflected in the form of the interventions discussed in the causality literature (Pearl, 2009; Peters et al., 2016), where a single causal variable is clamped to a particular value or a random variable.
In general, it is difficult for agents to influence many underlying causal variables at a time, and although this paper is not about agent learning as such, this is a property of the world that we propose to exploit here, to help discover these variables and how they are causally related to each other. In this context, the causal graph is a powerful tool because it tells us how perturbations in the distribution of intervened variables will propagate to all other variables and affect their distributions.

As expected, it is often the case that the causal structure is not known in advance. The problem of causal discovery then entails obtaining the causal graph, a feat which is in general achievable only with strong assumptions. One such assumption is that a learner that has learned to capture the correct structure of the true underlying data-generating process should still generalize to the case where the structure has been perturbed in a certain, restrictive way. This can be illustrated by considering the example of temperature and altitude from Peters et al. (2017): a learner that has learned to capture the mechanisms of atmospheric physics by learning that it makes more sense to predict temperature from the altitude (rather than vice versa) given training data from (say) Switzerland, will still remain valid when tested on out-of-distribution data from a less mountainous country like (say) the Netherlands. It has therefore been suggested that the out-of-distribution robustness of predictive models can be used to guide the inference of the true causal structure (Peters et al., 2016; 2017).

How can we exploit the assumption of localized change? As we explain theoretically and verify experimentally here, if we have the right knowledge representation, then we should get fast adaptation to the transfer distribution when starting from a model that is well trained on the training distribution. This arises because of our assumption that the ground truth data generative process is obtained as the composition of independent mechanisms, and that very few ground truth mechanisms and parameters need to change when going from the training distribution to the transfer distribution. A model capturing a corresponding factorization of knowledge would thus require just a few updates, a few examples, for this adaptation to the transfer distribution. As shown below, the expected gradient on the unchanged parameters would be near 0 (if the model was already well trained on the training distribution), so the effective search space during adaptation to the transfer distribution would be greatly reduced, which tends to produce faster adaptation, as found experimentally. Thus, based on the assumption of small change in the right knowledge representation space, we can define a meta-learning objective that measures the speed of online adaptation in order to optimize the way in which knowledge should be represented, factorized and structured. This is the core idea presented in this paper.

Returning to the example of temperature and altitude: when presented with out-of-distribution data from the Netherlands, we expect the correct model to adapt faster given a few transfer samples of actual weather data collected in the Netherlands.
Analogous to the case of robustness, the adaptation speed can then be used to guide the inference of the true causal structure of the problem at hand, possibly along with other sources of signal about causal structure.

Contributions. We first verify on synthetic data that the model that correctly captures the underlying causal structure adapts faster when presented with data sampled after performing certain interventions on the true two-variable causal graph (which is unknown to the learner). This suggests that the adaptation speed can indeed function as a score to assess how well the learner fits the underlying causal graph. We then use a smooth parameterization of the considered causal graph to directly optimize this score in an end-to-end gradient-based manner. Finally, we show in a simple setting that the score can be exploited to disentangle the correct causal variables given an unknown mixture of the said variables.

2 WHICH IS CAUSE AND WHICH IS EFFECT?

As an illustrative example of the proposed ideas, let us consider two discrete random variables A and B, each taking N possible values. We assume that A and B are correlated, without any hidden confounder. Our goal is to determine whether the underlying causal graph is A→B (A causes B), or B→A. Note that this underlying causal graph cannot be identified from observational data from a single (training) distribution p only, since both graphs are Markov equivalent for p (Verma & Pearl, 1991); see Appendix A. In order to disambiguate between these two hypotheses, we will use samples from some transfer distribution ~p in addition to our original samples from the training distribution p.

2.1 THE ADVANTAGE OF THE CORRECT CAUSAL MODEL

Without loss of generality, we can fix the true causal graph to be A→B, which is unknown to the learner. Moreover, to make the case stronger, we will consider a setting called covariate shift (Rojas-Carulla et al., 2018; Quionero-Candela et al., 2009), where we assume that the change (again, whose nature is unknown to the learner) between the training and transfer distributions occurs after an intervention on the cause A. In other words, the marginal of A changes, while the conditional p(B|A) does not, i.e. p(B|A) = ~p(B|A). Changes on the cause will be most informative, since they will have direct effects on B. This is sufficient to fully identify the causal graph (Hauser & Bühlmann, 2012).

In order to demonstrate the advantage of choosing the causal model A→B over the anti-causal B→A, we can compare how fast the two models can adapt to samples from the transfer distribution ~p. We quantify the speed of adaptation as the log-likelihood after multiple steps of fine-tuning via (stochastic) gradient ascent, starting with both models trained on a large amount of data from the training distribution. In Figure 1 (see Section 3.3 for the experimental setup), we can see that the model corresponding to the underlying causal model adapts faster. Moreover, the difference is more significant when adapting on a small amount of data, of the order of 10 to 30 samples from the transfer distribution. We will make use of this property as a noisy signal to infer the direction of causality, which here is equivalent to choosing how to modularize the joint distribution.

Figure 1: Adaptation to the transfer distribution (average log-likelihood of the model during fine-tuning adaptation to transfer examples, vertical axis), as more transfer examples are seen by the learner (horizontal axis). Curves are shown for both hypotheses, A→B and B→A; they are the median over 20,000 runs, with their 25-75th quantile intervals. The dotted line is the asymptotic log-likelihood (here, that of the ground truth ~p). The red region corresponds to the range where the effect is the most significant (10-30 samples from the transfer distribution).
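This experiment is easy to reproduce in miniature. Below is our own illustrative reconstruction (not the authors' released code) for two discrete variables with tabular softmax models: fit both factorizations on the training distribution, intervene on the marginal of A, then fine-tune each model sample-by-sample on transfer data while accumulating the online log-likelihood. Learning rate and smoothing are arbitrary choices, and a single run is noisy (the paper reports medians over 20,000 runs).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Ground truth A -> B: random marginal p(A) and conditional p(B|A)
p_a = rng.dirichlet(np.ones(N))
p_ba = rng.dirichlet(np.ones(N), size=N)            # one row per value of A

def sample(p_marg, p_cond, n):
    x = rng.choice(N, size=n, p=p_marg)
    y = np.array([rng.choice(N, p=p_cond[xi]) for xi in x])
    return x, y

a, b = sample(p_a, p_ba, 50_000)                    # large training set

def tabular_logits(x, y):
    """ML fit of log p(x) and log p(y|x) as logit tables (Laplace-smoothed)."""
    c = np.ones((N, N))
    np.add.at(c, (x, y), 1)
    return np.log(c.sum(1) / c.sum()), np.log(c / c.sum(1, keepdims=True))

def online_loglik(x, y, params, lr=1.0):
    """Prequential score: evaluate each sample, then take one ascent step."""
    lm, lc = params
    total = 0.0
    for xi, yi in zip(x, y):
        pm, pc = softmax(lm), softmax(lc[xi])
        total += np.log(pm[xi]) + np.log(pc[yi])
        lm += lr * (np.eye(N)[xi] - pm)             # ascent on log p(x)
        lc[xi] += lr * (np.eye(N)[yi] - pc)         # ascent on log p(y|x)
    return total

# Intervention on the cause A: new marginal, same conditional p(B|A)
a_t, b_t = sample(rng.dirichlet(np.ones(N)), p_ba, 30)

ll_causal = online_loglik(a_t, b_t, list(tabular_logits(a, b)))  # model A -> B
ll_anti   = online_loglik(b_t, a_t, list(tabular_logits(b, a)))  # model B -> A
print("A->B:", ll_causal, " B->A:", ll_anti)        # causal model tends to adapt faster
```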
2.2 PARAMETER COUNTING ARGUMENT

A simple parameter counting argument can help us understand what we are observing in Figure 1. Since we are using gradient ascent for the adaptation, let's first inspect how the gradients of the log-likelihood wrt. each module behave under the transfer distribution.

Proposition 1. Let G be a causal graph, and p a (training) distribution that factorizes according to G, with parameters θ. Let ~p be a second (transfer) distribution that also factorizes according to G. If the training and transfer distributions have the same conditional probability distributions for all V_i but a subset C (e.g. the transfer distribution is the result of an intervention on the nodes in C):

p(V_i | Pa_G(V_i)) =_d ~p(V_i | Pa_G(V_i))  ∀ V_i ∉ C    (1)

then the expected gradient w.r.t. the parameters θ_i such that V_i ∉ C of the log-likelihood under the transfer distribution will be zero:

∀ V_i ∉ C,  E_{V ∼ ~p}[∂ log p(V; θ) / ∂θ_i] = 0.    (2)

Proposition 1 (see proof in Appendix B.1) suggests that if both distributions factorize according to the correct causal graph, then only the parameters of the mechanisms that changed between the training and transfer distributions need to be updated. This effectively reduces the number of parameters that need to be adapted compared to any other factorization over a different graph. It also affects the number of examples necessary for the adaptation, since the sample complexity of a model grows approximately linearly with the VC-dimension (Ehrenfeucht et al., 1989; Vapnik & Chervonenkis, 1971), which itself also grows approximately linearly with the number of parameters (for linear models and neural networks; Shalev-Shwartz & Ben-David, 2014). Therefore we argue that the performance on the transfer distribution (in terms of log-likelihood) will tend to improve faster if it factorizes according to the correct causal graph, an assertion which may not be true for every graph but that we can test by simulations.

Recall that in our example on two discrete random variables (each taking say N values), we assumed that the underlying causal model is A→B, and the transfer distribution is the result of an intervention on the cause A. If the model we learn on the training distribution factorizes according to the correct graph, then only N−1 free parameters should be updated to adapt to the shifted distribution, accounting for the change in the marginal distribution ~p(A), since the conditional ~p(B|A) = p(B|A) stays invariant. On the other hand, if the model factorizes according to the anti-causal graph B→A, then the parameters for both the marginal ~p(B) and the conditional ~p(A|B) must be adapted. Assuming there is a linear relationship between sample complexity and the number of free parameters, the sample complexity would be O(N^2) for the anti-causal graph, compared to only O(N) for the true underlying causal graph A→B.
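Proposition 1 can also be checked numerically by Monte Carlo. Under an intervention on A, the expected gradient of the log-likelihood with respect to the logits of p(B|A) vanishes at the training optimum, while the gradient for p(A)'s logits does not. A hedged sketch (tabular softmax parameterization, our own construction):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
p_a   = rng.dirichlet(np.ones(N))            # training marginal p(A)
p_ba  = rng.dirichlet(np.ones(N), size=N)    # p(B|A), unchanged by the intervention
p_a_t = rng.dirichlet(np.ones(N))            # intervened marginal ~p(A)

# Draw samples from the transfer distribution ~p (intervention on the cause A)
n = 200_000
a = rng.choice(N, size=n, p=p_a_t)
b = np.array([rng.choice(N, p=p_ba[ai]) for ai in a])

# Per-sample gradients of log p(A,B; theta) wrt softmax logits, evaluated at
# the training optimum (model probabilities equal p_a and p_ba exactly):
g_marg = np.eye(N)[a].mean(0) - p_a           # E_~p ≈ ~p(A) - p(A): nonzero
g_cond = np.zeros((N, N))
np.add.at(g_cond, a, np.eye(N)[b] - p_ba[a])  # rows indexed by the observed A
g_cond /= n                                   # E_~p ≈ 0, as in Eq. (2)

print(np.abs(g_marg).max())                   # clearly nonzero
print(np.abs(g_cond).max())                   # ~0, up to Monte Carlo noise
```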
3 THE META-TRANSFER OBJECTIVE

Since the speed of adaptation to some transfer distribution is closely related to the right modularization of knowledge, we propose to use it as a noisy signal to iteratively improve inference of the causal structure from data. Moreover, we saw in Figure 1 that the gap between correct and incorrect models is largest with a small amount of transfer data. In order to compare how fast some models adapt to a change in distribution, we can quantify the speed of adaptation based on their accumulated online performance after fine-tuning with gradient ascent on a few examples from the transfer distribution. More precisely, given a small "intervention" dataset D_int = {x_t}_{t=1}^T from ~p, we can define the online likelihood as

L_G(D_int) = ∏_{t=1}^T p(x_t; θ_G^(t), G),  where θ_G^(1) = θ̂_G^ML(D_obs) and θ_G^(t+1) = θ_G^(t) + α ∇_θ log p(x_t; θ_G^(t), G),    (3)

where θ_G^(t) aggregates all the modules' parameters in G after t steps of fine-tuning with gradient ascent, with learning rate α, starting from the maximum-likelihood estimate θ̂_G^ML(D_obs) on a large amount of data D_obs from the training distribution p. Note that, in addition to its contribution to the update of the parameters, each data point x_t is also used to evaluate the performance of our model so far; this is called a prequential analysis (Dawid, 1984), also corresponding to sequential cross-validation (Gingras et al., 1999). From a structure learning perspective, the online likelihood (or, equivalently, its logarithm) can be interpreted as a score we would like to maximize, in order to recover the correct causal graph.

3.1 CONNECTION TO THE BAYESIAN SCORE

We can draw an interesting connection between the online log-likelihood and a widely used score in structure learning called the Bayesian score (Heckerman et al., 1995; Geiger & Heckerman, 1994). The idea behind this score is to treat the problem of learning the structure from a fully Bayesian perspective. If we define a prior over graphs p(G) and a prior p(θ_G | G) over the parameters of each graph G, the Bayesian score is defined as score_B(G; D_int) = log p(D_int | G) + log p(G), where p(D_int | G) is the marginal likelihood

p(D_int | G) = ∏_{t=1}^T p(x_t | x_1, ..., x_{t−1}, G) = ∏_{t=1}^T ∫ p(x_t | θ_G, G) p(θ_G | x_{1:t−1}, G) dθ_G.    (4)

In the online likelihood, the adapted parameters θ_G^(t) act as a summary of past data x_{1:t−1}. Eq. (3) can be seen as an approximation of the marginal likelihood in Eq. (4), where the posterior over the parameters p(θ_G | x_{1:t−1}, G) is approximated by the point estimate θ_G^(t). Therefore, the online log-likelihood provides a simple way to approximate the Bayesian score, which is often intractable.
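Equation (3) translates directly into code: accumulate the log-likelihood of each transfer sample before using it for a gradient step. A hedged PyTorch sketch, where `model` is any module exposing a log_prob method (an assumed interface, not the authors' code) with parameters pretrained to the MLE on the training data:

```python
import torch

def online_log_likelihood(model: torch.nn.Module, transfer_data, lr: float = 0.1) -> float:
    """Eq. (3): prequential score log L_G(D_int) under per-sample adaptation."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    total = 0.0
    for x in transfer_data:
        log_p = model.log_prob(x)
        total += log_p.item()        # evaluate first (prequential analysis) ...
        opt.zero_grad()
        (-log_p).backward()          # ... then take one gradient-ascent step
        opt.step()
    return total
```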
3.2 A SMOOTH PARAMETRIZATION OF THE CAUSAL STRUCTURE

Due to the super-exponential number of possible Directed Acyclic Graphs (DAGs) over n nodes, the problem of searching for a causal structure that maximizes some score is, in general, NP-hard (Chickering, 2002a). However, we can parametrize our belief about causal graphs by keeping track of the probability for each directed edge to be present. This provides a smooth parametrization of graphs, which hinges on gradually changing our belief in individual binary decisions associated with each edge of the causal graph. This allows us to define a fully differentiable meta-learning objective, with all the beliefs being updated at the same time by gradient descent.

In this section, we study the simplest version of this idea, applied to our example on two random variables from Section 2. Recall that here, we only have two hypotheses to choose from: either A→B or B→A. We represent our belief of having an edge connecting A to B with a structural parameter γ such that p(A→B) = σ(γ), where σ(x) = 1/(1 + exp(−x)) is the sigmoid function. We propose, as a meta-transfer objective, the negative log-likelihood R (a form of regret) over the mixture of these two models, where the mixture parameter is given by σ(γ):

R(D_int) = −log [ σ(γ) L_{A→B}(D_int) + (1 − σ(γ)) L_{B→A}(D_int) ]    (5)

This meta-learning mixture combines the online adaptation likelihoods of each model over one meta-example or episode (specified by a D_int ∼ ~p), rather than considering and linearly mixing the per-example likelihoods as in ordinary mixtures.

In the experiments below, after each episode involving T examples D_int from the transfer distribution ~p, we update γ by doing one step of gradient descent, to reduce the regret R. Therefore, in order to update our belief about the edge A→B, the quantity of interest is the gradient of the objective R with respect to the structural parameter, ∂R/∂γ. This gradient pushes σ(γ) towards the posterior probability that the correct model is A→B, given the evidence from the transfer data:

Proposition 2. The gradient of the negative log-likelihood of the transfer data D_int in Equation (5) wrt. the structural parameter γ is given by

∂R/∂γ = σ(γ) − p(A→B | D_int),    (6)

where p(A→B | D_int) is the posterior probability of the hypothesis A→B (when the alternative is B→A). Furthermore, this can be equivalently written as

∂R/∂γ = σ(γ) − σ(γ + Δ),    (7)

where Δ = log L_{A→B}(D_int) − log L_{B→A}(D_int) is the difference between the online log-likelihoods of the two hypotheses on the transfer data D_int.

The proof is given in Appendix B.2. Note how the posterior probability is basically measuring which hypothesis better explains the transfer data D_int overall, along the adaptation trajectory. This posterior depends on the difference in online log-likelihoods Δ, showing the close relation between minimizing the regret R and maximizing the online log-likelihood score. The sign and magnitude of Δ have a direct effect on the convergence of the meta-transfer objective. We can show that the meta-transfer objective is guaranteed to converge to one of the two hypotheses.

Proposition 3. With stochastic gradient descent (and an appropriately decreasing learning rate) on E_{D_int}[R(D_int)], where the gradient steps are given by Proposition 2, the structural parameter converges towards

σ(γ) → 1 if E_{D_int}[L_{A→B}(D_int)] > E_{D_int}[L_{B→A}(D_int)], or σ(γ) → 0 otherwise.    (8)

This proposition (proved in Appendix B.3) shows that optimizing γ is equivalent to picking the hypothesis that has the smallest regret (or fastest convergence), measured as the accumulated log-likelihood of the transfer dataset D_int during adaptation. The distribution over datasets D_int is similar to a distribution over tasks in meta-learning. This analogy with meta-learning also appears in our gradient-based adaptation procedure, which is linked to existing methods like the first-order approximation of MAML (Finn et al., 2017), and its related algorithms (Grant et al., 2018; Kim et al., 2018; Finn et al., 2018).
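Proposition 2 makes the structural update cheap: only the two online log-likelihoods are needed, and the gradient in Eq. (7) can be computed in closed form rather than by autodiff. A minimal sketch of one meta-episode's γ update (building on the online_log_likelihood helper sketched above; the function names are our own):

```python
import math

def sigmoid(x: float) -> float:
    """Numerically stable logistic function."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

def update_gamma(gamma: float, logL_ab: float, logL_ba: float,
                 meta_lr: float = 0.1) -> float:
    """One gradient-descent step on the regret R, using Eq. (7):
    dR/dgamma = sigmoid(gamma) - sigmoid(gamma + Delta)."""
    delta = logL_ab - logL_ba          # difference of online log-likelihoods
    grad = sigmoid(gamma) - sigmoid(gamma + delta)
    return gamma - meta_lr * grad      # sigma(gamma) drifts toward the better hypothesis
```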
The pseudo-code for the proposed algorithm is given in Algorithm 1. This smooth parametrization of the causal graph, along with the definition of the meta-transfer objective in Equation (5), can be extended to graphs with more than 2 variables. This general formulation builds on the bivariate case, where decisions are binary for each individual edge of the graph. See Appendix E for details and a generalization of Proposition 2; the structure of Algorithm 1 remains unchanged. Experimentally, this generalization of the meta-transfer objective proved to be effective on larger graphs (Ke et al., 2019), in work following the initial release of this paper.

Algorithm 1: Meta-learning algorithm for learning the structural parameter γ
Require: Two graph candidates G = A→B and G = B→A
Require: A training distribution p that factorizes over the correct causal graph
1: Set the initial structural parameter γ = 0 (equal belief for both hypotheses)
2: Sample a large dataset D_obs from the training distribution p
3: Pretrain the parameters of both models with maximum likelihood on D_obs
4: for each episode do
5:   Draw a transfer distribution ~p (via an intervention)
6:   Sample a (small) transfer dataset D_int = {x_t}_{t=1}^T from ~p
7:   for t = 1, ..., T do
8:     Accumulate the online log-likelihood for both models L_{A→B} and L_{B→A} as they adapt
9:     Do one step of gradient ascent for both models: θ_G^(t+1) = θ_G^(t) + α ∇_θ log p(x_t; θ_G^(t), G)
10:  Compute the regret R(D_int)
11:  Compute the gradient of the regret wrt. γ (see Proposition 2)
12:  Do one step of gradient descent on the regret w.r.t. γ
13:  Reset the models' parameters to the maximum likelihood estimate on D_obs

3.3 EXPERIMENTAL RESULTS

To illustrate the convergence result from Proposition 3, we experiment with learning the structural parameter γ in a bivariate model. Following the setting presented in Section 2.1, we assume in all our experiments that A and B are two correlated random variables, and the underlying causal model (unknown to the algorithm) is fixed to A→B. Recall that both variables are observed, and there is no hidden confounding factor. Since the correct causal model is A→B, the structural parameter should converge correctly, with σ(γ) → 1. The details of the experimental setups, as well as details about the models, can be found in Appendix C.

We first experiment with the case where both A and B are discrete random variables, taking N possible values. In this setting, we explored how two different parametrizations of the conditional probability distributions (CPDs) might influence the convergence of the structural parameter. In the first experiment, we parametrized the CPDs as multinomial logistic CPDs (Koller & Friedman, 2009), maintaining a tabular representation of the conditional probabilities. For example, the conditional distribution p(B|A) is represented as

p(B = j | A = i; θ) = exp(θ_ij) / Σ_k exp(θ_ik),    (9)

where the parameter θ is an N×N matrix. We used a similar representation for the other marginal and conditional distributions p(A), p(B) and p(A|B). In a second experiment, we used structured CPDs, parametrized with multi-layer perceptrons (MLPs) with a softmax nonlinearity at the output layer. The advantage over a tabular representation is the ability to share parameters for similar contexts, which reduces the overall number of parameters required for each module. This would be crucial if either the number of categories N or the number of variables increased significantly.
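For concreteness, here is what the tabular multinomial logistic CPD of Eq. (9) looks like as a PyTorch module. This is a sketch under our own naming; the MLP variant would replace the N×N logit table with a small network producing the same row of logits.

```python
import torch
import torch.nn as nn

class TabularCPD(nn.Module):
    """p(B = j | A = i; theta) = exp(theta_ij) / sum_k exp(theta_ik), Eq. (9)."""

    def __init__(self, num_values: int):
        super().__init__()
        # theta is an N x N matrix of logits, one row per conditioning value i
        self.logits = nn.Parameter(torch.zeros(num_values, num_values))

    def log_prob(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        log_p = torch.log_softmax(self.logits, dim=-1)
        return log_p[a, b]                     # log p(B = b | A = a)
```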
Figure 2: Evolution of the belief σ(γ) that A→B is the correct causal model, as the number of episodes increases, starting with an equal belief for both hypotheses; curves shown for N = 10 and N = 100. (Left) multinomial logistic CPDs, (right) MLP parametrization.

In Figure 2, we show the evolution of σ(γ), which is the model's belief of A→B being the correct causal model, as the number of episodes increases, for different values of N. As expected, the structural parameter converges correctly to σ(γ) → 1, within a few hundred episodes. This observation is consistent in both experiments, regardless of the parametrization of the CPDs. Interestingly, the structural parameter tends to converge faster with a larger value of N and a tabular representation, illustrating the effect of the parameter counting argument described in Section 2.2, which is stronger as N increases. Precisely when generalization is more difficult (too many parameters and too few examples), we get a stronger signal about the better modularization.

We also experimented with A and B being continuous random variables, where they follow either multimodal distributions, or they are linear-Gaussian. Similar to Figure 2, we found that the structural parameter σ(γ) consistently converges to the correct causal model as well. See Appendix C.3 and Appendix C.4 for details about these experiments.

4 REPRESENTATION LEARNING

So far, we have assumed that all the variables in the causal graph are fully observed. However, in many realistic scenarios for learning agents, the learner might only have access to low-level observations (e.g. sensory-level data, like pixels or acoustic samples), which are very unlikely to be individually meaningful as causal variables. In that case, our assumption that the changes in distributions are localized might not hold at this level of observed data. To tackle this, we propose to follow the deep learning objective of disentangling the underlying causal variables (Bengio et al., 2013), and learn a representation in which the variables can meaningfully be cause or effect of each other. Our approach is to jointly learn this representation, as well as the causal graph over the latent variables.

We consider the simplest setting where the learner maps raw observations to a hidden representation space with two causal variables, via an encoder E. The encoder is trained such that this latent space helps to optimize the meta-transfer objective described in Section 3. We consider the parameters of the encoder, as well as γ (see Section 3.2), as part of the set of structural meta-parameters to be optimized. We assume that we have two raw observed variables (X, Y), generated from the true causal variables (A, B) via the action of a ground truth decoder D (or generator network), that the learner is not aware of. This allows us to still have the ability to intervene on the underlying causal variables (e.g. to shift from training to transfer distributions) for the purpose of conducting experiments, while the learner only sees data from (X, Y).

[Figure 3 diagram: data generation (unknown to the learner): (A, B) → Decoder D → (X, Y) → Encoder E → (U, V), with the hypotheses U→V or V→U over the latent variables.]

Figure 3: The complete experimental setup. The ground-truth variables (A, B) are assumed to originate from the true underlying causal model, but the observations available to the learner are samples from (X, Y). The observed variables (X, Y) are derived from (A, B) via the action of a decoder D. The encoder E must be learned to undo this action of the decoder, and thereby recover the true causal variables up to symmetries. The components of the data generation on the left are hidden to the model.

In this experiment, we only want to validate the proposed meta-objective as a way to recover a good encoder, and we assume that both the decoder D and the encoder E are rotations, whose angles are θ_D and θ_E respectively. The encoder maps the raw observed variables (X, Y) to the latent variables (U, V), over which we want to infer the causal graph. Similar to our experiments in Section 3.3, we assume that the underlying causal graph is A→B, and the transfer distribution ~p (now over (X, Y)) is the result of an intervention over A. Therefore, the encoder should ideally recover the structure U→V in the learned latent space, along with the angle of the encoder θ_E = −θ_D. However, since the encoder is not uniquely defined, V→U might also be a valid solution, if the encoder is θ_E = π/2 − θ_D. Details about the experimental setup are provided in Appendix D.
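The rotation setup is easy to reproduce; below is a hedged sketch of the data-generation side and the learnable encoder (our own construction of the setting described above, with θ_E the only encoder parameter and standard-normal stand-ins for the (A, B) samples).

```python
import torch
import torch.nn as nn

def rotation(theta: torch.Tensor) -> torch.Tensor:
    """2x2 rotation matrix for angle theta, kept differentiable."""
    c, s = torch.cos(theta), torch.sin(theta)
    return torch.stack([torch.stack([c, -s]), torch.stack([s, c])])

class RotationEncoder(nn.Module):
    """Maps raw observations (X, Y) to candidate causal variables (U, V)."""

    def __init__(self):
        super().__init__()
        self.theta_e = nn.Parameter(torch.zeros(()))   # structural meta-parameter

    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        return xy @ rotation(self.theta_e).T           # (U, V) = R(theta_E) (X, Y)

# Ground-truth generation, hidden from the learner: (X, Y) = R(theta_D) (A, B)
theta_d = torch.tensor(torch.pi / 4)
ab = torch.randn(128, 2)                               # stand-in samples of (A, B)
xy = ab @ rotation(theta_d).T
```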
The ground-truth variables (A;B)are assumed tooriginate from the true underlying causal model, but the observations available to the learner aresamples from (X;Y ). The observed variables (X;Y )are derived from (A;B)via the action of adecoderD. The encoderEmust be learned to undo this action of the decoder, and thereby recoverthe true causal variables up to symmetries. The components of the data generation on the left arehidden to the model.In this experiment, we only want to validate the proposed meta-objective as a way to recover a goodencoder, and we assume that both the decoder Dand the encoderEare rotations, whose angles areDandErespectively. The encoder maps the raw observed variables (X;Y )to the latent variables(U;V), over which we want to infer the causal graph. Similar to our experiments in Section 3.3, weassume that the underlying causal graph is A!B, and the transfer distribution ~p(now over (X;Y ))is the result of an intervention over A. Therefore, the encoder should ideally recover the structureU!Vin the learned latent space, along with the angle of the encoder E=D. However,since the encoder is not uniquely defined, V!Umight also be a valid solution, if the encoder isE==2D. Details about the experimental setup are provided in Appendix D. In Figure 4,7Published as a conference paper at ICLR 2020we consider that the learner succeeds, since both structural parameters converge to one of the twooptions. This shows how minimizing the meta-transfer objective can disentangle (here in a verysimple setting) the ground-truth variables.0 200 400 600 800 1000Number of episodes−π4−π80π8π4θEθESolution 1/parenleftbig+π4/parenrightbigSolution 2/parenleftbig−π4/parenrightbig0 200 400 600 800 1000Number of episodes0.00.20.40.60.81.0σ(γ)UVUVFigure 4: Evolution of structural parameters Eand, as number of episodes increases. Angle of therotation for the decoder is set to D==4, so there are two valid solutions for the angle Eof theencoder: either E==4, orE==4; the model converges to the former solution.5 R ELATED WORKAs stated already by Bengio et al. (2013), and clearly demonstrated by Locatello et al. (2019),assumptions, priors, or inductive biases are necessary to identify the underlying explanatory variables.The latter paper (Locatello et al., 2019) also reviews and evaluates recent work on disentangling,and discusses different metrics that have been proposed. Chalupka et al. (2015; 2017) recognizethe potential and the challenges underlying causal representation learning. Closely related to ourefforts is (Chalupka et al., 2017), which places a strong focus on the coalescence of low (e.g. sensory)level observations ( microvariables ) to higher level causal variables ( macrovariables ), albeit in a moreobservational setting.There also exists an extensive literature on learning the structure of Bayesian networks from (observa-tional) data, via score-based methods (Koller & Friedman, 2009). Heckerman et al. (1995); Daly et al.(2011) provide a comprehensive review of these methods. Many of these algorithms are based ongreedy-search with local changes to the graphs (Chickering, 2002b), whereas we propose a continuousand fully-differentiable alternative. While most of these approaches only rely on observational data,it is sometimes possible to extend the definition of these scores to interventional data (Hauser &Bühlmann, 2012). 
The online-likelihood score presented here supports interventional data as its main feature.

Some identifiability results exist for causal models with purely observational data though (Peters et al., 2017), based on specific assumptions on the underlying causal graph. However, causal discovery is more natural under local changes in distributions (Tian & Pearl, 2001), similar to the setting used in this paper. Pearl's seminal work on do-calculus (Pearl, 1995; 2009; Bareinboim & Pearl, 2016) lays the foundation for expressing the impact of interventions on causal graphical models. Here we are proposing a meta-learning objective function for learning the causal structure (without hidden variables), requiring mild assumptions such as localized changes in distributions and faithfulness of the causal graph, in contrast to the stronger assumptions necessary for these identifiability results.

Our work is also related to other recent advances in causation, domain adaptation, and transfer learning. Magliacane et al. (2018) have sought to identify a subset of features that leads to the best predictions for a variable of interest in a source domain, such that the conditional distribution of that variable given these features is the same in the target domain. Zhang et al. (2017) also examine non-stationarity and find that it makes causal discovery easier. Our adaptation procedure, using gradient ascent, is also closely related to gradient-based methods in meta-learning (Finn et al., 2017; Finn, 2018). Alet et al. (2018) proposed a meta-learning algorithm to recover a set of specialized modules, but did not establish any connections to causal mechanisms. More recently, Dasgupta et al. (2019) adopted a meta-learning approach to perform causal inference on purely observational data.

6 DISCUSSION & FUTURE WORK

We have established, in very simple bivariate settings, that the rate at which a learner adapts to sparse changes in the distribution of observed data can be exploited to infer the causal structure, and disentangle the causal variables. This relies on the assumption that with the correct causal structure, those distributional changes are localized. We have demonstrated these ideas through some theoretical results, as well as experimental validation. The source code for the experiments is available here: https://bit.ly/2M6X1al.

This work is only a first step in the direction of causal structure learning based on the speed of adaptation to modified distributions. On the experimental side, many settings other than those studied here should be considered, with different kinds of parametrizations, richer and larger causal graphs (see already Ke et al. (2019), based on a first version of this paper), or different kinds of optimization procedures. On the theoretical side, much more needs to be done to formally link the locality of interventions to faster adaptation, to clarify the conditions for this to work. Also, more work needs to be done in exploring how the proposed ideas can be used to learn good representations in which the causal variables are disentangled.
Scaling up these ideas would permit their application towards improving the way learning agents deal with non-stationarities, and thus improving the sample complexity and robustness of these agents.

An extreme view of disentangling is that the explanatory variables should be marginally independent, and many deep generative models (Goodfellow et al., 2016) and Independent Component Analysis models (Hyvärinen et al., 2001; Hyvärinen et al., 2018) are built on this assumption. However, the kinds of high-level variables that we manipulate with natural language are not marginally independent: they are related to each other through statements that are usually expressed in sentences (e.g. a sentence in natural language, or a classical symbolic AI fact or rule), involving only a few concepts at a time. This kind of assumption has been proposed to help discover relevant high-level representations from raw observations, such as the consciousness prior (Bengio, 2017), with the idea that humans focus at any particular time on just a few concepts that are present to our consciousness. The work presented here could provide an interesting meta-learning approach to help learn such encoders outputting causal variables, as well as figure out how the resulting variables are related to each other. In that case, one should distinguish two important assumptions: the first one is that the causal graph is sparse, which is a common assumption in structure learning (Schmidt et al., 2007); the second is that the changes in distributions are sparse, which is the focus of this work.
rkl6smfDtB
Official Blind Review #2
8: Accept
Summary: The paper first shows that, in a very simple two-variable task, the model with the correct underlying structure will adapt faster to a causal intervention than the model with the incorrect structure. This idea is used to develop a “meta-transfer” objective function for which gradient ascent on a continuous representation of the model structure allows learning of that structure. The paper shows that optimizing with respect to this objective with a simple model is guaranteed to converge to the correct structure, and also presents experimental results on toy problems to demonstrate. Overall: Accept. I really enjoyed reading this paper. It is clear, well-motivated, well-written, does a good job of connecting to related work, and presents an interesting method for structure learning. While the experiments are quite toy and questions about how well this will work in more complex models with many variables remain largely unaddressed, these do not detract much from the paper for me. Instead, the paper does a good job of motivating its contribution and exploring its effect in simple intelligible tasks, and I feel I got more out of this paper than most SOTA papers. Clarity: Very clear. Significance: Potentially quite significant as this is starting to bring causal structure learning into the realm of tensorflow and pytorch. Questions and comments: - All else being equal, the speed of adaptation between two very similar models will serve as a good proxy, as shown in this paper. However, I can easily imagine scenarios where the two models one wants to differentiate between are quite different, and have very different optimization landscapes. Here, the speed of adaptation will be quite dependent on these landscapes and not just on the underlying model structure. Do you have thoughts about how this can be extended to such a scenario? - The parameter counting argument is not nearly so strong if what actually changes is the conditional p(A|B). In that case, the sample complexity for the correct model would be N^2 = O(N^2) and for the incorrect model would be N + N^2 = O(N^2). Does the objective still work here? Would be great to add an additional experiment showing the results in this case. - Doing an intervention and drawing a new D_int for each step of gradient descent seems quite prohibitive in a lot of domains. Are there ways to decrease this burden? - In Figure 2, can you speak to why the N=100 curve for the MLP parameterization converges more slowly than the N=10 curve? I would still expect more data to be beneficial here.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms ### Paper Abstract We propose to use a meta-learning objective that maximizes the speed of transfer on a modified distribution to learn how to modularize acquired knowledge. In particular, we focus on how to factor a joint distribution into appropriate conditionals, consistent with the causal directions. We explain when this can work, using the assumption that the changes in distributions are localized (e.g. to one of the marginals, for example due to an intervention on one of the variables). We prove that under this assumption of localized changes in causal mechanisms, the correct causal graph will tend to have only a few of its parameters with non-zero gradient, i.e. that need to be adapted (those of the modified variables). We argue and observe experimentally that this leads to faster adaptation, and use this property to define a meta-learning surrogate score which, in addition to a continuous parametrization of graphs, would favour correct causal graphs. Finally, motivated by the AI agent point of view (e.g. of a robot discovering its environment autonomously), we consider how the same objective can discover the causal variables themselves, as a transformation of observed low-level variables with no causal meaning. Experiments in the two-variable case validate the proposed ideas and theoretical results. ### Paper Keywords ["meta-learning", "transfer learning", "structure learning", "modularity", "causality"] ### Paper Content ABSTRACTWe propose to use a meta-learning objective that maximizes the speed of transferon a modified distribution to learn how to modularize acquired knowledge anddiscover causal dependencies. In particular, we focus on how to factor a jointdistribution into appropriate conditionals, consistent with the causal directions. Toreplace the assumption that the test cases are of the same distribution as the trainingexamples, this method exploits the assumption that the changes in distributionsare localized (e.g. to one of the marginals, for example due to an interventionon a cause). We prove that under this assumption of localized changes in causalmechanisms, the correct causal graph will tend to have only a few of its param-eters with non-zero gradient, i.e. that need to be adapted (those of the modifiedvariables). We argue and observe experimentally that this leads to faster adaptation,and use this property to define a meta-learning surrogate score which, in additionto a continuous parametrization of graphs, would favour correct causal graphs,making it possible to discover causal structure by gradient-based methods. Finally,motivated by the AI agent point of view (e.g. of a robot discovering its environ-ment autonomously), we consider how the same objective can discover the causalvariables themselves, as a transformation of observed low-level variables with nocausal meaning. Experiments in the two-variable case validate the proposed ideasand theoretical results.1 I NTRODUCTIONThe data used to train our models is often assumed to be independent and identically distributed (iid.),according to some unknown distribution. Likewise, the performance of a machine learning model istypically evaluated using test samples from the same distribution, assumed to be representative ofthe learned system’s usage. 
While these assumptions are well analyzed from a statistical point ofview, they are rarely satisfied in many real-world applications. For example, an accident on a majorhighway could completely perturb the trajectories of cars, and a driving policy trained in a static waymight not be robust to such changes. Ideally, we would like our models to generalize well and adaptquickly to out-of-distribution data.However, this comes at a price – in order to successfully transfer to a novel distribution, onemight need additional information about these distributions. In this paper, we are not consideringassumptions on the data distribution itself, but rather on how it changes (e.g., when going from atraining distribution to a transfer distribution, possibly resulting from some agent’s actions). We focuson the assumption that the changes are sparse when the knowledge is represented in an appropriatelymodularized way, with only one or a few of the modules having changed. This is especially relevantwhen the distributional change is due to actions by one or more agents, because agents interveneat a particular place and time, and this is reflected in the form of the interventions discussed inthe causality literature (Pearl, 2009; Peters et al., 2016), where a single causal variable is clampedto a particular value or a random variable. In general, it is difficult for agents to influence manyunderlying causal variables at a time, and although this paper is not about agent learning as such,this is a property of the world that we propose to exploit here, to help discovering these variables1Université de Montréal,2CIFAR Senior Fellow,3École Polytechnique Montréal,4Max-Planck Institutefor Intelligent Systems, Tübingen,5Canada CIFAR AI Chair1Published as a conference paper at ICLR 2020and how they are causally related to each other. In this context, the causal graph is a powerful toolbecause it tells us how perturbations in the distribution of intervened variables will propagate to allother variables and affect their distributions.As expected, it is often the case that the causal structure is not known in advance. The problem ofcausal discovery then entails obtaining the causal graph, a feat which is in general achievable onlywith strong assumptions. One such assumption is that a learner that has learned to capture the correctstructure of the true underlying data-generating process should still generalize to the case where thestructure has been perturbed in a certain, restrictive way. This can be illustrated by considering theexample of temperature and altitude from Peters et al. (2017): a learner that has learned to capture themechanisms of atmospheric physics by learning that it makes more sense to predict temperature fromthe altitude (rather than vice versa) given training data from (say) Switzerland, will still remain validwhen tested on out-of-distribution data from a less mountainous country like (say) the Netherlands. Ithas therefore been suggested that the out-of-distribution robustness of predictive models can be usedto guide the inference of the true causal structure (Peters et al., 2016; 2017).How can we exploit the assumption of localized change? 
As we explain theoretically and verifyexperimentally here, if we have the right knowledge representation, then we should get fast adaptationto the transfer distribution when starting from a model that is well trained on the training distribution.This arises because of our assumption that the ground truth data generative process is obtainedas the composition of independent mechanisms, and that very few ground truth mechanisms andparameters need to change when going from the training distribution to the transfer distribution. Amodel capturing a corresponding factorization of knowledge would thus require just a few updates, afew examples, for this adaptation to the transfer distribution. As shown below, the expected gradienton the unchanged parameters would be near 0 (if the model was already well trained on the trainingdistribution), so the effective search space during adaptation to the transfer distribution would begreatly reduced, which tends to produce faster adaptation, as found experimentally. Thus, basedon the assumption of small change in the right knowledge representation space, we can define ameta-learning objective that measures the speed of online adaptation in order to optimize the way inwhich knowledge should be represented, factorized and structured. This is the core idea presented inthis paper.Returning to the example of temperature and altitude: when presented with out-of-distribution datafrom the Netherlands, we expect the correct model to adapt faster given a few transfer samples ofactual weather data collected in the Netherlands. Analogous to the case of robustness, the adaptationspeed can then be used to guide the inference of the true causal structure of the problem at hand,possibly along with other sources of signal about causal structure.Contributions. We first verify on synthetic data that the model that correctly captures the underlyingcausal structure adapts faster when presented with data sampled after a performing certain interven-tions on the true two-variable causal graph (which is unknown to the learner). This suggests that theadaptation speed can indeed function as a score to assess how well the learner fits the underlyingcausal graph. We then use a smooth parameterization of the considered causal graph to directlyoptimize this score in an end-to-end gradient-based manner. Finally, we show in a simple setting thatthe score can be exploited to disentangle the correct causal variables given an unknown mixture ofthe said variables.2 W HICH IS CAUSE AND WHICH IS EFFECT ?As an illustrative example of the proposed ideas, let us consider two discrete random variables AandB, each taking Npossible values. We assume that AandBare correlated, without any hiddenconfounder. Our goal is to determine whether the underlying causal graph is A!B(AcausesB),orB!A. Note that this underlying causal graph cannot be identified from observational data froma single (training) distribution ponly, since both graphs are Markov equivalent for p(Verma & Pearl,1991); see Appendix A. In order to disambiguate between these two hypotheses, we will use samplesfrom some transfer distribution ~pin addition to our original samples from the training distribution p.2.1 T HE ADVANTAGE OF THE CORRECT CAUSAL MODELWithout loss of generality, we can fix the true causal graph to be A!B, which is unknown tothe learner. 
Moreover, to make the case stronger, we will consider a setting called covariate shift (Rojas-Carulla et al., 2018; Quionero-Candela et al., 2009), where we assume that the change (again, whose nature is unknown to the learner) between the training and transfer distributions occurs after an intervention on the cause A. In other words, the marginal of A changes, while the conditional p(B|A) does not, i.e. p(B|A) = p̃(B|A). Changes on the cause will be most informative, since they will have direct effects on B. This is sufficient to fully identify the causal graph (Hauser & Bühlmann, 2012).

In order to demonstrate the advantage of choosing the causal model A→B over the anti-causal B→A, we can compare how fast the two models can adapt to samples from the transfer distribution p̃. We quantify the speed of adaptation as the log-likelihood after multiple steps of fine-tuning via (stochastic) gradient ascent, starting with both models trained on a large amount of data from the training distribution. In Figure 1 (see Section 3.3 for the experimental setup), we can see that the model corresponding to the underlying causal model adapts faster. Moreover, the difference is more significant when adapting on a small amount of data, of the order of 10 to 30 samples from the transfer distribution. We will make use of this property as a noisy signal to infer the direction of causality, which here is equivalent to choosing how to modularize the joint distribution.

Figure 1 (x-axis: number of examples, log scale from 10^0 to 10^4; y-axis: log p(D | ·→·); curves: A→B and B→A): Adaptation to the transfer distribution (average log-likelihood of the model during fine-tuning adaptation to transfer examples, vertical axis), as more transfer examples are seen by the learner (horizontal axis). The curves are the median over 20,000 runs, with their 25-75th quantile intervals. The dotted line is the asymptotic log-likelihood (here, that of the ground truth p̃). The red region corresponds to the range where the effect is the most significant (10-30 samples from the transfer distribution).

2.2 PARAMETER COUNTING ARGUMENT
A simple parameter counting argument can help us understand what we are observing in Figure 1. Since we are using gradient ascent for the adaptation, let us first inspect how the gradients of the log-likelihood wrt. each module behave under the transfer distribution.

Proposition 1. Let G be a causal graph, and p a (training) distribution that factorizes according to G, with parameters θ. Let p̃ be a second (transfer) distribution that also factorizes according to G. If the training and transfer distributions have the same conditional probability distributions for all V_i but a subset C (e.g. the transfer distribution is the result of an intervention on the nodes in C):
$$p(V_i \mid \mathrm{Pa}_G(V_i)) \overset{d}{=} \tilde{p}(V_i \mid \mathrm{Pa}_G(V_i)) \quad \forall V_i \notin C, \qquad (1)$$
then the expected gradient wrt. the parameters θ_i such that V_i ∉ C of the log-likelihood under the transfer distribution will be zero:
$$\forall V_i \notin C, \quad \mathbb{E}_{V \sim \tilde{p}}\left[\frac{\partial \log p(V)}{\partial \theta_i}\right] = 0. \qquad (2)$$

Proposition 1 (see proof in Appendix B.1) suggests that if both distributions factorize according to the correct causal graph, then only the parameters of the mechanisms that changed between the training and transfer distributions need to be updated. This effectively reduces the number of parameters that need to be adapted compared to any other factorization over a different graph. It also affects the number of examples necessary for the adaptation, since the sample complexity of a model grows approximately linearly with the VC-dimension (Ehrenfeucht et al., 1989; Vapnik & Chervonenkis, 1971), which itself also grows approximately linearly with the number of parameters (for linear models and neural networks; Shalev-Shwartz & Ben-David, 2014). Therefore we argue that the performance on the transfer distribution (in terms of log-likelihood) will tend to improve faster if it factorizes according to the correct causal graph, an assertion which may not be true for every graph but that we can test by simulations.
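One such simulation is a direct numerical check of Proposition 1 in the two-variable discrete case. The sketch below is our own illustration (the value of N, the random intervention, and the tabular softmax parametrization are assumptions for the example, not the paper's experimental setup): a correctly factorized model fit on the training distribution has exactly zero expected gradient on the unchanged module p(B|A) under an intervention on A.

```python
# A minimal numerical check of Proposition 1 (hypothetical setup; N, the
# intervention, and the parametrization are ours, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
N = 5

# Ground truth A -> B: marginal p(A) and conditional p(B|A).
p_A = rng.dirichlet(np.ones(N))
p_B_given_A = rng.dirichlet(np.ones(N), size=N)  # row i is p(B | A=i)

# Transfer distribution: intervene on the cause A only; p(B|A) is unchanged.
p_A_tilde = rng.dirichlet(np.ones(N))

# Expected gradient of log p(a,b) wrt the tabular softmax parameters of the
# unchanged module p(B|A), under the transfer distribution. For a softmax row,
# d log p(b|a) / d theta_{a,j} = 1[b=j] - p(j|a), so the expectation over
# ~p(a,b) telescopes to ~p(a) * (p(.|a) - p(.|a)) = 0.
grad = np.zeros((N, N))
for a in range(N):
    for b in range(N):
        w = p_A_tilde[a] * p_B_given_A[a, b]          # ~p(a, b)
        grad[a] += w * ((np.arange(N) == b) - p_B_given_A[a])
print(np.abs(grad).max())  # ~0 up to float error: no update needed for p(B|A)
```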
Recall that in our example on two discrete random variables (each taking, say, N values), we assumed that the underlying causal model is A→B, and the transfer distribution is the result of an intervention on the cause A. If the model we learn on the training distribution factorizes according to the correct graph, then only N−1 free parameters should be updated to adapt to the shifted distribution, accounting for the change in the marginal distribution p̃(A), since the conditional p̃(B|A) = p(B|A) stays invariant. On the other hand, if the model factorizes according to the anti-causal graph B→A, then the parameters for both the marginal p̃(B) and the conditional p̃(A|B) must be adapted. Assuming there is a linear relationship between sample complexity and the number of free parameters, the sample complexity would be O(N²) for the anti-causal graph, compared to only O(N) for the true underlying causal graph A→B.

3 THE META-TRANSFER OBJECTIVE
Since the speed of adaptation to some transfer distribution is closely related to the right modularization of knowledge, we propose to use it as a noisy signal to iteratively improve the inference of the causal structure from data. Moreover, we saw in Figure 1 that the gap between correct and incorrect models is largest with a small amount of transfer data. In order to compare how fast some models adapt to a change in distribution, we can quantify the speed of adaptation based on their accumulated online performance after fine-tuning with gradient ascent on a few examples from the transfer distribution.

More precisely, given a small "intervention" dataset $D_{\mathrm{int}} = \{x_t\}_{t=1}^T$ from $\tilde{p}$, we can define the online likelihood as
$$\mathcal{L}_G(D_{\mathrm{int}}) = \prod_{t=1}^T p(x_t; \theta_G^{(t)}, G), \quad \text{where } \theta_G^{(1)} = \hat{\theta}_G^{\mathrm{ML}}(D_{\mathrm{obs}}) \text{ and } \theta_G^{(t+1)} = \theta_G^{(t)} + \alpha \nabla_\theta \log p(x_t; \theta_G^{(t)}, G), \qquad (3)$$
where $\theta_G^{(t)}$ aggregates all the modules' parameters in G after t steps of fine-tuning with gradient ascent, with learning rate α, starting from the maximum-likelihood estimate $\hat{\theta}_G^{\mathrm{ML}}(D_{\mathrm{obs}})$ on a large amount of data $D_{\mathrm{obs}}$ from the training distribution p. Note that, in addition to its contribution to the update of the parameters, each data point $x_t$ is also used to evaluate the performance of our model so far; this is called a prequential analysis (Dawid, 1984), also corresponding to sequential cross-validation (Gingras et al., 1999). From a structure learning perspective, the online likelihood (or, equivalently, its logarithm) can be interpreted as a score we would like to maximize, in order to recover the correct causal graph.
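The online log-likelihood of equation (3) is simple to compute. Below is a minimal sketch (the generic `model.logp` / `model.grad_logp` interface and the learning rate name `alpha` are our own illustrative choices, not the authors' implementation): each transfer example is first scored under the current parameters, then used for one gradient-ascent update.

```python
# A sketch of the prequential online log-likelihood score of equation (3).
import numpy as np

def online_log_likelihood(model, theta_ml, D_int, alpha=0.1):
    """Accumulate log p(x_t; theta^(t)) while fine-tuning theta with one
    gradient-ascent step per transfer example, starting from the MLE."""
    theta = np.copy(theta_ml)          # theta^(1): MLE on the training data
    score = 0.0
    for x_t in D_int:
        score += model.logp(x_t, theta)               # evaluate, then adapt
        theta = theta + alpha * model.grad_logp(x_t, theta)
    return score                        # log L_G(D_int)
```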
3.1 CONNECTION TO THE BAYESIAN SCORE
We can draw an interesting connection between the online log-likelihood and a widely used score in structure learning called the Bayesian score (Heckerman et al., 1995; Geiger & Heckerman, 1994). The idea behind this score is to treat the problem of learning the structure from a fully Bayesian perspective. If we define a prior over graphs p(G) and a prior $p(\theta_G \mid G)$ over the parameters of each graph G, the Bayesian score is defined as $\mathrm{score}_B(G; D_{\mathrm{int}}) = \log p(D_{\mathrm{int}} \mid G) + \log p(G)$, where $p(D_{\mathrm{int}} \mid G)$ is the marginal likelihood
$$p(D_{\mathrm{int}} \mid G) = \prod_{t=1}^T p(x_t \mid x_1, \ldots, x_{t-1}, G) = \prod_{t=1}^T \int_{\theta_G} p(x_t \mid \theta_G, G)\, p(\theta_G \mid x_{1:t-1}, G)\, d\theta_G. \qquad (4)$$
In the online likelihood, the adapted parameters $\theta_G^{(t)}$ act as a summary of the past data $x_{1:t-1}$. Equation (3) can be seen as an approximation of the marginal likelihood in Equation (4), where the posterior over the parameters $p(\theta_G \mid x_{1:t-1}, G)$ is approximated by the point estimate $\theta_G^{(t)}$. Therefore, the online log-likelihood provides a simple way to approximate the Bayesian score, which is often intractable.

3.2 A SMOOTH PARAMETRIZATION OF THE CAUSAL STRUCTURE
Due to the super-exponential number of possible directed acyclic graphs (DAGs) over n nodes, the problem of searching for a causal structure that maximizes some score is, in general, NP-hard (Chickering, 2002a). However, we can parametrize our belief about causal graphs by keeping track of the probability for each directed edge to be present. This provides a smooth parametrization of graphs, which hinges on gradually changing our belief in individual binary decisions associated with each edge of the causal graph. This allows us to define a fully differentiable meta-learning objective, with all the beliefs being updated at the same time by gradient descent.

In this section, we study the simplest version of this idea, applied to our example on two random variables from Section 2. Recall that here, we only have two hypotheses to choose from: either A→B or B→A. We represent our belief of having an edge connecting A to B with a structural parameter γ such that $p(A \to B) = \sigma(\gamma)$, where $\sigma(\gamma) = 1/(1 + \exp(-\gamma))$ is the sigmoid function. We propose, as a meta-transfer objective, the negative log-likelihood R (a form of regret) over the mixture of these two models, where the mixture parameter is given by σ(γ):
$$R(D_{\mathrm{int}}) = -\log\left[\sigma(\gamma)\, \mathcal{L}_{A \to B}(D_{\mathrm{int}}) + (1 - \sigma(\gamma))\, \mathcal{L}_{B \to A}(D_{\mathrm{int}})\right]. \qquad (5)$$
This meta-learning mixture combines the online adaptation likelihoods of each model over one meta-example or episode (specified by a $D_{\mathrm{int}} \sim \tilde{p}$), rather than considering and linearly mixing the per-example likelihoods as in ordinary mixtures.

In the experiments below, after each episode involving T examples $D_{\mathrm{int}}$ from the transfer distribution $\tilde{p}$, we update γ by doing one step of gradient descent, to reduce the regret R. Therefore, in order to update our belief about the edge A→B, the quantity of interest is the gradient of the objective R with respect to the structural parameter, $\partial R / \partial \gamma$. This gradient pushes σ(γ) towards the posterior probability that the correct model is A→B, given the evidence from the transfer data:

Proposition 2. The gradient of the negative log-likelihood of the transfer data $D_{\mathrm{int}}$ in Equation (5) wrt. the structural parameter γ is given by
$$\frac{\partial R}{\partial \gamma} = \sigma(\gamma) - p(A \to B \mid D_{\mathrm{int}}), \qquad (6)$$
where $p(A \to B \mid D_{\mathrm{int}})$ is the posterior probability of the hypothesis A→B (when the alternative is B→A). Furthermore, this can be equivalently written as
$$\frac{\partial R}{\partial \gamma} = \sigma(\gamma) - \sigma(\gamma + \Delta), \qquad (7)$$
where $\Delta = \log \mathcal{L}_{A \to B}(D_{\mathrm{int}}) - \log \mathcal{L}_{B \to A}(D_{\mathrm{int}})$ is the difference between the online log-likelihoods of the two hypotheses on the transfer data $D_{\mathrm{int}}$.

The proof is given in Appendix B.2. Note how the posterior probability is basically measuring which hypothesis better explains the transfer data $D_{\mathrm{int}}$ overall, along the adaptation trajectory. This posterior depends on the difference in online log-likelihoods Δ, showing the close relation between minimizing the regret R and maximizing the online log-likelihood score.
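The Δ form of equation (7) makes the structural update numerically stable, since only log-likelihoods ever need to be handled. A minimal sketch (the values of `logL_AB`, `logL_BA`, and the learning rate are illustrative; they would come from the online log-likelihood routine above):

```python
# One gradient-descent step on gamma using Proposition 2 / equation (7).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def regret_grad(gamma, logL_AB, logL_BA):
    delta = logL_AB - logL_BA           # Delta in equation (7)
    return sigmoid(gamma) - sigmoid(gamma + delta)

gamma, lr = 0.0, 1.0                    # start from equal belief
gamma -= lr * regret_grad(gamma, logL_AB=-95.0, logL_BA=-100.0)
print(sigmoid(gamma))                   # > 0.5: belief in A->B increases
```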
The sign and magnitude of Δ have a direct effect on the convergence of the meta-transfer objective. We can show that the meta-transfer objective is guaranteed to converge to one of the two hypotheses.

Proposition 3. With stochastic gradient descent (and an appropriately decreasing learning rate) on $\mathbb{E}_{D_{\mathrm{int}}}[R(D_{\mathrm{int}})]$, where the gradient steps are given by Proposition 2, the structural parameter γ converges towards
$$\sigma(\gamma) \to 1 \text{ if } \mathbb{E}_{D_{\mathrm{int}}}[\mathcal{L}_{A \to B}(D_{\mathrm{int}})] > \mathbb{E}_{D_{\mathrm{int}}}[\mathcal{L}_{B \to A}(D_{\mathrm{int}})], \text{ or } \sigma(\gamma) \to 0 \text{ otherwise.} \qquad (8)$$

This proposition (proved in Appendix B.3) shows that optimizing γ is equivalent to picking the hypothesis that has the smallest regret (or fastest convergence), measured as the accumulated log-likelihood of the transfer dataset $D_{\mathrm{int}}$ during adaptation. The distribution over datasets $D_{\mathrm{int}}$ is similar to a distribution over tasks in meta-learning. This analogy with meta-learning also appears in our gradient-based adaptation procedure, which is linked to existing methods like the first-order approximation of MAML (Finn et al., 2017) and its related algorithms (Grant et al., 2018; Kim et al., 2018; Finn et al., 2018). The pseudo-code for the proposed algorithm is given in Algorithm 1.

This smooth parametrization of the causal graph, along with the definition of the meta-transfer objective in Equation (5), can be extended to graphs with more than 2 variables. This general formulation builds on the bivariate case, where decisions are binary for each individual edge of the graph. See Appendix E for details and a generalization of Proposition 2; the structure of Algorithm 1 remains unchanged. Experimentally, this generalization of the meta-transfer objective proved to be effective on larger graphs (Ke et al., 2019), in work following the initial release of this paper.

Algorithm 1 Meta-learning algorithm for learning the structural parameter γ
Require: Two graph candidates G = A→B and G = B→A
Require: A training distribution p that factorizes over the correct causal graph
1: Set the initial structural parameter γ = 0 (equal belief for both hypotheses)
2: Sample a large dataset D_obs from the training distribution p
3: Pretrain the parameters of both models with maximum likelihood on D_obs
4: for each episode do
5:   Draw a transfer distribution p̃ (via an intervention)
6:   Sample a (small) transfer dataset D_int = {x_t}, t = 1, ..., T, from p̃
7:   for t = 1, ..., T do
8:     Accumulate the online log-likelihoods for both models, L_{A→B} and L_{B→A}, as they adapt
9:     Do one step of gradient ascent for both models: θ_G^(t+1) = θ_G^(t) + α ∇_θ log p(x_t; θ_G^(t), G)
10:  Compute the regret R(D_int)
11:  Compute the gradient of the regret wrt. γ (see Proposition 2)
12:  Do one step of gradient descent on the regret wrt. γ
13:  Reset the models' parameters to the maximum likelihood estimate on D_obs
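A compact Python rendering of Algorithm 1 follows. This is a sketch under our own interface assumptions (a `Model` object exposing `logp` / `grad_logp`, and placeholder functions `pretrain_mle` and `sample_transfer_dataset`); it is not the authors' released code.

```python
# A sketch of Algorithm 1: meta-learning the structural parameter gamma.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def meta_learn_gamma(models, pretrain_mle, sample_transfer_dataset,
                     episodes=500, lr_theta=0.1, lr_gamma=1.0):
    gamma = 0.0                                     # step 1: equal belief
    theta_ml = [pretrain_mle(m) for m in models]    # steps 2-3: MLE on D_obs
    for _ in range(episodes):
        D_int = sample_transfer_dataset()           # steps 5-6: intervention
        logL = []
        for m, theta0 in zip(models, theta_ml):
            theta, acc = theta0.copy(), 0.0
            for x_t in D_int:                       # steps 7-9: adapt online
                acc += m.logp(x_t, theta)
                theta = theta + lr_theta * m.grad_logp(x_t, theta)
            logL.append(acc)
        delta = logL[0] - logL[1]                   # log L_{A->B} - log L_{B->A}
        gamma -= lr_gamma * (sigmoid(gamma) - sigmoid(gamma + delta))
        # step 13: parameters reset implicitly by restarting from theta_ml
    return gamma
```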
3.3 EXPERIMENTAL RESULTS
To illustrate the convergence result from Proposition 3, we experiment with learning the structural parameter γ in a bivariate model. Following the setting presented in Section 2.1, we assume in all our experiments that A and B are two correlated random variables, and the underlying causal model (unknown to the algorithm) is fixed to A→B. Recall that both variables are observed, and there is no hidden confounding factor. Since the correct causal model is A→B, the structural parameter should converge correctly, with σ(γ) → 1. The details of the experimental setups, as well as details about the models, can be found in Appendix C.

We first experiment with the case where both A and B are discrete random variables, taking N possible values. In this setting, we explored how two different parametrizations of the conditional probability distributions (CPDs) might influence the convergence of the structural parameter. In the first experiment, we parametrized the CPDs as multinomial logistic CPDs (Koller & Friedman, 2009), maintaining a tabular representation of the conditional probabilities. For example, the conditional distribution p(B|A) is represented as
$$p(B = j \mid A = i; \theta) = \frac{\exp(\theta_{ij})}{\sum_k \exp(\theta_{ik})}, \qquad (9)$$
where the parameter θ is an N×N matrix. We used a similar representation for the other marginal and conditional distributions p(A), p(B) and p(A|B). In a second experiment, we used structured CPDs, parametrized with multi-layer perceptrons (MLPs) with a softmax nonlinearity at the output layer. The advantage over a tabular representation is the ability to share parameters across similar contexts, which reduces the overall number of parameters required for each module. This would be crucial if either the number of categories N or the number of variables increased significantly.

Figure 2 (two panels; x-axis: number of episodes, y-axis: σ(γ); curves for N = 10 and N = 100): Evolution of the belief that A→B is the correct causal model, as the number of episodes increases, starting with an equal belief for both hypotheses. (Left) multinomial logistic CPDs, (right) MLP parametrization.

In Figure 2, we show the evolution of σ(γ), which is the model's belief of A→B being the correct causal model, as the number of episodes increases, for different values of N. As expected, the structural parameter converges correctly to σ(γ) → 1 within a few hundred episodes. This observation is consistent in both experiments, regardless of the parametrization of the CPDs. Interestingly, the structural parameter tends to converge faster with a larger value of N and a tabular representation, illustrating the effect of the parameter counting argument described in Section 2.2, which is stronger as N increases. Precisely when generalization is more difficult (too many parameters and too few examples), we get a stronger signal about the better modularization.

We also experimented with A and B being continuous random variables, where they either follow multimodal distributions or are linear-Gaussian. Similar to Figure 2, we found that the structural parameter σ(γ) consistently converges to the correct causal model as well. See Appendix C.3 and Appendix C.4 for details about these experiments.
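For concreteness, the tabular multinomial-logistic CPD of equation (9) can be written in a few lines. This is a sketch with our own function and variable names (the authors' implementation may differ), using a numerically stable log-sum-exp:

```python
# Equation (9) as code: a tabular softmax CPD p(B=j | A=i) with an N x N
# parameter matrix theta.
import numpy as np

def cond_logp(theta, a, b):
    """log p(B=b | A=a; theta), theta of shape (N, N)."""
    row = theta[a]
    m = row.max()                                   # stabilize the softmax
    return row[b] - m - np.log(np.sum(np.exp(row - m)))

theta = np.zeros((10, 10))                          # uniform at initialization
print(np.exp(cond_logp(theta, a=3, b=7)))           # 0.1
```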
4 REPRESENTATION LEARNING
So far, we have assumed that all the variables in the causal graph are fully observed. However, in many realistic scenarios for learning agents, the learner might only have access to low-level observations (e.g. sensory-level data, like pixels or acoustic samples), which are very unlikely to be individually meaningful as causal variables. In that case, our assumption that the changes in distributions are localized might not hold at this level of observed data. To tackle this, we propose to follow the deep learning objective of disentangling the underlying causal variables (Bengio et al., 2013), and learn a representation in which the variables can meaningfully be cause or effect of each other. Our approach is to jointly learn this representation, as well as the causal graph over the latent variables.

We consider the simplest setting where the learner maps raw observations to a hidden representation space with two causal variables, via an encoder E. The encoder is trained such that this latent space helps to optimize the meta-transfer objective described in Section 3. We consider the parameters of the encoder, as well as γ (see Section 3.2), as part of the set of structural meta-parameters to be optimized. We assume that we have two raw observed variables (X, Y), generated from the true causal variables (A, B) via the action of a ground truth decoder D (or generator network), of which the learner is not aware. This allows us to still have the ability to intervene on the underlying causal variables (e.g. to shift from training to transfer distributions) for the purpose of conducting experiments, while the learner only sees data from (X, Y).

Figure 3 (diagram: (A, B) → Decoder D → (X, Y) → Encoder E → (U, V); the data generation on the left is hidden to the model): The complete experimental setup. The ground-truth variables (A, B) are assumed to originate from the true underlying causal model, but the observations available to the learner are samples from (X, Y). The observed variables (X, Y) are derived from (A, B) via the action of a decoder D. The encoder E must be learned to undo this action of the decoder, and thereby recover the true causal variables up to symmetries.

In this experiment, we only want to validate the proposed meta-objective as a way to recover a good encoder, and we assume that both the decoder D and the encoder E are rotations, whose angles are θ_D and θ_E respectively. The encoder maps the raw observed variables (X, Y) to the latent variables (U, V), over which we want to infer the causal graph. Similar to our experiments in Section 3.3, we assume that the underlying causal graph is A→B, and the transfer distribution p̃ (now over (X, Y)) is the result of an intervention over A. Therefore, the encoder should ideally recover the structure U→V in the learned latent space, along with the angle of the encoder θ_E = −θ_D. However, since the encoder is not uniquely defined, V→U might also be a valid solution, if the encoder is θ_E = π/2 − θ_D. Details about the experimental setup are provided in Appendix D. In Figure 4, we consider that the learner succeeds, since both structural parameters converge to one of the two options. This shows how minimizing the meta-transfer objective can disentangle (here in a very simple setting) the ground-truth variables.

Figure 4 (two panels over the number of episodes; left: θ_E, with the two valid solutions marked at +π/4 and −π/4; right: σ(γ) for U→V versus V→U): Evolution of the structural parameters θ_E and γ, as the number of episodes increases. The angle of the rotation for the decoder is set to θ_D = π/4, so there are two valid solutions for the angle θ_E of the encoder: either θ_E = π/4, or θ_E = −π/4; the model converges to the former solution.

5 RELATED WORK
As stated already by Bengio et al. (2013), and clearly demonstrated by Locatello et al. (2019), assumptions, priors, or inductive biases are necessary to identify the underlying explanatory variables. The latter paper (Locatello et al., 2019) also reviews and evaluates recent work on disentangling, and discusses different metrics that have been proposed. Chalupka et al. (2015; 2017) recognize the potential and the challenges underlying causal representation learning. Closely related to our efforts is (Chalupka et al., 2017), which places a strong focus on the coalescence of low (e.g.
sensory) level observations (microvariables) to higher-level causal variables (macrovariables), albeit in a more observational setting.

There also exists an extensive literature on learning the structure of Bayesian networks from (observational) data via score-based methods (Koller & Friedman, 2009). Heckerman et al. (1995) and Daly et al. (2011) provide a comprehensive review of these methods. Many of these algorithms are based on greedy search with local changes to the graphs (Chickering, 2002b), whereas we propose a continuous and fully differentiable alternative. While most of these approaches rely only on observational data, it is sometimes possible to extend the definition of these scores to interventional data (Hauser & Bühlmann, 2012). The online-likelihood score presented here supports interventional data as its main feature.

Some identifiability results exist for causal models with purely observational data, though (Peters et al., 2017), based on specific assumptions on the underlying causal graph. However, causal discovery is more natural under local changes in distributions (Tian & Pearl, 2001), similar to the setting used in this paper. Pearl's seminal work on do-calculus (Pearl, 1995; 2009; Bareinboim & Pearl, 2016) lays the foundation for expressing the impact of interventions on causal graphical models. Here we are proposing a meta-learning objective function for learning the causal structure (without hidden variables), requiring mild assumptions such as localized changes in distributions and faithfulness of the causal graph, in contrast to the stronger assumptions necessary for these identifiability results.

Our work is also related to other recent advances in causation, domain adaptation, and transfer learning. Magliacane et al. (2018) have sought to identify a subset of features that leads to the best predictions for a variable of interest in a source domain, such that the conditional distribution of that variable given these features is the same in the target domain. Zhang et al. (2017) also examine non-stationarity and find that it makes causal discovery easier. Our adaptation procedure, using gradient ascent, is also closely related to gradient-based methods in meta-learning (Finn et al., 2017; Finn, 2018). Alet et al. (2018) proposed a meta-learning algorithm to recover a set of specialized modules, but did not establish any connections to causal mechanisms. More recently, Dasgupta et al. (2019) adopted a meta-learning approach to perform causal inference on purely observational data.

6 DISCUSSION & FUTURE WORK
We have established, in very simple bivariate settings, that the rate at which a learner adapts to sparse changes in the distribution of observed data can be exploited to infer the causal structure, and disentangle the causal variables. This relies on the assumption that, with the correct causal structure, those distributional changes are localized. We have demonstrated these ideas through some theoretical results, as well as experimental validation. The source code for the experiments is available here: https://bit.ly/2M6X1al .

This work is only a first step in the direction of causal structure learning based on the speed of adaptation to modified distributions. On the experimental side, many settings other than those studied here should be considered, with different kinds of parametrizations, richer and larger causal graphs (see already Ke et al. (2019), based on a first version of this paper), or different kinds of optimization procedures.
On the theoretical side, much more needs to be done to formally link the localityof interventions to faster adaptation, to clarify the conditions for this to work. Also, more workneeds to be done in exploring how the proposed ideas can be used to learn good representations inwhich the causal variables are disentangled. Scaling up these ideas would permit their applicationtowards improving the way learning agents deal with non-stationarities, and thus improving samplecomplexity and robustness of these agents.An extreme view of disentangling is that the explanatory variables should be marginally independent,and many deep generative models (Goodfellow et al., 2016), and Independent Component Analysismodels (Hyvärinen et al., 2001; Hyvärinen et al., 2018), are built on this assumption. However, thekinds of high-level variables that we manipulate with natural language are not marginally independent:they are related to each other through statements that are usually expressed in sentences (e.g. asentence in natural language, or a classical symbolic AI fact or rule), involving only a few concepts ata time. This kind of assumption has been proposed to help discover relevant high-level representationsfrom raw observations, such as the consciousness prior (Bengio, 2017), with the idea that humansfocus at any particular time on just a few concepts that are present to our consciousness. The workpresented here could provide an interesting meta-learning approach to help learn such encodersoutputting causal variables, as well as figure out how the resulting variables are related to each other.In that case, one should distinguish two important assumptions: the first one is that the causal graphis sparse, which a common assumption in structure learning (Schmidt et al., 2007); the second is thatthe changes in distributions are sparse, which is the focus of this work.<|im_end|> <|im_start|>assistant ### Review Title Official Blind Review #2 ### Review Text Summary: The paper first shows that, in a very simple two-variable task, the model with the correct underlying structure will adapt faster to a causal intervention than the model with the incorrect structure. This idea is used to develop a “meta-transfer” objective function for which gradient ascent on a continuous representation of the model structure allows learning of that structure. The paper shows that optimizing with respect to this objective with a simple model is guaranteed to converge to the correct structure, and also presents experimental results on toy problems to demonstrate. Overall: Accept. I really enjoyed reading this paper. It is clear, well-motivated, well-written, does a good job of connecting to related work, and presents an interesting method for structure learning. While the experiments are quite toy and questions about how well this will work in more complex models with many variables remain largely unaddressed, these do not detract much from the paper for me. Instead, the paper does a good job of motivating its contribution and exploring its effect in simple intelligible tasks, and I feel I got more out of this paper than most SOTA papers. Clarity: Very clear. Significance: Potentially quite significant as this is starting to bring causal structure learning into the realm of tensorflow and pytorch. Questions and comments: - All else being equal, the speed of adaptation between two very similar models will serve as a good proxy, as shown in this paper. 
However, I can easily imagine scenarios where the two models one wants to differentiate between are quite different, and have very different optimization landscapes. Here, the speed of adaptation will be quite dependent on these landscapes and not just on the underlying model structure. Do you have thoughts about how this can be extended to such a scenario? - The parameter counting argument is not nearly so strong if what actually changes is the conditional p(A|B). In that case, the sample complexity for the correct model would be N^2 = O(N^2) and for the incorrect model would be N + N^2 = O(N^2). Does the objective still work here? Would be great to add an additional experiment showing the results in this case. - Doing an intervention and drawing a new D_int for each step of gradient descent seems quite prohibitive in a lot of domains. Are there ways to decrease this burden? - In Figure 2, can you speak to why the N=100 curve for the MLP parameterization converges more slowly than the N=10 curve? I would still expect more data to be beneficial here. ### Review Rating 8: Accept ### Review Confidence <|im_end|> <|im_end|>
v_1Soh8QUNc
ICLR.cc/2021/Conference
2021
Learning Energy-Based Models by Diffusion Recovery Likelihood
["Ruiqi Gao", "Yang Song", "Ben Poole", "Ying Nian Wu", "Diederik P Kingma"]
While energy-based models (EBMs) exhibit a number of desirable properties, training and sampling on high-dimensional datasets remains challenging. Inspired by recent progress on diffusion probabilistic models, we present a diffusion recovery likelihood method to tractably learn and sample from a sequence of EBMs trained on increasingly noisy versions of a dataset. Each EBM is trained with recovery likelihood, which maximizes the conditional probability of the data at a certain noise level given their noisy versions at a higher noise level. Optimizing recovery likelihood is more tractable than marginal likelihood, as sampling from the conditional distributions is much easier than sampling from the marginal distributions. After training, synthesized images can be generated by the sampling process that initializes from Gaussian white noise distribution and progressively samples the conditional distributions at decreasingly lower noise levels. Our method generates high fidelity samples on various image datasets. On unconditional CIFAR-10 our method achieves FID 9.58 and inception score 8.30, superior to the majority of GANs. Moreover, we demonstrate that unlike previous work on EBMs, our long-run MCMC samples from the conditional distributions do not diverge and still represent realistic images, allowing us to accurately estimate the normalized density of data even for high-dimensional datasets. Our implementation is available at \url{https://github.com/ruiqigao/recovery_likelihood}.
["energy-based model", "EBM", "recovery likelihood", "generative model", "diffusion process", "MCMC", "Langevin dynamics", "HMC"]
ABSTRACT
While energy-based models (EBMs) exhibit a number of desirable properties, training and sampling on high-dimensional datasets remains challenging. Inspired by recent progress on diffusion probabilistic models, we present a diffusion recovery likelihood method to tractably learn and sample from a sequence of EBMs trained on increasingly noisy versions of a dataset. Each EBM is trained with recovery likelihood, which maximizes the conditional probability of the data at a certain noise level given their noisy versions at a higher noise level. Optimizing recovery likelihood is more tractable than marginal likelihood, as sampling from the conditional distributions is much easier than sampling from the marginal distributions. After training, synthesized images can be generated by the sampling process that initializes from the Gaussian white noise distribution and progressively samples the conditional distributions at decreasingly lower noise levels. Our method generates high fidelity samples on various image datasets. On unconditional CIFAR-10 our method achieves FID 9.58 and inception score 8.30, superior to the majority of GANs. Moreover, we demonstrate that unlike previous work on EBMs, our long-run MCMC samples from the conditional distributions do not diverge and still represent realistic images, allowing us to accurately estimate the normalized density of data even for high-dimensional datasets. Our implementation is available at https://github.com/ruiqigao/recovery_likelihood .

1 INTRODUCTION
EBMs (LeCun et al., 2006; Ngiam et al., 2011; Kim & Bengio, 2016; Zhao et al., 2016; Goyal et al., 2017; Xie et al., 2016b; Finn et al., 2016; Gao et al., 2018; Kumar et al., 2019; Nijkamp et al., 2019b; Du & Mordatch, 2019; Grathwohl et al., 2019; Desjardins et al., 2011; Gao et al., 2020; Che et al., 2020; Grathwohl et al., 2020; Qiu et al., 2019; Rhodes et al., 2020) are an appealing class of probabilistic models, which can be viewed as generative versions of discriminators (Jin et al., 2017; Lazarow et al., 2017; Lee et al., 2018; Grathwohl et al., 2020), yet can be learned from unlabeled data. Despite a number of desirable properties, two challenges remain for training EBMs on high-dimensional datasets. First, learning EBMs by maximum likelihood requires Markov chain Monte Carlo (MCMC) to generate samples from the model, which can be extremely expensive. Second, as pointed out in Nijkamp et al. (2019a), the energy potentials learned with non-convergent MCMC do not have a valid steady-state, in the sense that samples from long-run Markov chains can differ greatly from observed samples, making it difficult to evaluate the learned energy potentials.

Another line of work, originating from Sohl-Dickstein et al. (2015), is to learn from a diffused version of the data, obtained from the original data via a diffusion process that sequentially adds Gaussian white noise. From such diffusion data, one can learn the conditional model of the data at a certain noise level given their noisy versions at the higher noise level of the diffusion process. After learning the sequence of conditional models that invert the diffusion process, one can then generate synthesized images from Gaussian white noise images by ancestral sampling.

Figure 1: Generated samples on LSUN 128² church outdoor (left), LSUN 128² bedroom (center) and CelebA 64² (right).

Building on Sohl-Dickstein et al. (2015), Ho et al.
(2020) further developed the method, obtaining strong image synthesis results.

Inspired by Sohl-Dickstein et al. (2015) and Ho et al. (2020), we propose a diffusion recovery likelihood method to tackle the challenge of training EBMs directly on a dataset, by instead learning a sequence of EBMs for the marginal distributions of the diffusion process. The sequence of marginal EBMs is learned with recovery likelihoods that are defined as the conditional distributions that invert the diffusion process. Compared to standard maximum likelihood estimation (MLE) of EBMs, learning marginal EBMs by diffusion recovery likelihood only requires sampling from the conditional distributions, which is much easier than sampling from the marginal distributions. After learning the marginal EBMs, we can generate synthesized images by a sequence of conditional samples initialized from the Gaussian white noise distribution. Unlike Ho et al. (2020), which approximates the reverse process by normal distributions, in our case the conditional distributions are derived from the marginal EBMs, which are more flexible. The framework of recovery likelihood was originally proposed in Bengio et al. (2013). In our work, we adapt it to learning the sequence of marginal EBMs from the diffusion data.

Our work is also related to the denoising score matching method of Vincent (2011), which was further developed by Song & Ermon (2019; 2020) for learning from diffusion data. The training objective used for diffusion probabilistic models is a weighted version of the denoising score matching objective, as revealed by Ho et al. (2020). These methods learn the score functions (the gradients of the energy functions) directly, instead of using the gradients of learned energy functions as in EBMs. On the other hand, Saremi et al. (2018) parametrizes the score function as the gradient of an MLP energy function, and Saremi & Hyvarinen (2019) further unifies denoising score matching and neural empirical Bayes.

We demonstrate the efficacy of diffusion recovery likelihood on CIFAR-10, CelebA and LSUN datasets. The generated samples are of high fidelity and comparable to GAN-based methods. On CIFAR-10, we achieve FID 9.58 and inception score 8.30, exceeding existing methods of learning explicit EBMs by a large margin. We also demonstrate that diffusion recovery likelihood outperforms denoising score matching from diffusion data if we naively take the gradients of explicit energy functions as the score functions. More interestingly, by using a thousand diffusion time steps, we demonstrate that even very long MCMC chains from the sequence of conditional distributions produce samples that represent realistic images. With the faithful long-run MCMC samples from the conditional distributions, we can accurately estimate the marginal partition function at zero noise level by importance sampling, and thus evaluate the normalized density of data under the EBM.

Figure 3: Illustration of diffusion recovery likelihood on a 2D checkerboard example. Top: progressively generated samples. Bottom: estimated marginal densities.

2 BACKGROUND
Let $x \sim p_{\mathrm{data}}(x)$ denote a training example, and $p_\theta(x)$ denote a model's probability density function that aims to approximate $p_{\mathrm{data}}(x)$. An energy-based model (EBM) is defined as
$$p_\theta(x) = \frac{1}{Z_\theta} \exp(f_\theta(x)), \qquad (1)$$
where $Z_\theta = \int \exp(f_\theta(x))\, dx$ is the partition function, which is analytically intractable for high-dimensional $x$. For images, we parameterize $f_\theta(x)$ with a convolutional neural network with a scalar output.
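A scalar-output energy network is straightforward to set up. The sketch below is a minimal illustration in PyTorch with our own toy layer sizes (the paper's actual architecture is a Wide ResNet with time conditioning, described later and in its Appendix B):

```python
# A minimal scalar-output energy f_theta(x) in the spirit of equation (1).
import torch
import torch.nn as nn

class Energy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.SiLU(),
            nn.Flatten(), nn.LazyLinear(1))   # one scalar per image

    def forward(self, x):
        return self.net(x).squeeze(-1)        # f_theta(x)

f = Energy()
x = torch.randn(4, 3, 32, 32)
print(f(x).shape)   # torch.Size([4]); p_theta(x) is proportional to exp(f(x))
```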
The energy-based model in equation 1 can, in principle, be learned through MLE. Specifically, suppose we observe samples $x_i \sim p_{\mathrm{data}}(x)$ for $i = 1, 2, \ldots, n$. The log-likelihood function is
$$L(\theta) = \frac{1}{n} \sum_{i=1}^n \log p_\theta(x_i) \doteq \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log p_\theta(x)]. \qquad (2)$$
In MLE, we seek to maximize the log-likelihood function, where the gradient approximately follows (Xie et al., 2016b)
$$-\frac{\partial}{\partial \theta} D_{\mathrm{KL}}(p_{\mathrm{data}} \,\|\, p_\theta) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\frac{\partial}{\partial \theta} f_\theta(x)\right] - \mathbb{E}_{x \sim p_\theta}\left[\frac{\partial}{\partial \theta} f_\theta(x)\right]. \qquad (3)$$
The expectations can be approximated by averaging over the observed samples and the synthesized samples drawn from the model distribution $p_\theta(x)$, respectively. Generating synthesized samples from $p_\theta(x)$ can be done with Markov chain Monte Carlo (MCMC) such as Langevin dynamics (or Hamiltonian Monte Carlo (Girolami & Calderhead, 2011)), which iterates
$$x^{\tau+1} = x^\tau + \frac{\delta^2}{2} \nabla_x f_\theta(x^\tau) + \delta \epsilon^\tau, \qquad (4)$$
where $\tau$ indexes the time, $\delta$ is the step size, and $\epsilon^\tau \sim \mathcal{N}(0, I)$. The difficulty lies in the fact that for high-dimensional and multi-modal distributions, MCMC sampling can take a long time to converge, and the sampling chains may have difficulty traversing modes. As demonstrated in Figure 2, training EBMs with synthesized samples from non-convergent MCMC results in malformed energy landscapes (Nijkamp et al., 2019b), even if the samples from the model look reasonable.

Figure 2: Comparison of learning EBMs by diffusion recovery likelihood (Ours) versus marginal likelihood (Short-run).

3 RECOVERY LIKELIHOOD
3.1 FROM MARGINAL TO CONDITIONAL
Given the difficulty of sampling from the marginal density $p_\theta(x)$, following Bengio et al. (2013), we use the recovery likelihood defined by the density of the observed sample conditional on a noisy sample perturbed by isotropic Gaussian noise. Specifically, let $\tilde{x} = x + \sigma \epsilon$ be the noisy observation of $x$, where $\epsilon \sim \mathcal{N}(0, I)$. Suppose $p_\theta(x)$ is defined by the EBM in equation 1; then the conditional EBM can be derived as
$$p_\theta(x \mid \tilde{x}) = \frac{1}{\tilde{Z}_\theta(\tilde{x})} \exp\left(f_\theta(x) - \frac{1}{2\sigma^2} \|\tilde{x} - x\|^2\right), \qquad (5)$$
where $\tilde{Z}_\theta(\tilde{x}) = \int \exp\left(f_\theta(x) - \frac{1}{2\sigma^2}\|\tilde{x} - x\|^2\right) dx$ is the partition function of this conditional EBM. See Appendix A.1 for the derivation. Compared to $p_\theta(x)$ (equation 1), the extra quadratic term $\frac{1}{2\sigma^2}\|\tilde{x} - x\|^2$ in $p_\theta(x \mid \tilde{x})$ constrains the energy landscape to be localized around $\tilde{x}$, making the latter less multi-modal and easier to sample from. As we will show later, when $\sigma$ is small, $p_\theta(x \mid \tilde{x})$ is approximately a single-mode Gaussian distribution, which greatly reduces the burden of MCMC.

A more general formulation is $\tilde{x} = a x + \sigma \epsilon$, where $a$ is a positive constant. In that case, we can let $y = a x$ and treat $y$ as the observed sample. Assume $p_\theta(y) = \frac{1}{Z_\theta} \exp(f_\theta(y))$; then by change of variable, the density function of $x$ can be derived as $g_\theta(x) = a\, p_\theta(a x)$.

3.2 MAXIMIZING RECOVERY LIKELIHOOD
With the conditional EBM, assume we have observed samples $x_i \sim p_{\mathrm{data}}(x)$ and the corresponding perturbed samples $\tilde{x}_i = x_i + \sigma \epsilon_i$ for $i = 1, \ldots, n$. We define the recovery log-likelihood function as
$$J(\theta) = \frac{1}{n} \sum_{i=1}^n \log p_\theta(x_i \mid \tilde{x}_i). \qquad (6)$$
The term recovery indicates that we attempt to recover the clean sample $x_i$ from the noisy sample $\tilde{x}_i$. Thus, instead of maximizing $L(\theta)$ in equation 2, we can maximize $J(\theta)$, whose distributions are easier to sample from. Specifically, we generate synthesized samples by $K$ steps of Langevin dynamics that iterates
$$x^{\tau+1} = x^\tau + \frac{\delta^2}{2}\left(\nabla_x f_\theta(x^\tau) + \frac{1}{\sigma^2}(\tilde{x} - x^\tau)\right) + \delta \epsilon^\tau. \qquad (7)$$
The model is then updated following the same learning gradients as MLE (equation 3), because the quadratic term $\frac{1}{2\sigma^2}\|\tilde{x} - x\|^2$ is not related to $\theta$. Following the classical analysis of MLE, we can show that the point estimate given by maximizing recovery likelihood is an unbiased estimator of the true parameters, which means that given enough data, a rich enough model and exact synthesis, maximizing the recovery likelihood learns $\theta$ such that $p_{\mathrm{data}}(x) = p_\theta(x)$. See Appendix A.2 for a theoretical explanation.
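The Langevin update of equation (7) differs from the marginal sampler of equation (4) only by the extra pull towards the noisy observation. A minimal sketch (`grad_f` is our placeholder for an autodiff evaluation of $\nabla_x f_\theta$; step size and step count are illustrative):

```python
# A sketch of K-step Langevin sampling from the conditional EBM of
# equation (5), following the update rule of equation (7).
import numpy as np

def sample_conditional(x_tilde, grad_f, sigma, delta, K, rng):
    """Draw x ~ p_theta(x | x_tilde) with K Langevin steps of size delta."""
    x = x_tilde.copy()                                # start at the noisy sample
    for _ in range(K):
        drift = grad_f(x) + (x_tilde - x) / sigma**2  # energy term + quadratic pull
        x = x + 0.5 * delta**2 * drift + delta * rng.standard_normal(x.shape)
    return x
```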
3.3 NORMAL APPROXIMATION TO CONDITIONAL
When the variance of the perturbation noise $\sigma^2$ is small, $p_\theta(x \mid \tilde{x})$ can be approximated by a normal distribution via a first-order Taylor expansion at $\tilde{x}$. Specifically, the negative conditional energy is
$$-E_\theta(x \mid \tilde{x}) = f_\theta(x) - \frac{1}{2\sigma^2}\|\tilde{x} - x\|^2 \qquad (8)$$
$$\doteq f_\theta(\tilde{x}) + \langle \nabla_x f_\theta(\tilde{x}),\, x - \tilde{x} \rangle - \frac{1}{2\sigma^2}\|\tilde{x} - x\|^2 \qquad (9)$$
$$= -\frac{1}{2\sigma^2}\left\|x - \left(\tilde{x} + \sigma^2 \nabla_x f_\theta(\tilde{x})\right)\right\|^2 + c, \qquad (10)$$
where $c$ includes terms irrelevant of $x$ (see Appendix A.3 for a detailed derivation). In the above approximation, we do not perform a second-order Taylor expansion because $\sigma^2$ is small, and $\|\tilde{x} - x\|^2 / 2\sigma^2$ will dominate all the second-order terms from the Taylor expansion. Thus we can approximate $p_\theta(x \mid \tilde{x})$ by a Gaussian approximation $\tilde{p}_\theta(x \mid \tilde{x})$:
$$\tilde{p}_\theta(x \mid \tilde{x}) = \mathcal{N}\left(x;\ \tilde{x} + \sigma^2 \nabla_x f_\theta(\tilde{x}),\ \sigma^2 I\right). \qquad (11)$$
We can sample from this distribution using
$$x_{\mathrm{gen}} = \tilde{x} + \sigma^2 \nabla_x f_\theta(\tilde{x}) + \sigma \epsilon, \qquad (12)$$
where $\epsilon \sim \mathcal{N}(0, I)$. This resembles a single step of Langevin dynamics, except that $\sigma$ is replaced by $\sqrt{2}\sigma$ in Langevin dynamics. This normal approximation has two traits: (1) it verifies the fact that the conditional density $p_\theta(x \mid \tilde{x})$ can be generally easier to sample from when $\sigma$ is small; (2) it provides hints for choosing the step size of Langevin dynamics, as discussed in Section 3.5.

3.4 CONNECTION TO VARIATIONAL INFERENCE AND SCORE MATCHING
The normal approximation to the conditional distribution leads to a natural connection to diffusion probabilistic models (Sohl-Dickstein et al., 2015; Ho et al., 2020) and denoising score matching (Vincent, 2011; Song & Ermon, 2019; 2020; Saremi et al., 2018; Saremi & Hyvarinen, 2019). Specifically, instead of modeling $p(x)$ as an energy-based model, a diffusion probabilistic model recruits variational inference and directly models the conditional density as
$$p_\theta(x \mid \tilde{x}) \sim \mathcal{N}\left(\tilde{x} + \sigma^2 s_\theta(\tilde{x}),\ \sigma^2 I\right), \qquad (13)$$
which is in agreement with the normal approximation (equation 11), with $s_\theta(x) = \nabla_x f_\theta(x)$. On the other hand, the training objective of denoising score matching is to minimize
$$\frac{1}{2\sigma^2}\, \mathbb{E}_{p(x, \tilde{x})}\left[\left\|x - \left(\tilde{x} + \sigma^2 s_\theta(\tilde{x})\right)\right\|^2\right], \qquad (14)$$
where $s_\theta(x)$ is the score of the density of $\tilde{x}$. This objective is in agreement with the objective of maximizing the log-likelihood of the normal approximation (equation 10), except that for the normal approximation, $\nabla_x f_\theta(\cdot)$ is the score of the density of $x$, instead of $\tilde{x}$. However, the difference between the scores of the densities of $x$ and $\tilde{x}$ is of $O(\sigma^2)$, which is negligible when $\sigma$ is sufficiently small (see Appendix A.4 for details). We can further show that the learning gradient of maximizing the log-likelihood of the normal approximation is approximately the same as the learning gradient of maximizing the original recovery log-likelihood with one step of Langevin dynamics (see Appendix A.5). This indicates that the training process of maximizing recovery likelihood agrees with that of diffusion probabilistic models and denoising score matching when $\sigma$ is small.

As the normal approximation is accurate only when $\sigma$ is small, it requires many time steps in the diffusion process for this approximation to work well, which is also reported in Ho et al. (2020) and Song & Ermon (2020). In contrast, the diffusion recovery likelihood framework can be more flexible in choosing the number of time steps and the magnitude of $\sigma$.
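The objective of equation (14), with the score taken as the gradient of an explicit energy, can be written directly with autodiff. The sketch below uses PyTorch and our own function names; it is an illustration of the connection, not the paper's released training code:

```python
# A sketch of the DSM-style objective of equation (14) with s_theta = grad f_theta.
import torch

def dsm_loss(f, x, sigma):
    """f maps images to scalar energies; x is a batch of clean samples."""
    x_tilde = (x + sigma * torch.randn_like(x)).detach().requires_grad_(True)
    score = torch.autograd.grad(f(x_tilde).sum(), x_tilde, create_graph=True)[0]
    resid = x - (x_tilde + sigma**2 * score)          # equation (14) residual
    per_example = resid.pow(2).flatten(1).sum(dim=1)
    return per_example.mean() / (2 * sigma**2)
```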
3.5 DIFFUSION RECOVERY LIKELIHOOD
As discussed, sampling from $p_\theta(x \mid \tilde{x})$ becomes simple only when $\sigma$ is small. In the extreme case when $\sigma \to \infty$, $p_\theta(x \mid \tilde{x})$ converges to the marginal distribution $p_\theta(x)$, which is again highly multi-modal and difficult to sample from. To keep $\sigma$ small and meanwhile equip the model with the ability to generate new samples initialized from white noise, inspired by Sohl-Dickstein et al. (2015) and Ho et al. (2020), we propose to learn a sequence of recovery likelihoods on gradually perturbed observed data based on a diffusion process. Specifically, assume a sequence of perturbed observations $x_0, x_1, \ldots, x_T$ such that
$$x_0 \sim p_{\mathrm{data}}(x); \quad x_{t+1} = \sqrt{1 - \sigma_{t+1}^2}\, x_t + \sigma_{t+1} \epsilon_{t+1}, \quad t = 0, 1, \ldots, T-1. \qquad (15)$$
The scaling factor $\sqrt{1 - \sigma_{t+1}^2}$ ensures that the sequence is a spherical interpolation between the observed sample and Gaussian white noise. Let $y_t = \sqrt{1 - \sigma_{t+1}^2}\, x_t$, and assume a sequence of conditional EBMs
$$p_\theta(y_t \mid x_{t+1}) = \frac{1}{\tilde{Z}_{\theta, t}(x_{t+1})} \exp\left(f_\theta(y_t; t) - \frac{1}{2\sigma_{t+1}^2}\|x_{t+1} - y_t\|^2\right), \quad t = 0, 1, \ldots, T-1, \qquad (16)$$
where $f_\theta(y_t; t)$ is defined by a neural network conditioned on $t$.

We follow the learning algorithm in Section 3.2. A question is how to determine the step size schedule $\delta_t$ of Langevin dynamics. Inspired by the sampling procedure of the normal approximation (equation 12), we set the step size $\delta_t = b\,\sigma_{t+1}$, where $b < 1$ is a tuned hyperparameter. This schedule turns out to work well in practice. Thus the $K$ steps of Langevin dynamics iterate
$$y_t^{\tau+1} = y_t^\tau + \frac{b^2 \sigma_{t+1}^2}{2}\left(\nabla_y f_\theta(y_t^\tau; t) + \frac{1}{\sigma_{t+1}^2}(x_{t+1} - y_t^\tau)\right) + b\,\sigma_{t+1}\, \epsilon^\tau. \qquad (17)$$
Algorithm 1 summarizes the training procedure. After training, we initialize the MCMC sampling from Gaussian white noise, and the synthesized sample at each time step serves to initialize the MCMC that samples from the model of the previous time step; see Algorithm 2. To show the efficacy of our method, Figures 3 and 2 display several 2D toy examples learned by diffusion recovery likelihood.

Algorithm 1 Training
repeat
  Sample t ~ Unif({0, ..., T-1}).
  Sample pairs (y_t, x_{t+1}).
  Set synthesized sample ŷ_t = x_{t+1}.
  for τ ← 1 to K do
    Update ŷ_t according to equation 17.
  end for
  Update θ following the gradients ∂/∂θ f_θ(y_t; t) − ∂/∂θ f_θ(ŷ_t; t).
until converged.

Algorithm 2 Progressive sampling
Sample x_T ~ N(0, I).
for t ← T−1 to 0 do
  y_t = x_{t+1}.
  for τ ← 1 to K do
    Update y_t according to equation 17.
  end for
  x_t = y_t / sqrt(1 − σ²_{t+1}).
end for
return x_0.

4 EXPERIMENTS
To show that diffusion recovery likelihood is flexible for diffusion processes with various magnitudes of noise, we test the method under two settings: (1) T = 6, with K = 30 steps of Langevin dynamics per time step; (2) T = 1000, with sampling from the normal approximation. Setting (2) resembles the noise schedule of Ho et al. (2020), and the magnitude of noise added at each time step is much smaller than in (1). For both settings, we set $\sigma_t^2$ to increase linearly. The network structure of $f_\theta(x; t)$ is based on Wide ResNet (Zagoruyko & Komodakis, 2016), and we remove weight normalization. $t$ is encoded by the Transformer sinusoidal position embedding as in Ho et al. (2020). For (1), we find that adding another scaling factor $c_t$ to the step size $\delta_t$ helps. Architecture and training details are in Appendix B. Henceforth we simply refer to the two settings as T6 and T1k.

4.1 IMAGE GENERATION
Figures 1 and 4 display uncurated samples generated from the learned models on CIFAR-10, CelebA 64², LSUN 64² and 128² datasets under the T6 setting. The samples are of high fidelity and comparable to GAN-based methods. Appendix C.5 provides more generated samples.
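Algorithm 2 above can be sketched compactly. In this illustration (our own NumPy rendering, with `grad_f(y, t)` standing in for $\nabla_y f_\theta(y; t)$ and `sigmas[1:]` holding $\sigma_1, \ldots, \sigma_T$ of equation (15)), each time step runs K steps of the conditional Langevin update of equation (17), then rescales:

```python
# A sketch of progressive sampling (Algorithm 2).
import numpy as np

def progressive_sample(grad_f, sigmas, b, K, shape, rng):
    x = rng.standard_normal(shape)                    # x_T ~ N(0, I)
    for t in reversed(range(len(sigmas) - 1)):        # t = T-1, ..., 0
        s = sigmas[t + 1]                             # sigma_{t+1}
        delta = b * s                                 # step size delta_t
        y = x.copy()                                  # initialize y_t at x_{t+1}
        for _ in range(K):                            # equation (17)
            drift = grad_f(y, t) + (x - y) / s**2
            y = y + 0.5 * delta**2 * drift + delta * rng.standard_normal(shape)
        x = y / np.sqrt(1.0 - s**2)                   # x_t = y_t / sqrt(1 - sigma_{t+1}^2)
    return x                                          # x_0
```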
Tables 1 and 3 summarize the quantitative evaluations on the CIFAR-10 and CelebA datasets, in terms of Fréchet Inception Distance (FID) (Heusel et al., 2017) and inception scores (Salimans et al., 2016). On CIFAR-10, our model achieves FID 9.58 and inception score 8.30, which outperforms existing methods of learning explicit energy-based models by a large margin, and is superior to a majority of GAN-based methods. On CelebA, our model obtains results comparable with the state-of-the-art GAN-based methods, and outperforms score-based methods (Song & Ermon, 2019; 2020). Note that the score-based methods (Song & Ermon, 2019; 2020) and diffusion probabilistic models (Ho et al., 2020) directly parametrize and learn the score of the data distribution, whereas our goal is to learn explicit energy-based models.

Figure 4: Generated samples on unconditional CIFAR-10 (left), LSUN 64² church outdoor (center) and LSUN 64² bedroom (right).

Table 1: FID and inception scores on CIFAR-10.
Model                                    | FID ↓ | Inception ↑
GAN-based
  WGAN-GP (Gulrajani et al., 2017)       | 36.4  | 7.86 ± .07
  SNGAN (Miyato et al., 2018)            | 21.7  | 8.22 ± .05
  SNGAN-DDLS (Che et al., 2020)          | 15.42 | 9.09 ± .10
  StyleGAN2-ADA (Karras et al., 2020)    | 3.26  | 9.74 ± .05
Score-based
  NCSN (Song & Ermon, 2019)              | 25.32 | 8.87 ± .12
  NCSN-v2 (Song & Ermon, 2020)           | 10.87 | 8.40 ± .07
  DDPM (Ho et al., 2020)                 | 3.17  | 9.46 ± .11
Explicit EBM - conditional
  CoopNets (Xie et al., 2019)            | -     | 7.30
  EBM-IG (Du & Mordatch, 2019)           | 37.9  | 8.30
  JEM (Grathwohl et al., 2019)           | 38.4  | 8.76
Explicit EBM
  Multi-grid (Gao et al., 2018)          | 40.01 | 6.56
  CoopNets (Xie et al., 2016a)           | 33.61 | 6.55
  EBM-SR (Nijkamp et al., 2019b)         | -     | 6.21
  EBM-IG (Du & Mordatch, 2019)           | 38.2  | 6.78
  Ours (T6)                              | 9.58  | 8.30 ± .11

Table 2: Ablation of training objectives, time steps T and sampling steps K on CIFAR-10. K = 0 indicates that we sample from the normal approximation.
Setting / Objective     | FID ↓ | Inception ↑
T = 1, K = 180          | 32.12 | 6.72 ± 0.12
T = 1000, K = 0         | 22.58 | 7.71 ± 0.08
T = 1000, K = 0 (DSM)   | 21.76 | 7.76 ± 0.11
T = 6, K = 10           | -     | -
T = 6, K = 30           | 9.58  | 8.30 ± 0.11
T = 6, K = 50           | 9.36  | 8.46 ± 0.13

Table 3: FID scores on CelebA 64².
Model                                      | FID ↓
QA-GAN (Parimala & Channappayya, 2019)     | 6.42
COCO-GAN (Lin et al., 2019)                | 4.0
NVAE (Vahdat & Kautz, 2020)                | 14.74
NCSN (Song & Ermon, 2019)                  | 25.30
NCSN-v2 (Song & Ermon, 2020)               | 10.23
EBM-SR (Nijkamp et al., 2019b)             | 23.02
EBM-Triangle (Han et al., 2020)            | 24.70
Ours (T6)                                  | 5.98

Figure 5: Interpolation results between the leftmost and rightmost generated samples. From top to bottom: LSUN church outdoor 128², LSUN bedroom 128² and CelebA 64².

Table 4: Test bits per dimension on CIFAR-10. † indicates that we estimate the bits per dimension with the approximated log partition function instead of analytically computing it. See Section 4.2.
Model                                        | BPD ↓
DDPM (Ho et al., 2020)                       | 3.70
Glow (Kingma & Dhariwal, 2018)               | 3.35
Flow++ (Ho et al., 2019)                     | 3.08
Gated PixelCNN (Van den Oord et al., 2016)   | 3.03
Sparse Transformer (Child et al., 2019)      | 2.80
DistAug (Jun et al., 2020)                   | 2.56
Ours† (T1k)                                  | 3.18

Figure 6: Image inpainting on LSUN church outdoor 128² (left) and CelebA 64² (right). Within each block, the top row shows mask images while the bottom row shows inpainted images.

Interpolation. As shown in Figure 5, our model is capable of smooth interpolation between two generated samples. Specifically, for two samples $x_0^{(0)}$ and $x_0^{(1)}$, we do a spherical interpolation between the initial white noise images $x_T^{(0)}$ and $x_T^{(1)}$, and between the noise terms of Langevin dynamics $\epsilon_{t,\tau}^{(0)}$ and $\epsilon_{t,\tau}^{(1)}$ for every sampling step at every time step. More interpolation results can be found in Appendix C.3.
Image inpainting. A promising application of energy-based models is to use the learned model as a prior model for image processing, such as image inpainting, denoising and super-resolution (Gao et al., 2018; Du & Mordatch, 2019; Song & Ermon, 2019). In Figure 6, we demonstrate that the models learned by maximizing recovery likelihoods are capable of realistic and semantically meaningful image inpainting. Specifically, given a masked image and the corresponding mask, we first obtain a sequence of perturbed masked images at different noise levels. The inpainting can then be easily achieved by running Langevin dynamics progressively on the masked pixels, while keeping the observed pixels fixed, at decreasingly lower noise levels. Additional image inpainting results can be found in Appendix C.4; see also the sketch after the ablation study below.

Ablation study. Table 2 summarizes the results of the ablation study on CIFAR-10. We investigate the effect of changing the numbers of time steps T and sampling steps K. First, to show that it is beneficial to learn by diffusion recovery likelihood, we compare against a baseline approach (T = 1, K = 180) where we use only one time step, so that the recovery likelihood becomes the marginal likelihood. This approach is adopted by Nijkamp et al. (2019b) and Du & Mordatch (2019). For a fair comparison, we give the baseline method the same budget of MCMC sampling as our T6 setting (i.e., 180 sampling steps). Our method outperforms this baseline method by a large margin. The models are also trained more efficiently, as the number of sampling steps per iteration is reduced and amortized over time steps.

Next, we report the sample quality of setting T1k. We test two training objectives for this setting: (1) maximizing recovery likelihoods (T = 1000, K = 0) and (2) maximizing the approximated normal distributions (T = 1000, K = 0 (DSM)). As mentioned in Section 3.4, (2) is equivalent to the training objectives of denoising score matching (Song & Ermon, 2019; 2020) and the diffusion probabilistic model (Ho et al., 2020), except that the score functions are taken as the gradients of explicit energy functions. In practice, for a direct comparison, (2) follows the same implementation as in Ho et al. (2020), except that the score function is parametrized as the gradient of the explicit energy function used in our method. (1) and (2) achieve similar sample quality in terms of quantitative metrics, where (2) results in a slightly better FID score yet a slightly worse inception score. This verifies the fact that the training objectives of (1) and (2) are consistent. Both (1) and (2) perform worse than setting T6. A possible explanation is that the sampling error may accumulate over many time steps, so that a more flexible schedule of time steps, accompanied by a certain amount of sampling steps, is preferred.

Last, we examine the influence of varying the number of sampling steps while fixing the number of time steps. The training becomes unstable when there are not enough sampling steps (T = 6, K = 10), and more sampling steps lead to better sample quality. However, since K = 50 does not gain significant improvement over K = 30, yet is of much higher computational cost, we keep K = 30 for image generation on all datasets. See Appendix C.1 for a plot of FID scores over iterations.
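The inpainting procedure described earlier (clamping observed pixels while running the progressive Langevin sampler on the missing ones) can be sketched as below. This is our own illustrative variant, not the released implementation: the mask convention, the clamping of observed pixels to their value diffused to the current noise level, and all parameter names are assumptions.

```python
# A sketch of inpainting with the progressive conditional sampler.
import numpy as np

def inpaint(x_obs, mask, grad_f, sigmas, b, K, rng):
    """mask == 1 marks missing pixels to synthesize; 0 marks observed ones.
    sigmas[1:] holds sigma_1, ..., sigma_T of equation (15)."""
    abar = np.cumprod(1.0 - np.asarray(sigmas[1:])**2)  # signal fraction kept
    x = rng.standard_normal(x_obs.shape)
    for t in reversed(range(len(sigmas) - 1)):
        s = sigmas[t + 1]
        delta = b * s
        y = x.copy()
        for _ in range(K):
            drift = grad_f(y, t) + (x - y) / s**2        # equation (17)
            y = y + 0.5 * delta**2 * drift + delta * rng.standard_normal(y.shape)
            # Clamp observed pixels to the masked input diffused to this level.
            y_obs = (np.sqrt(abar[t]) * x_obs
                     + np.sqrt(1.0 - abar[t]) * rng.standard_normal(y.shape))
            y = mask * y + (1.0 - mask) * y_obs
        x = y / np.sqrt(1.0 - s**2)
    return x
```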
Ablation study. Table 2 summarizes the results of the ablation study on CIFAR-10, where we investigate the effect of changing the number of time steps T and sampling steps K. First, to show that learning by diffusion recovery likelihood is beneficial, we compare against a baseline approach (T = 1, K = 180) that uses only one time step, so that the recovery likelihood reduces to the marginal likelihood. This approach is adopted by Nijkamp et al. (2019b) and Du & Mordatch (2019). For a fair comparison, we give the baseline method the same MCMC budget as our T6 setting (i.e., 180 sampling steps in total). Our method outperforms this baseline by a large margin. The models are also trained more efficiently, as the number of sampling steps per iteration is reduced and amortized over time steps.

Next, we report the sample quality of the T1k setting. We test two training objectives for this setting: (1) maximizing recovery likelihoods (T = 1000, K = 0) and (2) maximizing the approximated normal distributions (T = 1000, K = 0 (DSM)). As mentioned in Section 3.4, (2) is equivalent to the training objectives of denoising score matching (Song & Ermon, 2019; 2020) and diffusion probabilistic models (Ho et al., 2020), except that the score functions are taken as the gradients of explicit energy functions. In practice, for a direct comparison, (2) follows the same implementation as Ho et al. (2020), except that the score function is parametrized as the gradient of the explicit energy function used in our method. (1) and (2) achieve similar sample quality in terms of quantitative metrics, where (2) yields a slightly better FID score yet a slightly worse inception score. This verifies that the training objectives of (1) and (2) are consistent. Both (1) and (2) perform worse than the T6 setting. A possible explanation is that sampling error may accumulate over many time steps, so that a more flexible schedule with fewer time steps, accompanied by a certain number of sampling steps per time step, is preferable.

Last, we examine the influence of varying the number of sampling steps while fixing the number of time steps. Training becomes unstable when there are too few sampling steps (T = 6, K = 10), and more sampling steps lead to better sample quality. However, since K = 50 does not yield a significant improvement over K = 30 yet incurs a much higher computational cost, we keep K = 30 for image generation on all datasets. See Appendix C.1 for a plot of FID scores over iterations.

4.2 LONG-RUN CHAIN ANALYSIS

Besides achieving high quality generation, a perhaps equally important aspect of learning EBMs is to obtain a faithful energy potential. A principled way to check the validity of the learned potential is to run long sampling chains and see whether the samples remain realistic. However, as pointed out in Nijkamp et al. (2019a), almost all existing methods of learning EBMs fail to produce realistic long-run chain samples. In this subsection, we demonstrate that by composing a thousand diffusion time steps (the T1k setting), we can form steady long-run MCMC chains for the conditional distributions.

First we prepare a faithful sampler for long-run sampling. Specifically, after training the model under the T1k setting by maximizing diffusion recovery likelihood, for each time step we first sample from the normal approximation and count it as one sampling step, and then use Hamiltonian Monte Carlo (HMC) (Neal et al., 2011) with 2 leapfrog steps for the subsequent sampling steps. To obtain a reasonable schedule of sampling step sizes, for each time step we adaptively adjust the HMC step size so that the average acceptance rate, computed over 1000 chains for 100 steps, lies in the range [0.6, 0.9]. Figure 7 displays the adjusted step size (left) and acceptance rate (center) over time steps; the adjusted step size increases logarithmically. With this step size schedule, we generate long-run chains from the learned sequence of conditional distributions. As shown in Figure 8, images remain realistic even for 100k sampling steps in total (i.e., 100 sampling steps per time step), resulting in FID 24.89. This score is close to the one computed on samples generated with 1k steps (i.e., sampled from the normal approximation), which is 25.12. As a further check, we run a No-U-Turn Sampler (Hoffman & Gelman, 2014) with the same step size schedule as HMC to perform long-run sampling, and the samples also remain realistic. See Appendix C.2 for details.

Figure 7: Left: adjusted HMC step size over time steps. Center: acceptance rate over time steps. Right: estimated log partition function versus the number of samples, for different numbers of sampling steps per time step. The x axis is plotted in log scale.

Figure 8: Long-run chain samples from model-T1k with different total numbers of HMC steps. From left to right: 1k steps, 10k steps and 100k steps.

More interestingly, given the faithful long-run MCMC samples from the conditional distributions, we can estimate the log ratios of the partition functions of adjacent marginal distributions, and further estimate the partition function of $p_\theta(y_0)$. The strategy is based on annealed importance sampling (Neal, 2001); see Appendix A.6 for implementation details. The right panel of Figure 7 depicts the estimated log partition function of $p_\theta(y_0)$ versus the number of MCMC samples used. To verify the estimation strategy and again check the long-run chain samples, we conduct multiple runs using samples generated with different numbers of HMC steps and display the estimation curves. All the curves saturate to values close to each other in the end, indicating the stability of the long-run chain samples and the effectiveness of the estimation strategy. With the estimated partition function, by change of variable, we can estimate the normalized density of the data as $g_\theta(x_0) = \sqrt{1-\sigma_1^2}\, p_\theta(\sqrt{1-\sigma_1^2}\, x_0)$. We report test bits per dimension on CIFAR-10 in Table 4. Note that this result should be taken with a grain of salt, because the partition function is estimated from samples and, as shown in Appendix A.6, it is a stochastic lower bound of the true value that converges to the true value as the number of samples grows large.
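The paper's full annealed-importance-sampling procedure lives in its Appendix A.6; as a hedged sketch of the core identity only, the log ratio of adjacent partition functions can be estimated from samples of the noisier marginal, ignoring the change-of-variable rescaling between levels. Here `f` and `samples` are assumed names, and as noted above the log of a Monte Carlo average is a stochastic lower bound.

```python
import torch

def log_partition_ratio(f, samples, t):
    # Importance-sampling identity between adjacent marginals:
    #   Z_t / Z_{t+1} = E_{y ~ p_{t+1}}[ exp(f(y, t) - f(y, t+1)) ],
    # estimated here with samples y from the long-run chains at level t+1.
    # The log of the Monte Carlo average is a stochastic lower bound (Jensen).
    log_w = (f(samples, t) - f(samples, t + 1)).flatten()
    return torch.logsumexp(log_w, dim=0) - torch.log(torch.tensor(float(log_w.numel())))
```

Summing such ratios over $t$, together with the partition function at the highest noise level (assumed close to a known Gaussian), yields an estimate of $\log Z$ at $t = 0$ of the kind plotted in Figure 7.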
5 CONCLUSION

We propose to learn EBMs by diffusion recovery likelihood, a variant of MLE applied to diffusion processes. We achieve high quality image synthesis, and with a thousand noise levels we obtain faithful long-run MCMC samples that indicate the validity of the learned energy potentials. Since this method can learn EBMs efficiently with a small MCMC budget, we are also interested in scaling it up to higher resolution images and investigating the method on other data modalities in the future.

ACKNOWLEDGEMENT

The work was done while Ruiqi Gao and Yang Song were interns at Google Brain during the summer of 2020. The work of Ying Nian Wu is supported by NSF DMS-2015577. We thank Alexander A. Alemi, Jonathan Ho, Tim Salimans and Kevin Murphy for their insightful discussions during the course of this project.
0N7YHITYd0e
Interesting approach for training EBMs as a sequence of conditional EBMs
7: Good paper, accept
This paper describes training a sequence of conditional EBMs (inspired by Ho et al. (2020)) instead of training unconditional EBMs. Each conditional energy describes the probability of recovering x given its noisy version \hat{x}. The noisy version of x can be described as a normal distribution centered at x, so the conditional EBM has an additional term ||x - \hat{x}||^2, which constrains the Langevin dynamics to remain in the vicinity of \hat{x}, so it converges faster! At inference time, starting from white noise x_0, the method defines a conditional EBM given x_0, runs Langevin dynamics to sample from P(x|x_0) defined by the conditional EBM, and then uses the sample as the evidence for another conditional EBM. I assume the main advantage of this training over Ho et al. (2020) is having an energy-based model that can be used in other applications, so it would be nice to see the performance of the trained EBMs on some applications other than image generation, such as image inpainting and robust classification as discussed in Du and Mordatch (2019). In general, I believe that this is an exciting paper and an important step towards training better EBMs. In connection to score matching, Saremi et al. (2018) and Saremi and Hyvarinen (2019) should be cited as well. Saremi et al. (2018), Deep Energy Estimator Networks. Saremi and Hyvarinen (2019), Neural Empirical Bayes. typo: "score-based based methods" in Section 4.1 Alg2: for t \gets T - 1 to 0 do
4: The reviewer is confident but not absolutely certain that the evaluation is correct
Byk-VI9eg
ICLR.cc/2017/conference
2017
Generative Multi-Adversarial Networks
["Ishan Durugkar", "Ian Gemp", "Sridhar Mahadevan"]
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.
["Deep learning", "Unsupervised Learning", "Games"]
ABSTRACT

Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the Generative Multi-Adversarial Network (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate that GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.

1 INTRODUCTION

Generative adversarial networks (Goodfellow et al. (2014)) (GANs) are a framework for producing a generative model by way of a two-player minimax game. One player, the generator, attempts to generate realistic data samples by transforming noisy samples, $z$, drawn from a simple distribution (e.g., $z \sim \mathcal{N}(0, 1)$) using a transformation function $G_\theta(z)$ with learned weights, $\theta$. The generator receives feedback as to how realistic its synthetic sample is from another player, the discriminator, which attempts to discern between synthetic data samples produced by the generator and samples drawn from an actual dataset using a function $D_\omega(x)$ with learned weights, $\omega$.

The GAN framework is one of the more recent successes in a line of research on adversarial training in machine learning (Schmidhuber (1992); Bagnell (2005); Ajakan et al. (2014)) where games between learners are carefully crafted so that Nash equilibria coincide with some set of desired optimality criteria. Preliminary work on GANs focused on generating images (e.g., MNIST (LeCun et al. (1998)), CIFAR (Krizhevsky (2009))); however, GANs have proven useful in a variety of application domains including learning censored representations (Edwards & Storkey (2015)), imitating expert policies (Ho & Ermon (2016)), and domain transfer (Yoo et al. (2016)). Work extending GANs to semi-supervised learning (Chen et al. (2016); Mirza & Osindero (2014); Gauthier (2014); Springenberg (2015)), inference (Makhzani et al. (2015); Dumoulin et al. (2016)), feature learning (Donahue et al. (2016)), and improved image generation (Im et al. (2016); Denton et al. (2015); Radford et al. (2015)) has shown promise as well.

Despite these successes, GANs are reputably difficult to train. While research is still underway to improve training techniques and heuristics (Salimans et al. (2016)), most approaches have focused on understanding and generalizing GANs theoretically with the aim of exploring more tractable formulations (Zhao et al. (2016); Li et al. (2015); Uehara et al. (2016); Nowozin et al. (2016)).

In this paper, we theoretically and empirically justify generalizing the GAN framework to multiple discriminators. We review GANs and summarize our extension in Section 2. In Sections 3 and 4, we present our N-discriminator extension to the GAN framework (Generative Multi-Adversarial Networks) with several variants whose discriminator roles range from formidable adversary to forgiving teacher. Section 4.2 explains how this extension makes training with the untampered minimax objective tractable.
In Section 5, we define an intuitive metric (GMAM) to quantify GMAN performance and evaluate our framework on a variety of image generation tasks. Section 6 concludes with a summary of our contributions and directions for future research.

Contributions — To summarize, our main contributions are: i) a multi-discriminator GAN framework, GMAN, that allows training with the original, untampered minimax objective; ii) a generative multi-adversarial metric (GMAM) to perform pairwise evaluation of separately trained frameworks; iii) a particular instance of GMAN, GMAN∗, that allows the generator to automatically regulate training and reach higher performance (as measured by GMAM) in a fraction of the training time required for the standard GAN model.

2 GENERATIVE ADVERSARIAL NETWORKS TO GMAN

The original formulation of a GAN is a minimax game between a generator, $G_\theta(z) : z \to x$, and a discriminator, $D_\omega(x) : x \to [0, 1]$,

$\min_G \max_{D \in \mathcal{D}} V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}\big[\log(D(x))\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log(1 - D(G(z)))\big],$   (1)

where $p_{data}(x)$ is the true data distribution and $p_z(z)$ is a simple (usually fixed) distribution that is easy to draw samples from (e.g., $\mathcal{N}(0, 1)$). We differentiate between the function space of discriminators, $\mathcal{D}$, and elements of this space, $D$. Let $p_G(x)$ be the distribution induced by the generator, $G_\theta(z)$. We assume $D$ and $G$ to be deep neural networks, as is typically the case.

In their original work, Goodfellow et al. (2014) proved that given sufficient network capacities and an oracle providing the optimal discriminator, $D^* = \arg\max_D V(D, G)$, gradient descent on $p_G(x)$ will recover the desired globally optimal solution, $p_G(x) = p_{data}(x)$, so that the generator distribution exactly matches the data distribution. In practice, they replaced the second term, $\log(1 - D(G(z)))$, with $-\log(D(G(z)))$ to enhance gradient signals at the start of the game; note this is no longer a zero-sum game. Part of their convergence and optimality proof involves using the oracle, $D^*$, to reduce the minimax game to a minimization over $G$ only:

$\min_G V(D^*, G) = \min_G \big\{ C(G) = -\log(4) + 2 \cdot JSD(p_{data} \,\|\, p_G) \big\}$   (2)

where $JSD$ denotes the Jensen–Shannon divergence. Minimizing $C(G)$ necessarily minimizes $JSD$; however, we rarely know $D^*$, and so we instead minimize $V(D, G)$, which is only a lower bound.

This perspective of minimizing the distance between the distributions, $p_{data}$ and $p_G$, motivated Li et al. (2015) to develop a generative model that matches all moments of $p_G(x)$ with $p_{data}(x)$ (at optimality) by minimizing maximum mean discrepancy (MMD). Another approach, EBGAN (Zhao et al. (2016)), explores a larger class of games (non-zero-sum games) which generalize the generator and discriminator objectives to take real-valued "energies" as input instead of probabilities. Nowozin et al. (2016) and then Uehara et al. (2016) extended the JSD perspective on GANs to more general divergences, specifically f-divergences and then Bregman divergences, respectively.

In general, these approaches focus on exploring fundamental reformulations of $V(D, G)$. Similarly, our work focuses on a fundamental reformulation; however, our aim is to provide a framework that accelerates training of the generator to a more robust state irrespective of the choice of $V$.
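For reference, a minimal PyTorch sketch of evaluating the value function in equation (1); `D` and `G` are hypothetical network callables, with `D` assumed to return probabilities in (0, 1):

```python
import torch

def gan_value(D, G, x_real, z):
    # V(D, G) from equation (1): the discriminator ascends this value
    # and the generator descends it.
    return torch.log(D(x_real)).mean() + torch.log(1.0 - D(G(z))).mean()
```

The widely used heuristic mentioned above instead has the generator minimize $-\log(D(G(z)))$; GMAN's aim is to make that modification unnecessary.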
2.1 GMAN: A MULTI-ADVERSARIAL EXTENSION

We propose introducing multiple discriminators, which brings with it a number of design possibilities. We explore approaches ranging between two extremes: 1) a more discriminating $D$ (better approximating $\max_D V(D, G)$) and 2) a $D$ better matched to the generator's capabilities. Mathematically, we reformulate $G$'s objective as $\min_G \max F(V(D_1, G), \ldots, V(D_N, G))$ for different choices of $F$ (see Figure 1). Each $D_i$ is still expected to independently maximize its own $V(D_i, G)$ (i.e., no cooperation). We sometimes abbreviate $V(D_i, G)$ as $V_i$ and $F(V_1, \ldots, V_N)$ as $F_G(V_i)$.

3 A FORMIDABLE ADVERSARY

Here, we consider multi-discriminator variants that attempt to better approximate $\max_D V(D, G)$, providing a harsher critic to the generator.

Figure 1: (GMAN) The generator trains using feedback aggregated over multiple discriminators, each $D_i$ producing a value $V(D_i, G)$ that is combined by $F(\cdot)$. If $F := \max$, $G$ trains against the best discriminator. If $F := \text{mean}$, $G$ trains against an ensemble. We explore other alternatives to $F$ in Sections 4.1 & 4.4 that improve on both these options.

3.1 MAXIMIZING V(D, G)

For a fixed $G$, maximizing $F_G(V_i)$ with $F := \max$ and $N$ randomly instantiated copies of our discriminator is functionally equivalent to optimizing $V$ (e.g., by stochastic gradient ascent) with random restarts in parallel and then presenting $\max_{i \in \{1,\ldots,N\}} V(D_i, G)$ as the loss to the generator — a very pragmatic approach to the difficulties presented by the non-convexity of $V$ caused by the deep net. Requiring the generator to minimize the max forces $G$ to generate high fidelity samples that must hold up under the scrutiny of all $N$ discriminators, each potentially representing a distinct max.

In practice, $\max_{D_i \in \mathcal{D}} V(D_i, G)$ is not performed to convergence (or global optimality), so the above problem is oversimplified. Furthermore, introducing $N$ discriminators affects the dynamics of the game, which affects the trajectories of the discriminators. This prevents us from claiming $\max\{V_1(t), \ldots, V_N(t)\} > \max\{V_1'(t)\}\ \forall t$ even if we initialize $D_1(0) = D_1'(0)$, as it is unlikely that $D_1(t) = D_1'(t)$ at some time $t$ after the start of the game.

3.2 BOOSTING

We can also consider taking the max over $N$ discriminators as a form of boosting for the discriminator's online classification problem (online because $G$ can produce an infinite data stream). The boosted discriminator is given a sample $x^t$ and must predict whether it came from the generator or the dataset. The booster then makes its prediction using the predictions of the $N$ weaker $D_i$.

There are a few differences between taking the max (case 1) and online boosting (case 2). In case 1, our booster is limited to selecting a single weak discriminator (i.e., a pure strategy), while in case 2, many boosting algorithms more generally use linear combinations of the discriminators. Moreover, in case 2, a booster must make a prediction before receiving a loss function. In case 1, we assume access to the loss function at prediction time, which allows us to compute the max.

It is possible to train the weak discriminators using boosting and then ignore the booster's prediction by instead presenting $\max\{V_i\}$. We explore both variants in our experiments, using the adaptive algorithm proposed in Beygelzimer et al. (2015). Unfortunately, boosting failed to produce promising results on the image generation tasks. It is possible that boosting produces too strong an adversary for learning, which motivates the next section. Boosting results appear in Appendix A.7.
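A minimal PyTorch sketch of the aggregated generator loss $F_G(V_i)$ for the two extremes above; `discriminators` and `G` are hypothetical callables, with each discriminator returning probabilities:

```python
import torch

def generator_objective(discriminators, G, z, mode="max"):
    # Aggregate the generator's term of V(D_i, G) over N discriminators.
    # mode="max" pits G against the harshest current critic (Section 3.1);
    # mode="mean" trains G against the ensemble average.
    fake = G(z)
    vs = torch.stack([torch.log(1.0 - D(fake)).mean() for D in discriminators])
    return vs.max() if mode == "max" else vs.mean()
```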
4 A FORGIVING TEACHER

The previous perspectives focus on improving the discriminator with the goal of presenting a better approximation of $\max_D V(D, G)$ to the generator. Our next perspective asks the question, "Is $\max_D V(D, G)$ too harsh a critic?"

4.1 SOFT-DISCRIMINATOR

In practice, training against a far superior discriminator can impede the generator's learning. This is because the generator is unlikely to generate any samples considered "realistic" by the discriminator's standards, and so the generator will receive uniformly negative feedback. This is problematic because the information contained in the gradient derived from negative feedback only dictates where to drive down $p_G(x)$, not specifically where to increase $p_G(x)$. Furthermore, driving down $p_G(x)$ necessarily increases $p_G(x)$ in other regions of $\mathcal{X}$ (to maintain $\int_{\mathcal{X}} p_G(x) = 1$), which may or may not contain samples from the true dataset (the whack-a-mole dilemma). In contrast, a generator is more likely to see positive feedback against a more lenient discriminator, which may better guide a generator towards amassing $p_G(x)$ in approximately correct regions of $\mathcal{X}$.

For this reason, we explore a variety of functions that allow us to soften the max operator. We choose to focus on soft versions of the three classical Pythagorean means, parameterized by $\lambda$, where $\lambda = 0$ corresponds to the mean and the max is recovered as $\lambda \to \infty$:

$AM_{soft}(V, \lambda) = \sum_i^N w_i V_i$   (3)

$GM_{soft}(V, \lambda) = -\exp\Big( \sum_i^N w_i \log(-V_i) \Big)$   (4)

$HM_{soft}(V, \lambda) = -\Big( \sum_i^N w_i V_i^{-1} \Big)^{-1}$   (5)

where $w_i = e^{\lambda V_i} / \sum_j e^{\lambda V_j}$ with $\lambda \geq 0$, $V_i < 0$. Using a softmax also has the well known advantage of being differentiable (as opposed to subdifferentiable for max). Note that we only require continuity to guarantee that computing the softmax is actually equivalent to computing $V(\tilde{D}, G)$ where $\tilde{D}$ is some convex combination of $D_i$ (see Appendix A.5).
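As a concrete illustration, a short PyTorch sketch of the softened arithmetic and geometric means in equations (3) and (4); `V` is a hypothetical 1-D tensor holding the (negative) values $V(D_i, G)$:

```python
import torch

def soft_arithmetic_mean(V, lam):
    # Equation (3): softmax-weighted arithmetic mean with
    # w_i = exp(lam * V_i) / sum_j exp(lam * V_j).
    # lam = 0 recovers the plain mean; lam -> infinity recovers max_i V_i.
    w = torch.softmax(lam * V, dim=0)
    return (w * V).sum()

def soft_geometric_mean(V, lam):
    # Equation (4); requires V_i < 0 so that log(-V_i) is well defined.
    w = torch.softmax(lam * V, dim=0)
    return -torch.exp((w * torch.log(-V)).sum())

V = torch.tensor([-1.2, -0.4, -0.9])    # illustrative V(D_i, G) values
print(soft_arithmetic_mean(V, 0.0))     # plain mean: -0.8333
print(soft_arithmetic_mean(V, 100.0))   # approximately max_i V_i: -0.4
```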
4.2 USING THE ORIGINAL MINIMAX OBJECTIVE

To illustrate the effect the softmax has on training, observe that the component of $AM_{soft}(V, 0)$ relevant to generator training can be rewritten as

$\frac{1}{N} \sum_i^N \mathbb{E}_{x \sim p_G(x)}\big[\log(1 - D_i(x))\big] = \frac{1}{N} \mathbb{E}_{x \sim p_G(x)}\big[\log(z)\big],$   (6)

where $z = \prod_i^N (1 - D_i(x))$. Note that the generator gradient, $|\partial \log(z) / \partial z|$, is minimized at $z = 1$ over $z \in (0, 1]$.¹ From this form, it is clear that $z = 1$ if and only if $D_i = 0\ \forall i$, so $G$ only receives a vanishing gradient if all $D_i$ agree that the sample is fake; this is especially unlikely for large $N$. In other words, $G$ only needs to fool a single $D_i$ to receive constructive feedback. This result allows the generator to successfully minimize the original generator objective, $\log(1 - D)$. This is in contrast to the more popular $-\log(D)$ introduced to artificially enhance gradients at the start of training.

¹ $\nabla_G V = -\sum_i \frac{1}{z} \frac{\partial D_i}{\partial G} \prod_{j \neq i}(1 - D_j) = -\frac{1}{z} \frac{\partial D_k}{\partial G}$ for $D_k = 1$, $D_{\neq k} = 0$. Our argument ignores $\frac{\partial D_k}{\partial G}$.

At the beginning of training, when $\max_{D_i} V(D_i, G)$ is likely too harsh a critic for the generator, we can set $\lambda$ closer to zero to use the mean, increasing the odds of providing constructive feedback to the generator. In addition, the discriminators have the added benefit of functioning as an ensemble, reducing the variance of the feedback presented to the generator, which is especially important when the discriminators are far from optimal and are still learning a reasonable decision boundary. As training progresses and the discriminators improve, we can increase $\lambda$ to become more critical of the generator for more refined training.

4.3 MAINTAINING MULTIPLE HYPOTHESES

We argue for this ensemble approach on a more fundamental level as well. Here, we draw on the density ratio estimation perspective of GANs (Uehara et al. (2016)). The original GAN proof assumes we have access to $p_{data}(x)$, if only implicitly. In most cases of interest, the discriminator only has access to a finite dataset sampled from $p_{data}(x)$; therefore, when computing expectations of $V(D, G)$, we only draw samples from our finite dataset. This is equivalent to training a GAN with $p_{data}(x) = \tilde{p}_{data}$, which is a distribution consisting of point masses on all the data points in the dataset. For the sake of argument, let's assume we are training a discriminator and generator, each with infinite capacity. In this case, the global optimum ($p_G(x) = \tilde{p}_{data}(x)$) fails to capture any of the interesting structure from $p_{data}(x)$, the true distribution we are trying to learn. Therefore, it is actually critical that we avoid this global optimum.

Figure 2: Consider a dataset consisting of the nine 1-dimensional samples in black. Their corresponding probability mass function is given in light gray. After training GMAN, three discriminators converge to distinct local optima which implicitly define distributions over the data (red, blue, yellow). Each discriminator may specialize in discriminating a region of the data space (placing more diffuse mass in other regions). Averaging over the three discriminators results in the distribution in black, which we expect has higher likelihood under reasonable assumptions on the structure of the true distribution.

In practice, this degenerate result is avoided by employing learners with limited capacity and corrupting data samples with noise (i.e., dropout), but we might better accomplish this by simultaneously training a variety of limited capacity discriminators. With this approach, we might obtain a diverse set of seemingly tenable hypotheses for the true $p_{data}(x)$. Averaging over these multiple locally optimal discriminators increases the entropy of $\tilde{p}_{data}(x)$ by diffusing the probability mass over the data space (see Figure 2 for an example).

4.4 AUTOMATING REGULATION

The problem of keeping the discriminator and generator in balance has been widely recognized in previous work with GANs. Issues with unstable dynamics, oscillatory behavior, and generator collapse are not uncommon. In addition, the discriminator is oftentimes able to achieve a high degree of classification accuracy (producing a single scalar) before the generator has made sufficient progress on the arguably more difficult generative task (producing a high dimensional sample). Salimans et al. (2016) suggested label smoothing to reduce the vulnerability of the generator to a relatively superior discriminator. Here, we explore an approach that enables the generator to automatically temper the performance of the discriminator when necessary, but still encourages the generator to challenge itself against more accurate adversaries. Specifically, we augment the generator objective:

$\min_{G, \lambda > 0} F_G(V_i) - f(\lambda)$   (7)

where $f(\lambda)$ is monotonically increasing in $\lambda$, which appears in the softmax equations (3)–(5). In experiments, we simply set $f(\lambda) = c\lambda$ with $c$ a constant (e.g., 0.001). The generator is incentivized to increase $\lambda$ to reduce its objective at the expense of competing against the best available adversary $D^*$ (see Appendix A.6).
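A sketch of the augmented objective in equation (7); parameterizing $\lambda$ through a softplus to keep it positive is our implementation assumption, not a detail from the paper:

```python
import torch

lam_raw = torch.zeros(1, requires_grad=True)   # learnable; lambda = softplus(lam_raw) > 0
c = 1e-3                                       # f(lambda) = c * lambda, as in the paper

def regulated_generator_loss(V):
    # Equation (7): min_{G, lambda > 0} F_G(V_i) - f(lambda).
    # Raising lambda lowers the -c*lambda term but pushes the softmax
    # toward max_i V_i, i.e., toward the harshest available adversary.
    lam = torch.nn.functional.softplus(lam_raw)
    w = torch.softmax(lam * V, dim=0)
    return (w * V).sum() - c * lam.squeeze()
```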
The core idea behind their approach is that given two generator–discriminator pairs ($G_1$, $D_1$) and ($G_2$, $D_2$), we should be able to learn their relative performance by judging each generator under the opponent's discriminator.

5.1 METRIC

In GMAN, the opponent may have multiple discriminators, which makes it unclear how to perform the swaps needed for GAM. We introduce a variant of GAM, the generative multi-adversarial metric (GMAM), that is amenable to training with multiple discriminators:

$GMAM = \log\left( \frac{F^a_{G_b}(V^a_i)}{F^a_{G_a}(V^a_i)} \Big/ \frac{F^b_{G_a}(V^b_i)}{F^b_{G_b}(V^b_i)} \right)$   (8)

where $a$ and $b$ refer to the two GMAN variants (see Section 3 for the notation $F_G(V_i)$). The idea here is similar. If $G_2$ performs better than $G_1$ with respect to both $D_1$ and $D_2$, then GMAM > 0 (remember $V \leq 0$ always). If $G_1$ performs better in both cases, GMAM < 0; otherwise, the result is indeterminate.
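Equation (8) can be computed directly from four cross-evaluations; a minimal sketch, where `F_a(G)` denotes evaluating variant a's aggregate $F^a(V(D_i^a, G))$ on generator G (the names are illustrative):

```python
import math

def gmam(F_a, F_b, G_a, G_b):
    # Equation (8). All F values are negative (V <= 0), so each ratio is
    # positive and the log is defined; swapping a and b flips the sign.
    return math.log((F_a(G_b) / F_a(G_a)) / (F_b(G_a) / F_b(G_b)))
```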
Figure 3: Generator objective, $F$, averaged over 5 training runs on MNIST. Increasing the number of discriminators accelerates convergence of $F$ to steady state (solid line) and reduces its variance, $\sigma^2$ (filled shadow: ±1σ). Figure 4 provides alternative evidence of GMAN*'s accelerated convergence.

Figure 4: Stdev, $\sigma$, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady state. GMAN* with $N = 5$ achieves steady state at roughly 2x the speed of GAN ($N = 1$). Note Figure 3's filled shadows reveal the stdev of $F$ over runs, while this plot shows the stdev over time.

Figure 5: Comparison of image quality across epochs for $N = \{1, 2, 5\}$ using GMAN-0 on MNIST.

Figure 6 reveals GMAN*'s attempt to regulate the difficulty of the game to accelerate learning. Figure 7 displays the GMAM scores comparing fixed $\lambda$'s to the variable $\lambda$ controlled by GMAN*.

Figure 6: GMAN* regulates the difficulty of the game by adjusting $\lambda$. Initially, $G$ reduces $\lambda$ to ease learning and then gradually increases $\lambda$ for a more challenging learning environment.

Score  | Variant | λ*           | λ = 1        | λ = 0          (N = 5)
Better ↓
0.028  | λ*      | -            | -0.008±0.009 | -0.019±0.010
0.001  | λ = 1   | 0.008±0.009  | -            | -0.008±0.010
-0.025 | λ = 0   | 0.019±0.010  | 0.008±0.010  | -

Figure 7: Pairwise GMAM ± stdev(GMAM) for GMAN-λ and GMAN* (λ*) over 5 runs on MNIST.

5.2.2 CELEBA & CIFAR-10

We see similar accelerated convergence behavior for the CelebA dataset in Figure 8.

Figure 8: Image quality improvement across number of discriminators at the same number of iterations for GMAN-0 on CelebA.

Figure 9 displays images generated by GMAN-0 on CIFAR-10. See Appendix A.3 for more results.

Figure 9: Images generated by GMAN-0 on the CIFAR-10 dataset.

We also found that GMAN is robust to mode collapse. We believe this is because the generator must appease a diverse set of discriminators in each minibatch. Emitting a single sample will score well for one discriminator at the expense of the rest of the discriminators. Current solutions (e.g., minibatch discrimination) are quadratic in batch size. GMAN, however, is linear in batch size.

6 CONCLUSION

We introduced multiple discriminators into the GAN framework and explored discriminator roles ranging from a formidable adversary to a forgiving teacher. Allowing the generator to automatically tune its learning schedule (GMAN*) outperformed GANs with a single discriminator on MNIST. In general, GMAN variants achieved faster convergence to a higher-quality steady state on a variety of tasks as measured by a GAM-type metric (GMAM). In addition, GMAN makes using the original GAN objective possible by increasing the odds of the generator receiving constructive feedback.

In future work, we will look at more sophisticated mechanisms for letting the generator control the game as well as other ways to ensure diversity among the discriminators. Introducing multiple generators is conceptually an obvious next step; however, we expect difficulties to arise from more complex game dynamics. For this reason, game theory and game design will likely be important.

ACKNOWLEDGMENTS

We acknowledge helpful conversations with Stefan Dernbach, Archan Ray, Luke Vilnis, Ben Turtel, Stephen Giguere, Rajarshi Das, and Subhransu Maji. We also thank NVIDIA for donating a K40 GPU. This material is based upon work supported by the National Science Foundation under Grant No. IIS-1564032. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.
BIBLIOGRAPHY

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, and Mario Marchand. Domain-adversarial neural networks. arXiv preprint arXiv:1412.4446, 2014.

J. Andrew Bagnell. Robust supervised learning. In Proceedings of the National Conference on Artificial Intelligence, volume 20, pp. 714. AAAI Press / MIT Press, 2005.

Alina Beygelzimer, Satyen Kale, and Haipeng Luo. Optimal and adaptive algorithms for online boosting. arXiv preprint arXiv:1502.02651, 2015.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. arXiv preprint arXiv:1606.03657, 2016.

Emily L. Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pp. 1486–1494, 2015.

Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.

Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.

Harrison Edwards and Amos Storkey. Censoring representations with an adversary. arXiv preprint arXiv:1511.05897, 2015.

Jon Gauthier. Conditional generative adversarial nets for convolutional face generation. Class project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Winter semester 2014, 2014.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.

Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. arXiv preprint arXiv:1606.03476, 2016.

Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Alex Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, 2009.

Yann LeCun, Corinna Cortes, and Christopher J. C. Burges. The MNIST database of handwritten digits, 1998.

Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. In International Conference on Machine Learning, pp. 1718–1727, 2015.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the International Conference on Computer Vision (ICCV), December 2015.

Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.

Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. arXiv preprint arXiv:1606.00709, 2016.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Siamak Ravanbakhsh, Francois Lanusse, Rachel Mandelbaum, Jeff Schneider, and Barnabas Poczos. Enabling dark energy science with deep generative models of galaxy images. arXiv preprint arXiv:1609.05796, 2016.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.

Jürgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863–879, 1992.

Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.

Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844v3, 2016.

Masatoshi Uehara, Issei Sato, Masahiro Suzuki, Kotaro Nakayama, and Yutaka Matsuo. Generative adversarial nets from a density ratio estimation perspective. arXiv preprint arXiv:1610.02920, 2016.

Donggeun Yoo, Namil Kim, Sunggyun Park, Anthony S. Paek, and In So Kweon. Pixel-level domain transfer. arXiv preprint arXiv:1603.07442, 2016.

Matthew D. Zeiler, Dilip Krishnan, Graham W. Taylor, and Rob Fergus. Deconvolutional networks. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 2528–2535. IEEE, 2010.

Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.

A APPENDIX

A.1 ACCELERATED CONVERGENCE & REDUCED VARIANCE

See Figures 10, 11, 12, and 13.

Figure 10: Generator objective, $F$, averaged over 5 training runs on CelebA. Increasing $N$ (# of $D$) accelerates convergence of $F$ to steady state (solid line) and reduces its variance, $\sigma^2$ (filled shadow: ±1σ). Figure 11 provides alternative evidence of GMAN-0's accelerated convergence.

Figure 11: Stdev, $\sigma$, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady state. GMAN-0 with $N = 5$ achieves steady state at roughly 2x the speed of GAN ($N = 1$). Note Figure 10's filled shadows reveal the stdev of $F$ over runs, while this plot shows the stdev over time.

Figure 12: Generator objective, $F$, averaged over 5 training runs on CIFAR-10. Increasing $N$ (# of $D$) accelerates convergence of $F$ to steady state (solid line) and reduces its variance, $\sigma^2$ (filled shadow: ±1σ). Figure 13 provides alternative evidence of GMAN-0's accelerated convergence.

Figure 13: Stdev, $\sigma$, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady state. GMAN-0 with $N = 5$ achieves steady state at roughly 2x the speed of GAN ($N = 1$). Note Figure 12's filled shadows reveal the stdev of $F$ over runs, while this plot shows the stdev over time.

A.2 ADDITIONAL GMAM TABLES

See Tables 2, 3, 4, 5, and 6. Increasing the number of discriminators from 2 to 5 on CIFAR-10 significantly improves scores over the standard GAN in terms of both the GMAM metric and Inception scores.
A.3 GENERATED IMAGES

See Figures 14 and 15.

Score  | Variant  | GMAN*  | GMAN-1 | GAN    | GMAN-0 | GMAN-max | mod-GAN
Better ↓
0.184  | GMAN*    | -      | -0.007 | -0.040 | -0.020 | -0.028   | -0.089
0.067  | GMAN-1   | 0.007  | -      | -0.008 | -0.008 | -0.021   | -0.037
0.030  | GAN      | 0.040  | 0.008  | -      | -0.002 | -0.018   | -0.058
-0.005 | GMAN-0   | 0.020  | 0.008  | 0.002  | -      | -0.013   | -0.018
-0.091 | GMAN-max | 0.028  | 0.021  | 0.018  | 0.013  | -        | -0.011
-0.213 | mod-GAN  | 0.089  | 0.037  | 0.058  | 0.018  | 0.011    | -

Table 2: Pairwise GMAM metric means for select models on MNIST. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column.

Score  | Variant | GMAN-0 | GMAN-1 | GMAN*  | mod-GAN
Better ↓
0.172  | GMAN-0  | -      | -0.022 | -0.062 | -0.088
0.050  | GMAN-1  | 0.022  | -      | 0.006  | -0.078
-0.055 | GMAN*   | 0.062  | -0.006 | -      | -0.001
-0.167 | mod-GAN | 0.088  | 0.078  | 0.001  | -

Table 3: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column. GMAN variants were trained with two discriminators.

Variant | GMAN-0        | GMAN-1        | mod-GAN       | GMAN*
Score   | 5.878 ± 0.193 | 5.765 ± 0.168 | 5.738 ± 0.176 | 5.539 ± 0.099

Table 4: Inception score means with standard deviations for select models on CIFAR-10. Higher scores are better. GMAN variants were trained with two discriminators.

Score  | Variant | GMAN-0 | GMAN*  | GMAN-1 | mod-GAN
Better ↓
0.180  | GMAN-0  | -      | -0.008 | -0.041 | -0.132
0.122  | GMAN*   | 0.008  | -      | -0.038 | -0.092
0.010  | GMAN-1  | 0.041  | 0.038  | -      | -0.089
-0.313 | mod-GAN | 0.132  | 0.092  | 0.089  | -

Table 5: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column. GMAN variants were trained with five discriminators.

Variant | GMAN-1        | GMAN-0        | GMAN*         | mod-GAN
Score   | 6.001 ± 0.194 | 5.957 ± 0.135 | 5.955 ± 0.153 | 5.738 ± 0.176

Table 6: Inception score means with standard deviations for select models on CIFAR-10. Higher scores are better. GMAN variants were trained with five discriminators.

Figure 14: Sample of pictures generated on the CelebA cropped dataset.

Figure 15: Sample of pictures generated by GMAN-0 on the CIFAR dataset.
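As a sanity check on how the "Score" columns in Tables 2, 3, and 5 are produced (summing each variant's column of the antisymmetric pairwise-GMAM matrix), here is a small sketch; the helper name is ours, and the matrix simply re-enters Table 3's means.

```python
import numpy as np

def variant_scores(gmam_matrix):
    # gmam_matrix[r][c] is the pairwise GMAM of column variant c against
    # row opponent r (antisymmetric, zeros on the diagonal). A variant's
    # score is the sum of its column; higher is better.
    return np.asarray(gmam_matrix).sum(axis=0)

# Table 3 (CIFAR-10, two discriminators): GMAN-0, GMAN-1, GMAN*, mod-GAN.
M = np.array([
    [ 0.000, -0.022, -0.062, -0.088],
    [ 0.022,  0.000,  0.006, -0.078],
    [ 0.062, -0.006,  0.000, -0.001],
    [ 0.088,  0.078,  0.001,  0.000],
])
print(variant_scores(M))  # approximately [0.172, 0.050, -0.055, -0.167]
```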
(2016), however, similar to the above, it is only admissible in a semi-supervisedscenario whereas our applies to the unsupervised case.A.5 Softmax REPRESENTABILITYLetsoftmax (Vi) =^V2[minVi;maxVi]. Also leta=argminiVi,b=argmaxiVi, andV(t) =V((1t)Da+tDb)so thatV(0) =VaandV(1) =Vb. The softmax and minimax objectiveV(Di;G)are both continuous in their inputs, so by the intermediate value theorem , we have that9^t2[0;1]s:t:V(^t) = ^V, which implies9^D2Ds:t: V (^D;G ) = ^V. This result implies thatthesoftmax (and any other continuous substitute) can be interpreted as returning V(^D;G )for some^Dselected by computing an another, unknown function over the space of the discriminators. Thisresult holds even if ^Dis not representable by the architecture chosen for D’s neural network.13Published as a conference paper at ICLR 2017A.6 U NCONSTRAINED OPTIMIZATIONTo convert GMANminimax formulation to an unconstrained minimax formulation, we introducean auxiliary variable, , define() = log(1 + e), and let the generator minimize over 2R.A.7 B OOSTING WITH AdaBoost.OLAdaBoost.OL (Beygelzimer et al. (2015)) does not require knowledge of the weak learner’s slightedge over random guessing ( P(correct label) = 0:5 +2(0;0:5]), and in fact, allows <0. Thisis crucial because our weak learners are deep nets with unknown, possibly negative, ’s.Figure 16: Sample of pictures generated across 4 independent runs on MNIST with F-boost (similarresults with P-boost).A.8 E XPERIMENTAL SETUPAll experiments were conducted using an architecture similar to DCGAN (Radford et al. (2015)).We use convolutional transpose layers (Zeiler et al. (2010)) for Gand strided convolutions for Dexcept for the input of Gand the last layer of D. We use the single step gradient method as in(Nowozin et al. (2016)), and batch normalization (Ioffe & Szegedy (2015)) was used in each ofthe generator layers. The different discriminators were trained with varying dropout rates from[0:3;0:7]. Variations in the discriminators were effected in two ways. We varied the architecture byvarying the number of filters in the discriminator layers (reduced by factors of 2, 4 and so on), aswell as varying dropout rates. Secondly we also decorrelated the samples that the disriminators weretraining on by splitting the minibatch across the discriminators. The code was written in Tensorflow(Abadi et al. (2016)) and run on Nvidia GTX 980 GPUs. Code to reproduce experiments and plotsis at https://github.com/iDurugkar/GMAN. Specifics for the MNIST architecture and training are:Generator latent variables zU(1;1)100Generator convolution transpose layers: (4;4;128);(8;8;64);(16;16;32);(32;32;1)Base Discriminator architecture: (32;32;1);(16;16;32);(8;8;64);(4;4;128) .Variants have either convolution 3 (4;4;128) removed or all the filter sizesare divided by 2 or 4. That is, (32;32;1);(16;16;16);(8;8;32);(4;4;64) or(32;32;1);(16;16;8);(8;8;16);(4;4;32).ReLu activations for all the hidden units. Tanh activation at the output units of the generator.Sigmoid at the output of the Discriminator.Training was performed with Adam (Kingma & Ba (2014)) ( lr= 2104,1= 0:5).MNIST was trained for 20 epochs with a minibatch of size 100.CelebA and CIFAR were trained over 24000 iterations with a minibatch of size 100.14
HkYCUhr4g
6: Marginally above acceptance threshold
This work brings multiple discriminators into the GAN framework. From the results, multiple discriminators are useful for stabilizing training. The main stabilization problem seems to stem from the gradient signal provided by the discriminator, and the authors' motivation is to use multiple discriminators to reduce this effect. I think this work indicates the direction is promising; however, the authors may consider adding more results comparing against approaches that strengthen the discriminator gradient, such as GAN with a DAE (Improving Generative Adversarial Networks with Denoising Feature Matching), to show the advantages of multiple discriminators.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
B1ecYsqSuN
ICLR.cc/2019/Workshop/LLD
2019
INCORPORATING BILINGUAL DICTIONARIES FOR LOW RESOURCE SEMI-SUPERVISED NEURAL MACHINE TRANSLATION
["Mihir Kale", "Sreyashi Nag", "Varun Lakshinarasimhan", "Swapnil Singhavi"]
We explore ways of incorporating bilingual dictionaries to enable semi-supervised neural machine translation. Conventional back-translation methods have shown success in leveraging target side monolingual data. However, since the quality of back-translation models is tied to the size of the available parallel corpora, this could adversely impact the synthetically generated sentences in a low resource setting. We propose a simple data augmentation technique to address this shortcoming. We incorporate widely available bilingual dictionaries that yield word-by-word translations to generate synthetic sentences. This automatically expands the vocabulary of the model while maintaining high quality content. Our method shows an appreciable improvement in performance over strong baselines.
["bilingual dictionaries", "neural machine translation", "low resource", "ways", "conventional", "methods", "success", "quality", "models"]
ABSTRACT
We explore ways of incorporating bilingual dictionaries to enable semi-supervised neural machine translation. Conventional back-translation methods have shown success in leveraging target side monolingual data. However, since the quality of back-translation models is tied to the size of the available parallel corpora, this could adversely impact the synthetically generated sentences in a low resource setting. We propose a simple data augmentation technique to address this shortcoming. We incorporate widely available bilingual dictionaries that yield word-by-word translations to generate synthetic sentences. This automatically expands the vocabulary of the model while maintaining high quality content. Our method shows an appreciable improvement in performance over strong baselines.

1 INTRODUCTION
Neural Machine Translation (NMT) methods require large amounts of parallel data to perform well. This poses a significant challenge in low-resource and out-of-domain scenarios where the amount of parallel data is usually limited. A proven way to mitigate this issue has been by leveraging the vast amounts of monolingual data in conjunction with parallel data to improve performance. Prior work in the field has explored several methods to achieve this. One of the most successful approaches has been Back-Translation (BT) Sennrich et al. (2015), which generates artificial parallel data from target monolingual corpora by training a translation model in the reverse direction. Another approach (COPY) proposed by Currey et al. (2017) directly copies target monolingual data to the source, focused on capturing entities that do not change across languages.

The methods mentioned above suffer from a couple of limitations. The quality of the generated source translations in the BT model is dependent on the amount of parallel data. Furthermore, the vocabulary available to the model is also limited to that of the parallel data, which increases the probability of out-of-vocabulary words. The COPY model, on the other hand, adds vocabulary, albeit only on the target side. In this paper, we propose a simple yet effective data augmentation technique that utilizes bilingual dictionaries and expands vocabulary on both source and target sides, thus significantly reducing the probability of out-of-vocabulary words. Our method also ensures that correlations between the source and target languages are modelled in the monolingual data. In particular, our contributions are as follows:
- We propose the Word-on-Word (WoW) data augmentation method, which outperforms previous data augmentation methods in a low-resource setting.
- We show that our method benefits from both in-domain as well as out-of-domain monolingual data and shows encouraging results for domain adaptation.
- Finally, we also apply our method on top of other augmentation techniques and show its effectiveness in enhancing performance.

(Equal contribution)

2 RELATED WORK
Back-translation Sennrich et al. (2015) has emerged as a popular way of using monolingual data on the target side. Burlot & Yvon (2018) show that the quality of the reverse model directly impacts translation quality: augmenting data generated from a weak back-translation model leads to only small improvements. Our method directly addresses this issue by utilizing high-quality bilingual dictionaries. Zhang & Zong (2016b) consider using data on the source side in a self-training setup. Other ways of data augmentation include copying target data on the source side Currey et al. (2017).
Sennrich et al. (2015) use target side monolingual data by using null sentences on the source side, effectively performing language modelling as an auxiliary task. Another way of incorporating monolingual data is via hidden states from pre-trained language models, as done by Gulcehre et al. (2015).

In terms of incorporating bilingual dictionaries into NMT, Zhang & Zong (2016a) use them for data augmentation. However, they focus mainly on rare words, and unlike our approach, their method has a dependency on statistical phrase-based translation models. Arthur et al. (2016) use translation lexicons, but their objective is to use them for influencing the probability distribution of the decoder. Word-by-word translation has also been used for unsupervised translation Lample et al. (2017), while our goal is to utilize it in the semi-supervised setup.

3 EXPERIMENTAL SETUP
We use the TED Talks corpus Qi et al. (2018) with the provided train, dev and test splits. Specifically, we consider German-English (de-en) and Spanish-English (es-en) translation tasks. We use subsets from the given train split to simulate low-resource settings. We use freely available bilingual dictionaries provided by Facebook¹. For our experiments, we employ a 1-layer 256-dimensional encoder-decoder model with attention Bahdanau et al. (2014) with a beam width of 5 for decoding. Training uses a batch size of 32 and the Adam optimizer Kingma & Ba (2015) with an initial learning rate of 0.001, with cosine annealing. Models are trained for 50 epochs, while model selection is performed according to performance on the validation set using BLEU Papineni et al. (2002).

¹ https://github.com/facebookresearch/MUSE#ground-truth-bilingual-dictionaries

Table 1: Comparison of various data augmentation techniques
Target (English):       The work of a transportation commissioner isn't just about stop signs and traffic signals.
Ground Truth:           El trabajo de una Comisaría de Transporte no es solo sobre señales de pare y semáforos.
Copied Target (COPY):   The work of a transportation commissioner isn't just about stop signs and traffic signals.
Back Translation (BT):  el trabajo de una evolución funcional no está hablando con los altibajos y aman a las señales.
Bilingual Dictionary:   trabajo del una transportación comisionado just acerca pare letreros tráfico señales

4 METHOD
Our approach utilizes bilingual dictionaries to obtain word-on-word translations (WoW) for target side monolingual data. Given a sentence in the target language (in our case, English), we apply the dictionary on each word to obtain corresponding translations in the source language. We then augment our parallel corpus with this synthetically created data on the source side and the ground truth monolingual data on the target side. We then train our model on this augmented dataset to achieve our final translations. Figure 1 shows the popular approaches of data augmentation using monolingual corpora on the target side and how they compare with our proposed approach.

The main benefit of WoW over back-translation (BT) is in the quality of the generated synthetic data. BT in very low-resource settings results in a poor model, which in turn generates poor quality synthetic sentences. WoW, on the other hand, relies on strong bilingual dictionaries. Hence, even if the sentence structure due to the word-on-word translation is poor, it at least ensures that the words in the synthetic sentences are accurate.
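To make the augmentation step concrete, here is a minimal sketch. It is not the authors' implementation; the function names and the toy dictionary are hypothetical, and two details are assumptions on our part: a single translation is picked per word (MUSE dictionaries can list several candidates), and words missing from the dictionary (such as "just" in Table 1) are passed through unchanged.

```python
# Minimal sketch of word-on-word (WoW) augmentation; names and the toy
# dictionary are hypothetical, not the authors' code.
def wow_translate(sentence, bilingual_dict):
    """Build a synthetic source by looking up each target word in the dictionary.
    Assumption: out-of-dictionary tokens are copied through unchanged."""
    return " ".join(bilingual_dict.get(tok, tok) for tok in sentence.lower().split())

def augment(parallel_pairs, target_monolingual, bilingual_dict):
    """Append (synthetic source, real target) pairs to the parallel corpus,
    so the target side of every added pair stays a real sentence."""
    return parallel_pairs + [(wow_translate(t, bilingual_dict), t)
                             for t in target_monolingual]

toy_dict = {"work": "trabajo", "of": "del", "traffic": "tráfico", "signals": "señales"}
print(wow_translate("work of traffic signals", toy_dict))
# -> trabajo del tráfico señales
```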
With the right words on the source side and an approximately correct word ordering, the model has a rough sketch of the semantics of the sentence.

Another clear benefit of this method is that it allows the model to expand its vocabulary. For instance, starting with 10k parallel pairs, adding 10k WoW pairs helps increase vocabulary coverage on the development set from 65% to 92% for es-en on the target side. Note that the vocabulary expansion effect is both on the target as well as the source side. On the source side, the model is exposed to a lot fewer unks, and the coverage increases from 60% to 88%. Theoretically, COPY also expands vocabulary on the target side. However, it does so in a way that is independent of the source sentence. With WoW, on the other hand, the model can make direct correlations between new target words and the source words. Note that for every pair in the augmented dataset, the target sentence is always a high quality, real world sentence. The synthetic data is only added to the source side, and the quality of the supervised labels for the decoder remains untarnished.

5 RESULTS AND ANALYSIS
5.1 TRANSLATION PERFORMANCE
We compare WoW with 3 baselines:
- Parallel: Only 10k parallel data.
- BT: The parallel corpus is augmented with 10k back-translated pairs.
- COPY: The parallel corpus is augmented with 10k copied target data.

As shown in Table 2, BT outperforms the parallel-only baseline, showing that even weak synthetic sentences can improve BLEU scores. However, BT itself is beaten by the rather simple COPY method. The low performance of BT compared to COPY can be attributed to the poor quality source sentences that it generates.

WoW outperforms all baselines, including COPY, for both language pairs. WoW beats the best data augmentation baseline (COPY) by 0.85 points for es-en and 0.79 points for de-en. Gains over parallel-only data are 2.5 for de-en and 2.8 for es-en. For comparison, we also report BLEU scores for 20k parallel data, which gives us an upper bound on what data augmentation techniques can achieve.

Table 2: Comparison with Baselines
experiment       de-en   es-en
parallel 10k      9.46    9.51
+ 10k BT         10.86   11.47
+ 10k COPY       11.24   11.49
+ 10k WoW        12.03   12.34
+ 10k parallel   13.01   13.46

Table 3: Increasing Monolingual Data
experiment       de-en   es-en
parallel 10k      9.46    9.51
+ 10k WoW        12.03   12.34
+ 20k WoW        12.66   12.96
+ 40k WoW        13.31   13.79
+ 10k parallel   13.01   13.46

5.2 EFFECT OF SIZE OF MONOLINGUAL DATA
We perform further experiments, increasing the size of the monolingual data used for augmentation. Three settings are considered: 1:1 (10k synthetic), 1:2 (20k synthetic) and 1:4 (40k synthetic) ratios for the parallel and synthetic data respectively. For both language pairs, we see substantial improvements, up to 3.8 BLEU points for de-en and 4.3 points for es-en, with the increase in monolingual data. This can be seen in Table 3.
We observe that adding 40k synthetic sentences brings about more benefit than adding 10k high quality parallel sentences.

Table 4: Using Out-of-Domain Monolingual Data
experiment       de-en   es-en
parallel 10k      9.46    9.51
+ 10k WoW        11.59   12.27
+ 20k WoW        12.36   12.73
+ 40k WoW        13.0    12.91
+ 10k parallel   13.01   13.46

Table 5: Using Out-of-Domain Test Set for es-en
experiment       TED (source)   UN (target)
parallel 10k      9.46           -
+ 10k WoW         4.13           6.16
+ 20k WoW         4.2            7.37
+ 40k WoW         4.85           7.51

5.3 EFFECT OF OUT-OF-DOMAIN MONOLINGUAL DATA
There might be scenarios where we want to perform translations for a specific domain where only a small parallel corpus is available, but monolingual data from a different domain is readily available. In this section, we explore such a setting. While the TED Talks corpus consists of spoken language data (in-domain), monolingual data is drawn from the news domain (out-of-domain). We randomly sample 10k, 20k and 40k data from the WMT 2017 News task dataset. Table 4 shows that using out-of-domain monolingual corpora also shows significant improvements over all three baselines. For de-en, adding 40k out-of-domain synthetic samples achieves the same performance as adding 10k in-domain parallel samples.

5.4 DOMAIN ADAPTATION
Another realistic scenario is one where a parallel corpus is available in a source domain (TED Talks), but we care about translation in a target domain (news), where only monolingual data is available. Methods like BT would not perform well here, since the reverse model has only seen data from the source domain. Moreover, the out-of-vocabulary problem is only exacerbated in such a setting, making our approach even more attractive. Our setup here is the same as in Section 5.3, except that we use an out-of-domain (10k pairs from the UN es-en corpus²) test set. From Table 5, we observe that WoW trained on the dataset augmented using monolingual data in the target domain shows impressive gains compared to both the parallel baseline (trained only on the source domain) as well as WoW using just the source domain. Adapting to domains like medicine using domain-specific bilingual lexicons would be an interesting line of future work.

Table 6: Combining Approaches
experiment              de-en   es-en
parallel 10k             9.46    9.51
+ 20k WoW               12.66   12.34
+ 10k WoW + 10k COPY    12.09   13.03
+ 10k WoW + 10k BT      11.46   12.02

5.5 COMBINING WITH OTHER APPROACHES
It is trivial to combine WoW with other augmentation approaches like COPY and BT. We hypothesized that combination with COPY is most promising, given that Currey et al. (2017) have shown that its benefits are complementary to back-translation. Specifically, COPY helps the model copy unchanged words like named entities. We explore such combinations by running experiments for WoW + COPY and WoW + BT (Table 6). As expected, combination with COPY performs better since the copied target sentences contain words like named entities which are entirely missing from the bilingual dictionaries. Interestingly, 10k WoW + 10k COPY outperforms using 20k WoW for es-en, as the model is able to draw upon the complementary benefits of both methods. On the other hand, combining our method with BT leads to a decrease in performance, which goes to show that a low quality BT model does not add any complementary benefit.

² http://www.statmt.org/wmt11/

6 CONCLUSION AND FUTURE WORK
We propose a simple yet effective data augmentation technique that utilizes bilingual dictionaries for low resource NMT.
In this work, we used ground truth dictionaries. A direct line of future work is to create synthetic samples using induced dictionaries, and also to incorporate phrase tables.
BkghQVbBYE
Simple technique for improving low-resource translation when bilingual dictionaries are given.
3: Marginally above acceptance threshold
This paper investigates the idea of using bilingual dictionaries to create synthetic sources for target-side monolingual data in order to improve over NMT models trained with small amounts of parallel data. This strategy is compared with back-translation and copying the target to the source side, and evaluated on TED data for de-en and es-en in a simulated low-resource and a domain adaptation setting. The empirical results show that when little parallel data is available in addition to bilingual dictionaries, this method can outperform back-translation and copying.

Pros:
- Written clearly
- Reproducible (hyperparameters, data)
- Evaluation shows improvements of the proposed model over baselines, despite the simplicity of the data and the noise in the sources.
- Effects of data sizes are studied.
- Good review of related work.

Cons:
- The low-resource setting is only simulated. It would have been better to take a truly low-resource language and evaluate the methods on that (e.g. the other language pairs presented in Qi et al. 2018).
- The requirement of bilingual dictionaries, their coverage, and their domain dependence is not discussed. If little parallel data is available, can we simply assume the existence of large dictionaries?
- It is assumed that the word-by-word dictionary translation "at least ensures that the words in the synthetic sentences are accurate" (§4). This is critical since it ignores the problem of polysemy - one word in the target language can often have more than one meaning in the source language: which one is picked for generating the synthetic sentence?

In summary, despite its clarity and simplicity, I don't find the paper very creative regarding the methodology, and it does not sufficiently answer the question of when dictionaries outperform back-translation, since the properties of the additional resource, i.e. the dictionary, are not discussed or investigated, nor are its limitations. It would have been interesting to see the same approach in a truly low-resource problem where the dictionary might be limited as well.

Details:
- Consider changing the acronym of the method; it seems widely adopted for World of Warcraft.
- Table 1 has encoding problems for á, ò, etc.
3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_start|>system
You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|>
<|im_start|>user
### Paper Title
INCORPORATING BILINGUAL DICTIONARIES FOR LOW RESOURCE SEMI-SUPERVISED NEURAL MACHINE TRANSLATION
### Paper Abstract
We explore ways of incorporating bilingual dictionaries to enable semi-supervised neural machine translation. Conventional back-translation methods have shown success in leveraging target side monolingual data. However, since the quality of back-translation models is tied to the size of the available parallel corpora, this could adversely impact the synthetically generated sentences in a low resource setting. We propose a simple data augmentation technique to address this shortcoming. We incorporate widely available bilingual dictionaries that yield word-by-word translations to generate synthetic sentences. This automatically expands the vocabulary of the model while maintaining high quality content. Our method shows an appreciable improvement in performance over strong baselines.
### Paper Keywords
["bilingual dictionaries", "neural machine translation", "low resource", "ways", "conventional", "methods", "success", "quality", "models"]
### Paper Content
ABSTRACT
We explore ways of incorporating bilingual dictionaries to enable semi-supervised neural machine translation. Conventional back-translation methods have shown success in leveraging target side monolingual data. However, since the quality of back-translation models is tied to the size of the available parallel corpora, this could adversely impact the synthetically generated sentences in a low resource setting. We propose a simple data augmentation technique to address this shortcoming. We incorporate widely available bilingual dictionaries that yield word-by-word translations to generate synthetic sentences. This automatically expands the vocabulary of the model while maintaining high quality content. Our method shows an appreciable improvement in performance over strong baselines.

1 INTRODUCTION
Neural Machine Translation (NMT) methods require large amounts of parallel data to perform well. This poses a significant challenge in low-resource and out-of-domain scenarios where the amount of parallel data is usually limited. A proven way to mitigate this issue has been by leveraging the vast amounts of monolingual data in conjunction with parallel data to improve performance. Prior work in the field has explored several methods to achieve this. One of the most successful approaches has been Back-Translation (BT) Sennrich et al. (2015), which generates artificial parallel data from target monolingual corpora by training a translation model in the reverse direction. Another approach (COPY) proposed by Currey et al. (2017) directly copies target monolingual data to the source, focused on capturing entities that do not change across languages.

The methods mentioned above suffer from a couple of limitations. The quality of the generated source translations in the BT model is dependent on the amount of parallel data. Furthermore, the vocabulary available to the model is also limited to that of the parallel data, which increases the probability of out-of-vocabulary words. The COPY model, on the other hand, adds vocabulary, albeit only on the target side. In this paper, we propose a simple yet effective data augmentation technique that utilizes bilingual dictionaries and expands vocabulary on both source and target sides, thus significantly reducing the probability of out-of-vocabulary words.
Our method also ensures that correlations between the source and target languages are modelled in the monolingual data. In particular, our contributions are as follows:
- We propose the Word-on-Word (WoW) data augmentation method, which outperforms previous data augmentation methods in a low-resource setting.
- We show that our method benefits from both in-domain as well as out-of-domain monolingual data and shows encouraging results for domain adaptation.
- Finally, we also apply our method on top of other augmentation techniques and show its effectiveness in enhancing performance.

(Equal contribution)

2 RELATED WORK
Back-translation Sennrich et al. (2015) has emerged as a popular way of using monolingual data on the target side. Burlot & Yvon (2018) show that the quality of the reverse model directly impacts translation quality: augmenting data generated from a weak back-translation model leads to only small improvements. Our method directly addresses this issue by utilizing high-quality bilingual dictionaries. Zhang & Zong (2016b) consider using data on the source side in a self-training setup. Other ways of data augmentation include copying target data on the source side Currey et al. (2017). Sennrich et al. (2015) use target side monolingual data by using null sentences on the source side, effectively performing language modelling as an auxiliary task. Another way of incorporating monolingual data is via hidden states from pre-trained language models, as done by Gulcehre et al. (2015).

In terms of incorporating bilingual dictionaries into NMT, Zhang & Zong (2016a) use them for data augmentation. However, they focus mainly on rare words, and unlike our approach, their method has a dependency on statistical phrase-based translation models. Arthur et al. (2016) use translation lexicons, but their objective is to use them for influencing the probability distribution of the decoder. Word-by-word translation has also been used for unsupervised translation Lample et al. (2017), while our goal is to utilize it in the semi-supervised setup.

3 EXPERIMENTAL SETUP
We use the TED Talks corpus Qi et al. (2018) with the provided train, dev and test splits. Specifically, we consider German-English (de-en) and Spanish-English (es-en) translation tasks. We use subsets from the given train split to simulate low-resource settings. We use freely available bilingual dictionaries provided by Facebook¹. For our experiments, we employ a 1-layer 256-dimensional encoder-decoder model with attention Bahdanau et al. (2014) with a beam width of 5 for decoding. Training uses a batch size of 32 and the Adam optimizer Kingma & Ba (2015) with an initial learning rate of 0.001, with cosine annealing. Models are trained for 50 epochs, while model selection is performed according to performance on the validation set using BLEU Papineni et al.
(2002).

¹ https://github.com/facebookresearch/MUSE#ground-truth-bilingual-dictionaries

Table 1: Comparison of various data augmentation techniques
Target (English):       The work of a transportation commissioner isn't just about stop signs and traffic signals.
Ground Truth:           El trabajo de una Comisaría de Transporte no es solo sobre señales de pare y semáforos.
Copied Target (COPY):   The work of a transportation commissioner isn't just about stop signs and traffic signals.
Back Translation (BT):  el trabajo de una evolución funcional no está hablando con los altibajos y aman a las señales.
Bilingual Dictionary:   trabajo del una transportación comisionado just acerca pare letreros tráfico señales

4 METHOD
Our approach utilizes bilingual dictionaries to obtain word-on-word translations (WoW) for target side monolingual data. Given a sentence in the target language (in our case, English), we apply the dictionary on each word to obtain corresponding translations in the source language. We then augment our parallel corpus with this synthetically created data on the source side and the ground truth monolingual data on the target side. We then train our model on this augmented dataset to achieve our final translations. Figure 1 shows the popular approaches of data augmentation using monolingual corpora on the target side and how they compare with our proposed approach.

The main benefit of WoW over back-translation (BT) is in the quality of the generated synthetic data. BT in very low-resource settings results in a poor model, which in turn generates poor quality synthetic sentences. WoW, on the other hand, relies on strong bilingual dictionaries. Hence, even if the sentence structure due to the word-on-word translation is poor, it at least ensures that the words in the synthetic sentences are accurate. With the right words on the source side and an approximately correct word ordering, the model has a rough sketch of the semantics of the sentence.

Another clear benefit of this method is that it allows the model to expand its vocabulary. For instance, starting with 10k parallel pairs, adding 10k WoW pairs helps increase vocabulary coverage on the development set from 65% to 92% for es-en on the target side. Note that the vocabulary expansion effect is both on the target as well as the source side. On the source side, the model is exposed to a lot fewer unks, and the coverage increases from 60% to 88%. Theoretically, COPY also expands vocabulary on the target side. However, it does so in a way that is independent of the source sentence. With WoW, on the other hand, the model can make direct correlations between new target words and the source words. Note that for every pair in the augmented dataset, the target sentence is always a high quality, real world sentence. The synthetic data is only added to the source side, and the quality of the supervised labels for the decoder remains untarnished.

5 RESULTS AND ANALYSIS
5.1 TRANSLATION PERFORMANCE
We compare WoW with 3 baselines:
- Parallel: Only 10k parallel data.
- BT: The parallel corpus is augmented with 10k back-translated pairs.
- COPY: The parallel corpus is augmented with 10k copied target data.

As shown in Table 2, BT outperforms the parallel-only baseline, showing that even weak synthetic sentences can improve BLEU scores. However, BT itself is beaten by the rather simple COPY method. The low performance of BT compared to COPY can be attributed to the poor quality source sentences that it generates.

WoW outperforms all baselines, including COPY, for both language pairs.
WoW beats the best data augmentation baseline (COPY) by 0.85 points for es-en and 0.79 points for de-en. Gains over parallel-only data are 2.5 for de-en and 2.8 for es-en. For comparison, we also report BLEU scores for 20k parallel data, which gives us an upper bound on what data augmentation techniques can achieve.

Table 2: Comparison with Baselines
experiment       de-en   es-en
parallel 10k      9.46    9.51
+ 10k BT         10.86   11.47
+ 10k COPY       11.24   11.49
+ 10k WoW        12.03   12.34
+ 10k parallel   13.01   13.46

Table 3: Increasing Monolingual Data
experiment       de-en   es-en
parallel 10k      9.46    9.51
+ 10k WoW        12.03   12.34
+ 20k WoW        12.66   12.96
+ 40k WoW        13.31   13.79
+ 10k parallel   13.01   13.46

5.2 EFFECT OF SIZE OF MONOLINGUAL DATA
We perform further experiments, increasing the size of the monolingual data used for augmentation. Three settings are considered: 1:1 (10k synthetic), 1:2 (20k synthetic) and 1:4 (40k synthetic) ratios for the parallel and synthetic data respectively. For both language pairs, we see substantial improvements, up to 3.8 BLEU points for de-en and 4.3 points for es-en, with the increase in monolingual data. This can be seen in Table 3. We observe that adding 40k synthetic sentences brings about more benefit than adding 10k high quality parallel sentences.

Table 4: Using Out-of-Domain Monolingual Data
experiment       de-en   es-en
parallel 10k      9.46    9.51
+ 10k WoW        11.59   12.27
+ 20k WoW        12.36   12.73
+ 40k WoW        13.0    12.91
+ 10k parallel   13.01   13.46

Table 5: Using Out-of-Domain Test Set for es-en
experiment       TED (source)   UN (target)
parallel 10k      9.46           -
+ 10k WoW         4.13           6.16
+ 20k WoW         4.2            7.37
+ 40k WoW         4.85           7.51

5.3 EFFECT OF OUT-OF-DOMAIN MONOLINGUAL DATA
There might be scenarios where we want to perform translations for a specific domain where only a small parallel corpus is available, but monolingual data from a different domain is readily available. In this section, we explore such a setting. While the TED Talks corpus consists of spoken language data (in-domain), monolingual data is drawn from the news domain (out-of-domain). We randomly sample 10k, 20k and 40k data from the WMT 2017 News task dataset. Table 4 shows that using out-of-domain monolingual corpora also shows significant improvements over all three baselines. For de-en, adding 40k out-of-domain synthetic samples achieves the same performance as adding 10k in-domain parallel samples.

5.4 DOMAIN ADAPTATION
Another realistic scenario is one where a parallel corpus is available in a source domain (TED Talks), but we care about translation in a target domain (news), where only monolingual data is available. Methods like BT would not perform well here, since the reverse model has only seen data from the source domain. Moreover, the out-of-vocabulary problem is only exacerbated in such a setting, making our approach even more attractive. Our setup here is the same as in Section 5.3, except that we use an out-of-domain (10k pairs from the UN es-en corpus²) test set. From Table 5, we observe that WoW trained on the dataset augmented using monolingual data in the target domain shows impressive gains compared to both the parallel baseline (trained only on the source domain) as well as WoW using just the source domain.
Adapting to domains like medicine using domain-specific bilingual lexicons would be an interesting line of future work.

Table 6: Combining Approaches
experiment              de-en   es-en
parallel 10k             9.46    9.51
+ 20k WoW               12.66   12.34
+ 10k WoW + 10k COPY    12.09   13.03
+ 10k WoW + 10k BT      11.46   12.02

5.5 COMBINING WITH OTHER APPROACHES
It is trivial to combine WoW with other augmentation approaches like COPY and BT. We hypothesized that combination with COPY is most promising, given that Currey et al. (2017) have shown that its benefits are complementary to back-translation. Specifically, COPY helps the model copy unchanged words like named entities. We explore such combinations by running experiments for WoW + COPY and WoW + BT (Table 6). As expected, combination with COPY performs better since the copied target sentences contain words like named entities which are entirely missing from the bilingual dictionaries. Interestingly, 10k WoW + 10k COPY outperforms using 20k WoW for es-en, as the model is able to draw upon the complementary benefits of both methods. On the other hand, combining our method with BT leads to a decrease in performance, which goes to show that a low quality BT model does not add any complementary benefit.

² http://www.statmt.org/wmt11/

6 CONCLUSION AND FUTURE WORK
We propose a simple yet effective data augmentation technique that utilizes bilingual dictionaries for low resource NMT. In this work, we used ground truth dictionaries. A direct line of future work is to create synthetic samples using induced dictionaries, and also to incorporate phrase tables.<|im_end|>
<|im_start|>assistant
### Review Title
Simple technique for improving low-resource translation when bilingual dictionaries are given.
### Review Text
This paper investigates the idea of using bilingual dictionaries to create synthetic sources for target-side monolingual data in order to improve over NMT models trained with small amounts of parallel data. This strategy is compared with back-translation and copying the target to the source side, and evaluated on TED data for de-en and es-en in a simulated low-resource and a domain adaptation setting. The empirical results show that when little parallel data is available in addition to bilingual dictionaries, this method can outperform back-translation and copying.

Pros:
- Written clearly
- Reproducible (hyperparameters, data)
- Evaluation shows improvements of the proposed model over baselines, despite the simplicity of the data and the noise in the sources.
- Effects of data sizes are studied.
- Good review of related work.

Cons:
- The low-resource setting is only simulated. It would have been better to take a truly low-resource language and evaluate the methods on that (e.g. the other language pairs presented in Qi et al. 2018).
- The requirement of bilingual dictionaries, their coverage, and their domain dependence is not discussed. If little parallel data is available, can we simply assume the existence of large dictionaries?
- It is assumed that the word-by-word dictionary translation "at least ensures that the words in the synthetic sentences are accurate" (§4). This is critical since it ignores the problem of polysemy - one word in the target language can often have more than one meaning in the source language: which one is picked for generating the synthetic sentence?
In summary, despite its clarity and simplicity, I don't find the paper very creative regarding the methodology, and it does not sufficiently answer the question of when dictionaries outperform back-translation, since the properties of the additional resource, i.e. the dictionary, are not discussed or investigated, nor are its limitations. It would have been interesting to see the same approach in a truly low-resource problem where the dictionary might be limited as well.

Details:
- Consider changing the acronym of the method; it seems widely adopted for World of Warcraft.
- Table 1 has encoding problems for á, ò, etc.
### Review Rating
3: Marginally above acceptance threshold
### Review Confidence
3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature<|im_end|>
<|im_end|>
YPm0fzy_z6R
ICLR.cc/2021/Conference
2021
Signed Graph Diffusion Network
["Jinhong Jung", "Jaemin Yoo", "U Kang"]
Given a signed social graph, how can we learn appropriate node representations to infer the signs of missing edges? Signed social graphs have received considerable attention to model trust relationships. Learning node representations is crucial to effectively analyze graph data, and various techniques such as network embedding and graph convolutional network (GCN) have been proposed for learning signed graphs. However, traditional network embedding methods are not end-to-end for a specific task such as link sign prediction, and GCN-based methods suffer from a performance degradation problem when their depth increases. In this paper, we propose Signed Graph Diffusion Network (SGDNet), a novel graph neural network that achieves end-to-end node representation learning for link sign prediction in signed social graphs. We propose a random walk technique specially designed for signed graphs so that SGDNet effectively diffuses hidden node features. Through extensive experiments, we demonstrate that SGDNet outperforms state-of-the-art models in terms of link sign prediction accuracy.
["graph neural network", "signed graph analysis", "representation learning", "graph diffusion", "random walk", "link sign prediction"]
ABSTRACT
Given a signed social graph, how can we learn appropriate node representations to infer the signs of missing edges? Signed social graphs have received considerable attention to model trust relationships. Learning node representations is crucial to effectively analyze graph data, and various techniques such as network embedding and graph convolutional network (GCN) have been proposed for learning signed graphs. However, traditional network embedding methods are not end-to-end for a specific task such as link sign prediction, and GCN-based methods suffer from a performance degradation problem when their depth increases. In this paper, we propose SIGNED GRAPH DIFFUSION NETWORK (SGDNet), a novel graph neural network that achieves end-to-end node representation learning for link sign prediction in signed social graphs. We propose a random walk technique specially designed for signed graphs so that SGDNet effectively diffuses hidden node features. Through extensive experiments, we demonstrate that SGDNet outperforms state-of-the-art models in terms of link sign prediction accuracy.

1 INTRODUCTION
Given a signed social graph, how can we learn appropriate node representations to infer the signs of missing edges? Signed social graphs model trust relationships between people with positive (trust) and negative (distrust) edges. Many online social services such as Epinions (Guha et al., 2004) and Slashdot (Kunegis et al., 2009) that allow users to express their opinions are naturally represented as signed social graphs. Such graphs have attracted considerable attention for diverse applications including link sign prediction (Leskovec et al., 2010a; Kumar et al., 2016), node ranking (Jung et al., 2016; Li et al., 2019b), community analysis (Yang et al., 2007; Chu et al., 2016), graph generation (Derr et al., 2018a; Jung et al., 2020), and anomaly detection (Kumar et al., 2014).

Node representation learning is a fundamental building block for analyzing graph data, and many researchers have put tremendous efforts into developing effective models for unsigned graphs. Graph convolutional networks (GCN) and their variants (Kipf & Welling, 2017; Velickovic et al., 2018) have spurred great attention in the machine learning community, and recent works (Klicpera et al., 2019; Li et al., 2019a) have demonstrated stunning progress by handling the performance degradation caused by over-smoothing (Li et al., 2018; Oono & Suzuki, 2020) (i.e., node representations become indistinguishable as the number of propagations increases) or the vanishing gradient problem (Li et al., 2019a) in the first generation of GCN models. However, all of these models have limited performance on node representation learning in signed graphs since they only consider unsigned edges under the homophily assumption (Kipf & Welling, 2017).

Many studies have been recently conducted to consider such signed edges, and they are categorized into network embedding and GCN-based models. Network embedding (Kim et al., 2018; Xu et al., 2019b) learns the representations of nodes by optimizing an unsupervised loss that primarily aims to locate two nodes' embeddings closely (or far apart) if they are positively (or negatively) connected. However, they are not trained jointly with a specific task in an end-to-end manner, i.e., latent features and the task are trained separately. Thus, their performance is limited unless each of them is tuned delicately.
GCN-based models (Derr et al., 2018b; Li et al., 2020) have extended the graph convolutions to signed graphs using balance theory (Holland & Leinhardt, 1971) in order to properly propagate node features on signed edges. However, these models are directly extended from existing GCNs without consideration of the over-smoothing problem that degrades their performance. This problem hinders them from exploiting more information from multi-hop neighbors for learning node features in signed graphs.

[Figure 1: Overall architecture of SGDNet. (a) Given a signed graph G and initial node features X, SGDNet with multiple SGD layers produces the final embeddings H^(L), which are fed to a loss function under an end-to-end framework. (b) A single SGD layer learns node embeddings based on signed random walk diffusion. (c) Our diffusion module aggregates the features of node v so that they are similar to those connected by + edges (e.g., node u), and different from those connected by - edges (e.g., node t). Also, it injects the local feature (i.e., the input feature of each module) of node v at each aggregation to make the aggregated features distinguishable.]

We propose SGDNet (SIGNED GRAPH DIFFUSION NETWORK), a novel graph neural network for node representation learning in signed graphs. Our main contributions are summarized as follows:
- End-to-end learning. We design SGDNet to perform end-to-end node representation learning. Given a signed graph, SGDNet produces node embeddings through multiple signed graph diffusion (SGD) layers (Figure 1(a)), which are fed into a loss function of a specific task such as link sign prediction.
- Novel feature diffusion. We propose a signed random walk diffusion method that propagates node embeddings on signed edges based on random walks considering signs, and injects local features (Figure 1(c)). This enables SGDNet to learn distinguishable node representations considering multi-hop neighbors while preserving local information.
- Experiments. Extensive experiments show that SGDNet effectively learns node representations of signed social graphs for link sign prediction, giving at least 3.9% higher accuracy than the state-of-the-art models on real datasets (Table 2).

2 RELATED WORK
2.1 GRAPH CONVOLUTIONAL NETWORKS ON UNSIGNED GRAPHS
Graph convolutional network (GCN) (Kipf & Welling, 2017) models the latent representation of a node by employing a convolutional operation on the features of its neighbors. Various GCN-based approaches (Kipf & Welling, 2017; Velickovic et al., 2018; Hamilton et al., 2017) have aroused considerable attention since they enable diverse graph supervised tasks (Kipf & Welling, 2017; Yao et al., 2019; Xu et al., 2019a) to be performed concisely under an end-to-end framework. However, the first generation of GCN models exhibits performance degradation due to the over-smoothing and vanishing gradient problems. Several works (Li et al., 2018; Oono & Suzuki, 2020) have theoretically revealed the over-smoothing problem. Also, Li et al. (2019a) have empirically shown that stacking more GCN layers leads to the vanishing gradient problem as in convolutional neural networks (He et al., 2016).
Consequently, most GCN-based models (Kipf & Welling, 2017; Velickovic et al., 2018; Hamilton et al., 2017) are shallow; i.e., they do not use the feature information in faraway nodes when modeling node embeddings.

A recent research direction aims at resolving this limitation. Klicpera et al. (2019) proposed APPNP, exploiting Personalized PageRank (Jeh & Widom, 2003) to not only propagate hidden node embeddings far but also preserve local features, thereby preventing aggregated features from being over-smoothed. Li et al. (2019a) suggested ResGCN, adding skip connections between GCN layers, as in ResNet (He et al., 2016). However, all of these models do not provide a way to use signed edges since they are based on the homophily assumption (Kipf & Welling, 2017), i.e., users having connections are likely to be similar, which is not valid for negative edges. As opposed to homophily, negative edges have the semantics of heterophily (Rogers, 2010), i.e., users having connections are dissimilar. Although these methods can still be applied to signed graphs by ignoring the edge signs, their trained features have limited capacity.

2.2 NETWORK EMBEDDING AND GRAPH CONVOLUTIONAL NETWORKS ON SIGNED GRAPHS
Traditional methods on network embedding extract latent node features specialized for signed graphs in an unsupervised manner. Kim et al. (2018) proposed SIDE, which optimizes a likelihood over direct and indirect signed connections on truncated random walks sampled from a signed graph. Xu et al. (2019b) developed SLF, considering positive, negative, and non-linked relationships between nodes to learn non-negative node embeddings. However, such approaches are not end-to-end, i.e., they are not directly optimized for solving a supervised task such as link prediction.

There is recent progress on end-to-end learning on signed networks under the GCN framework. Derr et al. (2018b) proposed SGCN, which extends the GCN mechanism to signed graphs considering balanced and unbalanced relationships supported by structural balance theory (Holland & Leinhardt, 1971). Yu et al. (Li et al., 2020) developed SNEA, using attention techniques to reveal the importance of these relationships. However, such state-of-the-art models do not consider the over-smoothing problem since they are directly extended from GCN.

3 PROPOSED METHOD
We propose SGDNet (SIGNED GRAPH DIFFUSION NETWORK), a novel end-to-end model for node representation learning in signed graphs. Our SGDNet aims to properly aggregate node features on signed edges, and to effectively use the features of multi-hop neighbors so that generated features are not over-smoothed. Our main ideas are to diffuse node features along random walks considering the signs of edges, and to inject local node features at each aggregation.

Figure 1 depicts the overall architecture of SGDNet. Given a signed graph G and initial node features $X \in \mathbb{R}^{n \times d_0}$ as shown in Figure 1(a), SGDNet extracts the final node embeddings $H^{(L)} \in \mathbb{R}^{n \times d_L}$ through multiple SGD layers, where n is the number of nodes, L is the number of SGD layers, and $d_l$ is the embedding dimension of the l-th layer. Then, $H^{(L)}$ is fed into a loss function of a specific task so that they are jointly trained in an end-to-end framework. Given $H^{(l-1)}$, the l-th SGD layer aims to learn $H^{(l)}$ based on feature transformations and signed random walk diffusion $F_d(\cdot)$ as shown in Figure 1(b).
The layer also uses the skip connection to prevent the vanishing gradient problem when the depth of SGDNet increases.

Figure 1(c) illustrates the intuition behind the signed random walk diffusion. Each node has two features corresponding to positive and negative surfers, respectively. The surfer flips its sign when moving along negative edges, while the sign is kept along positive edges. For example, the positive (or negative) surfer becomes positive at node v if it moves from a positively connected node u (or a negatively connected node t). As a result, the aggregated features at node v become similar to those connected by positive edges (e.g., node u), and different from those connected by negative edges (e.g., node t). In other words, it satisfies homophily and heterophily at the same time while unsigned GCNs cannot handle the heterophily of negative edges. Furthermore, we inject the local feature (i.e., the input feature of the module) of node v at each aggregation so that the resulting features remain distinguishable during the diffusion.

3.1 SIGNED GRAPH DIFFUSION LAYER
Given a signed graph G and the node embeddings $H^{(l-1)}$ from the previous layer, the l-th SGD layer learns new embeddings $H^{(l)}$ as shown in Figure 1(b). It first transforms $H^{(l-1)}$ into hidden features $\tilde{H}^{(l)}$ as $\tilde{H}^{(l)} = H^{(l-1)} W_t^{(l)}$ with a learnable parameter $W_t^{(l)} \in \mathbb{R}^{d_{l-1} \times d_l}$. Then, it applies the signed random walk diffusion, which is represented as the function $F_d(G, \tilde{H}^{(l)})$ that returns $P^{(l)} \in \mathbb{R}^{n \times d_l}$ and $M^{(l)} \in \mathbb{R}^{n \times d_l}$ as the positive and the negative embeddings, respectively (details in Section 3.2). The embeddings are concatenated and transformed as follows:

$H^{(l)} = \sigma\big(\big[P^{(l)} \,\|\, M^{(l)}\big] W_n^{(l)}\big) + H^{(l-1)}$   (1)

where $\sigma(\cdot)$ is a non-linear activator such as tanh, $\|$ denotes horizontal concatenation of two matrices, and $W_n^{(l)} \in \mathbb{R}^{2d_l \times d_l}$ is a trainable weight matrix that learns a relationship between $P^{(l)}$ and $M^{(l)}$. We use the skip connection (He et al., 2016; Li et al., 2019a) with $H^{(l-1)}$ in Equation (1) to avoid the vanishing gradient issue which frequently occurs when multiple layers are stacked.

[Figure 2: Feature diffusion by signed random walks in SGDNet. (a) Signed random walks properly consider edge signs. (b) The positive and the negative feature vectors $p_v^{(k)}$ and $m_v^{(k)}$ are updated from the previous feature vectors and the local feature vector $\tilde{h}_v^{(l)}$ as described in Equation (2).]

3.2 SIGNED RANDOM WALK DIFFUSION
We design the signed random walk diffusion operator $F_d(\cdot)$ used in the l-th SGD layer. Given the signed graph G and the hidden node embeddings $\tilde{H}^{(l)}$, the diffusion operator $F_d(\cdot)$ diffuses the node features based on random walks considering edge signs so that it properly aggregates node features on signed edges and prevents the aggregated features from being over-smoothed.

Signed random walks are performed by a signed random surfer (Jung et al., 2016) who has the + or - sign when moving around the graph. Figure 2(a) shows signed random walks on four cases according to edge signs: 1) a friend's friend, 2) a friend's enemy, 3) an enemy's friend, and 4) an enemy's enemy. The surfer starts from node s with the + sign. If it encounters a negative edge, the surfer flips its sign from + to -, or vice versa. Otherwise, the sign is kept. The surfer determines whether a target node t is a friend of node s or not according to its sign. $F_d(\cdot)$ exploits the signed random walk for diffusing node features on signed edges.
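A toy simulation of this sign-flipping rule may help fix intuition; this is an illustrative sketch under an assumed adjacency-list graph, not the authors' implementation:

```python
# Minimal sketch of the signed random surfer of Figure 2(a): the surfer starts
# at node s with sign +1, and multiplying by each traversed edge's sign
# implements "flip on -, keep on +". The toy graph below is an assumption.
import random

def signed_random_walk(out_edges, s, steps, seed=0):
    """out_edges[u] is a list of (v, sign) pairs with sign in {+1, -1}."""
    rng = random.Random(seed)
    node, sign = s, +1
    for _ in range(steps):
        if not out_edges.get(node):
            break
        node, edge_sign = rng.choice(out_edges[node])  # uniform over out-edges
        sign *= edge_sign                              # enemy's enemy -> friend
    return node, sign  # sign=+1: node judged a friend of s; -1: an enemy

out_edges = {0: [(1, +1), (2, -1)], 1: [(2, -1)], 2: [(0, +1)]}
print(signed_random_walk(out_edges, s=0, steps=3))
```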
Each node is represented by two feature vectors which represent the positive and negative signs, respectively. Let k denote the number of diffusion steps or random walk steps. Then, $p_v^{(k)} \in \mathbb{R}^{d_l \times 1}$ and $m_v^{(k)} \in \mathbb{R}^{d_l \times 1}$ are aggregated at node v, respectively, where $p_v^{(k)}$ (or $m_v^{(k)}$) is the feature vector visited by the positive (or negative) surfer at step k. These are recursively obtained by the following equations:

$p_v^{(k)} = (1-c)\Big(\sum_{u \in \overleftarrow{N}_v^{+}} \frac{1}{|\overrightarrow{N}_u|}\, p_u^{(k-1)} + \sum_{t \in \overleftarrow{N}_v^{-}} \frac{1}{|\overrightarrow{N}_t|}\, m_t^{(k-1)}\Big) + c\,\tilde{h}_v^{(l)}$

$m_v^{(k)} = (1-c)\Big(\sum_{t \in \overleftarrow{N}_v^{-}} \frac{1}{|\overrightarrow{N}_t|}\, p_t^{(k-1)} + \sum_{u \in \overleftarrow{N}_v^{+}} \frac{1}{|\overrightarrow{N}_u|}\, m_u^{(k-1)}\Big)$   (2)

where $\overleftarrow{N}_v^{s}$ is the set of incoming neighbors to node v connected with edges of sign s, $\overrightarrow{N}_u$ is the set of outgoing neighbors from node u regardless of edge signs, $\tilde{h}_v^{(l)}$ is the local feature of node v (i.e., the v-th row vector of $\tilde{H}^{(l)}$), and $0 < c < 1$ is a local feature injection ratio. That is, the features are computed by the signed random walk feature diffusion with weight 1-c and the local feature injection with weight c, with the following details.

Signed Random Walk Feature Diffusion. Figure 2(b) illustrates how $p_v^{(k)}$ and $m_v^{(k)}$ are diffused by the signed random walks according to Equation (2). Suppose the positive surfer visits node v at step k. For this to happen, the positive surfer of an incoming neighbor u at step k-1 should choose the edge (u→v, +) with probability $1/|\overrightarrow{N}_u|$. This transition to node v along the positive edge keeps the surfer's positive sign. At the same time, the negative surfer of an incoming neighbor t at step k-1 should move along the edge (t→v, -) with probability $1/|\overrightarrow{N}_t|$. In this case, the surfer flips its sign from - to +. Considering these signed random walks, $p_v^{(k)}$ is obtained by the weighted aggregation of $p_u^{(k-1)}$ and $m_t^{(k-1)}$. Similarly, $m_v^{(k)}$ is aggregated as shown in Figure 2(b).

Local Feature Injection. Although the feature diffusion above properly considers edge signs, the generated features could be over-smoothed after many steps if we depend solely on the diffusion. In other words, it considers only the graph information explored by the signed random surfer, while the local information in the hidden feature $\tilde{h}_v^{(l)}$ is disregarded during the diffusion. Hence, as shown in Figure 2(b), we explicitly inject the local feature $\tilde{h}_v^{(l)}$ into $p_v^{(k)}$ with weight c at each aggregation in Equation (2) so that the diffused features are not over-smoothed. The reason why local features are only injected into the + embeddings is that we consider that a node should trust (+) its own information (i.e., its local feature).

3.3 CONVERGENCE GUARANTEE OF SIGNED RANDOM WALK DIFFUSION
Suppose that $P^{(k)} = [p_1^{(k)\top}; \cdots; p_n^{(k)\top}]$ and $M^{(k)} = [m_1^{(k)\top}; \cdots; m_n^{(k)\top}]$ represent the positive and negative embeddings of all nodes, respectively, where ; denotes vertical concatenation. Let $A_s$ be the adjacency matrix for sign s such that $(A_s)_{uv}$ is 1 for signed edge (u→v, s), and 0 otherwise. Then, Equation (2) is vectorized as follows:

$P^{(k)} = (1-c)\big(\tilde{A}_+^{\top} P^{(k-1)} + \tilde{A}_-^{\top} M^{(k-1)}\big) + c\tilde{H}^{(l)}$
$M^{(k)} = (1-c)\big(\tilde{A}_-^{\top} P^{(k-1)} + \tilde{A}_+^{\top} M^{(k-1)}\big)$
$\iff\; T^{(k)} = (1-c)\tilde{B}\, T^{(k-1)} + cQ$   (3)

where $\tilde{A}_s = D^{-1} A_s$ is the normalized matrix for sign s, and D is a diagonal out-degree matrix (i.e., $D_{ii} = |\overrightarrow{N}_i|$).
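For concreteness, the following is a minimal NumPy/SciPy sketch of iterating Equation (3) K times. The function name and the usage graph are assumptions; the initialization matches what the paper describes later in this subsection ($P^{(0)} = \tilde{H}^{(l)}$, $M^{(0)}$ uniform in $[-1, 1]$), and nodes with no out-edges are given zero rows in $D^{-1}$ as a simplifying assumption.

```python
# Minimal sketch of the signed random walk diffusion in Equation (3).
# Ap, An stand for the transposed normalized matrices tilde-A_+^T, tilde-A_-^T.
import numpy as np
import scipy.sparse as sp

def signed_diffusion(A_pos, A_neg, H, c=0.15, K=10, seed=0):
    """Run K diffusion steps; A_pos/A_neg are n x n sparse adjacency matrices."""
    n, d = H.shape
    deg = np.asarray((A_pos + A_neg).sum(axis=1)).ravel()   # out-degrees
    inv = np.zeros_like(deg, dtype=float)
    inv[deg > 0] = 1.0 / deg[deg > 0]
    Dinv = sp.diags(inv)
    Ap, An = (Dinv @ A_pos).T, (Dinv @ A_neg).T
    P = H.copy()                                            # P(0) = H tilde
    M = np.random.default_rng(seed).uniform(-1.0, 1.0, (n, d))
    for _ in range(K):                                      # Equation (3)
        P, M = ((1 - c) * (Ap @ P + An @ M) + c * H,
                (1 - c) * (An @ P + Ap @ M))
    return P, M

# Toy usage on a 3-node signed graph (an assumption for illustration).
A_pos = sp.csr_matrix(np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float))
A_neg = sp.csr_matrix(np.array([[0, 0, 1], [0, 0, 0], [0, 1, 0]], dtype=float))
P, M = signed_diffusion(A_pos, A_neg, H=np.eye(3))
```

Note how the simultaneous tuple assignment updates P and M from the same previous step, mirroring the vectorized recurrence rather than an in-place Gauss-Seidel update.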
The left equations of Equation (3) are compactly represented as the right equation, where

$T^{(k)} = \begin{bmatrix} P^{(k)} \\ M^{(k)} \end{bmatrix}, \quad \tilde{B} = \begin{bmatrix} \tilde{A}_+^{\top} & \tilde{A}_-^{\top} \\ \tilde{A}_-^{\top} & \tilde{A}_+^{\top} \end{bmatrix}, \quad Q = \begin{bmatrix} \tilde{H}^{(l)} \\ 0 \end{bmatrix}.$

Then, $T^{(k)}$ is guaranteed to converge as shown in the following theorem.

Theorem 1. The diffused features in $T^{(k)}$ converge to equilibrium for $c \in (0, 1)$ as follows:

$T^{*} = \lim_{k \to \infty} T^{(k)} = \lim_{k \to \infty}\Big(\sum_{i=0}^{k-1} (1-c)^i \tilde{B}^i\Big)\tilde{Q} = \big(I - (1-c)\tilde{B}\big)^{-1}\tilde{Q} \quad (\tilde{Q} := cQ)$   (4)

If we iterate Equation (3) K times for $1 \le k \le K$, the exact solution $T^{*}$ is approximated as

$T^{*} \approx T^{(K)} = \tilde{Q} + (1-c)\tilde{B}\tilde{Q} + \cdots + (1-c)^{K-1}\tilde{B}^{K-1}\tilde{Q} + (1-c)^{K}\tilde{B}^{K}T^{(0)}$   (5)

where $\|T^{*} - T^{(K)}\|_1 \le (1-c)^K \|T^{*} - T^{(0)}\|_1$, and $T^{(0)} = [P^{(0)}; M^{(0)}]$ is the initial value of Equation (3). □

Proof 1. A proof sketch is to show that the spectral radius of $\tilde{B}$ is less than or equal to 1, which guarantees the convergence of the geometric series with $(1-c)\tilde{B}$. See the details in Appendix A.1. □

According to Theorem 1, $\tilde{B}^K\tilde{Q}$ is the node features diffused by K-step signed random walks with $\tilde{Q}$, where $\tilde{B}^K$ is interpreted as the transition matrix of K-step signed random walks. Thus, the approximation is the sum of the diffused features from 1 to K steps with a decaying factor 1-c, i.e., the effect of distant nodes gradually decreases while that of neighboring nodes is high. This is the reason why SGDNet prevents diffused features from being over-smoothed. Also, the approximation error $\|T^{*} - T^{(K)}\|_1$ exponentially decreases as K increases due to the term $(1-c)^K$. Another point is that the iteration of Equation (3) converges to the same solution no matter what $P^{(0)}$ and $M^{(0)}$ are given. In this work, we initialize $P^{(0)}$ with $\tilde{H}^{(l)}$, and randomly initialize $M^{(0)}$ in $[-1, 1]$.

The signed random walk diffusion operator $F_d(\cdot)$ iterates Equation (3) K times for $1 \le k \le K$, where K is the number of diffusion steps, and it returns $P^{(l)} \leftarrow P^{(K)}$ and $M^{(l)} \leftarrow M^{(K)}$ as the outputs of the diffusion module at the l-th SGD layer. The detailed pseudocode of SGDNet is described in Appendix A.3, and its time complexity is analyzed in Appendix A.2.

3.4 LOSS FUNCTION FOR LINK SIGN PREDICTION
The link sign prediction task is to predict the missing sign of a given edge. As shown in Figure 1(a), SGDNet produces the final node embeddings $H^{(L)}$. The embeddings are fed into a loss function $L(G, H^{(L)}; \theta) = L_{\mathrm{sign}}(G, H^{(L)}) + \lambda L_{\mathrm{reg}}(\theta)$, where $\theta$ is the set of model parameters, $L_{\mathrm{sign}}(\cdot)$ is the binary cross entropy loss, and $L_{\mathrm{reg}}(\cdot)$ is the $L_2$ regularization loss with weight decay $\lambda$. For a signed edge (u→v, s), the edge feature is $z_{uv} = h_u^{(L)} \,\|\, h_v^{(L)} \in \mathbb{R}^{1 \times 2d_L}$, where $h_u^{(L)}$ is the u-th row vector of $H^{(L)}$. Let E be the set of signed edges. Then, $L_{\mathrm{sign}}(\cdot)$ is represented as follows:

$L_{\mathrm{sign}}(G, X) = -\sum_{(u \to v, s) \in E}\; \sum_{t \in \{+,-\}} \mathbb{I}(t = s) \log\big(\mathrm{softmax}_t(z_{uv}W)\big)$

where $W \in \mathbb{R}^{2d_L \times 2}$ is a learnable weight matrix, $\mathrm{softmax}_t(\cdot)$ is the probability for sign t after the softmax operation, and $\mathbb{I}(\cdot)$ returns 1 if a given predicate is true, and 0 otherwise.

4 EXPERIMENTS
We evaluate the effectiveness of SGDNet through the link sign prediction task.

Table 1: Dataset statistics of signed graphs. |V| and |E| are the number of nodes and edges, respectively. Given sign $s \in \{+,-\}$, $|E_s|$ and $\rho(s)$ are the number and percentage of edges with sign s, respectively.
Dataset          |V|       |E|       |E+|      |E-|      ρ(+)     ρ(-)
Bitcoin-Alpha¹     3,783    24,186    22,650     1,536   93.65%    6.35%
Bitcoin-OTC¹       5,881    35,592    32,029     3,563   89.99%   10.01%
Slashdot²         79,120   515,397   392,326   123,071   76.12%   23.88%
Epinions³        131,828   841,372   717,667   123,705   85.30%   14.70%
¹ https://snap.stanford.edu/data/soc-sign-bitcoin-otc.html
² http://konect.uni-koblenz.de/networks/slashdot-zoo
³ http://www.trustlet.org/wiki/Extended_Epinions_dataset

Datasets.
We perform experiments on four standard signed graphs summarized in Table 1: Bitcoin-Alpha (Kumar et al., 2016), Bitcoin-OTC (Kumar et al., 2016), Slashdot (Kunegis et al., 2009), and Epinions (Guha et al., 2004). We provide the detailed description of each dataset in Appendix A.4. We also report additional experiments on the Wikipedia dataset (Leskovec et al., 2010b) in Appendix A.5.

Competitors. We compare our proposed SGDNet with the following competitors:
- APPNP (Klicpera et al., 2019): an unsigned GCN model based on Personalized PageRank.
- ResGCN (Li et al., 2019a): another unsigned GCN model exploiting skip connections to deeply stack multiple layers.
- SIDE (Kim et al., 2018): a network embedding model optimizing the likelihood over signed edges using random walk sequences to encode structural information into node embeddings.
- SLF (Xu et al., 2019b): another network embedding model considering positive, negative, and non-linked relationships to learn non-negative node embeddings.
- SGCN (Derr et al., 2018b): a state-of-the-art signed GCN model considering balanced and unbalanced paths motivated from balance theory to propagate embeddings.
- SNEA (Li et al., 2020): another signed GCN model extending SGCN by learning attentions on the balanced and unbalanced paths.

We use the absolute adjacency matrix for APPNP and ResGCN since they handle only unsigned edges. All methods are implemented by PyTorch and Numpy in Python. We use a machine with an Intel E5-2630 v4 2.2GHz CPU and a Geforce GTX 1080 Ti for the experiments.

Evaluation Metrics. We randomly split the edges of a signed graph into training and test sets by the 8:2 ratio. As shown in Table 1, the sign ratio is highly skewed to the positive sign, i.e., the sampled datasets are naturally imbalanced. Considering the class imbalance, we measure the area under the curve (AUC) to evaluate predictive performance. We also report F1-macro, measuring the average of the ratios of correct predictions for each sign, since negative edges need to be treated as important as positive edges (i.e., it gives equal importance to each class). A higher value of AUC or F1-macro indicates better performance. We repeat each experiment 10 times with different random seeds and report the average and standard deviation of test values.

Table 2: SGDNet gives the best link sign prediction performance in terms of AUC. The best model is in bold, and the second best model is underlined.
Table 2: SGDNet gives the best link sign prediction performance in terms of AUC. The best model is in bold, and the second best model is underlined. The % increase measures the best accuracy against the second best accuracy.

    AUC                             Bitcoin-Alpha   Bitcoin-OTC    Slashdot       Epinions
    APPNP (Klicpera et al., 2019)   0.854+-0.010    0.867+-0.009   0.837+-0.003   0.870+-0.002
    ResGCN (Li et al., 2019a)       0.853+-0.017    0.876+-0.010   0.744+-0.004   0.871+-0.002
    SIDE (Kim et al., 2018)         0.801+-0.020    0.839+-0.013   0.814+-0.003   0.880+-0.003
    SLF (Xu et al., 2019b)          0.779+-0.023    0.797+-0.014   0.833+-0.006   0.876+-0.005
    SGCN (Derr et al., 2018b)       0.824+-0.018    0.857+-0.008   0.827+-0.004   0.895+-0.002
    SNEA (Li et al., 2020)          0.855+-0.006    0.858+-0.008   0.754+-0.005   0.771+-0.004
    SGDNet (proposed)               0.911+-0.007    0.921+-0.005   0.886+-0.001   0.932+-0.001
    % increase                      6.4%            4.9%           5.9%           3.9%

Table 3: SGDNet gives the best link sign prediction performance in terms of F1-macro.

    F1-macro                        Bitcoin-Alpha   Bitcoin-OTC    Slashdot       Epinions
    APPNP (Klicpera et al., 2019)   0.682+-0.005    0.762+-0.009   0.748+-0.003   0.773+-0.004
    ResGCN (Li et al., 2019a)       0.658+-0.006    0.735+-0.015   0.609+-0.004   0.784+-0.003
    SIDE (Kim et al., 2018)         0.663+-0.008    0.709+-0.008   0.685+-0.009   0.785+-0.006
    SLF (Xu et al., 2019b)          0.615+-0.027    0.641+-0.025   0.733+-0.008   0.810+-0.008
    SGCN (Derr et al., 2018b)       0.690+-0.014    0.776+-0.008   0.752+-0.013   0.844+-0.002
    SNEA (Li et al., 2020)          0.670+-0.005    0.742+-0.011   0.690+-0.005   0.805+-0.005
    SGDNet (proposed)               0.757+-0.012    0.799+-0.007   0.778+-0.002   0.854+-0.002
    % increase                      7.4%            1.6%           3.5%           1.2%

Hyperparameter Settings. We set the dimension of the final node embeddings to 32 for all methods so that their embeddings have the same learning capacity (see its effect in Appendix A.6). We perform 5-fold cross-validation for each method to find the best hyperparameters and measure the test accuracy with the selected ones. In the cross-validation for SGDNet, the number $L$ of SGD layers is sought between 1 and 6, and the restart probability $c$ is selected from 0.05 to 0.95 with step size 0.1. We set the number $K$ of diffusion steps to 10 and the feature dimension $d_l$ of each layer to 32. We follow the range of each hyperparameter recommended in its corresponding paper for the cross-validation of the other models. Our model is trained with the Adam optimizer (Kingma & Ba, 2015), where the learning rate is 0.01, the weight decay $\lambda$ is 0.001, and the number of epochs is 100. We summarize the hyperparameters used by SGDNet for each dataset in Appendix A.7.
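Putting Sections 3.4 and 4 together, a minimal PyTorch training skeleton might look as follows. This is our sketch, not the released implementation: `model` stands for the stacked SGD layers, `model.W` for the learnable $2d_L \times 2$ sign classifier of Section 3.4, and edge signs are encoded as class indices (0 = '+', 1 = '-').

```python
import torch
import torch.nn.functional as F

def train(model, graph, features, edges, signs, epochs=100):
    """edges: (m, 2) LongTensor of (u, v) pairs; signs: (m,) class indices.
    Adam with lr 0.01 and weight decay 0.001 as in Section 4; Adam's
    weight_decay plays the role of the L2 term L_reg."""
    opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=0.001)
    for _ in range(epochs):
        opt.zero_grad()
        H = model(graph, features)                              # embeddings H^(L)
        z = torch.cat([H[edges[:, 0]], H[edges[:, 1]]], dim=1)  # z_uv = h_u || h_v
        loss = F.cross_entropy(z @ model.W, signs)              # softmax CE over {+, -}
        loss.backward()
        opt.step()
```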
4.1 LINK SIGN PREDICTION

We evaluate the performance of each method on link sign prediction. Tables 2 and 3 summarize the experimental results in terms of AUC and F1-macro, respectively. Note that our SGDNet shows the best performance in terms of both AUC and F1-macro scores. SGDNet presents 3.9-6.4% and 1.2-7.4% improvements over the second-best models in terms of AUC and F1-macro, respectively. We have the following observations:
- The unsigned GCN models APPNP and ResGCN show worse performance than SGDNet, which shows the importance of using sign information.
- The performance of network embedding techniques such as SIDE and SLF is worse than that of the GCN-based models; this shows the importance of jointly learning feature extraction and link sign prediction.
- The performance of SGCN and SNEA, which use limited features from nodes within 2-3 hops, is worse than that of SGDNet, which exploits up to $K$-hop neighbors' features, where $K$ is set to 10 in these experiments. This indicates that carefully exploiting features from distant nodes as well as neighboring ones is crucial for the performance.

4.2 EFFECT OF DIFFUSION STEPS

We investigate the effect of the feature diffusion in SGDNet for learning signed graphs. We use one SGD layer, and set the restart probability $c$ to 0.15 to evaluate the pure effect of the diffusion module; we vary the number $K$ of diffusion steps from 1 to 10 and evaluate the performance of SGDNet in terms of F1-macro for each diffusion step. Also, we compare SGDNet to SGCN, a state-of-the-art model for learning signed graphs. The number of diffusion steps of SGCN is determined by its number of layers.

Figure 3 (panels (a) Bitcoin-Alpha, (b) Bitcoin-OTC, (c) Slashdot, (d) Epinions): Effect of SGDNet's feature diffusion compared to the state-of-the-art SGCN; the performance of SGDNet is boosted while that of SGCN degrades as the number $K$ of diffusion steps increases.

Figure 3 shows that the performance of SGDNet gradually improves as $K$ increases, while that of SGCN dramatically decreases over all datasets. This indicates that SGCN suffers from the performance degradation problem when its network becomes deep, i.e., it is difficult to use information beyond 3 hops in SGCN. On the other hand, SGDNet utilizes features of farther nodes, and generates more expressive and stable features than SGCN does. Note that the performance of SGDNet converges in general after a sufficient number of diffusion steps, which is highly associated with Theorem 1.

4.3 EFFECT OF LOCAL INJECTION RATIO

We examine the effect of the local injection ratio $c$ in the diffusion module of SGDNet. We use one SGD layer, and set the number $K$ of diffusion steps to 10; we vary $c$ from 0.05 to 0.95 with step size 0.1, and measure the performance on the link sign prediction task in terms of F1-macro.

Figure 4 (panels (a) Bitcoin-Alpha, (b) Bitcoin-OTC, (c) Slashdot, (d) Epinions): Effect of the local injection ratio $c$ of SGDNet. A relatively small value (0.15-0.35) of $c$ is best for the Bitcoin-Alpha and Bitcoin-OTC (small) datasets, while $c$ around 0.5 shows better accuracy for the Slashdot and Epinions (large) datasets.

Figure 4 shows the effect of $c$ on the predictive performance of SGDNet. For small datasets such as Bitcoin-Alpha and Bitcoin-OTC, $c$ between 0.15 and 0.35 provides better performance. On the other hand, $c$ around 0.5 shows higher accuracy for large datasets such as Slashdot and Epinions. For all datasets, a too low or too high value of $c$ (e.g., 0.05 or 0.95) results in poor performance.

5 CONCLUSION

In this paper, we propose SIGNED GRAPH DIFFUSION NETWORK (SGDNet), a novel graph neural network that performs end-to-end node representation learning for link sign prediction in signed graphs. We propose a signed random walk diffusion method to properly diffuse node features on signed edges, and suggest a local feature injection method to make diffused features distinguishable. Our diffusion method empowers SGDNet to effectively train node embeddings considering multi-hop neighbors while preserving local information. Our extensive experiments show that SGDNet provides the best accuracy, outperforming the state-of-the-art models in link sign prediction. Future research directions include extending our method to multi-view networks.
-DrNlWUCLi8
The paper proposes SGDNet, Signed Graph Diffusion Network, to perform end-to-end node representation learning and signed link prediction. It also considers the over-smoothing issue and uses local feature injection to prevent over-smoothing.
4: Ok but not good enough - rejection
Strengths:
- Empirical results show improvements of the proposed method over the baseline methods, including methods for unsigned graphs, signed embedding methods, and signed GCN methods that do not address over-smoothing.

Weaknesses:
- The proposed method is a straightforward integration of existing methods on signed graphs and on handling over-smoothing in GCNs. There is very little new idea in the proposed method.
- The convergence result (Theorem 1) is straightforward linear algebra, and it covers only the diffusion part in each GCN layer. There are no overall theoretical results on the SGDNet architecture.

Overall, the proposed method is a simple combination of past methods on handling signed graphs and handling over-smoothing in graphs. It is just that the two had not been combined before, and so the only contribution I see is the combination of these two methods and its verification on the signed edge prediction task. The theoretical result on convergence concerns only the diffusion at each layer, and it is a straightforward application of linear algebra. It is unclear to me why we need diffusion convergence at each layer and then also a GCN with multiple layers. What is the connection between the diffusion steps K and the GCN layers L? In summary, the paper shows improvement when combining previous methods on signed networks and on handling over-smoothing. It may fit into a second-tier conference to record the result, but I feel that it does not meet the high bar of ICLR.
3: The reviewer is fairly confident that the evaluation is correct
r1esys0Nd4
ICLR.cc/2019/Workshop/LLD
2019
Train Neural Network by Embedding Space Probabilistic Constraint
["Kaiyuan Chen", "Zhanyuan Yin"]
Using higher-order knowledge to reduce training data has become a popular research topic. However, the ability of available methods to draw effective decision boundaries is still limited: when the training set is small, neural networks become biased toward certain labels. Based on this observation, we consider constraining the output probability distribution as higher-order domain knowledge. We design a novel algorithm that jointly optimizes the output probability distribution on a clustered embedding space to make neural networks draw effective decision boundaries. Since directly applying the probability constraint is not effective, users need to provide additional, very weak supervision: marking batches whose output distribution differs greatly from the target probability distribution. We use experiments to empirically show that our model converges to a higher accuracy than other state-of-the-art semi-supervised learning models with fewer high-quality labeled training examples.
["probability", "constraint", "constraint learning", "weak supervision", "embedding", "deep neural network"]
ABSTRACT

Using higher-order knowledge to reduce training data has become a popular research topic. However, the ability of available methods to draw effective decision boundaries is still limited: when the training set is small, neural networks become biased toward certain labels. Based on this observation, we consider constraining the output probability distribution as higher-order domain knowledge. We design a novel algorithm that jointly optimizes the output probability distribution on a clustered embedding space to make neural networks draw effective decision boundaries. Since directly applying the probability constraint is not effective, users need to provide additional, very weak supervision: marking batches whose output distribution differs greatly from the target probability distribution. We use experiments to empirically show that our model converges to a higher accuracy than other state-of-the-art semi-supervised learning models with fewer high-quality labeled training examples.

1 INTRODUCTION

Probability is an abstract measure of how likely a certain event occurs, independent of the features of the event. Knowing how likely a certain event occurs, people leverage such prior knowledge in their decision making. For example, doctors know certain diseases are rare, even if they are told in terms of probabilities instead of "training examples". Based on this knowledge, they make fewer predictions of these diseases than of common ones. Do neural networks behave in a similar way? Unfortunately, the answer is no. When we train a multi-layer perceptron (MLP) as an MNIST classifier (LeCun et al. (1998)) with limited labelled examples, the output distribution can be extremely biased in favor of some of the labels. In Figure 1a, we compare the predicted number of labels with the ground truth. While the training accuracy is 1.0, the model clearly overfits to the training examples and leaves labels between training data points undefined in the high-dimensional feature space. When we plot the last hidden layer of an MLP trained with 50 labelled MNIST examples, as shown in Figure 1b, we find that neural networks fail to learn the decision boundary correctly from a limited number of examples.

Figure 1: Limited training data cannot train neural networks to learn accurate decision boundaries. (a) Strongly imbalanced output distribution of labels when the training set is limited. (b) Chaotic embedding space in the hidden layer of a classifier trained with 50 labelled examples.

Thus, it is natural to consider introducing the output label probability distribution as higher-order knowledge when we train neural networks. Different from traditional logical constraints (Xu et al. (2018)) or functional constraints (Stewart & Ermon (2016)), we propose a novel embedding space probabilistic constraint. Because of the sparsity of the high-dimensional feature space with only a few labeled examples, we place our probabilistic constraint on the neural network's embedding space, which is constructed unsupervisedly by projecting data into a low-dimensional space through an autoencoder. Based on observations by Xie et al. (2016) and Zhang et al. (2016), the embedding space preserves information about the separation of different label clusters. In the embedding space, we pool softmax activation outputs and optimize towards the target distribution.
By training with very few high-quality labelled examples and marking batches whose output distribution differs greatly from the target probability distribution, we use experiments to empirically show that our model converges to high accuracy faster than state-of-the-art semi-supervised learning methods.

2 RELATED WORKS

Weak Supervision. Current supervised representation learning algorithms have gained great success on various tasks in computer vision (He et al. (2017), Kostrikov et al. (2018)) and natural language processing (Trivedi et al. (2018), Athiwaratkun et al. (2018)) with little domain knowledge, but they require a large quantity of high-quality labels for training. Thus, there is a growing trend of research that addresses this problem by transferring knowledge learned from different datasets (Azizzadenesheli et al. (2018), Shen et al. (2017)) or by introducing higher-level knowledge.

In this work, we consider the incomplete weak supervision problem (Zhou (2017)). A typical incomplete supervision problem (Chapelle et al. (2006)) is formulated as follows: a dataset $\{X, Y\}$ consists of a labeled dataset $X_1 = \{X_1, y_1\}$ and an unlabeled dataset $X_2 = \{X_2, y_2\}$, where $\{y_2\}$ is not visible during training and $|X_1| \ll |X_2|$. This problem can usually be tackled by state-of-the-art semi-supervised learning algorithms like AtlasRBF (Pitelis et al. (2014)), the Neural Rendering Model (Ho et al. (2018)), or LadderNet (Rasmus et al. (2015)), or by novel approaches such as logical constraints (Xu et al. (2018)). These methods still rely on a certain amount of high-quality labeled data, while in this work we further decrease the number of labeled examples needed for convergence.

Learning With Constraints. Learning with constraints takes various kinds of higher-order domain knowledge into the optimization of neural networks. Depending on the domain knowledge, different constraints are effective on different tasks. For example, Pathak et al. (2015) use linear constraints on the output space and optimize the training objective as a biconvex optimization for linear models to perform dense pixelwise semantic segmentation. Frameworks such as the semantic loss by Xu et al. (2018) and the logical loss by Hu et al. (2016) specify logic rules when training neural networks. Stewart & Ermon (2016) propose a novel framework in which one can learn physical or causal relationships without labels. In this work, we consider the case where limited labeled examples lead to a biased output distribution. Different from these arithmetic or logical constraints, we consider placing an output probability constraint.

3 EMBEDDING SPACE PROBABILISTIC CONSTRAINT

In this section, we state our problem formulation and describe the proposed algorithm and architecture for this problem.

Figure 2: Flowchart of our probability constraint.

Higher-order Knowledge Formulation. Based on the incomplete weak supervision defined in Section 2, we specify our introduced higher-order knowledge. We assume that, from domain knowledge of the output probability distribution, the model can acquire a set $Q = \{(k, P(Y = k) + \epsilon)\}_{k \in \{Y\}}$. One thing to note is that the domain knowledge distribution $Q$ does not necessarily cover all $k \in Y$. We use $\epsilon \in \mathbb{R}$ drawn from a Gaussian distribution $\epsilon \sim \mathcal{N}(0, \sigma^2)$ to reflect the variance of human domain knowledge from the true $Y$; we use $\sigma = 0.05$ throughout this paper. We need a training algorithm $A(\{X_1, y_1\}, X_2, Q)$ that trains a multi-layer perceptron $f(x) = \sigma(W_m \rho(\cdots \rho(W_2 \rho(W_1 x))))$, where the $W_i$ are the model's weights, $\rho$ is a nonlinear function like ReLU, and $\sigma$ is the softmax function. This algorithm minimizes the loss function $\ell : \{Y, f(X)\} \to \mathbb{R}$.
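As a small illustration of this formulation, the snippet below builds a noisy domain-knowledge prior $Q$ from a true label distribution. It is our sketch, not the authors' code; in particular, clipping and renormalizing the perturbed probabilities is our assumption (the paper does not specify it), made so that the KL term defined next is well-defined.

```python
import numpy as np

def noisy_label_prior(true_prior, sigma=0.05, rng=None):
    """Build Q = {(k, P(Y=k) + eps)} with eps ~ N(0, sigma^2), sigma = 0.05
    as in the paper. Clipping/renormalizing is our addition so that Q
    remains a valid probability distribution."""
    rng = np.random.default_rng() if rng is None else rng
    q = np.asarray(true_prior, dtype=float)
    q = q + rng.normal(0.0, sigma, size=q.shape)
    q = np.clip(q, 1e-6, None)   # keep probabilities positive
    return q / q.sum()

# e.g., noisy_label_prior([0.1] * 10) for MNIST's uniform label prior.
```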
Batchwise Probability Constraint. Following this problem formulation, we define a loss term $\ell_c : \mathbb{R}^2 \to \mathbb{R}$ to regulate the output distribution. We regard a single update batch $\{X', Y'\} \in \{X, Y\}$ of size $c$ as the unit of output probability distribution. Inspired by Liang et al. (2017) and Hendrycks & Gimpel (2016), the activation of the final softmax layer of the classifier can reflect the neural network's confidence in a certain label. Instead of counting the arguments of the maxima for all labels, which is inefficient, we compute the mean pooling of all the activation outputs (Wang et al. (2018)), written mathematically as $\frac{1}{c}\sum_{i=1}^{c} f(X'_i)$. Here we use $f_k$ to denote the softmax activation of the $k$-th label. This potentially improves the accuracy of detecting low-confidence or out-of-distribution examples. A basic flowchart of our mechanism can be found in Figure 2.

It is natural to use the Kullback-Leibler (KL) divergence as a metric of how much the output distribution differs from the reference domain knowledge probability distribution $Q$, that is,

$$\ell_c = \sum_{(k, q) \in Q} \bar{p}_k \log\frac{\bar{p}_k}{q}, \qquad \text{where } \bar{p}_k = \frac{1}{c}\sum_{i=1}^{c} f_k(X'_i). \qquad (1)$$

One may notice that the probability of labels in a batch does not always reflect the domain knowledge distribution $Q$; that is, $|P(Y' = k) - Q(k)| > \delta$ for some $\delta > 0$. For the simplicity of this work, we assume additional but very weak supervision for identifying some of those batches, and we use a different but noisy batch probability distribution for them. This supervision can be easily provided through at-a-glance supervision (Gabriel Ryan (2018)) or auto-regressive algorithms similar to Reed et al. (2017). Our proposed algorithm and its convergence analysis can be found in Appendix A.

Constraint on Embedding Space. In order to use existing unlabeled data to draw decision boundaries, we propose to jointly optimize this probability-constrained classifier with an embedding space regularizer. An embedding is a lower-dimensional form that structurally preserves data from the original hyper-dimensional space. In our case, we treat the $i$-th hidden layer of the perceptron, with $i$ a hyperparameter, as our embedding space $E(x)$, where $E(x) = \rho(W_i \rho(\cdots \rho(W_2 \rho(W_1 x))))$ and $f(x) = \sigma(W_m \rho(W_{m-1} \rho(\cdots \rho(W_{i+1} E(x)))))$; the dimension of $E(x)$ should be much smaller than the dimension of the input $x$. Zhang et al. (2016) show that using an unsupervised loss can preserve information about the separation between different label clusters. Thus, we adopt the structure of the decoder of an autoencoder and define a multi-layer neural network $D(\cdot)$ as a decoder of our embedding space. For a single batch $\{X', Y'\}$, our loss function for training a separation-preserving embedding space by reconstructing the original input is

$$\ell_r = \frac{1}{c}\sum_{i=1}^{c} \|X'_i - D(E(X'_i))\|^2.$$

General Framework. Our proposed method uses the unsupervised loss $\ell_r$ to construct an embedding in a low-dimensional space, uses limited labeled data to identify the cluster locations in the embedding space through the original classification loss $\ell_{original}$, and uses domain knowledge of the output probability distribution to determine the actual decision boundaries. Our update loss function is $\ell = \lambda_1 \ell_{original} + \lambda_2 \ell_r + \lambda_3 \ell_c$, where $\lambda_1$, $\lambda_2$, and $\lambda_3$ are hyperparameter constants.
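A minimal PyTorch sketch of this joint objective for one batch follows. It is our illustrative implementation, not the authors' code: the names are ours, it assumes `q` assigns a probability to every class (the paper allows Q to cover only some labels), and `mse_loss` averages per element rather than per sample, which differs from $\ell_r$ only by a constant factor.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, y, x, x_recon, q, lambdas=(1.0, 1.0, 1.0)):
    """l = l1*l_original + l2*l_r + l3*l_c for one batch.
    logits: (B, K) classifier outputs; y: labels, with -1 marking unlabeled rows;
    x, x_recon: decoder input and reconstruction; q: (K,) domain-knowledge prior Q."""
    l1, l2, l3 = lambdas
    labeled = y >= 0
    # l_original: cross entropy on the (few) labeled examples only.
    ce = F.cross_entropy(logits[labeled], y[labeled]) if labeled.any() else logits.sum() * 0.0
    # l_r: squared reconstruction error of the embedding-space autoencoder.
    rec = F.mse_loss(x_recon, x)
    # l_c: KL between the mean-pooled batch softmax and the prior Q (Equation (1)).
    p_bar = torch.softmax(logits, dim=1).mean(dim=0)
    kl = (p_bar * (p_bar / q).log()).sum()
    return l1 * ce + l2 * rec + l3 * kl
```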
4 EVALUATION

Experiment setup. We evaluate our proposed embedding space probabilistic constraint in a semi-supervised learning setting, using a base multilayer perceptron similar to those in Rasmus et al. (2015) and Xu et al. (2018). All experiments are repeated five times with different seeds. We add an additional embedding layer of width 40, and the decoder has an architecture symmetric to the feed-forward network.

Model Description. To guarantee that our comparison focuses on the output probability distribution instead of any single instance's label, we train our models with batch size 128. We evaluate our model under different levels of constraints. The datasetwise probability constraint assumes the target output probability is 10% for every label, and the noisy datasetwise probability constraint adds random noise drawn from N(0, 0.3) to simulate a user's knowledge. We also use the batchwise probability constraint, which assumes we know the probability of labels in every batch, as an upper bound for our algorithm. We compare our model with other state-of-the-art semi-supervised learning models (Pitelis et al. (2014), Rasmus et al. (2015)) and a logical constraint model (Xu et al. (2018)). Since we require more human supervision than other semi-supervised learning models, we use their results to demonstrate that our model can converge to high accuracy with far fewer high-quality labeled examples. We also include baseline models without both losses and without the embedding loss to show the benefit of our architecture.

Table 1: Semi-supervised learning on the MNIST dataset.

    Accuracy / # of labelled per class           3             5             10            all
    AtlasRBF (Pitelis et al. (2014))             73.58+-0.95   84.28+-0.21   91.54+-0.13   98.20+-0.25
    Ladder Net (Rasmus et al. (2015))            79.39+-0.60   93.67+-1.42   97.69+-0.25   99.01+-0.22
    Baseline: MLP                                53.52+-0.07   64.07+-0.19   73.24+-0.13   98.82+-0.03
    Semantic Loss (Xu et al. (2018))             75.36+-1.02   82.53+-1.39   96.03+-1.39   98.53+-1.39
    Baseline: MLP with constraint                78.39+-3.76   92.97+-3.28   96.82+-2.39   97.01+-3.03
    Datasetwise Probability Constraint (noisy)   82.12+-1.34   96.05+-0.05   96.32+-0.24   97.85+-0.93
    Datasetwise Probability Constraint           84.93+-1.08   97.65+-0.05   97.32+-0.24   98.45+-0.84
    Batchwise Probability Constraint             95.87+-0.48   97.67+-0.39   98.67+-0.74   98.99+-0.21

Table 2: Semi-supervised learning on the FASHION dataset.

    Accuracy / # of labelled per class   3             5             10            all
    Baseline: MLP                        49.72+-0.12   55.72+-0.11   68.64+-0.10   86.32+-0.21
    LadderNet (Rasmus et al. (2015))     69.87+-0.43   74.54+-0.32   81.87+-0.15   90.57+-0.28
    Semantic Loss (Xu et al. (2018))     70.24+-1.30   75.41+-0.21   83.25+-0.23   89.92+-0.32
    Batchwise Probability Constraint     81.15+-0.02   85.15+-0.85   88.15+-0.92   89.01+-1.20

Table 3: Semi-supervised learning on the CIFAR-10 dataset.

    Accuracy / # of labelled per class   200           400           all
    Baseline: CNN                        59.72+-0.12   76.64+-0.10   90.32+-0.21
    Semantic Loss (Xu et al. (2018))     71.24+-1.30   79.41+-0.21   89.93+-0.45
    Batchwise Probability Constraint     75.15+-0.52   86.15+-1.03   88.15+-0.92

5 DISCUSSION

In this paper, we present an algorithm that constrains the output probability distribution in an embedding space. Our results show that, with a little more supervision than standard semi-supervised learning algorithms, we need far fewer high-quality training examples to reach high accuracy. Thus, we conclude that jointly optimizing the output constraint with the hypothesis can draw a decision boundary from less labelled training data than other state-of-the-art methods.

Our focus is to show the power of a very weak labelling method that does not require a high-quality labelling technique. We leave as a future research direction the design of an auto-regressive algorithm that requires less supervision. In addition, since our formulation sums all the activation outputs together as a measure of confidence instead of using counting-based probabilities, a further direction is to use it as a confidence measure for classification in the semi-supervised learning setting.
Skgk0KR9FV
Leveraging additional supervision to handle label distribution shift
3: Marginally above acceptance threshold
This paper tries to incorporate the label distribution into model learning when only a limited number of training instances is available. Intuitively, the output label distribution could be wrongly biased, and prior information like the label distribution could be helpful. To handle this problem, the authors propose two different techniques: the first regulates the output distribution and the second regularizes the constructed representation. The performance comparison demonstrates the effectiveness of the proposed method when only a limited number of instances is available. I think the studied problem is interesting and the proposed solution is novel and reasonable. My main concern is about the assumption of the algorithm. The proposed learning algorithm assumes that it can access a relatively accurate label distribution, and the output distribution regularization depends on this term. But for real-world applications, it could be hard to get such knowledge, since in order to get the required annotation (as described in the Appendix), the user needs to have a good understanding of the real distribution or annotate all instances in that batch. Besides, I noticed that in the experimental results, the proposed method sometimes achieves worse performance than the baselines when all training data is available. This phenomenon seems to me to imply that the proposed method cannot fully leverage the additional information, as intuitively, with more information, it should perform better.
2: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system
You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|>
<|im_start|>user
### Paper Title
Train Neural Network by Embedding Space Probabilistic Constraint
### Paper Abstract
Using higher order knowledge to reduce training data has become a popular research topic. However, the ability of available methods to draw effective decision boundaries is still limited: when the training set is small, neural networks will be biased toward certain labels. Based on this observation, we consider constraining the output probability distribution as higher order domain knowledge. We design a novel algorithm that jointly optimizes the output probability distribution on a clustered embedding space to make neural networks draw effective decision boundaries. Since directly applying the probability constraint is not effective, users need to provide additional very weak supervision: marking some batches whose output distribution differs greatly from the target probability distribution. We use experiments to empirically show that our model can converge to an accuracy higher than other state-of-the-art semi-supervised learning models with fewer high-quality labeled training examples.
### Paper Keywords
["probability", "constraint", "constraint learning", "weak supervision", "embedding", "deep neural network"]
### Paper Content
ABSTRACT

Using higher order knowledge to reduce training data has become a popular research topic. However, the ability of available methods to draw effective decision boundaries is still limited: when the training set is small, neural networks will be biased toward certain labels. Based on this observation, we consider constraining the output probability distribution as higher order domain knowledge. We design a novel algorithm that jointly optimizes the output probability distribution on a clustered embedding space to make neural networks draw effective decision boundaries. Since directly applying the probability constraint is not effective, users need to provide additional very weak supervision: marking some batches whose output distribution differs greatly from the target probability distribution. We use experiments to empirically show that our model can converge to an accuracy higher than other state-of-the-art semi-supervised learning models with fewer high-quality labeled training examples.

1 INTRODUCTION

Probability is an abstract measure of how likely a certain event occurs, independent of the features of the event. Knowing how likely certain events occur, people leverage such prior knowledge in their decision making. For example, doctors know certain diseases are rare, even if they are told so in terms of probabilities instead of "training examples". Based on this knowledge, they predict these diseases less often than common ones. Do neural networks behave in a similar way? Unfortunately, the answer is no. When we train a multi-layer perceptron (MLP) as an MNIST classifier (LeCun et al. (1998)) with limited labelled examples, the output distribution can be extremely biased in favor of some of the labels. In Figure 1a, we compare the predicted number of labels with the ground truth. While the training accuracy is 1.0, the model clearly overfits to those training examples and leaves labels between training data points undefined in the high-dimensional feature space. As we plot the last hidden layer of an MLP trained with 50 labelled MNIST examples, shown in Figure 1b, we find that neural networks fail to learn the decision boundary correctly from a limited number of examples.

Thus, it is natural to consider introducing the output label probability distribution as higher order knowledge when we train neural networks. Different from traditional logical constraints (Xu et al. (2018)) or functional constraints (Stewart & Ermon (2016)), we propose a novel embedding space probabilistic constraint. Because of the sparsity of the high-dimensional feature space with only a few labeled examples, we apply our probabilistic constraint on the neural network's embedding space, which is constructed unsupervisedly by projecting data into a low-dimensional space through an autoencoder. Based on observations by Xie et al. (2016) and Zhang et al. (2016), the embedding space preserves information about the separation of different label clusters. In the embedding space, we pool the softmax activation outputs and optimize towards the target distribution. By training with very few high-quality labelled examples and marking batches whose output distribution differs greatly from the target probability distribution, we use experiments to empirically show that our model converges to a high accuracy faster than state-of-the-art semi-supervised learning methods.

Figure 1: Limited training data cannot train neural networks to learn accurate decision boundaries. (a) Strongly imbalanced output distribution of labels when the training set is limited. (b) Chaotic embedding space in the hidden layer of a classifier trained with 50 labelled examples.

2 RELATED WORKS

Weak Supervision  Current supervised representation learning algorithms have achieved great success on various tasks in computer vision (He et al. (2017), Kostrikov et al. (2018)) and natural language processing (Trivedi et al. (2018), Athiwaratkun et al. (2018)) with little domain knowledge, but they require a large quantity of high-quality labels for training. Thus, there is a growing trend of research that addresses this problem by transferring knowledge learned from different datasets (Azizzadenesheli et al. (2018), Shen et al. (2017)) or by introducing higher level knowledge.

In this work, we consider the incomplete weak supervision problem (Zhou (2017)). A typical incomplete supervision problem (Chapelle et al. (2006)) is formulated as follows: a dataset {X, Y} consists of a labeled dataset X1 = {X1, y1} and an unlabeled dataset X2 = {X2, y2}, where {y2} is not visible during training and |X1| ≪ |X2|. This problem can usually be tackled by state-of-the-art semi-supervised learning algorithms such as AtlasRBF (Pitelis et al. (2014)), the Neural Rendering Model (Ho et al. (2018)), or LadderNet (Rasmus et al. (2015)), or by novel approaches such as logical constraints (Xu et al. (2018)). These methods still rely on a certain amount of high-quality labeled data, while in this work we further decrease the number of labeled examples needed for convergence.

Learning With Constraints  Learning with constraints brings various kinds of higher order domain knowledge into the optimization of neural networks. Depending on the domain knowledge, different constraints are effective for different tasks. For example, Pathak et al. (2015) use linear constraints on the output space and optimize the training objective as a biconvex optimization for linear models to perform dense pixelwise semantic segmentation. Frameworks such as the semantic loss of Xu et al. (2018) and the logical loss of Hu et al. (2016) specify logic rules when training neural networks. Stewart & Ermon (2016) propose a novel framework in which one can learn physical or causal relationships without labels. In this work, we consider the case where limited labeled examples lead to a biased output distribution. Different from these arithmetic or logical constraints, we consider placing an output probability constraint.

3 EMBEDDING SPACE PROBABILISTIC CONSTRAINT

In this section, we state our problem formulation and describe the proposed algorithm and architecture for this problem.

Figure 2: Flowchart of our probability constraint

Higher-order Knowledge Formulation  Based on the incomplete weak supervision defined in Section 2, we specify the higher order knowledge we introduce. We assume that, from domain knowledge of the output probability distribution, the model can acquire a set Q = {(k, P(Y = k) + ε)}_{k ∈ Y}. One thing to note is that the domain knowledge distribution Q does not necessarily cover all k ∈ Y. We use ε ∈ R drawn from a Gaussian distribution with standard deviation 0.05 throughout this paper to reflect the deviation of human domain knowledge from the true Y. We need a training algorithm A({X1, y1}, X2, Q) that trains a multi-layer perceptron f(x) = σ(W_m ρ(··· ρ(W_2 ρ(W_1 x)))), where the W_i are the model's weights, ρ is a nonlinear function like ReLU, and σ is the softmax function. This algorithm minimizes the loss function ℓ: {Y, f(X)} → R.

Batchwise Probability Constraint  Following this problem formulation, we define a loss term ℓ_c to regulate the output distribution. We regard a single update batch {X′, Y′} ∈ {X, Y} of size c as the unit of output probability distribution. Inspired by Liang et al. (2017) and Hendrycks & Gimpel (2016), the activation of the final softmax layer of the classifier, σ(·), can reflect the neural network's confidence in a certain label. Instead of counting the arguments of the maxima for all labels, which is inefficient, we calculate the mean pooling of all the activation outputs (Wang et al. (2018)). It can be written mathematically as (1/c) Σ_{i=0}^{c} f(X′_i). Here we use f_k to denote the softmax activation of the k-th label. This potentially improves the accuracy of detecting low-confidence or out-of-distribution examples. A basic flowchart of our mechanism can be found in Figure 2.

It is natural to use the Kullback-Leibler (KL) divergence as a metric of how much our output distribution differs from the reference domain knowledge probability distribution Q, that is,

ℓ_c = (1/c) Σ_{(k,q)∈Q} Σ_{i=0}^{c} f_k(X′_i) · log( q·c / Σ_{i=0}^{c} f_k(X′_i) )    (1)

One may notice that the probability of labels in a batch does not always reflect the domain knowledge distribution Q; that is, |P(Y′ = k) − Q(k)| > δ for some δ > 0. For the simplicity of this work, we assume additional but very weak supervision that identifies some of those batches and uses a different but noisy batch probability distribution. However, this supervision can easily be provided through at-a-glance supervision (Gabriel Ryan (2018)) or auto-regressive algorithms similar to Reed et al. (2017). Our proposed algorithm and its convergence analysis can be found in Appendix A.

Constraint on Embedding Space  In order to use the existing unlabeled data to draw decision boundaries, we propose to jointly optimize this probability-constrained classifier with an embedding space regularizer. An embedding is a lower-dimensional form that structurally preserves data from the original hyper-dimensional space. In our case, we treat the i-th hidden layer of the perceptron, where i is a hyperparameter, as our embedding space E(x), where E(x) = W_i ρ(··· ρ(W_2 ρ(W_1 x))) and f(x) = σ(W_m ρ(W_{m−1} ρ(··· ρ(W_{i+1} E(x))))), and the dimension of E(x) should be much smaller than the dimension of the input x. Zhang et al. (2016) propose that using an unsupervised loss can preserve information about the separation between different label clusters. Thus, we adopt the structure of the decoder of an autoencoder and define a multi-layer neural network D(·) as a decoder of our embedding space. For a single batch {X′, Y′}, our loss function for training a separation-preserving embedding space by reconstructing the original input is

ℓ_r = (1/c) Σ_{i=0}^{c} ||X′_i − D(E(X′_i))||²

General Framework  Our proposed method uses the unsupervised loss ℓ_r to construct an embedding in a low-dimensional space, uses the limited labeled data to identify cluster locations in the embedding space through the original classification loss ℓ_original, and uses domain knowledge of the output probability distribution to determine the actual decision boundaries. The resulting update loss function is ℓ = λ₁ ℓ_original + λ₂ ℓ_r + λ₃ ℓ_c, where λ₁, λ₂ and λ₃ are hyperparameter constants.

4 EVALUATION

Experiment setup  We evaluate our proposed embedding space probabilistic constraint in the semi-supervised learning setting, using a base multilayer perceptron model similar to those in Rasmus et al. (2015) and Xu et al. (2018). All experiments are repeated five times with different seeds. We add an additional embedding layer with width 40, and the decoder has an architecture symmetric to the feed-forward neural network.

Model Description  To guarantee that our comparison focuses on the output probability distribution instead of any single instance's label, we train our models with batch size 128. We evaluate our model under different levels of constraint. The datasetwise probability constraint assumes the target output should be 10% for every label, and the noisy datasetwise probability constraint adds random noise drawn from N(0, 0.3) to simulate a user's imperfect knowledge. We also use the batchwise probability constraint, which assumes we know the probability of the labels in every batch, as an upper bound for our algorithm. We compare our model with other state-of-the-art semi-supervised learning models (Pitelis et al. (2014), Rasmus et al. (2015)) and a logical constraint model (Xu et al. (2018)). Since we require more human supervision than other semi-supervised learning models, we use their results to demonstrate that our model can converge to high accuracy with far fewer high-quality labeled examples. We also include baseline models without both losses and without the embedding loss to show the benefit of our architecture.

Accuracy / # of labelled per class          3             5             10            all
AtlasRBF (Pitelis et al. (2014))            73.58 ± 0.95  84.28 ± 0.21  91.54 ± 0.13  98.20 ± 0.25
Ladder Net (Rasmus et al. (2015))           79.39 ± 0.60  93.67 ± 1.42  97.69 ± 0.25  99.01 ± 0.22
Baseline: MLP                               53.52 ± 0.07  64.07 ± 0.19  73.24 ± 0.13  98.82 ± 0.03
Semantic Loss (Xu et al. (2018))            75.36 ± 1.02  82.53 ± 1.39  96.03 ± 1.39  98.53 ± 1.39
Baseline: MLP with constraint               78.39 ± 3.76  92.97 ± 3.28  96.82 ± 2.39  97.01 ± 3.03
Datasetwise Probability Constraint (noisy)  82.12 ± 1.34  96.05 ± 0.05  96.32 ± 0.24  97.85 ± 0.93
Datasetwise Probability Constraint          84.93 ± 1.08  97.65 ± 0.05  97.32 ± 0.24  98.45 ± 0.84
Batchwise Probability Constraint            95.87 ± 0.48  97.67 ± 0.39  98.67 ± 0.74  98.99 ± 0.21

Table 1: Semi-supervised learning on the MNIST dataset

Accuracy / # of labelled per class  3             5             10            all
Baseline: MLP                       49.72 ± 0.12  55.72 ± 0.11  68.64 ± 0.10  86.32 ± 0.21
LadderNet (Rasmus et al. (2015))    69.87 ± 0.43  74.54 ± 0.32  81.87 ± 0.15  90.57 ± 0.28
Semantic Loss (Xu et al. (2018))    70.24 ± 1.30  75.41 ± 0.21  83.25 ± 0.23  89.92 ± 0.32
Batchwise Probability Constraint    81.15 ± 0.02  85.15 ± 0.85  88.15 ± 0.92  89.01 ± 1.20

Table 2: Semi-supervised learning on the FASHION dataset

Accuracy / # of labelled per class  200           400           all
Baseline: CNN                       59.72 ± 0.12  76.64 ± 0.10  90.32 ± 0.21
Semantic Loss (Xu et al. (2018))    71.24 ± 1.30  79.41 ± 0.21  89.93 ± 0.45
Batchwise Probability Constraint    75.15 ± 0.52  86.15 ± 1.03  88.15 ± 0.92

Table 3: Semi-supervised learning on the CIFAR-10 dataset

5 DISCUSSION

In this paper, we present an algorithm that constrains the output probability distribution in an embedding space. Our results show that, with a little more supervision than normal semi-supervised learning algorithms, we need far fewer high-quality training examples to reach high accuracy. Thus, we conclude that jointly optimizing the output constraint with the hypothesis can draw a decision boundary with less labelled training data than other state-of-the-art methods.

Our focus is to show the power of a very weak labelling method that does not need a high-quality labelling technique. We leave it as a future research direction to design an auto-regressive algorithm that requires even less supervision. In addition, since our formulation sums all the activation outputs together as a measure of confidence instead of using counting-based probabilities, a future direction is to use it as a confidence measure for classification in the semi-supervised learning setting.<|im_end|>
<|im_start|>assistant
### Review Title
Leveraging additional supervision to handle label distribution shift
### Review Text
This paper tries to incorporate the label distribution into model learning when only a limited number of training instances is available. Intuitively, the output label distribution could be wrongly biased, and prior information such as the label distribution could be helpful. To handle this problem, the authors propose two different techniques: the first regulates the output distribution, and the second regularizes the constructed representation. The performance comparison demonstrates the effectiveness of the proposed method when only a limited number of instances are available. I think the studied problem is interesting and the proposed solution is novel and reasonable. My main concern is about the assumption of the algorithm. The proposed learning algorithm assumes that it can access a relatively accurate label distribution, and the output distribution regularization depends on this term. But for real-world applications, it could be hard to obtain such knowledge, since in order to get the required annotation (as described in the Appendix), the user needs to have a good understanding of the real distribution or annotate all instances in that batch. Besides, I noticed that in the experimental results, the proposed method sometimes achieves worse performance than the baselines when all training data is available. This phenomenon seems to imply that the proposed method cannot fully leverage the additional information, as intuitively, with more information, it should perform better.
### Review Rating
3: Marginally above acceptance threshold
### Review Confidence
2: The reviewer is fairly confident that the evaluation is correct<|im_end|>
<|im_end|>
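For reference, the joint objective ℓ = λ₁ℓ_original + λ₂ℓ_r + λ₃ℓ_c from Section 3 of the record above can be assembled as follows. This is a hedged sketch rather than the paper's implementation: the `encoder`/`classifier`/`decoder` interfaces, the mask handling, and the λ values are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def joint_loss(encoder, classifier, decoder, x, y, labeled_mask, target_probs,
               lam=(1.0, 1.0, 1.0), eps=1e-8):
    z = encoder(x)                      # embedding E(x), e.g. the width-40 layer
    logits = classifier(z)              # classifier head on top of the embedding
    recon = decoder(z)                  # decoder D(E(x)) reconstructing the input

    # l_original: supervised cross-entropy on the few labeled examples only
    if labeled_mask.any():
        l_original = F.cross_entropy(logits[labeled_mask], y[labeled_mask])
    else:
        l_original = logits.new_zeros(())

    l_r = F.mse_loss(recon, x)          # reconstruction loss l_r over the batch

    # l_c: KL-style constraint between pooled activations and the target Q
    pooled = F.softmax(logits, dim=1).mean(dim=0)
    l_c = torch.sum(pooled * torch.log((pooled + eps) / (target_probs + eps)))

    l1, l2, l3 = lam
    return l1 * l_original + l2 * l_r + l3 * l_c
```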
rkdF0ZNKl
ICLR.cc/2017/workshop
2017
Fast Generation for Convolutional Autoregressive Models
["Prajit Ramachandran", "Tom Le Paine", "Pooya Khorrami", "Mohammad Babaeizadeh", "Shiyu Chang", "Yang Zhang", "Mark A. Hasegawa-Johnson", "Roy H. Campbell", "Thomas S. Huang"]
Convolutional autoregressive models have recently demonstrated state-of-the-art performance on a number of generation tasks. While fast, parallel training methods have been crucial for their success, generation is typically implemented in a naive fashion where redundant computations are unnecessarily repeated. This results in slow generation, making such models infeasible for production environments. In this work, we describe a method to speed up generation in convolutional autoregressive models. The key idea is to cache hidden states to avoid redundant computation. We apply our fast generation method to the Wavenet and PixelCNN++ models and achieve up to 21x and 183x speedups respectively.
["Deep learning", "Applications"]
ABSTRACT

Convolutional autoregressive models have recently demonstrated state-of-the-art performance on a number of generation tasks. While fast, parallel training methods have been crucial for their success, generation is typically implemented in a naïve fashion where redundant computations are unnecessarily repeated. This results in slow generation, making such models infeasible for production environments. In this work, we describe a method to speed up generation in convolutional autoregressive models. The key idea is to cache hidden states to avoid redundant computation. We apply our fast generation method to the Wavenet and PixelCNN++ models and achieve up to 21× and 183× speedups respectively.

1 INTRODUCTION

Autoregressive models are a powerful class of generative models that factorize the joint probability of a data sample x into a product of conditional probabilities. Autoregressive models such as Wavenet (van den Oord et al., 2016a), ByteNet (Kalchbrenner et al., 2016a), PixelCNN (van den Oord et al., 2016b;c), and Video Pixel Networks (Kalchbrenner et al., 2016b) have shown strong performance in audio, textual, image, and video generation. Unfortunately, generating in a naïve fashion is typically too slow for practical use. For example, generating a batch of 16 32×32 images using PixelCNN++ (Salimans et al., 2017) takes more than 11 minutes on commodity hardware with a Tesla K40 GPU.

The ability to do fast generation is useful for many applications. Production environments have tight latency constraints, so real-time speech generation, machine translation, and image super-resolution (Dahl et al., 2017) all require fast generation. Furthermore, quick simulation of environment dynamics is important for fast training in model-based reinforcement learning (Oh et al., 2015). However, slow generation hampers the use of convolutional autoregressive models in these situations.

In this work, we present a method to significantly speed up generation in convolutional autoregressive models. The contributions of this work are as follows:

1. We present a general method to enable fast generation for autoregressive models through caching. We describe specific implementations of this method for Wavenet (van den Oord et al., 2016a) and PixelCNN++ (Salimans et al., 2017). We demonstrate that our fast generation achieves speedups of up to 21× for Wavenet and 183× for PixelCNN++ over their naïve counterparts.

2. We open-source our implementation of fast generation for Wavenet (https://github.com/tomlepaine/fast-wavenet) and PixelCNN++ (https://github.com/PrajitR/fast-pixel-cnn). Our generation code is compatible with other open-source implementations of these models that also implement training.

(* Denotes equal contribution.)

2 METHODS

Naïve generation for convolutional autoregressive models recalculates the entire receptive field at every iteration (we refer readers to van den Oord et al. (2016a); Salimans et al. (2017) for details). This results in exponential time and space complexity with respect to the receptive field. In this section, we propose a method that avoids this cost by caching previously computed hidden states and using them in subsequent iterations.

2.1 CACHING FOR DILATED CONVOLUTIONS

To generate a single output y, computations must be performed over the entire receptive field, which is exponential with respect to the number of layers. A naïve generation method repeats this computation over the entire receptive field at every step, as illustrated in Figure 1A. However, this is wasteful because many hidden states in the receptive field can be reused from previous iterations. This naïve approach has been used in open-source implementations of Wavenet (e.g., https://github.com/ibab/tensorflow-wavenet).

Instead of recomputing all of the hidden states at every iteration, we propose caching hidden states from previous iterations. Figure 1B illustrates this idea, where each layer maintains a cache of previously computed hidden states. During each generation step, hidden states are popped off the cache to perform the convolutions. The newly generated hidden states are then pushed back into the cache for future computation. Therefore, the computation and space complexity are linear in the number of layers instead of exponential.

Figure 1: Comparison of a naïve implementation of the generation process and our proposed method. Orange nodes are computed in the current timestep, blue nodes are previously cached states, and gray nodes are not involved in the current timestep. Notice that generating a single sample requires O(2^L) operations for the naïve implementation, where L is the number of layers in the network. Meanwhile, our implementation only requires O(L) operations to generate a single sample.

2.2 CACHING FOR STRIDED CONVOLUTIONS

The caching algorithm for dilated convolutions is straightforward because the number of hidden states in each layer is equal to the number of inputs. Thus, each layer can simply maintain a cache that is updated on every step. However, strided convolutions pose an additional challenge, since the number of hidden states in each layer is different from the number of inputs.

A downsampling (strided convolutional) layer will not necessarily generate an output at each timestep (see the first hidden layer in Figure 2) and may even skip over some inputs (see the second hidden layer in Figure 2). On the other hand, an upsampling (strided transposed convolutional) layer will produce hidden states and outputs for multiple timesteps (see the last hidden layer in Figure 2). As a result, the cache cannot be updated at every timestep. Thus, each cache has an additional property, cache_every, where the cache is only updated every cache_every steps. Every downsampling layer increases the cache_every property of the layer by the downsampling factor (2 in the case of Figure 2). Conversely, every upsampling layer decreases the cache_every property of the layer by the upsampling factor (also 2 in the case of Figure 2).

2.3 MODEL-SPECIFIC DETAILS

Wavenet uses 1D dilated convolutions. Our fast implementation of Wavenet follows directly from the components outlined in Section 2.1.

PixelCNN++ improves upon PixelCNN (van den Oord et al., 2016c) through a variety of modifications, including using strided convolutions and transposed convolutions instead of dilation for speed. Our method scales from 1D to 2D with very few changes. The caches for each layer are now 2D, with a height equal to the filter height and a width equal to the image width. After an entire row is generated, the oldest row of the cache is popped and the new row is pushed. Because strided convolutions are used, we use the cache_every idea detailed in Section 2.2. For full details please refer to our code.

Figure 2: Fast generation for a network with strided convolutions. We show an example model with 2 convolutional and 2 transposed convolutional layers, each with a stride of 2 (Dumoulin & Visin, 2016). Due to the stride, each layer has fewer states than network inputs. Orange nodes are computed in the current timestep, blue nodes are previously cached states, and gray nodes are not involved in the current timestep. In the first timestep (t = 0), the first input is used to compute and cache all nodes for which there is sufficient information to generate, including the first four outputs. At t = 1, there are no nodes that have sufficient information to be computed, but the output for t = 1 has already been computed at t = 0. At t = 2, there is one new node that now has sufficient information to be computed, although the output for t = 2 has also been computed at t = 0. The t = 3 scenario is similar to t = 1. At t = 4, there is enough information to compute multiple hidden states and generate the next four outputs. This is analogous to the t = 0 scenario. t = 5 is analogous to t = 1, and this cycle is followed for all future timesteps.

Figure 3: Wavenet timing experiments. We generated from a model with 2 sets of L dilation layers each, using a naïve implementation and ours. Results are averaged over 100 repeats. When L is small, the naïve implementation performs better than expected due to GPU parallelization of the convolution operations. When L is large, the difference in performance is more pronounced. [Plot omitted: average time per sample (sec) vs. number of layers (2-14); the naïve implementation reaches 0.162 s while ours stays near 0.007 s.]

Figure 4: PixelCNN++ timing experiments. We generated images using the model architecture described in Salimans et al. (2017). Due to the huge number of convolution operations in the naïve implementation, GPU utilization is always high and there is no room for parallelization across the batch. Since our method avoids redundant computations, larger batch sizes result in larger speedups. [Plot omitted: speedup (log scale) vs. batch size 1-256 (log scale); speedups of 2, 4, 6, 10, 18, 33, 61, 109, and 183×.]

3 EXPERIMENTS

We implemented our methods for Wavenet (van den Oord et al., 2016a) and PixelCNN++ (Salimans et al., 2017) in TensorFlow (Abadi et al., 2016). We compare our proposed method with a naïve implementation of Wavenet (https://github.com/ibab/tensorflow-wavenet) and a naïve implementation of PixelCNN++ (https://github.com/openai/pixel-cnn) in Figures 3 and 4, respectively. The results indicate significant speedups, up to 21× for Wavenet and 183× for PixelCNN++.
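The per-layer cache in Section 2.1 behaves like a fixed-length queue whose length equals the layer's dilation. A minimal sketch of that bookkeeping is below; the kernel size of 2, the tanh nonlinearity, and the class and method names are simplifying assumptions, not the released implementation.

```python
import collections
import numpy as np

class DilatedConvCache:
    """One layer's cache for fast generation with a kernel-size-2
    dilated causal convolution (Section 2.1)."""

    def __init__(self, dilation, state_dim):
        # Pre-fill with zeros so the first `dilation` steps see zero padding.
        self.queue = collections.deque(
            (np.zeros(state_dim) for _ in range(dilation)), maxlen=dilation)

    def step(self, current, w_past, w_current):
        past = self.queue.popleft()    # pop the state from `dilation` steps ago
        state = np.tanh(w_past @ past + w_current @ current)
        self.queue.append(state)       # push the new state for step t + dilation
        return state

# One generation step through a stack of layers with dilations 1, 2, 4, ...:
#   h = x_t
#   for cache, (w_p, w_c) in zip(caches, layer_weights):
#       h = cache.step(h, w_p, w_c)
```

Each generation step touches one cached state per layer, which is the O(L) cost the paper contrasts with the naïve O(2^L) recomputation.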
rkpqv3xoe
simple and good
7: Good paper, accept
This is a nice workshop paper. It's a simple idea, but people will be interested in it. If nothing else, the released code is valuable, and having the poster to advertise it is a good use of workshop poster space.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system
You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|>
<|im_start|>user
### Paper Title
Fast Generation for Convolutional Autoregressive Models
### Paper Abstract
Convolutional autoregressive models have recently demonstrated state-of-the-art performance on a number of generation tasks. While fast, parallel training methods have been crucial for their success, generation is typically implemented in a naive fashion where redundant computations are unnecessarily repeated. This results in slow generation, making such models infeasible for production environments. In this work, we describe a method to speed up generation in convolutional autoregressive models. The key idea is to cache hidden states to avoid redundant computation. We apply our fast generation method to the Wavenet and PixelCNN++ models and achieve up to 21x and 183x speedups respectively.
### Paper Keywords
["Deep learning", "Applications"]
### Paper Content
ABSTRACT

Convolutional autoregressive models have recently demonstrated state-of-the-art performance on a number of generation tasks. While fast, parallel training methods have been crucial for their success, generation is typically implemented in a naïve fashion where redundant computations are unnecessarily repeated. This results in slow generation, making such models infeasible for production environments. In this work, we describe a method to speed up generation in convolutional autoregressive models. The key idea is to cache hidden states to avoid redundant computation. We apply our fast generation method to the Wavenet and PixelCNN++ models and achieve up to 21× and 183× speedups respectively.

1 INTRODUCTION

Autoregressive models are a powerful class of generative models that factorize the joint probability of a data sample x into a product of conditional probabilities. Autoregressive models such as Wavenet (van den Oord et al., 2016a), ByteNet (Kalchbrenner et al., 2016a), PixelCNN (van den Oord et al., 2016b;c), and Video Pixel Networks (Kalchbrenner et al., 2016b) have shown strong performance in audio, textual, image, and video generation. Unfortunately, generating in a naïve fashion is typically too slow for practical use. For example, generating a batch of 16 32×32 images using PixelCNN++ (Salimans et al., 2017) takes more than 11 minutes on commodity hardware with a Tesla K40 GPU.

The ability to do fast generation is useful for many applications. Production environments have tight latency constraints, so real-time speech generation, machine translation, and image super-resolution (Dahl et al., 2017) all require fast generation. Furthermore, quick simulation of environment dynamics is important for fast training in model-based reinforcement learning (Oh et al., 2015). However, slow generation hampers the use of convolutional autoregressive models in these situations.

In this work, we present a method to significantly speed up generation in convolutional autoregressive models. The contributions of this work are as follows:

1. We present a general method to enable fast generation for autoregressive models through caching. We describe specific implementations of this method for Wavenet (van den Oord et al., 2016a) and PixelCNN++ (Salimans et al., 2017). We demonstrate that our fast generation achieves speedups of up to 21× for Wavenet and 183× for PixelCNN++ over their naïve counterparts.

2. We open-source our implementation of fast generation for Wavenet (https://github.com/tomlepaine/fast-wavenet) and PixelCNN++ (https://github.com/PrajitR/fast-pixel-cnn). Our generation code is compatible with other open-source implementations of these models that also implement training.

(* Denotes equal contribution.)

2 METHODS

Naïve generation for convolutional autoregressive models recalculates the entire receptive field at every iteration (we refer readers to van den Oord et al. (2016a); Salimans et al. (2017) for details). This results in exponential time and space complexity with respect to the receptive field. In this section, we propose a method that avoids this cost by caching previously computed hidden states and using them in subsequent iterations.

2.1 CACHING FOR DILATED CONVOLUTIONS

To generate a single output y, computations must be performed over the entire receptive field, which is exponential with respect to the number of layers. A naïve generation method repeats this computation over the entire receptive field at every step, as illustrated in Figure 1A. However, this is wasteful because many hidden states in the receptive field can be reused from previous iterations. This naïve approach has been used in open-source implementations of Wavenet (e.g., https://github.com/ibab/tensorflow-wavenet).

Instead of recomputing all of the hidden states at every iteration, we propose caching hidden states from previous iterations. Figure 1B illustrates this idea, where each layer maintains a cache of previously computed hidden states. During each generation step, hidden states are popped off the cache to perform the convolutions. The newly generated hidden states are then pushed back into the cache for future computation. Therefore, the computation and space complexity are linear in the number of layers instead of exponential.

Figure 1: Comparison of a naïve implementation of the generation process and our proposed method. Orange nodes are computed in the current timestep, blue nodes are previously cached states, and gray nodes are not involved in the current timestep. Notice that generating a single sample requires O(2^L) operations for the naïve implementation, where L is the number of layers in the network. Meanwhile, our implementation only requires O(L) operations to generate a single sample.

2.2 CACHING FOR STRIDED CONVOLUTIONS

The caching algorithm for dilated convolutions is straightforward because the number of hidden states in each layer is equal to the number of inputs. Thus, each layer can simply maintain a cache that is updated on every step. However, strided convolutions pose an additional challenge, since the number of hidden states in each layer is different from the number of inputs.

A downsampling (strided convolutional) layer will not necessarily generate an output at each timestep (see the first hidden layer in Figure 2) and may even skip over some inputs (see the second hidden layer in Figure 2). On the other hand, an upsampling (strided transposed convolutional) layer will produce hidden states and outputs for multiple timesteps (see the last hidden layer in Figure 2). As a result, the cache cannot be updated at every timestep. Thus, each cache has an additional property, cache_every, where the cache is only updated every cache_every steps. Every downsampling layer increases the cache_every property of the layer by the downsampling factor (2 in the case of Figure 2). Conversely, every upsampling layer decreases the cache_every property of the layer by the upsampling factor (also 2 in the case of Figure 2).

2.3 MODEL-SPECIFIC DETAILS

Wavenet uses 1D dilated convolutions. Our fast implementation of Wavenet follows directly from the components outlined in Section 2.1.

PixelCNN++ improves upon PixelCNN (van den Oord et al., 2016c) through a variety of modifications, including using strided convolutions and transposed convolutions instead of dilation for speed. Our method scales from 1D to 2D with very few changes. The caches for each layer are now 2D, with a height equal to the filter height and a width equal to the image width. After an entire row is generated, the oldest row of the cache is popped and the new row is pushed. Because strided convolutions are used, we use the cache_every idea detailed in Section 2.2. For full details please refer to our code.

Figure 2: Fast generation for a network with strided convolutions. We show an example model with 2 convolutional and 2 transposed convolutional layers, each with a stride of 2 (Dumoulin & Visin, 2016). Due to the stride, each layer has fewer states than network inputs. Orange nodes are computed in the current timestep, blue nodes are previously cached states, and gray nodes are not involved in the current timestep. In the first timestep (t = 0), the first input is used to compute and cache all nodes for which there is sufficient information to generate, including the first four outputs. At t = 1, there are no nodes that have sufficient information to be computed, but the output for t = 1 has already been computed at t = 0. At t = 2, there is one new node that now has sufficient information to be computed, although the output for t = 2 has also been computed at t = 0. The t = 3 scenario is similar to t = 1. At t = 4, there is enough information to compute multiple hidden states and generate the next four outputs. This is analogous to the t = 0 scenario. t = 5 is analogous to t = 1, and this cycle is followed for all future timesteps.

Figure 3: Wavenet timing experiments. We generated from a model with 2 sets of L dilation layers each, using a naïve implementation and ours. Results are averaged over 100 repeats. When L is small, the naïve implementation performs better than expected due to GPU parallelization of the convolution operations. When L is large, the difference in performance is more pronounced. [Plot omitted: average time per sample (sec) vs. number of layers (2-14); the naïve implementation reaches 0.162 s while ours stays near 0.007 s.]

Figure 4: PixelCNN++ timing experiments. We generated images using the model architecture described in Salimans et al. (2017). Due to the huge number of convolution operations in the naïve implementation, GPU utilization is always high and there is no room for parallelization across the batch. Since our method avoids redundant computations, larger batch sizes result in larger speedups. [Plot omitted: speedup (log scale) vs. batch size 1-256 (log scale); speedups of 2, 4, 6, 10, 18, 33, 61, 109, and 183×.]

3 EXPERIMENTS

We implemented our methods for Wavenet (van den Oord et al., 2016a) and PixelCNN++ (Salimans et al., 2017) in TensorFlow (Abadi et al., 2016). We compare our proposed method with a naïve implementation of Wavenet (https://github.com/ibab/tensorflow-wavenet) and a naïve implementation of PixelCNN++ (https://github.com/openai/pixel-cnn) in Figures 3 and 4, respectively. The results indicate significant speedups, up to 21× for Wavenet and 183× for PixelCNN++.<|im_end|>
<|im_start|>assistant
### Review Title
simple and good
### Review Text
This is a nice workshop paper. It's a simple idea, but people will be interested in it. If nothing else, the released code is valuable, and having the poster to advertise it is a good use of workshop poster space.
### Review Rating
7: Good paper, accept
### Review Confidence
4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|>
<|im_end|>
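The cache_every rule from Section 2.2 of the record above can be captured with a small amount of bookkeeping. The sketch below is illustrative only; the class shape and the `compute_state` callback are assumptions made for the example.

```python
class StridedLayerCache:
    """Cache for a layer reached through strided (down/up)sampling layers
    (Section 2.2): it only computes and stores a new hidden state every
    `cache_every` timesteps."""

    def __init__(self, cache_every):
        self.cache_every = cache_every
        self.states = []

    def step(self, t, compute_state):
        if t % self.cache_every == 0:   # aligned timestep: compute and cache
            self.states.append(compute_state())
            return self.states[-1]
        return None                     # off-cycle: nothing new at this layer

# A stride-2 downsampling layer doubles cache_every for the layers above it,
# and a stride-2 transposed (upsampling) layer halves it, matching the
# four-layer example in Figure 2.
```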
rkEfPeZRb
ICLR.cc/2018/Conference
2018
Variance-based Gradient Compression for Efficient Distributed Deep Learning
["Yusuke Tsuzuku", "Hiroto Imachi", "Takuya Akiba"]
Due to the substantial computational cost, training state-of-the-art deep neural networks for large-scale datasets often requires distributed training using multiple computation workers. However, by nature, workers need to frequently communicate gradients, causing severe bottlenecks, especially on lower bandwidth connections. A few methods have been proposed to compress gradients for efficient communication, but they either suffer a low compression ratio or significantly harm the resulting model accuracy, particularly when applied to convolutional neural networks. To address these issues, we propose a method to reduce the communication overhead of distributed deep learning. Our key observation is that gradient updates can be delayed until an unambiguous (high amplitude, low variance) gradient has been calculated. We also present an efficient algorithm to compute the variance and prove that it can be obtained with negligible additional cost. We experimentally show that our method can achieve a very high compression ratio while maintaining the resulting model accuracy. We also analyze the efficiency using computation and communication cost models and provide evidence that this method enables distributed deep learning for many scenarios with commodity environments.
["distributed deep learning", "gradient compression", "collective communication", "data parallel distributed sgd", "image classification"]
ABSTRACTDue to the substantial computational cost, training state-of-the-art deep neuralnetworks for large-scale datasets often requires distributed training using multiplecomputation workers. However, by nature, workers need to frequently communi-cate gradients, causing severe bottlenecks, especially on lower bandwidth connec-tions. A few methods have been proposed to compress gradient for efficient com-munication, but they either suffer a low compression ratio or significantly harmthe resulting model accuracy, particularly when applied to convolutional neuralnetworks. To address these issues, we propose a method to reduce the communi-cation overhead of distributed deep learning. Our key observation is that gradientupdates can be delayed until an unambiguous (high amplitude, low variance) gra-dient has been calculated. We also present an efficient algorithm to compute thevariance and prove that it can be obtained with negligible additional cost. Weexperimentally show that our method can achieve very high compression ratiowhile maintaining the result model accuracy. We also analyze the efficiency us-ing computation and communication cost models and provide the evidence thatthis method enables distributed deep learning for many scenarios with commodityenvironments.1 I NTRODUCTIONDeep neural networks are attracting attention because of their outstanding prediction power in manyapplication fields such as image recognition, natural language processing, and speech recognition.In addition, software frameworks are publicly available, making it easier to apply deep learning.However, their crucial drawback is the substantial computational cost on training. For example, ittakes over a week to train ResNet-50 on the ImageNet dataset if using a single GPU. Such longtraining time limits the number of trials possible when creating models.Therefore, we must conduct distributed training using multiple computation workers (e.g., multipleGPUs in different nodes). However, by nature, workers need to frequently communicate gradients,which yields a severe bottleneck for scalability, especially when using lower bandwidth connec-tions. For example, when using 1000BASE-T Ethernet, communication takes at least ten timeslonger than forward and backward computation for ResNet-50, making multiple nodes impractical.High performance interconnections such as InfiniBand and Omni-Path are an order of magnitudemore expensive than commodity interconnections, which limits research and development of deeplearning using large-scale datasets to a small number of researchers.Although several methods have been proposed to compress gradient for efficient communication,they either suffer a low compression ratio or significantly harm the resulting model accuracy, par-ticularly when applied to convolutional neural networks. There are mainly two lines of research:quantization and sparsification. Quantization-based methods include 1-bit SGD (Seide et al. ,2014 )andTernGrad (Wen et al. ,2017 ). Though they achieve small loss of accuracy by using at least onebit for each parameter, the compression ratio is limited. Sparsification-based methods include Strom(2015 ) and QSGD (Alistarh et al. ,2017 ). 
While they can achieve high compression ratio, as we willsee in our experiments, they harm the resulting model accuracy or suffer a low compression ratio,particularly when applied to convolutional neural networks.To address these issues, we propose a new gradient compression algorithm to reduce the commu-nication overhead of distributed deep learning. The proposed method belongs to the sparsification1Under review as a conference paper at ICLR 2018approaches. Our key observation is that the variance of the gradient for each parameter point overiterations is a useful signal for compression. As almost all previous approaches of both sparsificationand quantization only look at the magnitude of gradient, we believe that we are opening a new doorfor this field. In addition, we also show that our method can be combined with previous compres-sion methods to further boost performance. We also present an efficient algorithm to compute thevariance and prove that it can be obtained with negligible additional cost.We experimentally demonstrate that our method can achieve a high compression ratio while main-taining result model accuracy. We also analyze the efficiency using computation and communicationcost models and provide evidence that our method enables distributed deep learning for many sce-narios with commodity environments.Organization. The remainder of this paper is organized as follows: Section 2 provides the defini-tions and notations used in this paper. Section 3 reviews related work in this field. Section 4 presentsthe proposed method. Section 5 analyzes performance. Section 6 shows our experimental results,and we conclude in Section 7.2 P RELIMINARIESIn this section, we describe an overview of distributed deep learning and parameter updates withcompressed gradients.2.1 C HALLENGES IN DATA PARALLEL STOCHASTIC GRADIENT DESCENTIn data parallel distributed Stochastic Gradient Descent (SGD), all workers have identical copies ofthe same model and calculate gradients using different subsets of training data. Gradients are sharedacross all workers, and each worker updates its local model using the shared gradients.There are two well-known approaches to communication of gradients: synchronous and asyn-chronous. Even though our method can be applied to both of them, we focus on the synchronousapproach in this paper. Each worker computes gradients and shares them with other workers usinga synchronized group communication routine in every single training iteration, typically using acommunication routine known as allreduce.The challenge is that the communication is possibly a severe bottleneck in a training process. Gra-dients typically consist of tens of millions of floating point values so the total size of exchangeddata can be large. For example, the model size of ResNet-50 ( He et al. (2016 )) is over 110 MB, andthe size of gradients becomes large accordingly. Thus, the communication time has a significanteffect on the total training time in environments equipped with a commodity interconnect hardware,such as 1Gb Ethernet. Also, in the synchronous approach, workers have to wait for completionof communication and their computing resources including GPUs are idle, which is a significantperformance loss.2.2 P ROBLEM FORMULATIONIn basic procedures of SGD, model parameters are updated asxt+1=xt∇ft(xt);where xtand∇ft(xt)are model parameters and calculated gradients in time step t, respectively. ftis a loss function and it differs between samples used in a mini-batch. 
is a step size.To reduce an amount of data to be exchanged over a network, either quantization or sparsification orboth are used as explained in Sec. 3.3 R ELATED WORKThere are two main approaches to gradient compression: quantization-based approaches andsparsification-based approaches. Quantization-based approaches reduce communication cost by ex-pressing each gradient with fewer bits. If a baseline uses 32-bit floating points in communication,2Under review as a conference paper at ICLR 2018then it can reduce the amount of communication by up to 32 times. Seide et al. (2014 ) showedthat neural networks can be trained using only one sign bit per parameter. There are two key tech-niques in their algorithm. First, they use different threshold to encode and decode gradient elementsfor each column of weight matrix. Second, quantization errors are added to the gradients calcu-lated in the next step. Its effectiveness has been experimentally verified through speech models.Wen et al. (2017 ) proposed TernGrad to encode gradients with 2 bits per parameter. The algorithmis characterized by its theoretically-guaranteed convergence and reported that it can successfullytrain GoogLeNet ( Szegedy et al. ,2015 ) on ImageNet with an average loss of accuracy of less than2%.As a second approach, sparsification-based approaches reduce communication cost by sending onlya small fraction of gradients. Even though they require sending not only the values of gradientsbut also parameters’ indexes, their strong sparsification reduces transmission requirements signif-icantly. Strom (2015 ) proposed sending only gradients whose absolute values are greater than auser-defined threshold. The algorithm sends only sign bits and encoded indexes of parameters. Gra-dients are decoded up to the threshold and quantization errors are added to the gradients calculatedin the next step as 1-bit stochastic gradients. Its effectiveness has also been experimentally verifiedon speech applications. Dryden et al. (2016 ) extended Strom’s method. They proposed to use anadaptive threshold instead of using a user-defined threshold. They also introduced repeated sparsi-fication of gradients in order to combine the algorithm with an efficient communication algorithm.Alistarh et al. (2017 ) proposed QSGD. QSGD stochastically rounds gradients to linearly quantizedvalues, which are calculated in the algorithm. Their work enjoys strong theoretical properties in con-vex optimizations. Furthermore, they can control the trade-off between accuracy and compression.On the other hand, Strom’s method does not work with small or large thresholds.4 P ROPOSED METHODSIn this section, we describe the proposed method. An efficient implementation and combinationwith other compression methods are also explained.Our work belongs to the sparsification-based approaches. In this section, we explicitly denote a gra-dient vector (∇f(x))and a gradient element (∇if(x))for clarity. Previous works in this directionhave focused on gradient elements with small magnitudes, and they rounded them to zero to spar-sify. Our work diverges at this point. We propose using approximated variances of gradient elementsinstead of magnitudes. Our method do not transmit ambiguous elements until additional data reducetheir ambiguity and significantly reduces communication while maintaining accuracy. This methodenables shifting the balance between accuracy and compression, as necessary. Furthermore, we cancombine our work with sparsity-promoting quantization like QSGD and Strom’s method. 
We showthe way of combination with the Strom’s method later.4.1 K EY CONCEPTSThe key idea of our method is delaying sending ambiguously estimated gradient elements. Weconsider a gradient element to be ambiguous when its amplitude is small compared to its varianceover the data points. We extend the standard updating method to the following:xt+1rt+1=xt(∇ft(xt) +rt):This extension follows Seide et al. (2014 ) and Strom (2015 ).In previous works, the approximation errors are accumulated in rtand used in future updates. Ineach step, parameters are updated only with approximated gradient elements represented by lessnumber of bits. In our work, we interpret rtas a delayed update, not approximation errors.We send the gradient element corresponding to the i-th parameter only when it satisfies the followingcriterion,′jBjVB[∇ifz(x)]<(∇ifB(x))2; (1)where jBjis a size of the mini-batch and ′is a hyper parameter representing required estimationaccuracy. zis a each sample and Bis a mini-batch, respectively and fzandfBare corresponding3Under review as a conference paper at ICLR 2018loss functions. VB[∇ifz(x)]is the sample variance of the gradient element corresponding to the i-thparameter over a mini-batch B.If we do not send some gradient elements, we add them to the next batch and recalculate ( 1) withincreased batch size. For example, if we postpone sending a gradient element nine times consec-utively, the criterion ( 1) is calculated as if ten times larger batch than usual mini-batch in the nextstep. Note that even though the criterion ( 1) is calculated as if we used a larger batch, what is usedfor an update is not the mean of the mini-batches across steps but the sum of them.Following lemma supports our formulation.Lemma 4.1. (De et al. ,2017 ) A sufficient condition that a vector gis a descent direction is∥g ∇f(x)∥22<∥g∥22: (2)We are interested in the case of g=∇fB(x), the gradient vector of the loss function over B. Bythe weak law of large numbers, when jBj>1, the left hand side of Eq. 2withg=∇fB(x)can beestimated as follows:E[∥∇fB(x) ∇f(x)∥22]1jBjVB[∇fz(x)]:Thus our formulation with ′1corresponds to an elementwise estimation of the sufficient con-dition ( 2) that a gradient vector decreases the loss function. Gradient elements become more likelyto be sent as sample size increases. However, if once gradient elements are estimated with too highvariances, it takes too long for the elements to be sent. Thus, we decay variance at every step. De-tails are described in subsection 4.4. In the combination with optimization methods like MomentumSGD, gradient elements not sent are assumed to be equal to zero.4.2 Q UANTIZATION AND PARAMETER ENCODINGTo allow for comparison with other compression methods, we propose a basic quantization process.In this section, we refer to a gradient as an accumulated gradient. After deciding which gradientelements to send, each worker sends pairs of a value of a gradient element and its parameter index asStrom (2015 ) and Alistarh et al. (2017 ). We quantize each element to 4-bit so that we can representeach pair in 32-bit as per Strom (2015 ). The 4-bit consists of one sign bit and three exponent bits.Our quantization except for the sign bit is as follows. For a weight matrix Wk(or a weight tensorin CNN), there is a group of gradient elements corresponding to the matrix. Let Mkbe the maxi-mum absolute value in the group. 
First, for each element giin the k-th group, if jgijis larger than2⌊log2Mk⌋truncate it to 2⌊log2Mk⌋, otherwise, round to the closer value of 2⌊log2jgij⌋or2⌈log2jgij⌉.Letg′ibe the preprocessed gradient element. Next, calculate a integer di:=⌊log2Mk⌋ log2g′i.Ifdi>7then we do not send the value, otherwise, encode the integer from 0to7using 3 bits.⌊log2Mk⌋is also sent for every weight matrix. An efficient implementation is presented in subsec-tion4.4. We do not adopt stochastic rounding like Alistarh et al. (2017 ) nor accumulate roundingerror gig′ifor the next batch because this simple rounding does not harm accuracy empirically.Appendix Bhas a running example of this quantization.Because the variance-based sparsification method described in subsection 4.1is orthogonal to thequantization shown above, we can reduce communication cost further using sparsity promotingquantization methods such as QSGD instead. However, we used the quantization to show thatenough level of sparsity is gained solely by our variance-based sparsification because the quanti-zation rounds only a small fraction of gradient elements to zero. We show how to combine ourmethod with a method in Strom (2015 ) later in this paper because the way of the combination is lessobvious. We use a naive encoding for parameter indexes because the rest 28-bits are enough. We canfurther reduce the number of bits by compressing parameter indexes ( Strom ,2015 ;Alistarh et al. ,2017 ).4.3 C OMMUNICATION BETWEEN WORKERSIn distributed deep learning, the most important operation is to take the global mean of the gradientelements calculated in each worker. The operation is referred to as “allreduce.” It consists of threesteps: (1) collects all local arrays in each worker, (2) reduce them using a given arithmetic operator,4Under review as a conference paper at ICLR 2018which is summation in this case, and (3) broadcast the result back to all workers so that all workersobtain the identical copies of the array.Conventional data parallel deep learning applications can enjoy the benefit of highly optimized allre-duce implementations thanks to the fact that only the sum of the values has to be kept during thecommunication. However, after applying the proposed method to the local gradient elements, theyare converted to a sparse data structure so the allreduce operation can no longer apply.Dryden et al. (2016 ) and Aji & Heafield (2017 ) proposed sparsifying gradient elements multipletimes to utilize a kind of allreduce for sparsification-based compressions. However, the accuracy ispossibly degraded when the elements are highly compressed through repetitive compressions. In-stead, we adopt allgatherv for communication, where each worker just sends the calculated elementsto other workers. We avoid to encode and decode elements multiple times by allgatherv. In allgath-erv communication cost hardly increase from a kind of allreduce because index overlap, which isneeded for summation, rarely occur if the compression ratio is sufficiently higher than the numberof workers. Thanks to the high compression ratio possible with this algorithm and its combinationwith other compression methods, even large numbers of workers can be supported. 
4.3 COMMUNICATION BETWEEN WORKERS

In distributed deep learning, the most important operation is to take the global mean of the gradient elements calculated on each worker. The operation is referred to as "allreduce." It consists of three steps: (1) collect the local arrays from all workers, (2) reduce them using a given arithmetic operator, which is summation in this case, and (3) broadcast the result back to all workers so that all workers obtain identical copies of the array.

Conventional data-parallel deep learning applications can enjoy the benefit of highly optimized allreduce implementations thanks to the fact that only the sum of the values has to be kept during the communication. However, after applying the proposed method to the local gradient elements, they are converted to a sparse data structure, so the allreduce operation can no longer be applied.

Dryden et al. (2016) and Aji & Heafield (2017) proposed sparsifying gradient elements multiple times in order to use a form of allreduce with sparsification-based compression. However, accuracy is possibly degraded when the elements are highly compressed through repeated compression. Instead, we adopt allgatherv for communication, where each worker simply sends its calculated elements to the other workers; allgatherv avoids encoding and decoding elements multiple times. With allgatherv, the communication cost hardly increases over that of an allreduce variant, because index overlaps, which would be needed for summation, rarely occur when the compression ratio is sufficiently higher than the number of workers. Thanks to the high compression ratio possible with this algorithm and its combination with other compression methods, even large numbers of workers can be supported. Some optimization methods, such as Adam (Ba & Kingma, 2015), require parameter updates and postprocessing; these are calculated locally after the communication.

4.4 EFFICIENT IMPLEMENTATION

We first describe the efficient computation of the criterion (1), and then how to quantize gradient elements without additional floating-point operations. We can efficiently compare the squared mean and the variance in the criterion (1) by just comparing the squared mean of the gradient elements with the sum of the squared gradient elements. That is,

( Σ_{z∈B} (1/|B|) ∇_i f_z(x) )² > α Σ_{z∈B} ( (1/|B|) ∇_i f_z(x) )²   (3)

achieves our goal. Thus, we only have to maintain the sum of the gradient elements and the sum of the squared gradient elements; details are described in Appendix A. The decay of the variance described in subsection 4.1 is accomplished by multiplying the sum of squared gradient elements by a hyperparameter ζ (< 1) at every step. α in Eq. (3) controls how much unambiguity the algorithm requires: the algorithm compresses more aggressively with larger α, and a range from one to two is suitable for α given its derivation. Fig. 1 shows the final algorithm.

Algorithm 1: Basic
  hyperparam: B = batch size, α, ζ
  for each parameter i: r_i = 0; v_i = 0
  while not converged:
    CalcGrad()
    for each parameter i:
      r_i += Σ_{z∈B} ∇_i f_z / |B|
      v_i += Σ_{z∈B} (∇_i f_z / |B|)²
      if r_i² > α v_i:
        Encode(r_i); r_i = 0; v_i = 0
      else:
        v_i *= ζ
    CommunicateAndUpdate()

Figure 1: Basic algorithm of our variance-based compression. α and ζ are hyperparameters; the recommended value for ζ is 0.999, and α controls the trade-off between compression and accuracy. ∇_i f_z denotes the gradient element of parameter i for each sample z in the mini-batch. CalcGrad() is the forward and backward computation. Encode() includes quantization and the encoding of indexes. CommunicateAndUpdate() shares the gradient elements, decodes them, and then updates the parameters.
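Under the same assumptions as the sketches above (flat parameter vector, per-sample gradients as rows), one step of Algorithm 1 can be written as follows; Encode() and the communication itself are elided.

    import numpy as np

    def basic_step(per_sample_grads, r, v, alpha=2.0, zeta=0.999):
        # One step of Algorithm 1 (Basic). r and v are the running sum of mean
        # gradients and of squared per-sample terms, as maintained in Fig. 1.
        B = per_sample_grads.shape[0]
        r = r + per_sample_grads.sum(axis=0) / B
        v = v + ((per_sample_grads / B) ** 2).sum(axis=0)
        send = r ** 2 > alpha * v           # efficient form (3) of criterion (1)
        out = np.where(send, r, 0.0)        # values handed to Encode()
        r = np.where(send, 0.0, r)          # sent elements reset their state
        v = np.where(send, 0.0, v * zeta)   # unsent elements decay by zeta
        return out, r, v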
The quantization of parameters described in subsection 4.2 can also be implemented efficiently with the standard binary floating-point representation, using only bitwise operations and integer arithmetic, as follows. We can calculate 2^⌊log₂ x⌋ by truncating the mantissa. We can also round values by adding one to the most significant bit of the mantissa, as if x were an unsigned integer, and then masking the mantissa to 0.

4.5 HYBRID ALGORITHM

We now describe how to combine our method with Strom's method. It becomes problematic how to modify the variance when only parts of the gradient elements are exchanged. We solve the issue simply by replacing a² with (a − b)². Let S be the value sent in a step (i.e., τ, −τ, or 0). We correct the sum of squared gradient elements Σ(∇_i f)² to Σ(∇_i f)² − 2S Σ(∇_i f) + S². Fig. 2 shows the algorithm.

Algorithm 2: Hybrid
  hyperparam: B = batch size, α, ζ, τ
  for each parameter i: r_i = 0; v_i = 0
  while not converged:
    CalcGrad()
    for each parameter i:
      r_i += Σ_{z∈B} ∇_i f_z / |B|
      v_i += Σ_{z∈B} (∇_i f_z / |B|)²
      if |r_i| > τ and r_i² > α v_i:
        Encode(τ Sign(r_i))
        v_i = max(v_i − 2τ|r_i| + τ², 0)
        r_i −= τ Sign(r_i)
      v_i *= ζ
    CommunicateAndUpdate()

Figure 2: Hybrid algorithm combining our variance-based compression with Strom's method. τ is the user-defined threshold required by Strom's method; the other parameters and notations are the same as in Fig. 1.

We show the effectiveness of this combined algorithm by the experiments in Sec. 6. Combinations with other works such as QSGD and TernGrad are rather straightforward, and we do not explore them further in this paper.

5 PERFORMANCE ANALYSIS

Because common deep learning libraries do not currently support access to the gradient of each sample, it is difficult to measure the practical performance of an efficient implementation in a commonly used software environment. In light of this, we estimate the speedup of each iteration under gradient compression with a performance model of communication and computation. The total speedup of the whole training process is simply the sum over iterations, because we adopt a synchronized approach.

In the communication part, the pairs of quantized values and parameter indexes in each node are broadcast to all nodes. The i-th node's input data size n_i (i = 1, ..., p) may differ among nodes, where p denotes the number of nodes. An MPI function called allgatherv realizes such an operation. Because recent successful image recognition models such as VGG (Simonyan & Zisserman, 2015) and ResNet (He et al., 2016) have a large number of parameters, the latency term in the communication cost can be ignored even at a very high compression ratio such as c > 1,000. In such cases, a class of collective communication algorithms called ring algorithms is efficient (Thakur et al., 2005): their bandwidth term is relatively small even though their latency term is proportional to p. Although the naive ring allgatherv algorithm costs an unacceptable O(max_i n_i · p) time, Träff et al. (2008) proposed mitigating this by dividing large input data, which is called the pipelined ring algorithm. For example, the allgatherv implementation in MVAPICH adopts a pipelined ring algorithm for large input data.

The calculation of the variance of gradients dominates the additional computation cost of the proposed method. The leading term of its number of multiply-add operations is 2N|B|, where N and |B| are the number of parameters and the local batch size, respectively. Other terms, such as determining the indexes to send and applying the decay, are at most O(N). Hereafter we therefore ignore the additional computation cost and concentrate on the communication part.

We now discuss the relationship between the compression ratio and the speedup of the communication part. As stated above, we ignore the latency term. The baseline is ring allreduce over uncompressed gradients; its elapsed time is T_r = 2(p − 1)Nsβ/p, where s and β are the bit size of each parameter and the transfer time per bit, respectively. The elapsed time of pipelined ring allgatherv is T_v = ( Σ_i ⌈n_i/m⌉ + p − 1 ) mβ, where m is the block size of the pipelining. Defining c as the average compression ratio, including the change in the number of bits per parameter, T_v is evaluated as

T_v ≈ ( Σ_i n_i + (p − 1)m )β = ( Nsp/c + (p − 1)m )β.

If we set m small enough, the relative speedup is T_r/T_v ≈ 2(p − 1)c/p². Therefore, we expect speedup linear in c in the range c > p/2.
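As a quick check of the model, the speedup expression can be evaluated directly; the defaults for s and m below are our choices, not values from the paper.

    def model_speedup(N, p, c, s=32, m=1024):
        # Sec. 5 cost model; the transfer time per bit (beta) cancels in the ratio.
        T_r = 2 * (p - 1) * N * s / p        # ring allreduce, uncompressed
        T_v = N * s * p / c + (p - 1) * m    # pipelined ring allgatherv, approx.
        return T_r / T_v

    # With N = 25e6 (ResNet-50 scale), p = 16 workers and c = 1000, the model
    # predicts roughly 2 * (p - 1) * c / p**2, i.e. about a 117x faster exchange.
    print(model_speedup(N=25_000_000, p=16, c=1000))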
6 EXPERIMENTS

In this section, we evaluate the proposed method experimentally. Specifically, we demonstrate that our method can significantly reduce the communication cost while maintaining test accuracy. We also show that it can reduce the communication cost further when combined with other sparsification methods, and that it even improves test accuracy in some settings.

We used CIFAR-10 (Krizhevsky, 2009) and ImageNet (Russakovsky et al., 2015), the two most popular benchmark datasets for image classification. We fixed the hyperparameter ζ in Fig. 1 and Fig. 2 to 0.999 in all experiments. We evaluated the gradient compression algorithms from two viewpoints: accuracy and compression ratio. Accuracy is defined as the test accuracy at the last epoch, and the compression ratio is defined as the total number of network parameters divided by the average number of parameters sent. We do not count the other, non-essential information required for the communication, because it is negligible. In addition, we can ignore the number of bits used to express each gradient element because we assume, in all algorithms, that a gradient and a parameter index together occupy one 32-bit word, as in Strom (2015). Please note that, since all methods use allgatherv for communication, the communication cost increases in proportion to the number of workers; a high compression ratio is therefore required to achieve a sufficient speedup when using tens or hundreds of workers. Visualizations of the results are given in Appendix C.

6.1 CIFAR-10

For the experiments on CIFAR-10, we used a convolutional neural network similar to VGG (Simonyan & Zisserman, 2015); the details of the network architecture are described in Appendix D. We trained the network for 300 epochs with a weight decay of 0.0005. The total number of workers was 8 and the batch size was 64 per worker. We applied no data augmentation to the training images and center-cropped both training and test images to 32x32. We used two different optimization methods: Adam (Ba & Kingma, 2015) and momentum SGD (Sutskever et al., 2013). For Adam, we used Adam's default parameters as described in Ba & Kingma (2015). For momentum SGD, we set the initial learning rate to 0.058 and halved it every 25 epochs. We used two's complement in our implementation of QSGD; "bit" denotes the number of bits used to represent each gradient element and "d" denotes the bucket size. For each configuration, we report the median accuracy over five independent runs; compression ratios are calculated from the run that achieved the reported accuracy.

Table 1 summarizes the results. Our method successfully trained the network with a slight accuracy gain in the Adam setting and a 2 to 3% accuracy degradation in the momentum SGD setting. The compression ratios were also sufficiently high, and our method reduced the communication cost beyond the quantization-based approaches described in Section 3. The hybrid algorithm's compression ratio is several orders of magnitude higher than that of existing compression methods, with only a small reduction in accuracy. This indicates that the algorithm can make computation with a large number of nodes feasible on commodity-level infrastructure that would previously have required high-end interconnects. Even though QSGD achieved higher accuracy than our method, its compression power is limited, and our algorithm can reduce the communication cost more aggressively. Strom's method, on the other hand, caused significant accuracy degradation. Counter-intuitively, the hybrid algorithm improved on its accuracy, in addition to further reducing the communication.

Our hypothesis for this phenomenon is as follows. In Strom's algorithm, once a large positive gradient appears, the algorithm has no choice but to keep sending positive values in subsequent steps, even if the gradients calculated on the following mini-batches are negative. In the hybrid algorithm, by contrast, if the following gradients have a sign different from that of the residual, the residual is unlikely to be sent. We assume that this effect helped the training procedure and led to better accuracy.
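The mechanism conjectured above can be read off directly from a sketch of the hybrid rule, written in the same style as the basic step earlier; again, this reflects our reading of Algorithm 2 rather than a reference implementation.

    import numpy as np

    def hybrid_step(per_sample_grads, r, v, tau=0.01, alpha=2.0, zeta=0.999):
        # One step of Algorithm 2 (Hybrid): tau * sign(r_i) is sent only when
        # |r_i| > tau AND the variance test passes, so a residual whose sign is
        # contradicted by later gradients tends to be held back.
        B = per_sample_grads.shape[0]
        r = r + per_sample_grads.sum(axis=0) / B
        v = v + ((per_sample_grads / B) ** 2).sum(axis=0)
        send = (np.abs(r) > tau) & (r ** 2 > alpha * v)
        out = np.where(send, tau * np.sign(r), 0.0)
        # Sec. 4.5 correction: v -> v - 2*S*r + S^2 with S = tau*sign(r),
        # evaluated before the residual is reduced by the sent value.
        v = np.where(send, np.maximum(v - 2 * tau * np.abs(r) + tau ** 2, 0.0), v)
        r = r - out
        v = v * zeta
        return out, r, v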
We would also like to mention the difficulty of hyperparameter tuning in Strom's method. As Table 1 shows, using a lower threshold does not necessarily lead to higher accuracy, because this single hyperparameter controls both the sparsification and the quantization. Users therefore do not know whether a larger or a smaller threshold will maintain accuracy; we note that we observed unstable behavior with other thresholds around 0.01 as well. Our algorithm, on the other hand, is free from this problem. Moreover, when a good threshold for Strom's algorithm is known, we can simply combine it with ours to obtain further compression.

Table 1: Training of a VGG-like network on CIFAR-10. τ denotes the threshold in Strom's method; α is the hyperparameter of our method described in the criterion (3). The number of bits for QSGD is the number of bits used to express each gradient excluding the sign bit. For each configuration, the median accuracy over five independent runs is reported. The compression column lists the compression ratio defined at the beginning of Sec. 6.

                               Adam                    Momentum SGD
Method                         Accuracy  Compression   Accuracy  Compression
no compression                 88.1      1             91.7      1
Strom, τ = 0.001               62.8      88.5          84.8      6.5
Strom, τ = 0.01                85.0      230.1         10.6      990.7
Strom, τ = 0.1                 88.0      6,942.8       71.6      8,485.0
our method, α = 1              88.9      120.7         90.3      52.4
our method, α = 1.5            88.9      453.3         89.6      169.2
our method, α = 2.0            88.9      913.4         88.4      383.6
hybrid, τ = 0.01, α = 2.0      85.0      1,942.2       87.6      983.9
hybrid, τ = 0.1, α = 2.0       88.2      12,822.4      87.1      12,396.8
QSGD (2bit, d = 128)           88.8      12.3          90.8      6.6
QSGD (3bit, d = 512)           87.4      14.4          91.4      7.0
QSGD (4bit, d = 512)           88.2      11.0          91.7      4.0

6.2 IMAGENET

As a larger-scale experiment, we trained ResNet-50 (He et al., 2016) on ImageNet. We followed the training procedure of Goyal et al. (2017), including the optimizer, hyperparameters, and data augmentation. We also evaluated the algorithms with momentum SGD and its learning rate schedule replaced by Adam with its default hyperparameters. We used a batch size of 32 per worker and 16 workers.

Table 2: Training ResNet-50 on ImageNet. τ denotes the threshold in Strom's method; α is the hyperparameter of our method described in the criterion (3). Accuracy is the test accuracy at the last epoch. Compression refers to the compression ratio defined at the beginning of Sec. 6.

                               Adam                    Momentum SGD
Method                         Accuracy  Compression   Accuracy  Compression
no compression                 56.2      1             76.0      1
Strom, τ = 0.001               28.6      38.6          75.2      2.1
Strom, τ = 0.01                50.0      156.2         75.5      35.2
Strom, τ = 0.1                 48.1      6,969.0       75.5      2,002.2
our method, α = 1              55.3      1,542.8       74.7      103.8
our method, α = 1.5            57.4      2,953.1       75.5      400.7
our method, α = 2.0            57.8      5,173.8       75.1      990.7
hybrid, τ = 0.01, α = 2.0      52.2      2,374.2       75.0      470.9
hybrid, τ = 0.1, α = 2.0       43.1      28,954.2      75.1      4,345.0

Table 2 summarizes the results. In this experiment, as in the CIFAR-10 experiment, variance-based gradient compression shows a significantly high compression ratio with comparable accuracy. While Strom's method's accuracy was comparable to no compression in this case, given its significant accuracy degradation on CIFAR-10, variance-based gradient compression appears to provide the more robust solution. Note that the momentum SGD training configuration is highly optimized for training without any compression; for reference, the original ResNet paper reports an accuracy of 75.3% (He et al., 2016).
Wen et al. (2017) report that TernGrad causes up to 2% accuracy degradation when training GoogLeNet (Szegedy et al., 2015) on ImageNet, so our method causes no more degradation than quantization-based approaches.

7 CONCLUSION

We proposed a novel method for gradient compression that can reduce communication cost significantly with no, or only slight, accuracy degradation. The contributions of our work can be summarized in three points. First, we proposed a novel measure of ambiguity (high variance, low amplitude) to determine when a gradient update is required. Second, we showed that applying this measure as a threshold for updates significantly reduces update requirements while providing comparable accuracy. Third, we demonstrated that this method can be combined with other efficient gradient compression approaches to further reduce communication cost.
B1O_32YeM
The idea to adopt approximated variances of gradients to reduce communication cost seems to be interesting. However, there also exist several major issues in the paper.
6: Marginally above acceptance threshold
This paper proposes a variance-based gradient compression method to reduce the communication overhead of distributed deep learning. Experiments on real datasets are used for evaluation. The idea to adopt approximated variances of gradients to reduce communication cost seems to be interesting. However, there also exist several major issues in the paper. Firstly, the authors propose to combine two components to reduce communication cost, one being variance-based gradient compression and the other being quantization and parameter encoding. But the contributions of these two components are not separately analyzed or empirically verified. Secondly, the experimental results are unconvincing. The accuracy of Momentum SGD for ‘Strom, \tau=0.01’ on CIFAR-10 is only 10.6%. Obviously, the learning procedure is not convergent. It is highly possible that the authors did not choose a good hyper-parameter. Furthermore, the proposed method (not the hybrid) is not necessarily better than Strom except for the case of Momentum SGD on CIFAR-10. Please note that the case of Momentum SGD on CIFAR-10 may have a problematic experimental setting for Strom. In addition, it is weird that the experiment on ImageNet does not adopt the same setting as that on CIFAR-10 to evaluate both Adam and Momentum SGD.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
S1D8MPxA-
ICLR.cc/2018/Conference
2018
Viterbi-based Pruning for Sparse Matrix with Fixed and High Index Compression Ratio
["Dongsoo Lee", "Daehyun Ahn", "Taesu Kim", "Pierce I. Chuang", "Jae-Joon Kim"]
Weight pruning has proven to be an effective method in reducing the model size and computation cost while not sacrificing the model accuracy. Conventional sparse matrix formats, however, involve irregular index structures with large storage requirement and sequential reconstruction process, resulting in inefficient use of highly parallel computing resources. Hence, pruning is usually restricted to inference with a batch size of one, for which an efficient parallel matrix-vector multiplication method exists. In this paper, a new class of sparse matrix representation utilizing Viterbi algorithm that has a high, and more importantly, fixed index compression ratio regardless of the pruning rate, is proposed. In this approach, numerous sparse matrix candidates are first generated by the Viterbi encoder, and then the one that aims to minimize the model accuracy degradation is selected by the Viterbi algorithm. The model pruning process based on the proposed Viterbi encoder and Viterbi algorithm is highly parallelizable, and can be implemented efficiently in hardware to achieve low-energy, high-performance index decoding process. Compared with the existing magnitude-based pruning methods, index data storage requirement can be further compressed by 85.2% in MNIST and 83.9% in AlexNet while achieving similar pruning rate. Even compared with the relative index compression technique, our method can still reduce the index storage requirement by 52.7% in MNIST and 35.5% in AlexNet.
["pruning", "sparse matrix", "memory footprint", "model size", "model compression"]
ABSTRACT

Weight pruning has proven to be an effective method of reducing the model size and computation cost without sacrificing model accuracy. Conventional sparse matrix formats, however, involve irregular index structures with a large storage requirement and a sequential reconstruction process, resulting in inefficient use of highly parallel computing resources. Hence, pruning is usually restricted to inference with a batch size of one, for which an efficient parallel matrix-vector multiplication method exists. In this paper, a new class of sparse matrix representation is proposed utilizing the Viterbi algorithm, which has a high and, more importantly, fixed index compression ratio regardless of the pruning rate. In this approach, numerous sparse matrix candidates are first generated by the Viterbi encoder, and the candidate that aims to minimize the model accuracy degradation is then selected by the Viterbi algorithm. The model pruning process based on the proposed Viterbi encoder and Viterbi algorithm is highly parallelizable and can be implemented efficiently in hardware to achieve a low-energy, high-performance index decoding process. Compared with existing magnitude-based pruning methods, the index data storage requirement can be further compressed by 85.2% on MNIST and 83.9% on AlexNet while achieving a similar pruning rate. Even compared with the relative index compression technique, our method can still reduce the index storage requirement by 52.7% on MNIST and 35.5% on AlexNet.

1 INTRODUCTION

Deep neural networks (DNNs) demand an increasing number of parameters as the required complexity of tasks and the supporting amount of training data continue to grow (Bengio & Lecun, 2007). Correspondingly, DNNs incur a considerable number of computations and a large memory footprint, and thus require high-performance parallel computing systems to meet the target response time. As an effort to realize energy-efficient DNNs, researchers have suggested various low-cost hardware implementation techniques. Among them, pruning has been actively studied to remove redundant connections without degrading model accuracy. It has been shown that pruning can achieve a 9x to 13x reduction in connections (Han et al., 2015).

After pruning, the remaining parameters are often stored in sparse matrix formats. Different ways of representing the indices of non-zero values constitute the different sparse matrix formats, and they have a significant impact on the level of achievable computational parallelism when a sparse matrix is used as an input operand (Bell & Garland, 2009). If the format is not properly designed, the performance of a DNN with a sparse matrix can be even lower than with a dense matrix (Yu et al., 2017). The two most important characteristics of a hardware-friendly sparse matrix format are 1) a reduced index storage footprint and 2) a parallelizable index decoding process.
As a compromise between index size reduction and index decoding complexity, numerous formats have been proposed (Bell & Garland, 2009).

[Figure 1: Viterbi decompressor (VD) structure.]

[Figure 2: CSR format and the proposed sparse matrix format comparison. Dense matrix after pruning: rows (0 0 0 0), (5 8 0 0), (0 0 3 0), (0 6 0 0). CSR format: A = [5 8 3 6], IA = [0 0 2 3 4], JA = [0 1 2 1]. VCM format: A = [5 8 3 6], index = [0 1 1 0]. Outputs of the Viterbi decompressor: [0 0 0 0] (1st cycle), [1 1 0 0] (2nd cycle), [0 0 1 0] (3rd cycle), [0 1 0 0] (4th cycle).]

A DNN after pruning heavily involves sparse matrix-vector and sparse matrix-matrix multiplications (SpMV and SpMM, respectively). Despite the sparse content, the computation time for SpMM is longer than that of dense matrix multiplication on a modern graphics processing unit (GPU), due to its serialized index decoding process and irregular memory access patterns. For example, the inference latency of AlexNet and VGG16 with SpMM can increase by 2x to 5x on GPUs or CPUs (Han et al., 2016a). The traditional pruning technique, therefore, is only attractive in the case where SpMV can be utilized (i.e., a batch size of one) (Han et al., 2016b) (Yu et al., 2017). A sparse matrix representation that allows parallelizable dense-matrix reconstruction across a wide range of computing operations is therefore the key to extending the use of pruning.

We propose a new DNN-dedicated sparse matrix format and a new pruning method based on error-correction coding (ECC) techniques. A unique characteristic of this sparse matrix format is its fixed, yet high (as shown in Section 3), index compression ratio, regardless of the pruning rate. Moreover, sparse-to-dense matrix conversion employing the proposed format becomes a parallel process and is no longer the performance bottleneck. Notice that conventional sparse matrix formats entail at least one column or row index value for each non-zero parameter, such that the amount of index data is larger than that of the non-zero values. Our approach, in contrast, compresses the locations of non-zero values with a convolutional code, which is a type of ECC. Consequently, the size of the sparse matrix index becomes negligible.

Conventional pruning approaches first identify the parameter candidates to be pruned, then construct a matrix (often sparse) using formats such as Compressed Sparse Row (CSR) to represent the surviving parameters. In the proposed scheme, on the contrary, pruning is performed in a restricted manner because a specific sparse matrix format is constructed first. A DNN-specific Viterbi encoder takes an input pattern and generates a sequence of random numbers, where a "1" indicates that the corresponding parameter survives and a "0" that it is pruned. Depending on the length of the input pattern, a vast (but limited) number of output patterns, and hence candidates for the final sparse matrix representation, are considered. In this case, the input pattern itself is used as the sparse matrix index. The content of the input pattern, which generates a deterministic output random-number sequence, is chosen such that the accuracy degradation is minimized according to a user-defined cost function (more details in Section 2). Both the Viterbi encoder and the Viterbi algorithm have been shown to be computationally efficient and inherently parallelizable, as demonstrated in digital communication applications (Viterbi, 1998). In this work, we further extend their application and demonstrate how the Viterbi algorithm can be modified to perform energy-efficient DNN pruning.
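To illustrate the decompression direction, here is a toy VD in Python: a five-flip-flop shift register whose XOR taps emit four output bits per input bit. The tap positions below are placeholders of ours; the actual wiring is the one specified in Figure 1.

    import numpy as np

    def viterbi_decompress(index_bits, taps=((1, 2), (0, 3), (2, 4), (1, 4))):
        # Each cycle: shift the next index bit into the register, then emit
        # four pseudo-random output bits, one per XOR tap pair.
        ff = [0] * 5
        outputs = []
        for b in index_bits:
            ff = [b] + ff[:-1]
            outputs.append([ff[i] ^ ff[j] for i, j in taps])
        return np.array(outputs)

    mask_bits = viterbi_decompress([0, 1, 1, 0])   # 4 index bits -> 16 output bits

Reconstructing the dense matrix is then just a scatter of the stored non-zero values into the positions where the output bit is 1, which parallelizes trivially.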
In this work, we further extend the application of the Viterbi algorithm and demonstrate how it can be modified to perform energy-efficient DNN pruning.

2 PRUNING USING A VITERBI-BASED APPROACH

Figure 1 illustrates the proposed Viterbi decompressor (VD), which is based on the Viterbi encoder widely used in digital communication. The VD has a simple structure consisting only of flip-flops (FFs) and XOR gates. In this configuration, the VD takes one input bit and produces four output bits every clock cycle. Notice that the FFs and XOR gates intermingle input bits and generate pseudo-random output bits. Assume that a dense matrix is formed after pruning, as shown in Figure 2, and an input sequence of {0, 1, 1, 0} is applied to the VD over four clock cycles to generate the outputs, where '1' implies that the corresponding parameter has survived. In this case, the index overhead of the proposed Viterbi-Compressible Matrix (VCM) format is significantly smaller than that of CSR. In the VCM format, the input sequence to the VD becomes the index information. This index size is independent of the number of non-zero values and can be determined in advance based on the target index compression ratio.¹ Unlike the CSR format, the set of VD-compressible dense matrix representations is limited, meaning that not all possible dense matrix representations after conventional magnitude-based pruning (such as (Han et al., 2015)) can be reconstructed by the VD. Therefore, pruning under the VCM constraint may yield a matrix whose set of survived parameters differs from that of a pruning method using the CSR format. Thus, the key to the success of VCM is to design a VD that allows diversified parameters to survive, and to efficiently search for the optimal VD input sequence that minimizes the accuracy degradation.²

2.1 VITERBI DECOMPRESSOR (VD) DESIGN CONSIDERATIONS

If the input sequence length and the total output sequence length of a VD are denoted as p and q, respectively, then the index compression ratio can be calculated as q/p. Achieving a high index compression ratio (i.e., q >> p) implies that the 2^p possible VD-compressible dense matrix representations need to be uniformly distributed inside the 2^q space to maximize the likelihood of finding a dense matrix representation that closely matches the optimal case.

In other words, the goal of the VD is to act as a random number generator driven by the input sequence. It is interesting to note that such an effort has already been studied in ECC design (Morelos-Zaragoza, 2006). Since "random coding" was introduced by C. Shannon to prove his channel capacity theorem (Shannon, 1948), practical ECC techniques with a fixed encoding rate have been proposed to simulate random coding at an acceptable decoding complexity. We choose the Viterbi encoder, which is the base model of the VD, as a controllable random number generator because of its simplicity and its flexible design when increasing the number of outputs. The randomness of the VD outputs is determined by the number of FFs and the XOR gate configuration. We present the details of the VD design methodology in Appendix A.1.

The basic structure of the VD is similar to the design introduced in (Lee & Roy, 2012). A VD targeting DNN applications, however, requires the number and/or distribution of 1s (i.e., the pruning rate) to be a user-defined parameter, whereas in typical applications that require random number generation, such as ECC and VLSI testing, the numbers of 1s and 0s should be approximately the same.
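As a rough software model of the VD just described, the sketch below implements a 5-FF shift register whose four outputs are XOR combinations of the fresh input bit and selected FF values. The tap positions in TAPS are hypothetical (the actual generator configuration of Figure 1 is not recoverable from the extracted text), so the printed outputs will not match Figure 2; only the structure, one input bit in and four pseudo-random bits out per cycle, follows the description.

    NUM_FFS = 5                     # 2**5 = 32 trellis states, as in Figure 3
    TAPS = [                        # indices into (input, ff0, ..., ff4);
        (0, 2, 4),                  # hypothetical tap sets, one per output
        (0, 1, 3),
        (1, 2, 4),
        (0, 3, 4),
    ]

    def vd_step(state, in_bit):
        """One clock: XOR the tapped signals into out1..out4, then shift."""
        signals = (in_bit,) + state             # position 0 is the input bit
        outs = [sum(signals[t] for t in taps) % 2 for taps in TAPS]
        next_state = (in_bit,) + state[:-1]     # shift-register update
        return next_state, outs

    state = (0,) * NUM_FFS
    for bit in [0, 1, 1, 0]:                    # the Figure 2 input sequence
        state, outs = vd_step(state, bit)
        print(bit, outs)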
In order to control the pruning rate, the VD outputs are connected to binary number comparators. For instance, in Figure 1, one input of the comparator takes the two-bit number {out2, out1}, while the other input takes a user-defined threshold value (TH_c). If {out2, out1} (or {out4, out3}) is larger than TH_c, the comparator produces a "1", and a "0" otherwise. A trade-off occurs between the granularity of the pruning rate and the index compression ratio. If the number of VD outputs, the number of comparator input bits, and the number of comparators (i.e., the index compression ratio) are denoted as NUM_v, NUM_c, and R, respectively, then NUM_v = NUM_c × R (see Figure 10). The proposed index decoding operation utilizing the VD is an inherently parallel process with a small hardware overhead. Unlike CSR or other similar formats that employ an irregular index structure, decoding VCM using the VD does not incur significant buffer/memory overhead for indices and/or non-zero values, and most importantly, can be performed at a fixed and predictable rate. A fixed index compression ratio is also desirable for efficient memory bandwidth utilization and for applying the tiling technique to further improve the level of parallelism.

¹ As an example, the structure shown in Figure 1 provides four output bits per one input bit, achieving an index compression ratio of four.
² In the context of magnitude-based pruning, the objective of pruning using the VD is to identify a set of VD input sequences that preserves the maximum number of larger-value weights.

[Figure 3: Trellis diagram of the VD shown in Figure 1: the 32 current states at time T and their next states at T+1, with each transition by an input bit 0 or 1 labeled with the corresponding output bits {out1, out2, out3, out4}.]

2.2 VITERBI ALGORITHM FOR PRUNING

The basic idea of our proposed pruning method is to assign a cost function to each pruning case enabled by the VD and evaluate all possible (2^p) pruning cases with a "Branch-and-Cut" algorithm. The pruning case with the optimal cost-function value should lead to minimal accuracy degradation. The Viterbi algorithm computes the maximum-likelihood sequence in a hidden Markov model (Forney, 1973), and can be utilized as a fast and efficient exploration technique for our pruning method. Pruning using the Viterbi algorithm follows three steps.

The first step is constructing a trellis diagram, which is a time-indexed version of a state diagram. A state of the VD can be represented using the FF values, where the leftmost FF value becomes the least significant bit. If the VD has k FFs, the total number of states is 2^k. Hence, the VD in Figure 1 has a total of 32 states, as shown in Figure 3, where T is the time index. Each possible transition with an input bit (0 or 1) produces multiple corresponding output bits. A trellis diagram holds the entire operation of the VD in a compact fashion.

The next step is computing a cost function for the possible transitions using the branch metric and the path metric. The branch metric is denoted β_{i,j}^t, where t is a time index and i is a predecessor state of j; β_{i,j}^t denotes the cost of traversing the transition from i to j at time index t. By accumulating the branch metrics and selecting one of the two possible transitions reaching the same state at the same time index, the path metric λ is defined as

    λ_j^{t+1} = max( λ_{i1}^t + β_{i1,j}^t ,  λ_{i2}^t + β_{i2,j}^t ),    (1)

where i1 and i2 are the two predecessor states of j.
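A minimal sketch of the forward pass implementing this recursion is shown below. The trellis is passed in as transitions[state][in_bit] -> next_state together with a user-supplied branch(t, state, in_bit) reward function; both names are placeholders rather than the authors' implementation, and the starting mass is placed on the all-zero state only.

    def viterbi_forward(num_states, num_steps, transitions, branch):
        """Eq. (1): keep, per state, the best (max-reward) incoming path."""
        NEG = float("-inf")
        path = [0.0] + [NEG] * (num_states - 1)   # all-zero initial FF state
        decisions = []                            # survivor choices, for trace-back
        for t in range(num_steps):
            new_path = [NEG] * num_states
            choice = [None] * num_states
            for s in range(num_states):
                if path[s] == NEG:                # state not yet reachable
                    continue
                for in_bit in (0, 1):             # the two outgoing transitions
                    nxt = transitions[s][in_bit]
                    cand = path[s] + branch(t, s, in_bit)
                    if cand > new_path[nxt]:      # max over predecessors
                        new_path[nxt] = cand
                        choice[nxt] = (s, in_bit)
            decisions.append(choice)
            path = new_path
        return path, decisions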
In practice, path metrics can be normalized to avoid overflow. Note that we use the max function for the path metric instead of the min function in Eq. (1) because the metric values in our method describe a degree of 'reward' rather than 'cost'. For all of the "survived path" selections during the path metric update, the decisions are stored in memory and the old path metrics can be discarded. The objective of the Viterbi algorithm is to find a path maximizing the accumulation of the branch metrics β_{i,j}^t, which are expressed as

    D_{i,j,m}^t = ( W_{i,j,m}^t - TH_p ) / S1,    0 <= W_{i,j,m}^t, TH_p <= 1,

    β_{i,j,m}^t = {  tanh( D_{i,j,m}^t · S2 ),   when survived
                  { -tanh( D_{i,j,m}^t · S2 ),   when pruned,

    β_{i,j}^t = Σ_{m=1}^{R} β_{i,j,m}^t,    (2)

where W_{i,j,m}^t is the magnitude of the parameter at the m-th comparator output and time index t, normalized by the maximum magnitude of all parameters inside the dense matrix to be pruned, and TH_p is the pruning threshold value. Intuitively, β_{i,j,m}^t favors (discourages) the survival (pruning) of parameters with larger magnitude through the skewed tanh function. Pruning with the Viterbi algorithm is flexible in that a different cost function can be assigned to the branch metric depending on the type of pruning approach, provided the pruning algorithm follows a hidden Markov model (Lou, 1995).³ The two constants, S1 and S2, are scaling factors, empirically determined to be 5.0 and 10^4, respectively, for our experiments. Note that exploring diversified states (and hence various pruning cases) is achieved by maintaining approximately 50% '1'/'0' distributions for both the inputs and outputs of the VD (Forney, 1973). Consequently, the target pruning rate is mainly controlled by the comparator threshold value TH_c (e.g., if TH_c is a 4-bit number and TH_c = 3, then 25% (= (3 + 1) / 2^4) is the target pruning rate). TH_p is determined by considering the distribution of the parameters and the given target pruning rate (e.g., if the parameters follow a Gaussian distribution and the target pruning rate is 68.3%, a TH_p corresponding to one sigma is recommended).

³ Eq. (2) in this work is related to magnitude-based pruning.

[Figure 4: Distribution of the weights of FC1 after pruning with different NUM_v (count vs. weight value for NUM_v = 8, 20, 32, 40, 80).]
[Figure 5: Test error of retraining with different NUM_v (error rate vs. retraining epoch for NUM_v = 8, 20, 32, 40, 80 and magnitude-based pruning; baseline test error: 0.78%).]

Once the final time index is reached, as the last step of Viterbi pruning, the state with the maximum path metric is chosen, and its predecessor is traced by reading the stored survived-path selection data. We continue this trace-back procedure to the first time index of the trellis diagram. Note that if the initial states of the FFs are all 0s, then the number of available states (and hence the number of sparse matrix representations in the first few time indices) may be limited. As an alternative, a dummy input sequence with a length equal to the number of FFs⁴ in the VD can be inserted such that every state of the VD is reachable (refer to Figure 11). In this case, the compressed input index of the VCM is a combination of the survived dummy sequence and the input sequence. It should be noted that the Viterbi algorithm can be implemented using dynamic programming.
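Completing the picture, the branch metric of Eq. (2) and the final trace-back might be sketched as below. The sign convention (positive reward for keeping a weight above TH_p, penalty for keeping it on a pruned transition) and the placement of S2 inside tanh are reconstructed from the garbled equation, so read this as an interpretation rather than a verbatim transcription of the authors' code.

    import math

    S1, S2 = 5.0, 1e4          # scaling factors reported in the text
    TH_P = 0.6                 # example pruning threshold in [0, 1]

    def branch_metric(weights_t, out_bits):
        """Eq. (2): weights_t holds |w|/max|w| for the R positions covered at
        time t; out_bits holds the R comparator outputs (1 = survive)."""
        total = 0.0
        for w, keep in zip(weights_t, out_bits):
            d = (w - TH_P) / S1
            total += math.tanh(d * S2) if keep else -math.tanh(d * S2)
        return total

    def traceback(decisions, final_state):
        """Walk the stored survivor choices backwards to recover the VD input."""
        bits, s = [], final_state
        for choice in reversed(decisions):
            prev_state, in_bit = choice[s]
            bits.append(in_bit)
            s = prev_state
        return bits[::-1]      # the recovered index, i.e., the VD input sequence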
The time complexity required to find the best pruning result is O(l · 2^f), where l is the length of the input sequence and f is the number of FFs. As can be seen in Appendix A.1, f is small even with a large number of VD outputs.

3 EXPERIMENTAL RESULTS

In this section, the impact of different VD configurations and branch metric selections on the model accuracy and the index compression ratio is analyzed. We first empirically study the weight distribution after pruning and the sensitivity of accuracy using MNIST. Then, the observations from MNIST are applied to AlexNet to validate the scalability of our proposed method.

3.1 VD DESIGN AND BRANCH METRIC EXPLORATION USING MNIST

We perform experiments using the LeNet-5-like convolutional MNIST model.⁵ For simplicity, both the minimum Hamming distance and the XOR taps (introduced in Appendix A.1) are fixed to 4, and NUM_c is 4 (i.e., NUM_v = 4R). These parameters are selected for fast design exploration; increasing them enhances the randomness of the VD output and the target pruning rate resolution, which are critical to improving the pruning rate with minimal accuracy degradation.

Number of VD outputs (NUM_v): Immediately after training, we prune the weights with different NUM_v for the VD. Figure 4 shows the weight distributions after pruning in the FC1 layer with fixed TH_c and TH_p. A lower NUM_v (i.e., a lower index compression ratio) leads to sharper pruning around the weight value determined by TH_p. Hence, NUM_v provides a trade-off between accuracy and the index compression ratio. Extensive experiments indicate that for the Conv layers a low NUM_v is desired, while for the FC layers a wide range of NUM_v can lead to minimal accuracy degradation, as shown in Figure 5 (magnitude-based pruning is from (Han et al., 2015)). For MNIST, NUM_v = 8 for Conv layers and NUM_v = 40 for FC layers have been chosen to achieve the optimal trade-off between the index compression ratio and accuracy.

⁴ The storage overhead of this dummy input sequence is negligible compared to the index data storage.
⁵ https://github.com/tensorflow/tensorflow/blob/r1.3/tensorflow/examples/tutorials/mnist/mnist_deep.py

[Figure 6: Distribution of FC1's weights after pruning with different TH_p (count vs. weight value for TH_p = 0.5, 0.6, 0.7).]
[Figure 7: Test error of retraining with different TH_p (error rate vs. retraining epoch for TH_p = 0.60, 0.63, 0.67, 0.70; baseline test error: 0.78%).]
[Figure 8: Distributions of pruned (left) and survived (right) FC1 weights with different skip states (1, 3, and 7 skip states).]

Pruning threshold value (TH_p): Even when the parameters before pruning follow a known distribution (e.g., Gaussian), it may still be an iterative task to search for the optimal TH_p that results in the target pruning rate, especially with a high NUM_v, as evident from Figure 4. Thus, it is necessary to investigate the sensitivity of accuracy to TH_p. As shown in Figure 6, TH_p affects the distributions of survived weights and the pruning rates given the same TH_c.
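One simple, data-driven way to pick TH_p for a given target pruning rate (our shortcut for illustration, not the authors' search procedure) is to read it off the empirical quantile of the normalized magnitudes:

    def thp_for_target_rate(weights, target_rate):
        """Return a TH_p so that roughly `target_rate` of the normalized
        magnitudes fall below it (assumes 0 < target_rate <= 1)."""
        mags = sorted(abs(w) for w in weights)
        mags = [m / mags[-1] for m in mags]       # normalize to [0, 1]
        k = min(int(target_rate * len(mags)), len(mags) - 1)
        return mags[k]

    weights = [0.01, -0.3, 0.07, 0.9, -0.05, 0.2, -0.6, 0.02]
    print(thp_for_target_rate(weights, 0.75))     # ~0.667 for this toy set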
Note that if the actual pruning rate differs from the target pruning rate, then the VD outputs exhibit a skewed supply of '1's or '0's to the comparators, and the trellis-diagram path exploration is also biased. Nevertheless, Figure 7 clearly shows that all the retraining processes converge despite the minor discrepancy between the target and actual pruning rates (the target pruning rate is 93.75%).

Skip state (Appendix A.2): Up to now, we have only considered the case where one input bit is supplied to the VD at every clock cycle. However, if n input bits are provided to the VD at every clock cycle, then n - 1 time indices in the trellis diagram are skipped. While this results in a lower index compression ratio, defined as R / (skip state + 1), the skip state allows for more diverse state exploration and improves the pruning quality. As can be seen in Figure 8, a greater number of larger-magnitude weights are preserved as the number of skip states increases while fixing both TH_p and NUM_v. In this work, the default skip state is one. (A short software sketch of this multi-bit step appears later in this subsection.)

[Figure 9: Distributions of pruned (left) and survived (right) FC1 weights with different branch metric equations (tanh(x), e^x, x, x^2, σ(x)).]

Table 1: MNIST test error and comparator threshold values with gradual pruning. Pruning is performed at the 50th epoch (50% target pruning rate), the 100th epoch (70% target pruning rate), and the 150th epoch (final). 40 VD outputs are used for FC1, and 8 VD outputs for the others. (The accompanying plot shows the LeNet-5 test error over 250 epochs for magnitude-based pruning and the proposed Viterbi-based pruning.)

    Layer   TH_c (50th epoch)   TH_c (100th epoch)   TH_c (150th epoch)
    Conv1   4                   4                    4
    Conv2   7                   10                   12
    FC1     7                   10                   14
    FC2     7                   10                   12

Table 2: Sparse matrix comparison on MNIST using magnitude-based pruning (Han et al., 2015) and our proposed Viterbi-based pruning. We assume that 16 bits are used for the non-zero values and the index of magnitude-based pruning.

    Layer   Weight Size   Pruning Rate (Magnitude)   CSR Size     Pruning Rate (Viterbi)   VCM Size    Size Reduction
    Conv1   0.8K          34.4%                      2.12KB       32.3%                    1.16KB      45.3%
    Conv2   51.2K         87.4%                      25.41KB      81.3%                    24.98KB     1.7%
    FC1     3211.3K       91.0%                      1125.54KB    93.1%                    512.82KB    54.4%
    FC2     10.2K         81.1%                      7.62KB       80.4%                    5.17KB      32.2%
    Total   3273.5K       90.9%                      1160.69KB    92.8%                    544.13KB    53.1%
    Test Error              0.77%                                   0.78%

Branch metric: For the branch metric, a variety of functions, such as e^x and the sigmoid function σ(x), have been investigated, as shown in Figure 9. Among them, the tanh function is selected due to its pruning sharpness and its low sensitivity to TH_p and NUM_v.

Based on the observations discussed above, we conducted a pruning and retraining process, and compared the test errors of the magnitude-based pruning method (Han et al., 2015) and the proposed Viterbi-based pruning method. For every round of pruning, all the weights, including the ones pruned in the previous round, are considered.
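Before turning to the results, here is the promised skip-state sketch. With n = skip state + 1 input bits consumed per transition, each trellis state fans out to 2^n successors; a single-bit step function such as the earlier VD model is passed in as a parameter, so this is a structural sketch only.

    from itertools import product

    def vd_multi_step(vd_step, state, in_bits):
        """One clock with n input bits: n single-bit VD steps back to back."""
        outs = []
        for b in in_bits:
            state, o = vd_step(state, b)
            outs.extend(o)
        return state, outs

    # Each transition now carries n bits, so there are 2**n branches per
    # state, and the index compression ratio drops from R to R / n.
    n = 2                                    # skip state = 1, the default here
    print(list(product((0, 1), repeat=n)))   # the 4 candidate bit patterns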
Table 1 lists the comparator threshold values TH_c (MIN = 0, MAX = 15 with NUM_c = 4) used for each pruning round, together with the test error results. Since Conv1 is close to the input nodes, we choose a smaller TH_c to reduce its target pruning rate. From Table 1, it is clear that the proposed pruning method successfully maintains accuracy during the entire training process.

The final pruning rate and the memory requirements of CSR and VCM for each layer are summarized in Table 2. Notice that the sparse matrix represented in the VCM format leads to a significant memory footprint reduction (by 53.1%) compared to the sparse matrix represented in CSR at a similar pruning rate. This is because VCM's index storage is reduced by 85.2% compared to CSR's index size. Even if the CSR index is represented as a 5-bit relative index (Han et al., 2016b), at the expense of increased index decoding complexity, the VCM index size is still smaller by 52.7%.⁶

In summary, VCM is superior to CSR due to its encoded index format, which requires less storage and enables a parallel dense-matrix reconstruction process through the VD, while maintaining a comparable model accuracy.

3.2 ALEXNET ON IMAGENET RESULTS

We verified the scalability of the VCM format and the Viterbi-based pruning method using the AlexNet model on ImageNet. The number of VD outputs is 50 for both the FC1 and FC2 layers (NUM_v = 50, NUM_c = 5, R = 10) and 8 for the other layers (NUM_v = 8, NUM_c = 4, R = 2). Similar to the MNIST results, a higher index compression ratio is set for the layers with a larger number of weights. Since the skip state is one, the index compression ratio becomes R/2. The minimum Hamming distance and the XOR taps are 4. Table 3 presents the pruning rates and matrix sizes, assuming that the non-zero weights and the CSR index are stored in a 16-bit format.

Table 3: Pruning and sparse matrix size comparison for AlexNet on ImageNet using magnitude-based pruning (Han et al., 2015) and our proposed Viterbi-based pruning. We assume that 16 bits are used for the non-zero values and the index of magnitude-based pruning.

    Layer    Weight Size   Pruning Rate (Magnitude)   CSR Size       Pruning Rate (Viterbi)   VCM Size      Size Reduction
    Conv1*   34.8K         16%                        69.70KB        -                        69.70KB       0.0%
    Conv2    307.2K        62%                        467.46KB       62.5%                    268.99KB      42.5%
    Conv3    884.7K        65%                        1239.40KB      62.3%                    777.21KB      37.3%
    Conv4    663.6K        63%                        982.82KB       62.0%                    586.73KB      40.3%
    Conv5    442.4K        63%                        655.22KB       56.0%                    444.83KB      32.1%
    FC1      37.7M         91%                        13597.74KB     90.3%                    8284.93KB     39.1%
    FC2      16.8M         91%                        6047.99KB      90.8%                    3505.43KB     42.0%
    FC3      4.1M          75%                        4098.00KB      73.7%                    2670.18KB     34.8%
    Total    61.0M         89%                        27158.31KB     88.2%                    16607.99KB    38.1%
    Test Error (Top-1)                                42.73%                                  42.68%
    Test Error (Top-5)                                19.77%                                  19.78%
    * The dense matrix size is reported for this layer because both the CSR and VCM representations would result in a larger memory footprint due to the low pruning rate.

The 38.1% reduction in matrix size achieved by VCM is mainly due to the significant reduction in the index storage requirement (83.9%). Compared with the 4-bit relative index scheme introduced in (Han et al., 2016b), the index size of VCM is reduced by 35.5%. The index compression advantage of the proposed technique is largely attributed to the VD's limited search space within all possible encodable index formats, whereas pruning methods employing traditional sparse matrix formats impose no such restriction. Despite this limitation, both methods achieve similar top-1 and top-5 classification accuracy with the same retraining time.

⁶ Additional size reduction techniques, such as quantizing the non-zero weights and Huffman coding (Han et al., 2016b), can also be applied to our method.
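The storage numbers in Tables 2 and 3 can be roughly reproduced with a back-of-the-envelope model. The sketch below is our simplification: it charges 16 bits of value plus 16 bits of index per CSR non-zero and one VD input bit per `ratio` weight positions for VCM, ignoring row pointers and alignment, so it only approximately tracks the reported sizes.

    def sparse_sizes_kb(num_weights, prune_rate, vcm_ratio):
        """Approximate CSR vs. VCM sizes in KB for one layer."""
        nnz = int(num_weights * (1.0 - prune_rate))
        csr_kb = nnz * (16 + 16) / 8 / 1024              # value + index per nnz
        vcm_kb = (nnz * 16 + num_weights / vcm_ratio) / 8 / 1024
        return csr_kb, vcm_kb

    # AlexNet FC1: ~37.7M weights, 90.3% pruned, effective ratio R/2 = 5.
    # Prints roughly (14285, 8063) KB; Table 3 reports 13597.74KB / 8284.93KB.
    print(sparse_sizes_kb(37.7e6, 0.903, 5))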
4 RELATED WORK

Denil et al. (2013) demonstrated that most neural network parameters have significant redundancy. This redundancy increases the system complexity and causes overfitting with small training datasets. Several approaches have been suggested to prune deep neural networks and increase the sparsity of the parameters in order to minimize both the memory overhead and the computation time, and to avoid overfitting.

Chauvin (1989) and Hanson & Pratt (1989) introduced additional cost biases into the objective function to decay the unimportant parameters. LeCun et al. (1990) and Hassibi et al. (1993) suggested pruning parameters while minimizing the increase in error approximated by the Hessian matrix. Optimal Brain Damage (OBD) (LeCun et al., 1990) restricts the Hessian matrix to be diagonal to reduce the computational burden, at the cost of additional performance degradation. Optimal Brain Surgeon (OBS) (Hassibi et al., 1993) used the full Hessian matrix, with additional computation cost, to improve the pruning performance.

Han et al. (2015) proposed pruning deep neural networks by removing parameters based on the magnitude of their absolute values and then iteratively retraining the pruned network. A 9× and 13× pruning rate was achieved for AlexNet and VGG-16, respectively, without loss of accuracy on the ImageNet dataset. A follow-up paper compressed the pruned network further with weight sharing and Huffman coding (Han et al., 2016b). Although an impressive compression rate is achieved by these methods, the irregular sparsity of the survived parameters and the associated complicated index decoding process prevent common hardware such as GPUs from achieving a noticeable speed-up. Alternatively, Han et al. (2016a) designed a dedicated hardware accelerator to circumvent this problem.

Recently, several papers have suggested hardware-efficient iterative pruning methods to realize a faster inference speed and a smaller model size. Molchanov et al. (2017c) suggested iterative pruning at the feature-map level based on a heuristic approach to evaluate the importance of parameters. This work, which shares a similar idea with OBS, uses a first-degree Taylor polynomial to estimate the importance of each parameter with a reduced computational burden. Since the method prunes feature maps rather than individual parameters, a sparse matrix format is not required, at the cost of a lower pruning rate. Li et al. (2017) suggested pruning whole convolution kernels together with the corresponding feature maps in CNNs. Similar to Molchanov et al. (2017c), this coarse-level pruning avoids the use of a sparse matrix format at the expense of a lower pruning rate. Park et al. (2017) introduced a high-performance sparse convolution algorithm, where the sparse convolution is formulated as sparse-matrix-dense-matrix multiplication with the dense matrix generated on the fly. The paper shows that this method can improve the inference speed of pruned networks with moderate sparsity, and can prune each parameter independently, leading to a better pruning rate. However, the results were only demonstrated on CPUs; it was not shown whether the method can also be applied to throughput-oriented hardware such as GPUs.

Ardakani et al. (2017) proposed a scheme to generate a masking matrix using linear-feedback shift registers (LFSRs) to randomly prune some of the synaptic weight connections. Even though the hardware structure for pruning can be simplified, it is not possible to selectively prune connections to improve the pruning quality.
In addition, that scheme can only be applied to fully-connected layers, not to convolution layers.

Kingma et al. (2015) explained Gaussian Dropout as a special case of Bayesian regularization. Unlike Gaussian Dropout, which treats dropout rates as hyperparameters, Variational Dropout theoretically allows training dropout rates layer-wise, or even weight-wise. However, the paper did not include any experimental result on weight-wise Variational Dropout. Molchanov et al. (2017a) extended Kingma et al. (2015), showed a working case of weight-wise Variational Dropout, and suggested exploiting this characteristic of Variational Dropout to prune deep neural networks. By pruning out weights with a high dropout rate, high sparsity was achieved on a deep neural network for the CIFAR-10 classification task. Molchanov et al. (2017b) and Louizos et al. (2017) suggested pruning deep neural networks in a structured fashion with new Bayesian models.
As a result, our proposed sparse matrix,VCM, shows noticeable index storage reduction even compared with the relative index scheme.Fixed index compression ratio and inherently parallel reconstruction scheme allows a wide range ofapplications, such as SpMM, since sparse matrices can be converted into dense matrices efficiently.ACKNOWLEDGMENTSThis research was in part supported by the MSIT(Ministry of Science and ICT), Korea, under theICT Consilience Creative program(IITP-2017-R0346-16-1007) supervised by the IITP(Institute forInformation & Communications Technology Promotion)
SktHYC_xM
The authors use Viterbi encoding to dramatically compress the sparse matrix index of a pruned network, reducing one of the main memory overheads of a pruned neural network and speeding up inference in the parallel setting.
7: Good paper, accept
quality: this paper is of good quality
clarity: this paper is very clear but contains a few minor typos/grammatical mistakes (missing -s for plurals, etc.)
originality: this paper is original
significance: this paper is significant

PROS
- Using ECC theory for reducing the memory footprint of a neural network seems both intuitive and innovative, while being grounded in well-understood theory.
- The authors address a consequence of current approaches to neural network pruning, i.e., the high cost of sparse matrix index storage.
- The results are extensive and convincing.

CONS
- The authors mention in the introduction that this encoding can speed up inference by allowing efficient parallel sparse-to-dense matrix conversion, and hence batch inference, but do not provide any experimental confirmation.

Main questions
- It is not immediately clear to me why the objective function (2) correlates to a good accuracy of the pruned network. Did you try out other functions before settling on this one, or is there a larger reason for which (2) is a logical choice?
- On a related note, I would find a plot of the final objective value assigned to a pruning scheme compared to the true network accuracy very helpful in understanding how these two correlate.
- Could this approach be generalized to RNNs?
- How long does the Viterbi pruning algorithm take, as it explores all 2^p possible prunings?
- How difficult is it to tune the pruning algorithm hyper-parameters?
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Viterbi-based Pruning for Sparse Matrix with Fixed and High Index Compression Ratio ### Paper Abstract Weight pruning has proven to be an effective method in reducing the model size and computation cost while not sacrificing the model accuracy. Conventional sparse matrix formats, however, involve irregular index structures with large storage requirement and sequential reconstruction process, resulting in inefficient use of highly parallel computing resources. Hence, pruning is usually restricted to inference with a batch size of one, for which an efficient parallel matrix-vector multiplication method exists. In this paper, a new class of sparse matrix representation utilizing Viterbi algorithm that has a high, and more importantly, fixed index compression ratio regardless of the pruning rate, is proposed. In this approach, numerous sparse matrix candidates are first generated by the Viterbi encoder, and then the one that aims to minimize the model accuracy degradation is selected by the Viterbi algorithm. The model pruning process based on the proposed Viterbi encoder and Viterbi algorithm is highly parallelizable, and can be implemented efficiently in hardware to achieve low-energy, high-performance index decoding process. Compared with the existing magnitude-based pruning methods, index data storage requirement can be further compressed by 85.2% in MNIST and 83.9% in AlexNet while achieving similar pruning rate. Even compared with the relative index compression technique, our method can still reduce the index storage requirement by 52.7% in MNIST and 35.5% in AlexNet. ### Paper Keywords ["pruning", "sparse matrix", "memory footprint", "model size", "model compression"] ### Paper Content ABSTRACTWeight pruning has proven to be an effective method of reducing the model sizeand computation cost without sacrificing its model accuracy. Conventional sparsematrix formats, however, involve irregular index structures with large storage re-quirement and a sequential reconstruction process, resulting in inefficient use ofhighly parallel computing resources. Hence, pruning is usually restricted to infer-ence with a batch size of one, for which an efficient parallel matrix-vector multi-plication method exists. In this paper, a new class of sparse matrix representationis proposed utilizing the Viterbi algorithm that has a high, and more importantly,fixed index compression ratio regardless of the pruning rate. In this approach,numerous sparse matrix candidates are first generated by the Viterbi encoder, andthe candidate that aims to minimize the model accuracy degradation is then se-lected by the Viterbi algorithm. The model pruning process based on the proposedViterbi encoder and Viterbi algorithm is highly parallelizable, and can be imple-mented efficiently in hardware to achieve low-energy and a high-performanceindex decoding process. Compared with the existing magnitude-based pruningmethods, the index data storage requirement can be further compressed by 85.2%in MNIST and 83.9% in AlexNet while achieving a similar pruning rate. 
Evencompared with the relative index compression technique, our method can still re-duce the index storage requirement by 52.7% in MNIST and 35.5% in AlexNet.1 I NTRODUCTIONDeep neural networks (DNNs) demand an increasing number of parameters as the required com-plexity of tasks and supporting number of training data continue to grow (Bengio & Lecun, 2007).Correspondingly, DNN incurs a considerable number of computations and amount of memory foot-print, and thus requires high performance parallel computing systems to meet the target responsetime. As an effort to realize energy-efficient DNN, researchers have suggested various low-costhardware implementation techniques. Among them, pruning has been actively studied to reduce theredundant connections while not degrading the model accuracy. It has been shown that pruning canachieve 9to13reduction in connections (Han et al., 2015).After pruning, the remaining parameters are often stored in sparse matrix formats. Different waysof representing indices of non-zero values constitute the different sparse matrix format, and havea significant impact on the level of achievable computational parallelism when a sparse matrix isused as an input operand (Bell & Garland, 2009). If the format is not properly designed, then theperformance of DNN with a sparse matrix can be even lower than the case with dense matrix (Yuet al., 2017). The two most important characteristics of a hardware-friendly sparse matrix format are1) reducing index storage footprint and 2) parallelizable index decoding process. As a compromisebetween index size reduction and index decoding complexity, numerous formats have been proposed(Bell & Garland, 2009).Work done while at POSTECH.1Published as a conference paper at ICLR 2018DDDDDinputout1out2out3out40 0 0 00 1 1 00 0 1 00 1 0 10 1 0 0Figure 1: Viterbi decompressor (VD) structure.0 0 0 05 8 0 00 0 3 00 6 0 0= [ 5 8 3 6 ]= [ 0 0 2 3 4 ]= [ 0 1 2 1 ]Dense Matrix after PruningCSR FormatAIAIAJA= [ 5 8 3 6 ]= [ 0 1 1 0 ]VCM FormatOutputs of Viterbi decompressor[ 0 0 0 0 ] 1stcycle[ 1 1 0 0 ] 2ndcycle[ 0 0 1 0 ] 3rdcycle[ 0 1 0 0 ] 4thcycleFigure 2: CSR Format and the proposed sparse matrix format comparison.DNN after pruning heavily involves sparse matrix-vector and matrix-matrix multiplications (SpMVand SpMM, respectively). Despite the sparse content, the computation time for SpMM is longerthan that of dense matrix multiplication in the modern graphic processing unit (GPU), due to itsserialized index decoding process and irregular memory access patterns. For example, the inferencelatency of AlexNet and VGG16 with SpMM can be increased by 2to5on GPUs or CPUs(Han et al., 2016a). The traditional pruning technique, therefore, is only attractive in the case whereSpMV can be utilized (i.e., batch size of 1) (Han et al., 2016b) (Yu et al., 2017). Therefore, a sparsematrix representation associated with parallelizable dense-matrix reconstruction in a wide range ofcomputing operations is the key to extending the use of pruning.We propose a new DNN-dedicated sparse matrix format and a new pruning method based on error-correction coding (ECC) techniques. A unique characteristic of this sparse matrix format is the fixed,yet high (as shown in Section 3) index compression ratio, regardless of the pruning rate. Moreover,sparse-to-dense matrix conversion employing the proposed format becomes a parallel process andis no longer the performance bottleneck. 
Notice that conventional sparse matrix formats entail atleast one column or row index value for each non-zero parameter such that the amount of index datais larger than that of non-zero values. On the other hand, the proposed approach compresses thelocations of non-zero values with a convolutional code which is a type of ECC code. Consequently,the size of the sparse matrix index becomes negligible.Conventional pruning approaches first identify the parameter candidates to be pruned, then constructa matrix (often sparse) using formats such as Compressed Sparse Row (CSR) to represent the sur-vived parameters. On the contrary, in the proposed scheme, pruning is performed in a restrictedmanner since a specific sparse matrix format is first constructed. A DNN-specific Viterbi encodertakes an input pattern and generates a sequence of random-number, where a “1” indicates the pa-rameter had survived, and had been pruned otherwise. Depending on the length of the input pattern,a vast (but limited) number of output patterns (hence candidates of the final sparse matrix represen-tations) are considered. In this case, the input pattern is used as the sparse matrix index. The contentof the input pattern, which generates a deterministic output random number sequence, is chosensuch that the accuracy degradation is minimized based on a user-defined cost function (more detailson Section 2). Both the Viterbi encoder and the algorithm have been shown to be computationallyefficient with an inherent parallelizable characteristic, as demonstrated in the digital communicationapplications (Viterbi, 1998). In this work, we further extend its application and demonstrate how theViterbi algorithm can be modified to perform energy-efficient DNN pruning.2Published as a conference paper at ICLR 20182 P RUNING USING VITERBI -BASED APPROACHFigure 1 illustrates the proposed Viterbi decompressor (VD), which is based on the Viterbi encoderwidely used in digital communication. VD has a simple structure consisting only of FlipFlops (FFs)and XOR gates. In this configuration, VD takes one input bit and produces four output bits everyclock cycle. Notice that FFs and XOR gates intermingle input bits and generate pseudo randomnumber outputs. Assume that a dense matrix is formed after pruning, as shown in Figure 2, and aninput sequence of f0;1;1;0gis applied to VD through four clock cycles to generate the outputs,where ‘1’ implies that the corresponding parameter has survived. In this case, the overhead in theindex for the proposed Viterbi-Compressible Matrix (VCM) format is significantly less than thatof CSR. In the VCM format, the input sequence to the VD becomes the index information. Thisindex size is independent of the number of non-zero values and can be determined in advance basedon the target index compression ratio1. Unlike the CSR format, the available VD-compressibledense matrix representation is limited, meaning that not all possible dense matrix representationsafter conventional magnitude-based pruning (such as (Han et al., 2015)) can be reconstructed byVD. Therefore, the pruning method considering VCM may result in a matrix that contains differentsurvived parameters compared to a pruning method using the CSR format. 
Thus, the key to thesuccess of VCM is to design a VD that allows diversified parameters to survive, and to efficientlysearch for the optimal VD input sequence that minimizes the accuracy degradation2.2.1 V ITERBI DECOMPRESSOR (VD) D ESIGN CONSIDERATIONSIf the input sequence length and the total output sequence length of a VD are denoted as pandq,respectively, then the index compression ratio can be calculated as q=p. Achieving a high indexcompression ratio (i.e., q >>p ) implies that the possible 2pVD-compressible dense matrix repre-sentations need to be uniformly distributed inside the 2qspace to maximize the likelihood of findinga dense matrix representation that is closely matched to the optimal case.In other words, the goal of VD is to act as a random number generator using the input sequence. Itis interesting to note that such an effort has already been studied in ECC design (Morelos-Zaragoza,2006). Since “random coding” has been introduced by C. Shannon to prove his channel capacitymodel (Shannon, 1948), practical ECC techniques with a fixed encoding rate was proposed to simu-late random coding with an allowed decoding complexity. We choose the Viterbi encoder, which isthe base model of VD, as a controllable random number generator because of its simplicity and flex-ible design when increasing the number of outputs. The randomness of VD outputs is determinedby the number of FFs and the XOR gates configuration. We present the details of the VD designmethodology in Appendix A.1.The basic structure of VD is similar to the design introduced in (Lee & Roy, 2012). VD targetingDNN applications, however, requires the number and/or distribution of 1(i.e., pruning rate) to be auser-defined parameter, whereas in the typical applications that require random number generation,such as ECC and VLSI testing, the number of 1s and 0s should be approximately the same. Inorder to control the pruning rate, the VD outputs are connected to binary number comparators. Forinstance, in Figure 1, one input of the comparator takes a two-bit number fout2;out1g, while theother input takes a user-defined threshold value ( THc). Iffout2;out1g(orfout4;out3g) is largerthan THc, the comparator produces a “1”, and a “0” otherwise. A trade-off occurs between thegranularity of the pruning rate and the index compression ratio. If the number of VD outputs, thenumber of comparator input bits, and the number of comparators (i.e., index compression ratio)are denoted as NUM v,NUM c, andR, respectively, then NUM v=NUM cR(see Figure 10).The proposed index decoding operation utilizing VD is inherently a parallel process with a smallhardware overhead. Unlike CSR or other similar formats that employ an irregular index structure,decoding VCM using VD does not incur significant buffer/memory overhead for indices and/or non-zero values, and most importantly, can be performed with a fixed and predictable rate. 
A fixed indexcompression ratio is also desirable for efficient memory bandwidth utilization and for applying thetiling technique to further improve the level of parallelism.1As an example, the structure shown in Figure 1 provides four output bits per one input bit, achieving anindex compression ratio of four2In the context of magnitude-based pruning, the objective of pruning using the VD is to identify a set of VDinput sequences that preserves maximum number of larger value weights3Published as a conference paper at ICLR 20180 01 1600001100001111111 23 171010011010010101T T+12 45 1810010101101001103 67 190011111100001100T T+114 2829 30101001101001010115 3031 310010111000011101T T+1{out1,out2,out3,out4}CurrentStateNextStateTransition by 0 Transition by 1State numberFigure 3: Trellis diagram of VD shown in Figure 1.2.2 V ITERBI ALGORITHM FOR PRUNINGThe basic idea of our proposed pruning method is to assign a cost function to each pruning caseenabled by VD and evaluate all possible ( 2p) pruning cases with a “Branch-and-Cut” algorithm.The pruning case that has the optimal (i.e., lowest) cost function should lead to minimal accuracydegradation. The Viterbi algorithm computes the maximum-likelihood sequence in a hidden Markovmodel (Forney, 1973), and can be utilized as a fast and efficient pruning exploration technique forour pruning method. Pruning using Viterbi algorithm follows the next 3 steps.The first step involves constructing a trellis diagram which is a time-indexed version of a statediagram. A state of VD can be represented using FF values, where the leftmost FF value becomesthe least significant bit. If VD has kFFs, the total number of states is 2k. Hence, VD in Figure 1 hasa total of 32 states as shown in Figure 3, where Tis the time index. Each possible transition withan input bit ( 0or1) produces multiple corresponding output bits. A trellis diagram holds the entireoperations inside VD in a compact fashion.The next step involves computing a cost function for possible transitions using the branch metric andthe path metric. The branch metric is expressed as i;jtwheretis a time index and iis a predecessorstate ofj.i;jtdenotes the cost of traversing along a transition from itojat the time index t. Byaccumulating the branch metrics and selecting one of two possible transitions reaching the samestate at the same time index, the path metric is defined asjt+1= maxi1t+i1;jt;i2t+i2;jt; (1)wherei1andi2are two predecessor states of j. In practice, path metrics can be normalized toavoid overflow. Note that we use max function for the path metric instead of min function in Eq.(1) because the metric values in our method describe a degree of ‘reward’ rather than ‘cost’. Forthe entire “survived path” selections during the path metric update, the decisions are stored in thememory and the old path metrics can be discarded. The objective of this Viterbi algorithm is to finda path maximizing the accumulation of the branch metrics ( i;jt), which is expressed as:Di;j;mt =Wi;j;mtTHp=S1;0Wi;j;mt;THp1i;j;mt =8<:tanhDi;jtS2; when survivedtanhDi;jtS2;when pruned; i;jt=RXm=1i;j;mt;(2)whereWi;j;mt is the magnitude of a parameter at the mthcomparator output and time index t, nor-malized by the maximum magnitude of all parameters inside the dense matrix to be pruned, andTHpis the pruning threshold value. Intuitively, i;j;mt favors(discourages) the survival(pruning) ofparameters with larger magnitude through the skewed tanh function. 
Pruning with the Viterbi algo-rithm is flexible such that different cost function can be assigned to the branch metric, depending onthe type of pruning approach, providing the pruning algorithm follows a hidden Markov model (Lou,1995)3. The two constants, S1andS2, are the scaling factors, and are empirically determined to be3Eq. (2) in this work is related to magnitude-based pruning4Published as a conference paper at ICLR 2018024681012-0.3 -0.2 -0.1 0 0.1 0.2 0.3×103CountWeight valueDistribution of survived weights after pruningNUMv = 8NUMv = 20NUMv = 32NUMv = 40NUMv = 80Figure 4: Distribution of the weights of FC1 afterpruning with different NUM v. 0.5 1 1.5 2 2.5 3 1 20 40 60 80 100 120 140Baseline test error: 0.78 %Error rate (%)Retraining epochTest error of retraining with different NUMvNUMv = 8NUMv = 20NUMv = 32NUMv = 40NUMv = 80Magnitude-based PruningFigure 5: Test error of retraining with differentNUM v.5.0 and 104, respectively, for our experiments. Note that exploring diversified states (and hence, var-ious pruning cases) is achieved by maintaining approximately 50% of the ‘1’ and ‘0’ distributionsfor both inputs and outputs of VD (Forney, 1973). Consequently, the target pruning rate is mainlycontrolled by the comparator threshold value, THc(e.g., if THcis a 4-bit number and THc=3, then25%(= (3 + 1) =24)is the target pruning rate). THpis determined by considering the distribution ofparameters and the given target pruning rate (e.g., if the parameters follow a Gaussian distributionand the target pruning rate is 68:3%,THpcorresponding to one sigma is recommended).Once the final time index is reached, as the last step of Viterbi pruning, the state with the maximumpath metric is chosen, and the previous state is traced by reading the surviving path selection data.We continue this trace-back procedure to the first time index of a trellis diagram. Note that if theinitial states of FFs are all 0s, then the number of available states (hence the number of sparsematrix representations in the first few time indices) may be limited. As an alternative, a dummyinput sequence having the length equal to the number of FFs4in VD can be inserted such thatevery state of VD is reachable (refer to Figure 11). In this case, the compressed input index of theVCM is a combination of the survived dummy sequence and the input sequence. It should be notedthat the Viterbi algorithm can be implemented using a dynamic programming technique. The timecomplexity required to find the best pruning method becomes O(l2f)wherelis the length of theinput sequence and fis the number of FFs. As can be seen in Appendix A.1, fis small even with alarge number of VD outputs.3 E XPERIMENTAL RESULTSIn this section, the impact of different VD configurations and branch metric selections on modelaccuracy and the index compression ratio is analyzed. We empirically study the weight distributionafter pruning and the sensitivity of accuracy using MNIST. Then, the observations from MNIST areapplied to AlexNet to validate the scalability of our proposed method.3.1 VD D ESIGN AND BRANCH METRIC EXPLORATION USING MNISTWe perform experiments using the LeNet-5-like convolutional MNIST model5. For simplicity, boththe minimum Hamming distance and the XOR taps (introduced in Appendix A.1) are fixed to be 4,andNUM cis 4 (i.e., NUM v=4R). 
These parameters are selected for fast design exploration, andincreasing them will enhance randomness of VD output and target pruning rate resolution which arecritical to improving pruning rate with minimal accuracy degradation.Number of VD outputs ( NUM v): Immediately after training, we prune the weights with differentNUM vfor VD. Figure 4 shows the weight distributions after pruning in the FC1 layer with fixedTHcandTHp. Lower NUM v(i.e, lower index compression ratio) leads to sharper pruning around4The storage overhead of this dummy input sequence is negligible compared to the index data storage5https://github.com/tensorflow/tensorflow/blob/r1.3/tensorflow/examples/tutorials/mnist/mnist deep.py5Published as a conference paper at ICLR 20180123456789-0.3 -0.2 -0.1 0 0.1 0.2 0.3×103CountWeight valueDistribution of survived weights after pruningTHp = 0.5THp = 0.6THp = 0.7Figure 6: Distribution of FC1’s weights afterpruning with different THp. 0.5 1 1.5 2 2.5 3 1 20 40 60 80 100 120 140Baseline test error: 0.78 %Error rate (%)Retraining epochTest error of retraining with different THpTHp = 0.60THp = 0.63THp = 0.67THp = 0.70Figure 7: Test error of retraining with differentTHp.01234567-0.3 -0.2 -0.1 0 0.1 0.2 0.3×104CountWeight valueDistribution of pruned weights1 Skip states3 Skip states7 Skip states024681012-0.3 -0.2 -0.1 0 0.1 0.2 0.3×103CountWeight valueDistribution of survived weights after pruning1 Skip states3 Skip states7 Skip statesFigure 8: Distributions of pruned (Left) and survived (Right) FC1 weights with different skip state.the weight determined by THp. Hence, NUM vprovides a trade-off between accuracy and the indexcompression ratio. Extensive experiments indicate that for the Conv layer, a low NUM vis desired,while for the FC layer, a wide range of NUM vcan lead to minimal accuracy degradation as shownin Figure 5 (magnitude-based pruning is from (Han et al., 2015)). For MNIST, NUM v=8 for Convlayers and NUM v=40 for FC layers have been chosen to achieve optimal trade-off between the indexcompression ratio and accuracy.Pruning threshold value ( THp): Even when the parameters before pruning follow a known distri-bution (e.g., Gaussian), it may still be an iterative task to search for an optimal THpthat results inthe target pruning rate, especially with high NUM v, as evident from Figure 4. Thus, it is necessaryto investigate the sensitivity of accuracy to THp. In Figure 6, THpaffects distributions of survivedweights and pruning rates given the same THc. Note that if the actual pruning rate differs from thetarget pruning rate, then VD outputs exhibit skewed supply of ‘1’s or ‘0’s to the comparators andthe trellis diagram path exploration is also biased. In contrast, Figure 7 clearly shows that all theretraining processes converge, despite the minor discrepancy between the target and actual pruningrate (target pruning rate is 93.75%).Skip state (Appendix A.2): Up to now, we have only considered the case where one input bit issupplied to VD at every clock cycle. However, if ninput bits are provided to VD at every clockcycle, thenn1time indices in a trellis diagram are skipped. While this results in a lower indexcompression ratio, which is defined as R/ (skip state + 1), the skip state allows for a more diversestate exploration and improves the pruning quality. As can be seen in Figure 8, a greater number oflarger magnitude weights are preserved with increasing number of skip states while fixing both THpandNUM v. 
In this work, the default skip state is one.6Published as a conference paper at ICLR 201801234567-0.3 -0.2 -0.1 0 0.1 0.2 0.3×104CountWeight valueDistribution of pruned weightstanh(x)exxx2σ(x)0123456789-0.3 -0.2 -0.1 0 0.1 0.2 0.3×103CountWeight valueDistribution of survived weights after pruningtanh(x)exxx2σ(x)Figure 9: Left: Distribution of pruned (Left) and survived (Right) FC1 weights with different branchmetric equations.Table 1: MNIST test error and comparator threshold values with gradual pruning. Pruning is per-formed at the 50thepoch (50% target pruning rate), 100thepoch (70% target pruning rate),and150thepoch (final). 40 VD outputs are used for FC1, while 8 VD outputs for the others. 1 2 3 4 5 7 50 100 150 200 250Error rate (%)EpochLeNet-5 Test ErrorMagnitude-based PruningProposed Viterbi-based PruningLayercomparator threshold value50th100th150thEpoch Epoch EpochConv1 4 4 4Conv2 7 10 12FC1 7 10 14FC2 7 10 12Table 2: Sparse matrix comparison with MNIST using magnitude-based pruning (Han et al., 2015)and our proposed Viterbi-based pruning. We assume that 16 bits are used for the non-zero valuesand index for magnitude-based pruning.LayerMagnitude-Based Viterbi-Based MatrixWeight Pruning Sparse Matrix Pruning Sparse Matrix SizeSize Rate (CSR) Size Rate (VCM) Size ReductionConv1 0.8K 34.4% 2.12KB 32.3% 1.16KB 45.3%Conv2 51.2K 87.4% 25.41KB 81.3% 24.98KB 1.7%FC1 3211.3K 91.0% 1125.54KB 93.1% 512.82KB 54.4%FC2 10.2K 81.1% 7.62KB 80.4% 5.17KB 32.2%Total 3273.5K 90.9% 1160.69KB 92.8% 544.13KB 53.1%Test Error 0.77% 0.78%Branch Metric : For the branch metric, a variety of functions, such as exand the sigmoid function(x), has been investigated, as shown in Figure 9. Among them, the “ tanh ” function is selected dueto its pruning sharpness and low sensitivity to THpandNUM v.Based on the observations discussed above, we conducted a pruning and retraining process, andcompared the test errors of the magnitude-based pruning method (Han et al., 2015) and the proposedViterbi-based pruning method. For every round of pruning, all the weights, including the onespruned in the previous run, are considered. Table 1 illustrates the comparator threshold values THc7Published as a conference paper at ICLR 2018Table 3: Pruning and sparse matrix size comparison for AlexNet on ImageNet using magnitude-based pruning (Han et al., 2015) and our proposed Viterbi-based pruning. We assume that 16 bitsare used for the non-zero values and index for magnitude-based pruning.LayerMagnitude-Based Viterbi-Based MatrixWeight Pruning Sparse Matrix Pruning Sparse Matrix SizeSize Rate (CSR) Size Rate (VCM) Size ReductionConv1 34.8K 16% 69.70KB- 69.70KB0.0%Conv2 307.2K 62% 467.46KB 62.5% 268.99KB 42.5%Conv3 884.7K 65% 1239.40KB 62.3% 777.21KB 37.3%Conv4 663.6K 63% 982.82KB 62.0% 586.73KB 40.3%Conv5 442.4K 63% 655.22KB 56.0% 444.83KB 32.1%FC1 37.7M 91% 13597.74KB 90.3% 8284.93KB 39.1%FC2 16.8M 91% 6047.99KB 90.8% 3505.43KB 42.0%FC3 4.1M 75% 4098.00KB 73.7% 2670.18KB 34.8%Total 61.0M 89% 27158.31KB 88.2% 16607.99KB 38.1%Test Error (Top-1) 42.73% 42.68%Test Error (Top-5) 19.77% 19.78%Dense matrix size is considered in this layer because both CSR and VCM representation result in a largermemory footprint due to the low pruning rate.(MIN=0, MAX=15 with NUM c=4) used for each pruning round and test error results. 
Since Conv1is close to the input nodes, we choose a smaller THcto reduce the target pruning rate of Conv1.From Table 1, it is clear that the proposed pruning method successfully maintains accuracy duringthe entire training process.The final pruning rate and memory requirement for CSR and VCM for each layer are summarizedin Table 2. Notice that the sparse matrix represented using the VCM format leads to a significantmemory footprint reduction (by 53.1% ) compared to the sparse matrix represented with CSR with asimilar pruning rate. This is because VCM’s index storage is reduced by 85.2% compared to CSR’sindex size. Even if the CSR is represented with relative index using 5 bits (Han et al., 2016b), at theexpense of increased index decoding complexity, the VCM index size is still smaller by 52.7%6.In summary, VCM is superior to CSR due to its encoded index format that requires a smaller stor-age requirement and parallel dense matrix reconstruction process through VD while maintaining acomparable model accuracy.3.2 A LEXNET ON IMAGENET RESULTSWe verified the scalability of the VCM and Viterbi-based pruning methods using the AlexNet modelon ImageNet. The number of VD outputs is 50 for both the FC1 and FC2 layers ( NUM v=50,NUM c=5,R=10) and 8 for the other layers ( NUM v=8,NUM c=4,R=2). Similar to the MNISTresults, a higher index compression ratio is set for layers with larger number of weights. Since theskip state is one, the index compression ratio becomes R/2. The minimum Hamming distance andthe XOR taps are 4. Table 3 presents the pruning rates and matrix sizes assuming that non-zeroweights and CSR index are stored with 16-bit format.The38.1% reduction in matrix size achieved using VCM is mainly due to the significant reduction inthe index storage requirement ( 83.9% ). Compared with the 4-bit relative index scheme introducedin (Han et al., 2016b), the index size of VCM is reduced by 35.5% . The advantage of the indexcompression ratio of the proposed technique is largely attributed to the VD’s limited search spaceout of all possible encodable index formats, while pruning methods employing traditional sparsematrix formats do not consider such restriction. Despite such limitation, both methods achievesimilar top-1 and top-5 classification accuracy with the same retraining time.6Additional size reductions techniques, such as quantizing non-zero weights and Huffman coding (Hanet al., 2016b), can also be applied to our methods8Published as a conference paper at ICLR 20184 R ELATED WORKDenil et al. (2013) demonstrated that most neural networks parameters have significant redundancy.The redundancy increases the system complexity, and causes overfitting with small training dataset.Several approaches have been suggested to prune deep neural networks and increase the sparsity ofparameters in order to minimize both the memory overhead and the computation time, and avoidoverfitting.Chauvin (1989) and Hanson & Pratt (1989) introduced additional cost biases to the objective func-tion to decay the unimportant parameters. LeCun et al. (1990) and Hassibi et al. (1993) suggestedpruning parameters while minimizing the increase of error approximated by Hessian matrix. Opti-mal Brain Damage (OBD) (LeCun et al., 1990) restricts the Hessian matrix, forcing it to be diagonalto reduce the computational burden, at the cost of additional performance degradation. OptimalBrain Surgeon (OBS) (Hassibi et al., 1993) used a full Hessian matrix with additional computationcost to improve the pruning performance.Han et al. 
Han et al. (2015) proposed pruning deep neural networks by removing parameters based on the magnitude of their absolute values and then iteratively retraining the pruned network. Pruning rates of 9x and 13x were achieved for AlexNet and VGG-16, respectively, without loss of accuracy on the ImageNet dataset. A follow-up paper compressed the pruned network further with weight sharing and Huffman coding (Han et al., 2016b). Although an impressive compression rate is achieved by these methods, the irregular sparsity of the surviving parameters and the associated complicated index decoding process prevent common hardware such as GPUs from achieving a noticeable speed-up. Alternatively, Han et al. (2016a) designed a dedicated hardware accelerator to circumvent this problem.

Recently, several papers have suggested iterative hardware-efficient pruning methods to realize faster inference and smaller model sizes. Molchanov et al. (2017c) suggested iterative pruning at the feature-map level based on a heuristic approach to evaluating the importance of parameters. This paper, which shares a similar idea with OBS, uses a first-order Taylor polynomial to estimate the importance of each parameter with a reduced computational burden. Since the method prunes feature maps rather than individual parameters, a sparse matrix format is not required, at the cost of a lower pruning rate. Li et al. (2017) suggested pruning whole convolution kernels together with the corresponding feature maps in CNNs. Similar to Molchanov et al. (2017c), this coarse-level pruning avoids the use of a sparse matrix format, at the expense of a lower pruning rate. Park et al. (2017) introduced a high-performance sparse convolution algorithm, where the sparse convolution is formulated as sparse-matrix-dense-matrix multiplication with the dense matrix generated on the fly. The paper shows that this method can improve the inference speed of pruned networks with moderate sparsity, and can prune each parameter independently, leading to a better pruning rate. However, the results were only demonstrated on CPUs; it was not shown whether the proposed method can also be applied to throughput-oriented hardware such as GPUs.

Ardakani et al. (2017) proposed a scheme that generates a masking matrix using linear-feedback shift registers (LFSRs) to randomly prune some of the synaptic weight connections. Even though the hardware structure for pruning can be simplified, it is not possible to selectively prune connections to improve the pruning quality. In addition, the scheme can only be applied to fully-connected layers, not to convolution layers.

Kingma et al. (2015) explained Gaussian Dropout as a special case of Bayesian regularization. Unlike Gaussian Dropout, which treats dropout rates as hyperparameters, Variational Dropout theoretically allows training dropout rates layer-wise, or even weight-wise. However, the paper did not include any experimental results on weight-wise Variational Dropout. Molchanov et al. (2017a) extended Kingma et al. (2015), showed a working case of weight-wise Variational Dropout, and suggested using this characteristic of Variational Dropout to prune deep neural networks. By pruning out weights with a high dropout rate, high sparsity was achieved on a deep neural network for the CIFAR-10 classification task. Molchanov et al. (2017b) and Louizos et al. (2017) suggested pruning deep neural networks in a structured format with new Bayesian models.
These works prune deep neural networks either neuron-wise or channel-wise, keeping the weight matrices in dense format. Both papers showed state-of-the-art sparsity of deep neural networks on the CIFAR-10 classification task.

In multiple works, attempts have been made to reduce the redundancy with popular lossy compression methods. Denton et al. (2014) applies low-rank approximations to pre-trained weights. Gong et al. (2014) uses vector quantization to compress deep convolutional neural networks. Chen et al. (2015) suggests HashedNets, which applies hashing tricks to reduce model sizes. Iandola et al. (2016) achieves AlexNet-level accuracy using 50x fewer parameters with SqueezeNet, which is comprised of custom convolution filters called Fire modules. These methods are orthogonal to network pruning and can be combined with it to achieve further model compression. For example, SqueezeNet combined with Deep Compression (Han et al., 2016b) achieves a 510x compression ratio compared to the original AlexNet.

5 FUTURE WORK

Many other ECC techniques have been reported that could also be used to search for sparse matrix forms with high index compression (Morelos-Zaragoza, 2006). Efficient parallel ECC decoding and encoding implementations have also been proposed and realized (Zhang, 2015). We believe that efforts to combine existing and new ECC techniques/algorithms with DNN pruning methods open a new dimension in realizing energy-efficient and high-performance DNNs. Even though the proposed approach is best suited to dedicated ASICs or FPGAs, the inherently parallel characteristics of the VD and the Viterbi algorithm can also be exploited on GPUs through the construction of new kernels and libraries. We have not considered quantization of non-zero weight values or entropy-related coding design in this paper. In the future, such considerations can be embedded into the branch metric or path metric equations.

6 CONCLUSION

We proposed a new DNN-dedicated sparse matrix format and pruning method using the Viterbi encoder structure and the Viterbi algorithm. Unlike previous methods, we first consider only a limited set of candidate pruning results, all of which enjoy a significant index compression ratio thanks to our proposed index decompressing structures. One particular pruning result is then selected from this limited solution space by the Viterbi algorithm, with user-defined branch metric equations that aim to minimize the accuracy degradation. As a result, our proposed sparse matrix format, VCM, shows a noticeable index storage reduction even compared with the relative index scheme. The fixed index compression ratio and the inherently parallel reconstruction scheme allow a wide range of applications, such as SpMM, since sparse matrices can be converted into dense matrices efficiently.

ACKNOWLEDGMENTS

This research was in part supported by the MSIT (Ministry of Science and ICT), Korea, under the ICT Consilience Creative program (IITP-2017-R0346-16-1007) supervised by the IITP (Institute for Information & Communications Technology Promotion).<|im_end|> <|im_start|>assistant
### Review Title
The authors use Viterbi encoding to dramatically compress the sparse matrix index of a pruned network, reducing one of the main memory overheads of a pruned neural network and speeding up inference in the parallel setting.
### Review Text
quality: this paper is of good quality
clarity: this paper is very clear but contains a few minor typos/grammatical mistakes (missing -s for plurals, etc.)
originality: this paper is original
significance: this paper is significant
PROS
- Using ECC theory for reducing the memory footprint of a neural network seems both intuitive and innovative, while being grounded in well-understood theory.
- The authors address a consequence of current approaches to neural network pruning, i.e., the high cost of sparse matrix index storage.
- The results are extensive and convincing.
CONS
- The authors mention in the introduction that this encoding can speed up inference by allowing efficient parallel sparse-to-dense matrix conversion, and hence batch inference, but do not provide any experimental confirmation.
Main questions
- It is not immediately clear to me why the objective function (2) correlates with a good accuracy of the pruned network. Did you try out other functions before settling on this one, or is there a larger reason for which (2) is a logical choice?
- On a related note, I would find a plot of the final objective value assigned to a pruning scheme compared to the true network accuracy very helpful in understanding how these two correlate.
- Could this approach be generalized to RNNs?
- How long does the Viterbi pruning algorithm take, as it explores all 2^p possible prunings?
- How difficult is it to tune the pruning algorithm hyper-parameters?
### Review Rating
7: Good paper, accept
### Review Confidence
3: The reviewer is fairly confident that the evaluation is correct<|im_end|>
f75kMo1dnKD
thecvf.com/ECCV/2020/Workshop/VIPriors
2020
Injecting Prior Knowledge into Image Caption Generation
["Arushi Goel", "Basura Fernando", "Thanh-Son Nguyen", "Hakan Bilen"]
Automatically generating natural language descriptions from an image is a challenging problem in artificial intelligence that requires a good understanding of the visual and textual signals and the correlations between them. The state-of-the-art methods in image captioning struggle to approach human-level performance, especially when data is limited. In this paper, we propose to improve the performance of the state-of-the-art image captioning models by incorporating two sources of prior knowledge: (i) a conditional latent topic attention that uses a set of latent topics as an anchor to generate highly probable words, and (ii) a regularization technique that exploits the inductive biases in the syntactic and semantic structure of captions and improves the generalization of image captioning models. Our experiments validate that our method produces more human-interpretable captions and also leads to significant improvements on the MSCOCO dataset in both the full and low data regimes.
["Image Captioning", "Prior", "Attention", "Regularization"]
Injecting Prior Knowledge into Image Caption Generation
Anonymous ECCV submission, Paper ID 8

Abstract. Automatically generating natural language descriptions from an image is a challenging problem in artificial intelligence that requires a good understanding of the visual and textual signals and the correlations between them. The state-of-the-art methods in image captioning struggle to approach human-level performance, especially when data is limited. In this paper, we propose to improve the performance of the state-of-the-art image captioning models by incorporating two sources of prior knowledge: (i) a conditional latent topic attention that uses a set of latent variables (topics) as an anchor to generate highly probable words, and (ii) a regularization technique that exploits the inductive biases in the syntactic and semantic structure of captions and improves the generalization of image captioning models. Our experiments validate that our method produces more human-interpretable captions and also leads to significant improvements on the MSCOCO dataset in both the full and low data regimes.

1 Introduction

In recent years there has been a growing interest in developing end-to-end learning algorithms for computer vision tasks. Despite the success on many problems such as image classification [17] and person recognition [21], the state-of-the-art methods struggle to reach human-level performance on more challenging tasks such as image captioning within limited time and data, which involves understanding the visual scenes and describing them in a natural language. This is in contrast to humans, who are effortlessly successful in understanding scenes they have never seen before and communicating them in language. It is likely that this efficiency is due to strong prior knowledge of structure in the visual world and language [11].

Motivated by this observation, in this paper we ask "How can such prior knowledge be represented and utilized to learn better image captioning models with deep neural networks?". To this end, we look at the state-of-the-art encoder-decoder image captioning methods [39, 41, 3], where a Convolutional Neural Network (CNN) encoder extracts an embedding from the image and a Recurrent Neural Network (RNN) decoder generates the text based on the embedding. This framework typically contains two dynamic mechanisms to model the sequential output: i) an attention module [4, 41] that identifies the relevant parts of the image embedding based on the previous word and visual features, and ii) the RNN decoder
that predicts the next words based on its previous state and the attended visual features. While these two components are very powerful for modelling complex relations between the visual and language cues, we hypothesize that they are also capable of, and at the same time prone to, overfitting to wrong correlations, thus leading to poor generalization performance when the data is limited. Hence, we propose to regulate these modules with two sources of prior knowledge.

[Figure 1 diagram: CNN encoder, LSTM, CLTA with an LDA topic prior, and a sentence prior; example outputs "A man jumping up to hit a tennis ball." and "A man standing on a tennis court holding a racket."] Fig. 1: Our final model with Conditional Latent Topic Attention (CLTA) and sentence prior (Sentence Auto-Encoder (SAE) regularizer). Both rely on prior knowledge to find relevant words and generate non-template-like and generalized captions, compared to the same baseline caption for both images: "A man hitting a tennis ball with a racket."

First, we propose an attention mechanism that accurately attends to relevant image regions and better copes with complex associations between words and image regions. For instance, in the example of a "man playing tennis", the input visual attention encoder might only look at the local features (tennis ball), leaving out the global visual information (tennis court). Hence, it generates a trivial caption such as "A man is hitting a tennis ball", which is not the full description of the image in context (as shown in fig. 1). We solve this ambiguity by incorporating prior knowledge of latent topics [7], which are known to identify semantically meaningful topics [8], into our attention module. In particular, we introduce a Conditional Latent Topic Attention (CLTA) module that models the relationship between a word and image regions through a latent shared space, i.e. latent topics, to find salient regions in an image. "Tennis ball" steers the model to associate this word with the latent topic "tennis", which in turn is responsible for localizing the tennis court in the image. If a region-word pair has a higher probability with respect to a latent topic, and the same topic has a higher probability with respect to some other regions, then those are also salient regions and will be highly weighted. Therefore, we compute two sets of probabilities conditioned on the current word of the captioning model. We use a conditional-marginalized probability, where marginalization is done over the latent topics, to find the salient image regions for generating the next word. Our CLTA is modeled as a neural network in which this marginalized probability is used to weight the image region features and obtain a context vector that is passed to the image captioning decoder to generate the next word.

Second, the complexity of the structure of natural language makes it harder to generate fluent sentences while preserving a high amount of encoded information (high Bleu-4 scores). Although current image captioning models are able to model this linguistic structure, the generated captions follow a more template-like form, for instance, "A man hitting a tennis ball with a racket." As shown in fig. 1, visually similar images receive template-like captions from the baseline model. Inspired by sequence-to-sequence (seq2seq) machine translation [35, 28, 40, 16], we introduce a new regularization technique for captioning models coined the SAE Regularizer.
In particular, we design and train an additional seq2seq sentence auto-encoder model ("SAE") that first reads in a whole sentence as input, generates a fixed-dimensional vector, and then uses the vector to reconstruct the input sentence. Human languages are highly structured and follow an immense amount of regularity. Certain words are more likely to co-appear and certain word patterns can be observed more often. Our SAE is trained to learn the structure of the input (sentence) space in an offline manner by exploiting the regularity of the sentence space. The continuous latent space learned by the SAE blends together both the syntactic and semantic information from the input sentence space and generates high-quality sentences during reconstruction via the SAE decoder. This suggests that the continuous latent space of the SAE contains sufficient information regarding the syntactic and semantic structure of input sentences. Specifically, we use SAE-Dec as an auxiliary decoder branch (see fig. 3). Adding this regularizer forces the representations from the image encoder and language decoder to be more representative of the visual content and less likely to overfit. SAE-Dec is employed along with the original image captioning decoder ("IC-Dec") to output the target sentence during training; however, we do not use the SAE regularizer at test time, avoiding additional computation.

Both of the proposed improvements also help to overcome the problem of training on large image-caption paired data [26, 27] by incorporating prior knowledge which is learned from unstructured data in the form of latent topics and the SAE. These priors, also known as "inductive biases", help the models make inferences that go beyond the observed training data. Through an extensive set of experiments, we demonstrate that our proposed CLTA module and SAE-Dec regularizer improve image captioning performance both in the limited-data and full-data training regimes on the MSCOCO dataset [26].

2 Related Work

Here, we first discuss related attention mechanisms and then the use of knowledge transfer in image captioning models.

Attention mechanisms in image captioning. The pioneering work in neural machine translation [4, 29, 9] has shown that attention in encoder-decoder architectures can significantly boost the performance in sequential generation tasks. Visual attention is one of the biggest contributors in image captioning [15, 41, 3, 19]. Soft attention and hard attention variants for image captioning were introduced in [41]. Bottom-up and top-down self-attention is effectively used in [3]. Attention on attention is used in recent work [19]; interestingly, they use attention at both the encoder and the decoder step of the captioning process. Our proposed attention differs significantly from these attention mechanisms. First, the traditional attention methods, soft-attention [4] and scaled dot-product attention [36], aim to find features or regions in an image that highly correlate with a word representation [3, 4, 34]. In contrast, our conditional latent topic attention uses latent variables, i.e. topics, as anchors to find relationships between word representations and image regions (features).
Some image regions and word representations may project to the same set of latent topics more than others and are therefore more likely to co-occur. Our method learns to model these relationships between word representations and image region features using our latent space. We allow competition among regions and among latent topics to compute two sets of probabilities for finding salient regions. This competing strategy, and our latent topics guided by pre-trained LDA topics [7], allow us to better model the relationships between visual features and word representations. Hence, the neural structure and our attention mechanism are quite different from all prior work [41, 3, 19, 4].

Knowledge transfer in image captioning. It is well known that language contains semantic and syntactic biases [5, 30]. We exploit these biases by first training a recurrent caption auto-encoder to capture this useful information, following [35]. Our captioning auto-encoder is trained to reconstruct the input sentence, and hence its decoder encapsulates the structural, syntactic and semantic information of the input captions. During the captioning process we regularize the captioning RNN with this pretrained caption decoder to exploit biases in the language domain and transfer them to the visual-language domain. To the best of our knowledge, no prior work has attempted such knowledge transfer in image captioning. Zhou et al. [46] encode external knowledge in the form of knowledge graphs using ConceptNet [27] to improve image captioning. The closest to ours is the work of [42], which proposes to generate scene graphs from both sentences and images and then encode the scene graphs into a common dictionary before decoding them back to sentences. However, generating scene graphs from images is itself an extremely challenging task. Finally, we propose to transfer syntactic and semantic information as a regularization technique during the image captioning process via an auxiliary loss. Our experiments suggest that this leads to considerable improvements, especially in more structured measures such as CIDEr [37].

3 Method

In this section, we first review image captioning with attention, then introduce our CLTA mechanism and our sentence auto-encoder (SAE) regularizer.

3.1 Image Captioning with Attention

Image captioning models are based on an encoder-decoder architecture [41] that uses a CNN as the image encoder and a Long Short-Term Memory (LSTM) [18] as the decoder (see Fig. 1).

The encoder takes an image as input and extracts a feature set $v = \{v_1, \ldots, v_R\}$ corresponding to $R$ regions of the image, where $v_i \in \mathbb{R}^D$ is the $D$-dimensional feature vector for the $i$-th region. The decoder outputs a caption $y$ by generating one word at each time step. At time step $t$, the feature set $v$ is combined into a single vector $v_a^t$ by taking a weighted sum:

$v_a^t = \sum_{i=1}^{R} \alpha_i^t v_i$    (1)

where $\alpha_i^t$ is the CLTA weight for region $i$ at time $t$, which is explained in the next section.
The decoder LSTM then takes the concatenated vector $[v_a^t \,|\, E y_{t-1}]$ and the previous hidden state $h_{t-1}$ as input and generates the next hidden state $h_t$:

$h_t = \Phi([v_a^t \,|\, E y_{t-1}], h_{t-1}; \Theta)$    (2)

where $|$ denotes concatenation, $y_{t-1} \in \mathbb{R}^K$ is the one-hot vector of the word generated at time $t-1$, $K$ is the vocabulary size, $h_t \in \mathbb{R}^n$ is the hidden state of the LSTM at time $t$, $n$ is the LSTM dimensionality, and $\Theta$ denotes the trainable parameters of the LSTM. Finally, the decoder predicts the output word by applying a linear mapping on the hidden state and $v_a^t$:

$y_t = \Psi([h_t \,|\, v_a^t]; \Omega)$    (3)

where $\Omega$ denotes trainable parameters. Our LSTM implementation closely follows the formulation in [45]. The word embedding matrix $E \in \mathbb{R}^{m \times K}$ is trained to translate one-hot vectors to word embeddings as in [41], where $m$ is the word embedding dimension. In the next section, we describe our proposed CLTA mechanism.

3.2 CLTA: Conditional Latent Topic Attention

At time step $t$, our CLTA module takes the previous LSTM hidden state ($h_{t-1}$) and the image features and outputs the attention weights $\alpha^t$. Specifically, we use a set of latent topics to model the associations between the textual ($h_{t-1}$) and visual features ($v$) when computing the attention weights. The attention weight for region $i$ is obtained by conditional marginalization over the latent topic $l$:

$\alpha_i^t = P(\mathrm{region}=i \,|\, h_{t-1}, v) = \sum_{l=1}^{C} P(\mathrm{region}=i \,|\, h_{t-1}, v, l)\, P(l \,|\, h_{t-1}, v_i)$    (4)

where $l$ is a topic variable in the $C$-dimensional latent space. To compute $P(l \,|\, h_{t-1}, v_i)$, we first project both textual and visual features to a common $C$-dimensional shared latent space and obtain the associations by summing the projected features:

$q_i^t = W_{sc} v_i + W_{hc} h_{t-1}$    (5)

where $W_{sc} \in \mathbb{R}^{C \times D}$ and $W_{hc} \in \mathbb{R}^{C \times n}$ are the trainable projection matrices for visual and textual features, respectively. Then the latent topic probability is given by:

$P_L = P(l \,|\, h_{t-1}, v_i) = \dfrac{\exp(q_{il}^t)}{\sum_{k=1}^{C} \exp(q_{ik}^t)}$    (6)

Afterwards, we compute the probability of a region given the textual and visual features and the latent topic variable:

$r_i^t = W_{sr} v_i + W_{hr} h_{t-1}$    (7)

$P(\mathrm{region}=i \,|\, h_{t-1}, v, l) = \dfrac{\exp(r_{il}^t)}{\sum_{k=1}^{R} \exp(r_{kl}^t)}$    (8)

where $W_{sr} \in \mathbb{R}^{C \times D}$ and $W_{hr} \in \mathbb{R}^{C \times n}$ are the trainable projection matrices for visual and textual features, respectively.

[Figure 2 examples - image 1, Up-Down: "A dirty bathroom with a toilet and a sink."; CLTA: "A bathroom with a toilet and a roll of toilet paper."; top-20 topic words: toilet, bathroom, white, floor, small, wall, next, sitting, tiled, tile, seat, urinal, public, restroom, stall, room, paper roll, lid, dirty. Image 2, Up-Down: "A kitchen with a refrigerator and a stove."; CLTA: "A kitchen with wooden cabinets and stainless steel appliances."; top-20 topic words: kitchen, refrigerator, cabinet, white, sink, appliance, counter, fridge, small, wood, stove, wooden, steel, large, floor, stainless, area, top, clean, island.] Fig. 2: Image-caption pairs generated from our CLTA module with 128 dimensions, and a visualization of the top-20 words from the latent topics.

The latent topic posterior in eq. (6) is pushed towards the pre-trained LDA topic prior by adding a KL-divergence term to the image captioning objective. We apply Latent Dirichlet Allocation (LDA) [7] on the caption data. Each caption then has an inferred topic distribution $Q_T$ from the LDA model, which acts as a prior on the latent topic distribution $P_L$.
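Before turning to how this prior is enforced, eqs. (4)-(8) can be condensed into a short module. The following PyTorch sketch assumes single-image tensors (v of shape R x D, h of shape n) and is an illustrative rendering of the equations, not the authors' released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CLTA(nn.Module):
    """Sketch of Conditional Latent Topic Attention, eqs. (4)-(8)."""

    def __init__(self, D, n, C):
        super().__init__()
        self.Wsc = nn.Linear(D, C, bias=False)  # visual -> topic space
        self.Whc = nn.Linear(n, C, bias=False)  # textual -> topic space
        self.Wsr = nn.Linear(D, C, bias=False)
        self.Whr = nn.Linear(n, C, bias=False)

    def forward(self, v, h):                     # v: (R, D), h: (n,)
        q = self.Wsc(v) + self.Whc(h)            # (R, C), eq. (5)
        p_topic = F.softmax(q, dim=1)            # P(l | h, v_i), eq. (6)
        r = self.Wsr(v) + self.Whr(h)            # (R, C), eq. (7)
        p_region = F.softmax(r, dim=0)           # P(i | h, v, l), eq. (8)
        alpha = (p_region * p_topic).sum(dim=1)  # marginalize topics, eq. (4)
        context = (alpha.unsqueeze(1) * v).sum(dim=0)  # weighted sum, eq. (1)
        # p_topic is also what gets pushed toward the LDA prior Q_T
        # (eq. (10)) and appended to the decoder input (eq. (11)).
        return context, alpha, p_topic
```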
For this, we take the average of the $C$-dimensional latent topics over all time steps $0, \ldots, t-1$:

$P_{L_{avg}} = \dfrac{1}{t} \sum_{k=0}^{t-1} P(l \,|\, h_k, v_i)$    (9)

Hence, the KL-divergence objective is defined as:

$D_{KL}(P_{L_{avg}} \,\|\, Q_T) = \sum_{c \in C} P_{L_{avg}}(c) \log\dfrac{P_{L_{avg}}(c)}{Q_T(c)}$    (10)

This learnt latent topic distribution captures the semantic relations between the visual and textual features in the form of visual topics, and therefore we also use the latent posterior $P_L$ as a source of meaningful information during the generation of the next hidden state. The modified hidden state $h_t$ from eq. (2) is now given by:

$h_t = \Phi([v_a^t \,|\, E y_{t-1} \,|\, P_L], h_{t-1}; \Theta)$    (11)

We visualize the distribution of latent topics in Figure 2. While traditional "soft-max" attention exploits simple correlations between textual and visual information, we make use of latent topics to model the associations between them.

3.3 SAE Regularizer

Encoder-decoder methods are widely used for translating one language to another [10, 35, 4]. When the input and target sentences are the same, these models function as auto-encoders by first encoding an entire sentence into a fixed low-dimensional vector in a latent space and then reconstructing it. Auto-encoders are commonly employed for unsupervised training in text classification [13] and machine translation [28].

In this paper, our SAE regularizer has two advantages: i) it acts as a soft constraint on the image captioning model, regularizing the syntactic and semantic space of the captions for better generalization, and ii) it encourages the image captioning model to extract more context information for better modelling of long-term memory. These two properties of the SAE regularizer lead to semantically meaningful captions with syntactic generalization, and prevent the generation of naive, template-like captions.

Our SAE model uses the network architecture of [35] with Gated Recurrent Units (GRU) [12]. Let us denote the parameters of the decoder GRU by $\Theta_D$. A stochastic variation of the vanilla sentence auto-encoder is the de-noising auto-encoder [38], which is trained to "de-noise" corrupted versions of its inputs. To inject such input noise, we drop each word in the input sentence with a probability of 50%, to reduce the contribution of any single word to the semantics of the sentence. We train the SAE model in an offline stage on the training set of the captioning dataset. After the SAE model is trained, we discard its encoder and integrate only its decoder to regularize the captioning model.

As depicted in Figure 3, the pretrained SAE decoder takes the last hidden state vector of the captioning LSTM, $h$, as input and generates an extra caption (denoted as $y_{sae}$) in addition to the output of the captioning model (denoted as $y_{lstm}$).
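A minimal sketch of the de-noising GRU sentence auto-encoder described above: the 1024-d sizes, the 50% word drop and teacher forcing follow the text, while the token handling (e.g. mapping dropped words to id 0 as a pad/unk slot) is an assumption for illustration.

```python
import torch
import torch.nn as nn

class SentenceAE(nn.Module):
    def __init__(self, vocab_size, dim=1024, drop_p=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)
        self.drop_p = drop_p

    def forward(self, tokens):                        # tokens: (B, T) word ids
        # De-noising: drop each input word with probability drop_p
        # (assumes id 0 is a pad/unk token).
        keep = torch.rand_like(tokens, dtype=torch.float) > self.drop_p
        noisy = tokens * keep.long()
        _, z = self.encoder(self.embed(noisy))        # z: (1, B, dim) summary
        dec, _ = self.decoder(self.embed(tokens), z)  # teacher-forced decode
        return self.out(dec)                          # (B, T, V) logits
```

After offline training, only `decoder` and `out` would be kept and fed from the captioning LSTM's hidden state, mirroring Figure 3.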
We use the output of the SAE decoder only at training time, to regulate the captioning model by implicitly transferring the latent structure previously learned by the SAE decoder. Our integrated model is optimized to generate two accurate captions (i.e. $y_{sae}$ and $y_{lstm}$) by minimizing a weighted average of the two loss values:

$\arg\min\; \gamma\, L(y^*, y_{lstm}) + (1-\gamma)\, L(y^*, y_{sae})$    (12)

where $L$ is the cross-entropy loss computed for each caption, word by word, against the ground-truth caption $y^*$, $\gamma$ is the trade-off parameter, and the minimization is over the parameters of our model.

[Figure 3 diagram: the image captioning model's LSTM hidden state feeds the SAE decoder, producing $y_{sae}$ alongside $y_{lstm}$, with losses $L(y^*, y_{lstm})$ and $L(y^*, y_{sae})$.] Fig. 3: Illustration of our proposed Sentence Auto-Encoder (SAE) regularizer with the image captioning decoder. The captioning model is trained with the SAE decoder added as an auxiliary branch, which thus acts as a regularizer.

We consider two scenarios during our experimentation:
- First, we set the parameters of the SAE decoder, $\Theta_D$, to the weights of the pre-trained SAE decoder and freeze them while optimizing Equation (12) with respect to the captioning parameters $\{\Theta, \Omega, E\}$.
- Second, we initialize $\Theta_D$ with the weights of the pre-trained SAE decoder and fine-tune it along with the LSTM parameters, i.e. we optimize over $\{\Theta, \Omega, E, \Theta_D\}$.

As discussed in section 3.2, we also minimize the KL divergence of eq. (10) along with the regularized objective of eq. (12), giving the final objective:

$\arg\min\; \gamma\, L(y^*, y_{lstm}) + (1-\gamma)\, L(y^*, y_{sae}) + \lambda\, D_{KL}(P_{L_{avg}} \,\|\, Q_T)$    (13)

where $\lambda$ is the weight of the KL-divergence loss.

Discussion. An alternative way of exploiting the information in the pre-trained SAE model is to bring the representations of the captioning decoder closer to the encodings of the SAE encoder by minimizing the Euclidean distance between the hidden state of the SAE encoder and the hidden state of the captioning decoder at each time step. However, we found this setting too restrictive on the learned hidden state of the LSTM.

4 Experiments

Dataset. Our models are evaluated on the standard MSCOCO 2014 image captioning dataset [26]. For fair comparison, we use the same data splits for training, validation and testing as in [22], which have been used extensively in prior work. This split has 113,287 images for training and 5k images each for validation and testing, with 5 captions per image. We perform evaluation with all relevant metrics for generated sentences: CIDEr [37], Bleu [31], METEOR [14], ROUGE-L [25] and SPICE [2].

Implementation Details. For training our image captioning model, we compute the image features with the Bottom-Up architecture proposed by [3], where the model is trained using a Faster-RCNN [32] on the Visual Genome dataset [24] with object and attribute information. These features are extracted from R regions, each of dimension D, where R and D are 36 and 2048, respectively, as proposed in [3]. We use these 36x2048 image features in all our experiments.
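Tying the pieces of section 3.3 together, the joint objective of eqs. (12)-(13) can be written in a few lines of PyTorch; the tensor shapes and names here are assumptions for illustration, while the gamma schedule and lambda = 0.1 follow section 4.1:

```python
import torch.nn.functional as F

def joint_loss(logits_lstm, logits_sae, targets, kl_term, gamma=0.9, lam=0.1):
    """Eqs. (12)-(13): logits are (B, T, V), targets are (B, T) word ids,
    kl_term is D_KL(P_Lavg || Q_T) from eq. (10). In the paper gamma is
    annealed from 0.7 to 0.9 and lam is fixed to 0.1."""
    ce_lstm = F.cross_entropy(logits_lstm.flatten(0, 1), targets.flatten())
    ce_sae = F.cross_entropy(logits_sae.flatten(0, 1), targets.flatten())
    return gamma * ce_lstm + (1.0 - gamma) * ce_sae + lam * kl_term
```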
4.1 Experimental Setup

LDA Topic Models. The LDA [7] model is learned in an offline manner to generate a $C$-dimensional topic distribution for each caption. Briefly, the LDA model treats the captions as word-documents, groups the words into $C$ topics (clusters of words), learns the word distribution for each topic ($C \times V$, where $V$ is the vocabulary size), and also generates a topic distribution $Q_T$ for each input caption, where each of the $C$ dimensions denotes the probability of that topic.

Sentence Auto-Encoder. The sentence auto-encoder is trained offline on the MSCOCO 2014 captioning dataset [26] with the same splits as discussed above. For the architecture, we use a single-layer GRU for both the encoder and the decoder. The word embeddings are learned with the network using an embedding layer, and the dimension of both the hidden state and the word embeddings is 1024. During training, the decoder is trained with teacher forcing [6] with a probability of 0.5. For inference, the decoder decodes until it reaches the end-of-caption token. The learning rate for this network is 2e-3 and it is trained using the ADAM [23] optimizer.

Image Captioning Decoder with SAE Regularizer. The architecture of our image captioning decoder is the same as the Up-Down model [3], with their "soft-attention" replaced by our CLTA module and trained with the SAE regularizer. We also retrain the AoANet model proposed by Huang et al. [19], incorporating our CLTA module and the SAE regularizer. In the results section, we show improvements over the Up-Down and AoANet models using our proposed approaches. Note that the parameters for training the Up-Down and AoANet baselines are the same as in their original settings. While training the captioning models together with the SAE decoder, we jointly learn an affine embedding layer (dimension 1024) combining the embeddings from the image captioning decoder and the SAE decoder. During inference, we use beam search to generate captions from the captioning decoder, with a beam size of 5 for Up-Down and a beam size of 2 for AoANet. For training the overall objective function given in Equation (13), the value of $\gamma$ is initialized to 0.7 and increased by a factor of 1.1 every 5 epochs until it reaches 0.9, and $\lambda$ is fixed to 0.1. We use the ADAM optimizer with a learning rate of 2e-4. Our code is implemented using PyTorch [1] and will be made publicly available.

5 Results and Analysis

First, we study the caption reconstruction performance of the vanilla and denoising SAE, then report our model's image captioning performance on the MSCOCO dataset with full and limited data, investigate multiple design decisions, and analyze our results qualitatively.

5.1 Sentence Auto-Encoder Results

An ideal SAE must learn to map its input to a fixed low-dimensional space such that a whole sentence can be summarized and reconstructed accurately. To this end, we experiment with two SAEs, a vanilla SAE and a denoising SAE, and report their reconstruction performance in terms of Bleu-4 and cross-entropy (CE) loss in fig. 4. The vanilla model, whose input words are not corrupted, outperforms the denoising one in both metrics. This is expected, as the denoising model is only trained with corrupted input sequences.
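For reference, the Bleu-4 reconstruction score used here can be computed, for example, with NLTK's sentence-level Bleu (an illustrative stand-in; the paper does not state which Bleu implementation it uses):

```python
from nltk.translate.bleu_score import sentence_bleu

# Toy ground-truth tokenizations for one image and an SAE reconstruction.
references = [
    "a man hitting a tennis ball with a racket".split(),
    "a tennis player swinging a racket at a ball".split(),
]
reconstruction = "a man hitting a tennis ball".split()

# Default weights are uniform over 1- to 4-grams, i.e. Bleu-4.
print(sentence_bleu(references, reconstruction))
```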
The loss for the vanilla and denoising SAE starts from a relatively high value of approximately 0.8 and 0.4 respectively, and converges to a significantly lower error of 0.1 and 0.2. For a better analysis, we also compute the Bleu-4 metric for our decoded captions against the 5 ground-truth captions. As reported in Table 1, both models obtain significantly high Bleu-4 scores. This indicates that an entire caption can be compressed into a low-dimensional vector (1024) and successfully reconstructed.

[Plot: reconstruction loss vs. epochs (0 to 20) for the vanilla and denoising SAE.] Fig. 4: Error curves for the sentence auto-encoder on the Karpathy test split. The error starts increasing after approximately 20 epochs.

Table 1: Bleu-4 evaluation and reconstruction cross-entropy loss for the sentence auto-encoder on the Karpathy test split of the MSCOCO 2014 caption dataset [26].

Models          Bleu-4 (higher is better)   CE loss (lower is better)
Vanilla SAE     96.33                       0.12
Denoising SAE   89.79                       0.23

5.2 Image Captioning Results

Here we incorporate the proposed CLTA and SAE regularizer into recent image captioning models, including Up-Down [3] and AoANet [19], and report their performance on the MSCOCO dataset across multiple metrics (see Table 2). The table reports the original results of these methods from their publications in the top block, and the "relative improvement" rows show the gain of our models over the baselines.

The baseline models are trained in two settings: 1) Up-Down†, the model re-trained on the architecture of Anderson et al. [3], and 2) AoANet†, the Attention-on-Attention model re-trained as in Huang et al. [19]. Note that for both Up-Down and AoANet, we use the original source code to train them on our own hardware. We replace the "soft-attention" module in our Up-Down baseline by CLTA directly.

Table 2: Image captioning performance on the "Karpathy" test split of the MSCOCO 2014 caption dataset [26] for state-of-the-art methods and our models. Our conditional latent topic attention with the SAE regularizer improves significantly across all the metrics using both cross-entropy loss and CIDEr optimization. † denotes our trained models and * indicates results obtained from the publicly available pre-trained model.

                                 Cross-entropy loss                   CIDEr optimization
Models                       B-1  B-4  M    R    C     S          B-1  B-4  M    R    C     S
LSTM-A [44]                  75.4 35.2 26.9 55.8 108.8 20.0       78.6 35.5 27.3 56.8 118.3 20.8
RFNet [20]                   76.4 35.8 27.4 56.8 112.5 20.5       79.1 36.5 27.7 57.3 121.9 21.2
Up-Down [3]                  77.2 36.2 27.0 56.4 113.5 20.3       79.8 36.3 27.7 56.9 120.1 21.4
GCN-LSTM [43]                77.3 36.8 27.9 57.0 116.3 20.9       80.5 38.2 28.5 58.3 127.6 22.0
AoANet [19]                  77.4 37.2 28.4 57.5 119.8 21.3       80.2 38.9 29.2 58.8 129.8 22.4
Up-Down†                     75.9 36.0 27.3 56.1 113.3 20.1       79.2 36.3 27.7 57.3 120.8 21.2
Up-Down† + CLTA + SAE-Reg    76.7 37.1 28.1 57.1 116.2 21.0       80.2 37.4 28.4 58.1 127.4 22.0
Relative improvement         +0.8 +1.1 +0.8 +1.0 +2.9  +0.9       +1.0 +1.1 +0.7 +0.8 +6.6  +0.8
AoANet*                      77.3 36.9 28.5 57.3 118.4 21.6       80.5 39.1 29.0 58.9 128.9 22.7
AoANet† + CLTA + SAE-Reg     78.1 37.9 28.4 57.5 119.9 21.7       80.8 39.3 29.1 59.1 130.1 22.9
Relative improvement         +0.8 +1.0 -0.1 +0.2 +1.5  +0.1       +0.3 +0.2 +0.1 +0.2 +1.2  +0.2
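A hypothetical drop-in usage of the CLTA sketch from section 3.2 in place of the baseline's soft-attention; the 36x2048 bottom-up feature shape follows the paper, while the 1024-d hidden size is assumed for illustration:

```python
import torch

v = torch.randn(36, 2048)     # bottom-up region features (R x D)
h_prev = torch.randn(1024)    # previous LSTM hidden state (assumed size)

clta = CLTA(D=2048, n=1024, C=128)         # class sketched in Sec. 3.2
context, alpha, p_topic = clta(v, h_prev)  # context replaces soft-attention
```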
The AoANet model is based on the powerful Transformer [36] architecture, with multi-head dot-product attention in both the encoder and the decoder. For AoANet, we replace the dot-product attention at each head of the decoder with CLTA, which results in a multi-head CLTA. The SAE decoder is added as a regularizer on top of these models, as discussed in section 4.1. As discussed later in section 5.5, we train all our models with 128 dimensions for the CLTA and with the denoising SAE decoder (initialized with $h_{last}$).

We evaluate our models trained with the cross-entropy loss and also with CIDEr score optimization [33] applied after the cross-entropy pre-training stage (Table 2). With cross-entropy training, our combined approach consistently improves over the baseline performance across all metrics. It is clear from the results that the improvements in CIDEr and Bleu-4 are quite significant, which shows that our approach generates more human-like and accurate sentences. It is interesting to note that AoANet with CLTA and the SAE regularizer also gives consistent improvements despite having a strong Transformer language model. We show in section 5.4 the differences between our captions and those generated by Up-Down and AoANet. Our method is modular and improves on state-of-the-art models despite the architectural differences. Moreover, the SAE decoder is discarded after training and hence brings no additional computational load at test time, while yielding a significant performance boost. With CIDEr optimization, our models based on Up-Down and AoANet also show significant improvements on all metrics.

Table 3: Evaluation of our CLTA and SAE regularizer methods when training on subsets of the MSCOCO "Karpathy" training split.

                            50% data         75% data         100% data
Models                      B-4   CIDEr      B-4   CIDEr      B-4   CIDEr
Up-Down                     35.4  112.0      35.8  112.7      36.0  113.3
Up-Down + CLTA              36.3  113.7      36.3  114.5      36.5  115.0
Up-Down + CLTA + SAE-Reg    36.6  114.8      36.8  115.6      37.1  116.2
AoANet                      36.6  116.1      36.8  118.1      36.9  118.4
AoANet + CLTA               36.9  116.7      37.1  118.4      37.4  119.1
AoANet + CLTA + SAE-Reg     37.2  117.5      37.6  118.9      37.9  119.9

5.3 Learning to Caption with Less Data

Table 3 evaluates the performance of our proposed models on subsets of the training data, where x% denotes the percentage of the total data used for training. All subsets of training samples are chosen randomly. Our CLTA module is trained with 128 dimensions for the latent topics, together with the denoising SAE regularizer initialized with the last hidden state of the LSTM (Up-Down + CLTA + SAE-Reg). Regardless of the number of training samples, our average improvement with CLTA and the SAE regularizer is around 1% in Bleu-4 and 2.9% in CIDEr for the Up-Down model, and 0.8% in Bleu-4 and 1.2% in CIDEr for the AoANet model. The significant improvements in Bleu-4 and CIDEr scores with only 50% and 75% of the data validate our proposed methods as a form of rich prior.

5.4 Qualitative Results

In fig. 5, we show examples of images and captions generated by the baselines Up-Down and AoANet, along with our proposed methods, CLTA and the SAE regularizer. The baseline models produce repetitive words and errors while generating captions (in front of a mirror, a dog in the rear view mirror).
Our models correct these mistakes by finding relevant words according to the context and putting them together in a human-like caption format ("a rear view mirror shows a dog" has the same meaning as "a rear view mirror shows a dog in the rear view mirror", and the redundancy is efficiently removed by our models while retaining the correct meaning). From all the examples shown, we can see that our model overcomes the overfitting tendency of current methods by completing a caption with more semantic and syntactic generalization (e.g. "different flavoured donuts" and "several trains on the tracks").

5.5 Ablation Study

Conditional Latent Topic Attention (CLTA). Table 4a shows the results for the CLTA module described in section 3.2. Soft-attention is used as a baseline; it corresponds to the attention mechanism in [41], which is the main attention module in the Up-Down image captioning model by Anderson et al. [3].

[Figure 5 examples:
- Up-Down: "A dog laying on the floor in front of a mirror." +CLTA: "a black and white dog laying on the floor." +CLTA+SAE-Reg: "a black and white dog laying on a wooden floor." GT: "a black and white dog wearing a santa claus hat lying on the floor."
- Up-Down: "A box of doughnuts with donuts in it." +CLTA: "a box filled with different types of donuts." +CLTA+SAE-Reg: "a box filled with lots of different flavoured donuts." GT: "a box that contains multiple kinds of doughnuts."
- Up-Down: "A yellow bike with a bicycle on the street." +CLTA: "a yellow bike with a yellow umbrella attached to it." +CLTA+SAE-Reg: "a bicycle with an umbrella attached to it." GT: "a bicycle with an umbrella and a basket."
- AoANet: "a train station with a train station with trains." +CLTA: "a train station with several trains parked in it." +CLTA+SAE-Reg: "a train station with several trains on the tracks." GT: "a train station with several trains in the station."
- AoANet: "a rear view mirror shows a dog in the rear view mirror." +CLTA: "a rear view mirror with a dog hanging out the window." +CLTA+SAE-Reg: "a rear view mirror showing a dog looking out the window." GT: "dog looking out the window of a car in rearview mirror."
- AoANet: "a bench sitting under a tree in a park." +CLTA: "a park bench sitting in the middle of a forest." +CLTA+SAE-Reg: "a park bench sitting in the middle of a forest." GT: "a park bench surrounded by a green forest of trees."]
Fig. 5: Examples of captions generated by the baseline Up-Down, AoANet, our proposed CLTA, and our final models with both CLTA and the SAE regularizer.

We replace this attention with CLTA and evaluate its performance for different numbers of latent dimensions, i.e. topics (C).
The models trained with latent topic dimensions of 128, 256 and 512 all outperform the baseline significantly. The higher CIDEr and Bleu-4 scores for these latent topics show the model's capability to generate more descriptive and accurate human-like sentences. As we increase the dimension of the latent topics from 128 to 512, we predict more relevant keywords, as the new topics learnt by the CLTA module with 512 dimensions encode more information and hence generate more meaningful captions.

Table 4: Ablative analysis of different settings for (a) our CLTA module and (b) our SAE regularizer training.

(a) Evaluation scores for the Up-Down model with soft-attention and ablations of our CLTA module.

           Baseline           CLTA
Models     Soft-Attention     128     256     512
Bleu-4     36.0               36.5    36.6    36.7
CIDEr      113.3              115.0   115.2   115.3

(b) Quantitative evaluation of different settings of the SAE decoder when trained with the image captioning decoder; h denotes the hidden state.

Models     SAE-Decoder   h       Bleu-4   CIDEr
Baseline   No            -       36.0     113.3
CLTA-128   Vanilla       First   36.9     115.8
CLTA-128   Vanilla       Last    36.8     115.3
CLTA-128   Denoising     First   36.8     116.1
CLTA-128   Denoising     Last    37.1     116.2
CLTA-512   Denoising     Last    37.2     115.9

Image Captioning Decoder with SAE Regularizer. Table 4b reports ablations for our full image captioning model (Up-Down with CLTA) and the SAE regularizer. As discussed in section 3.3, the SAE decoder (with parameters $\Theta_D$) is initialized with the hidden state of the image captioning decoder. During training, we test different settings of how the SAE decoder is trained with the image captioning decoder: (1) vanilla vs. denoising SAE, and (2) $h_{first}$ vs. $h_{last}$, i.e. whether the SAE decoder is initialized with the first or the last hidden state of the LSTM decoder. In all settings, we fine-tune the parameters of the GRU decoder ($\Theta_D$) when training with the image captioning model (they are initialized with the weights of the pre-trained vanilla or denoising SAE decoder).

The results in Table 4b are reported for different combinations of the settings described above, with the CLTA having 128 or 512 dimensions in the image captioning decoder. Adding the auxiliary branch of the SAE decoder significantly improves over the baseline model with CLTA; in the best setting, the denoising SAE with $h_{last}$ improves the CIDEr and Bleu-4 scores by 1.2 and 0.6, respectively. As the SAE decoder is trained for the task of reconstruction, fine-tuning it on the task of captioning improves the image captioning decoder. Initializing the vanilla SAE decoder with $h_{last}$ does not provide enough gradient during training and quickly converges to a lower error, and hence brings lower generalization capacity to the image captioning decoder. As $h_{first}$ is less representative of an entire caption than $h_{last}$, the vanilla SAE with $h_{first}$ is more helpful for improving the captioning decoder training. On the other hand, the denoising SAE, being robust to noisy summary vectors, provides enough training signal to improve the image captioning decoder when initialized with either $h_{first}$ or $h_{last}$, with slightly better Bleu-4 and CIDEr for $h_{last}$, as it forces $h_{last}$ to carry an accurate lower-dimensional representation for the SAE and hence generalize better. It is clear from the results in Table 4b that the denoising SAE with $h_{last}$ helps to generate accurate and generalizable captions.
From our experiments, we found that CLTA with 128 topics and the denoising SAE (with $h_{last}$) performs better than even its counterpart with 512 topics. Hence, for all our experiments in sections 5.2 and 5.3, our topic dimension is 128, with the denoising SAE initialized with $h_{last}$.

6 Conclusion

In this paper, we have introduced two novel methods for image captioning that exploit prior knowledge and hence help to improve state-of-the-art models even when data is limited. The first method exploits the association between visual and textual features by learning latent topics via an LDA topic prior, and obtains robust attention weights for each image region. The second is an SAE regularizer that is pre-trained in an auto-encoder framework to learn the structure of captions and is plugged into the image captioning model to regulate its training. Using these modules, we obtain consistent improvements on the two investigated models, bottom-up top-down and the AoANet image captioning model, indicating the usefulness of our two modules as a strong prior. In future work, we plan to further investigate the potential use of label-space structure learning for other challenging vision tasks with limited data and to improve generalization.

References

1. PyTorch. https://pytorch.org/
2. Anderson, P., Fernando, B., Johnson, M., Gould, S.: SPICE: Semantic propositional image caption evaluation. In: European Conference on Computer Vision. pp. 382-398. Springer (2016)
3. Anderson, P., He, X., Buehler, C., Teney, D., Johnson, M., Gould, S., Zhang, L.: Bottom-up and top-down attention for image captioning and visual question answering. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6077-6086 (2018)
4. Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014)
5. Bao, Y., Zhou, H., Huang, S., Li, L., Mou, L., Vechtomova, O., Dai, X., Chen, J.: Generating sentences from disentangled syntactic and semantic spaces. arXiv preprint arXiv:1907.05789 (2019)
6. Bengio, S., Vinyals, O., Jaitly, N., Shazeer, N.: Scheduled sampling for sequence prediction with recurrent neural networks. In: Advances in Neural Information Processing Systems. pp. 1171-1179 (2015)
7. Blei, D.M., Ng, A.Y., Jordan, M.I.: Latent Dirichlet allocation. Journal of Machine Learning Research 3(Jan), 993-1022 (2003)
8. Chang, J., Gerrish, S., Wang, C., Boyd-Graber, J.L., Blei, D.M.: Reading tea leaves: How humans interpret topic models. In: Advances in Neural Information Processing Systems. pp. 288-296 (2009)
9. Cho, K., van Merrienboer, B., Bahdanau, D., Bengio, Y.: On the properties of neural machine translation: Encoder-decoder approaches. In: Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation. pp. 103-111 (2014)
10. Cho, K., Van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., Bengio, Y.: Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 (2014)
11. Chomsky, N.: Aspects of the Theory of Syntax, vol. 11. MIT Press (2014)
12. Chung, J., Gulcehre, C., Cho, K., Bengio, Y.: Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 (2014)
13. Dai, A.M., Le, Q.V.: Semi-supervised sequence learning. In: Advances in Neural Information Processing Systems. pp. 3079-3087 (2015)
14. Denkowski, M., Lavie, A.: Meteor Universal: Language specific translation evaluation for any target language. In: Proceedings of the Ninth Workshop on Statistical Machine Translation. pp. 376-380 (2014)
15. Fang, H., Gupta, S., Iandola, F., Srivastava, R.K., Deng, L., Dollar, P., Gao, J., He, X., Mitchell, M., Platt, J.C., et al.: From captions to visual concepts and back. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1473-1482 (2015)
16. Gehring, J., Auli, M., Grangier, D., Yarats, D., Dauphin, Y.N.: Convolutional sequence to sequence learning. In: Proceedings of the 34th International Conference on Machine Learning, Volume 70. pp. 1243-1252. JMLR.org (2017)
17. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 770-778 (2016)
18. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation 9(8), 1735-1780 (1997)
19. Huang, L., Wang, W., Chen, J., Wei, X.Y.: Attention on attention for image captioning. In: The IEEE International Conference on Computer Vision (ICCV) (October 2019)
20. Jiang, W., Ma, L., Jiang, Y.G., Liu, W., Zhang, T.: Recurrent fusion network for image captioning. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 499-515 (2018)
21. Joon Oh, S., Benenson, R., Fritz, M., Schiele, B.: Person recognition in personal photo collections. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 3862-3870 (2015)
22. Karpathy, A., Fei-Fei, L.: Deep visual-semantic alignments for generating image descriptions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3128-3137 (2015)
23. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
24. Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., Chen, S., Kalantidis, Y., Li, L.J., Shamma, D.A., et al.: Visual Genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision 123(1), 32-73 (2017)
25. Lin, C.Y., Och, F.J.: Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. In: Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics. p. 605. Association for Computational Linguistics (2004)
26. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: European Conference on Computer Vision. pp. 740-755. Springer (2014)
27. Liu, H., Singh, P.: ConceptNet: a practical commonsense reasoning tool-kit. BT Technology Journal 22(4), 211-226 (2004)
28. Luong, M.T., Le, Q.V., Sutskever, I., Vinyals, O., Kaiser, L.: Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114 (2015)
29. Luong, M.T., Pham, H., Manning, C.D.: Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025 (2015)
30. Marcheggiani, D., Bastings, J., Titov, I.: Exploiting semantics in neural machine translation with graph convolutional networks. arXiv preprint arXiv:1804.08313 (2018)
31. Papineni, K., Roukos, S., Ward, T., Zhu, W.J.: BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. pp. 311-318. Association for Computational Linguistics (2002)
32. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: Towards real-time object detection with region proposal networks. In: Advances in Neural Information Processing Systems. pp. 91-99 (2015)
33. Rennie, S.J., Marcheret, E., Mroueh, Y., Ross, J., Goel, V.: Self-critical sequence training for image captioning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 7008-7024 (2017)
34. Sharma, P., Ding, N., Goodman, S., Soricut, R.: Conceptual Captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). pp. 2556-2565 (2018)
35. Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. In: Advances in Neural Information Processing Systems. pp. 3104-3112 (2014)
36. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Advances in Neural Information Processing Systems. pp. 5998-6008 (2017)
37. Vedantam, R., Lawrence Zitnick, C., Parikh, D.: CIDEr: Consensus-based image description evaluation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4566-4575 (2015)
38. Vincent, P., Larochelle, H., Bengio, Y., Manzagol, P.A.: Extracting and composing robust features with denoising autoencoders. In: Proceedings of the 25th International Conference on Machine Learning. pp. 1096-1103. ACM (2008)
39. Vinyals, O., Toshev, A., Bengio, S., Erhan, D.: Show and tell: A neural image caption generator. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3156-3164 (2015)
40. Wiseman, S., Rush, A.M.: Sequence-to-sequence learning as beam-search optimization. arXiv preprint arXiv:1606.02960 (2016)
41. Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R., Zemel, R., Bengio, Y.: Show, attend and tell: Neural image caption generation with visual attention. In: International Conference on Machine Learning. pp. 2048-2057 (2015)
42. Yang, X., Tang, K., Zhang, H., Cai, J.: Auto-encoding scene graphs for image captioning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 10685-10694 (2019)
43. Yao, T., Pan, Y., Li, Y., Mei, T.: Exploring visual relationship for image captioning. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 684-699 (2018)
44. Yao, T., Pan, Y., Li, Y., Qiu, Z., Mei, T.: Boosting image captioning with attributes. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 4894-4902 (2017)
45. Zaremba, W., Sutskever, I., Vinyals, O.: Recurrent neural network regularization. arXiv preprint arXiv:1409.2329 (2014)
46. Zhou, Y., Sun, Y., Honavar, V.: Improving image captioning by leveraging knowledge graphs. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 283-293. IEEE (2019)
HTjATmcpkQJ
Injecting Prior Knowledge into Image Caption Generation
9: Top 15% of accepted papers, strong accept
1. [Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance. The paper tries to mitigate overfitting and the generation of overly simple captions by introducing prior knowledge from the dataset during training. To this end, the authors propose to add visual-semantic relational prior knowledge by defining a series of latent topics, and semantic prior knowledge by training a seq2seq module on the text. While the former is introduced into the training procedure as a self-attention over image region features, the latter is utilized to remove visual bias from semantic structures. Apart from improving the results of state-of-the-art approaches, they demonstrate that with their approach, image captioning models can rely on less data when training. 2. [Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable. - The paper is easy to read. Ideas are easy to follow. - It is very well motivated. - The benefits of both modules (CLTA and SAE Regularizer) are clearly demonstrated in the experiments. - The implementation is explained in detail. - The benefit of adding prior knowledge (visual and semantic) is shown. - Additionally, the authors demonstrate the relevance of prior knowledge, as it allows models to be trained with less data. 3. [Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak. - Although the improvement exists, in some situations it is marginal. 4. [Overall rating] Paper rating. 9 5. [Justification of rating] Please explain how the strengths and weaknesses aforementioned were weighed in for the rating. Good paper. Well written, well motivated, simple method and positive results. On top of that, very much in line with the workshop. 6. [Detailed comments] Additional comments regarding the paper (e.g. typos or other possible improvements you would like to see for the camera-ready version of the paper, if any.)
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Injecting Prior Knowledge into Image Caption Generation ### Paper Abstract Automatically generating natural language descriptions from an image is a challenging problem in artificial intelligence that requires a good understanding of the visual and textual signals and the correlations between them. The state-of-the-art methods in image captioning struggle to approach human level performance, especially when data is limited. In this paper, we propose to improve the performance of the state-of-the-art image captioning models by incorporating two sources of prior knowledge: (i) a conditional latent topic attention, that uses a set of latent topics as an anchor to generate highly probable words and, (ii) a regularization technique that exploits the inductive biases in syntactic and semantic structure of captions and improves the generalization of image captioning models. Our experiments validate that our method produces more human interpretable captions and also leads to significant improvements on the MSCOCO dataset in both the full and low data regimes. ### Paper Keywords ["Image Captioning", "Prior", "Attention", "Regularization"] ### Paper Content
Injecting Prior Knowledge into Image Caption Generation

Anonymous ECCV submission, Paper ID 8

Abstract. Automatically generating natural language descriptions from an image is a challenging problem in artificial intelligence that requires a good understanding of the visual and textual signals and the correlations between them. The state-of-the-art methods in image captioning struggle to approach human level performance, especially when data is limited. In this paper, we propose to improve the performance of the state-of-the-art image captioning models by incorporating two sources of prior knowledge: (i) a conditional latent topic attention, that uses a set of latent variables (topics) as an anchor to generate highly probable words and, (ii) a regularization technique that exploits the inductive biases in syntactic and semantic structure of captions and improves the generalization of image captioning models. Our experiments validate that our method produces more human interpretable captions and also leads to significant improvements on the MSCOCO dataset in both the full and low data regimes.

1 Introduction

In recent years there has been a growing interest in developing end-to-end learning algorithms for computer vision tasks. Despite the success in many problems such as image classification [17] and person recognition [21], the state-of-the-art methods struggle to reach human-level performance in solving more challenging tasks such as image captioning within limited time and data, which involves understanding the visual scenes and describing them in a natural language. This is in contrast to humans, who are effortlessly successful in understanding scenes which they have never seen before and communicating them in a language.
It is likely that this efficiency is due to the strong prior knowledge of structure in the visual world and language [11]. Motivated by this observation, in this paper we ask "How can such prior knowledge be represented and utilized to learn better image captioning models with deep neural networks?". To this end, we look at the state-of-the-art encoder-decoder image captioning methods [39, 41, 3], where a Convolutional Neural Network (CNN) encoder extracts an embedding from the image and a Recurrent Neural Network (RNN) decoder generates the text based on the embedding. This framework typically contains two dynamic mechanisms to model the sequential output: i) an attention module [4, 41] that identifies the relevant parts of the image embedding based on the previous word and visual features, and ii) the RNN decoder that predicts the next words based on its previous state and attended visual features.

[Fig. 1 diagram: a CNN encoder feeds an LSTM decoder equipped with CLTA, an LDA topic prior and a sentence prior; example outputs "A man jumping up to hit a tennis ball." and "A man standing on a tennis court holding a racket."] Fig. 1: Our final model with Conditional Latent Topic Attention (CLTA) and Sentence Prior (Sentence Auto-Encoder (SAE) regularizer) both rely on prior knowledge to find relevant words and generate non-template-like and generalized captions, compared to the same baseline caption for both images: "A man hitting a tennis ball with a racket."

While these two components are very powerful for modeling complex relations between the visual and language cues, we hypothesize that they are also capable of, and at the same time prone to, overfitting to wrong correlations, thus leading to poor generalization performance when the data is limited. Hence, we propose to regulate these modules with two sources of prior knowledge.

First, we propose an attention mechanism that accurately attends to relevant image regions and better copes with complex associations between words and image regions. For instance, in the example of a "man playing tennis", the input visual attention encoder might only look at the local features (tennis ball), leaving out the global visual information (tennis court). Hence, it generates a trivial caption such as "A man is hitting a tennis ball", which is not the full description of the image in context (as shown in Fig. 1). We solve this ambiguity by incorporating prior knowledge of latent topics [7], which are known to identify semantically meaningful topics [8], into our attention module. In particular, we introduce a Conditional Latent Topic Attention (CLTA) module that models the relationship between a word and image regions through a latent shared space, i.e. latent topics, to find salient regions in an image. Tennis ball steers the model to associate this word with the latent topic "tennis", which in turn is responsible for localizing tennis court in the image. If a region-word pair has a higher probability with respect to a latent topic, and if the same topic has a higher probability with respect to some other regions, then it is also a salient region and will be highly weighted. Therefore, we compute two sets of probabilities conditioned on the current word of the captioning model.
We use a conditional-marginalized probability, where marginalization is done over latent topics, to find salient image regions for generating the next word. Our CLTA is modeled as a neural network where the marginalized probability is used to weight the image region features to obtain a context vector that is passed to an image captioning decoder to generate the next word.

Second, the complexity in the structure of natural language makes it harder to generate fluent sentences while preserving a higher amount of encoded information (high Bleu-4 scores). Although current image captioning models are able to model this linguistic structure, the generated captions follow a more template-like form, for instance, "A man hitting a tennis ball with a racket." As shown in Fig. 1, visually similar images receive template-like captions from the baseline model. Inspired by sequence-to-sequence (seq2seq) machine translation [35, 28, 40, 16], we introduce a new regularization technique for captioning models, coined the SAE Regularizer. In particular, we design and train an additional seq2seq sentence auto-encoder model ("SAE") that first reads in a whole sentence as input, generates a fixed dimensional vector, and then uses the vector to reconstruct the input sentence. Human languages are highly structured and follow an immense amount of regularity. Certain words are more likely to co-appear and certain word patterns can be observed more often. Our SAE is trained to learn the structure of the input (sentence) space in an offline manner by exploiting the regularity of the sentence space. The continuous latent space learned by the SAE blends together both the syntactic and semantic information from the input sentence space and generates high quality sentences during the reconstruction via the SAE decoder. This suggests that the continuous latent space of the SAE contains sufficient information regarding the syntactic and semantic structure of input sentences. Specifically, we use SAE-Dec as an auxiliary decoder branch (see Fig. 3). Adding this regularizer forces the representation from the image encoder and language decoder to be more representative of the visual content and less likely to overfit. SAE-Dec is employed along with the original image captioning decoder ("IC-Dec") to output the target sentence during training; however, we do not use the SAE regularizer at test time, avoiding additional computation.

Both of the proposed improvements also help to overcome the problem of training on large image-caption paired data [26, 27] by incorporating prior knowledge which is learned from unstructured data in the form of latent topics and the SAE. These priors, also known as "inductive biases", help the models make inferences that go beyond the observed training data. Through an extensive set of experiments, we demonstrate that our proposed CLTA module and SAE-Dec regularizer improve the image captioning performance in both the limited data and full data training regimes on the MSCOCO dataset [26].

2 Related Work

Here, we first discuss related attention mechanisms and then the use of knowledge transfer in image captioning models.

Attention mechanisms in image captioning.
The pioneering work in neural machine translation [4, 29, 9] has shown that attention in encoder-decoder architectures can significantly boost the performance in sequential generation tasks. Visual attention is one of the biggest contributors in image captioning [15, 41, 3, 19]. Soft attention and hard attention variants for image captioning were introduced in [41]. Bottom-Up and Top-Down self attention is effectively used in [3]. Attention on attention is used in recent work [19]; interestingly, they use attention at both the encoder and the decoder step of the captioning process. Our proposed attention differs significantly from these attention mechanisms. First, the traditional attention methods, soft-attention [4] and scaled dot product attention [36], aim to find features or regions in an image that highly correlate with a word representation [3, 4, 34]. In contrast, our conditional latent topic attention uses latent variables, i.e. topics, as anchors to find the relationship between word representations and image regions (features). Some image regions and word representations may project to the same set of latent topics more than others and are therefore more likely to co-occur. Our method learns to model these relationships between word representations and image region features using our latent space. We allow competition among regions and latent topics to compute two sets of probabilities for finding salient regions. This competing strategy and our latent topics guided by pre-trained LDA topics [7] allow us to better model relationships between visual features and word representations. Hence, the neural structure and our attention mechanism are quite different from all prior work [41, 3, 19, 4].

Knowledge transfer in image captioning. It is well known that language contains semantic and syntactic biases [5, 30]. We exploit these biases by first training a recurrent caption auto-encoder to capture this useful information using [35]. Our caption auto-encoder is trained to reconstruct the input sentence, and hence this decoder encapsulates the structural, syntactic and semantic information of input captions. During the captioning process, we regularize the captioning RNN with this pretrained caption decoder to exploit biases in the language domain and transfer them to the visual-language domain. To the best of our knowledge, no prior work has attempted such knowledge transfer in image captioning. Zhou et al. [46] encode external knowledge in the form of knowledge graphs using ConceptNet [27] to improve image captioning. The closest to ours is the work of [42], where they propose to generate scene graphs from both sentences and images and then encode the scene graphs into a common dictionary before decoding them back to sentences. However, generation of scene graphs from images is itself an extremely challenging task. Finally, we propose to transfer syntactic and semantic information as a regularization technique during the image captioning process, in the form of an auxiliary loss.
Our experiments suggest that this leads to considerable improvements, especially in more structured measures such as CIDEr [37].

3 Method

In this section, we first review image captioning with attention, introduce our CLTA mechanism, and then our sentence auto-encoder (SAE) regularizer.

3.1 Image Captioning with Attention

Image captioning models are based on an encoder-decoder architecture [41] that uses a CNN as the image encoder and a Long Short-Term Memory (LSTM) [18] as the decoder (see Fig. 1). The encoder takes an image as input and extracts a feature set $v = \{v_1, \dots, v_R\}$ corresponding to $R$ regions of the image, where $v_i \in \mathbb{R}^D$ is the $D$-dimensional feature vector for the $i$-th region. The decoder outputs a caption $y$ by generating one word at each time step. At time step $t$, the feature set $v$ is combined into a single vector $v_t^a$ by taking a weighted sum as follows:

$v_t^a = \sum_{i=1}^{R} \alpha_{ti} v_i$    (1)

where $\alpha_{ti}$ is the CLTA weight for region $i$ at time $t$, which is explained in the next section. The decoder LSTM then takes the concatenated vector $[v_t^a \,|\, y_{t-1}]$ and the previous hidden state $h_{t-1}$ as input and generates the next hidden state $h_t$:

$h_t = \mathrm{LSTM}([v_t^a \,|\, E y_{t-1}], h_{t-1}; \theta)$    (2)

where $|$ denotes concatenation, $y_{t-1} \in \mathbb{R}^K$ is the one-hot vector of the word generated at time $t-1$, $K$ is the vocabulary size, $h_t \in \mathbb{R}^n$ is the hidden state of the LSTM at time $t$, $n$ is the LSTM dimensionality, and $\theta$ are the trainable parameters of the LSTM. Finally, the decoder predicts the output word by applying a linear mapping on the hidden state and $v_t^a$ as follows:

$y_t = f([h_t \,|\, v_t^a]; \phi)$    (3)

where $\phi$ are trainable parameters. Our LSTM implementation closely follows the formulation in [45]. The word embedding matrix $E \in \mathbb{R}^{m \times K}$ is trained to translate one-hot vectors to word embeddings as in [41], where $m$ is the word embedding dimension. In the next section, we describe our proposed CLTA mechanism.
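To make eqs. (1)-(3) concrete, the following is a minimal PyTorch sketch of one decoder step. It is our own illustration rather than the authors' released code: the module and variable names are hypothetical, and the attention weights alpha are assumed to be supplied by the CLTA module described next.

```python
import torch
import torch.nn as nn

class DecoderStep(nn.Module):
    """One decoding step of eqs. (1)-(3): attend, update the LSTM, predict a word."""
    def __init__(self, feat_dim=2048, embed_dim=512, hidden_dim=512, vocab_size=10000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)           # word embedding E
        self.lstm = nn.LSTMCell(feat_dim + embed_dim, hidden_dim)  # parameters theta
        self.out = nn.Linear(hidden_dim + feat_dim, vocab_size)    # parameters phi

    def forward(self, v, alpha, y_prev, state):
        # v: (B, R, D) region features; alpha: (B, R) CLTA weights; y_prev: (B,) word ids
        v_att = (alpha.unsqueeze(-1) * v).sum(dim=1)        # eq. (1): weighted sum over regions
        x = torch.cat([v_att, self.embed(y_prev)], dim=-1)  # [v_t^a | E y_{t-1}]
        h, c = self.lstm(x, state)                          # eq. (2): next hidden state
        logits = self.out(torch.cat([h, v_att], dim=-1))    # eq. (3): word scores
        return logits, (h, c)
```

The paper's setting corresponds to R=36 and D=2048 Bottom-Up features (Sec. 4.1); the remaining dimensions here are placeholders.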
3.2 CLTA: Conditional Latent Topic Attention

At time step $t$, our CLTA module takes the previous LSTM hidden state ($h_{t-1}$) and the image features, and outputs the attention weights $\alpha_t$. Specifically, we use a set of latent topics to model the associations between the textual ($h_{t-1}$) and visual features ($v$) to compute the attention weights. The attention weight for region $i$ is obtained by taking the conditional marginalization over the latent topic $l$ as follows:

$\alpha_{ti} = P(\mathrm{region}=i \,|\, h_{t-1}, v) = \sum_{l=1}^{C} P(\mathrm{region}=i \,|\, h_{t-1}, v, l)\, P(l \,|\, h_{t-1}, v_i)$    (4)

where $l$ is a topic variable in the $C$-dimensional latent space. To compute $P(l \,|\, h_{t-1}, v_i)$, we first project both the textual and visual features to a common $C$-dimensional shared latent space, and obtain the associations by summing the projected features as follows:

$q_{ti} = W_{sc} v_i + W_{hc} h_{t-1}$    (5)

where $W_{sc} \in \mathbb{R}^{C \times D}$ and $W_{hc} \in \mathbb{R}^{C \times n}$ are the trainable projection matrices for the visual and textual features, respectively. Then the latent topic probability is given by:

$P_L = P(l \,|\, h_{t-1}, v_i) = \frac{\exp(q_{til})}{\sum_{k=1}^{C} \exp(q_{tik})}$    (6)

Afterwards, we compute the probability of a region given the textual and visual features and the latent topic variable as follows:

$r_{ti} = W_{sr} v_i + W_{hr} h_{t-1}$    (7)

$P(\mathrm{region}=i \,|\, h_{t-1}, v, l) = \frac{\exp(r_{til})}{\sum_{k=1}^{R} \exp(r_{tkl})}$    (8)

where $W_{sr} \in \mathbb{R}^{C \times D}$ and $W_{hr} \in \mathbb{R}^{C \times n}$ are the trainable projection matrices for the visual and textual features, respectively.

The latent topic posterior in eq. (6) is pushed towards the pre-trained LDA topic prior by adding a KL-divergence term to the image captioning objective. We apply Latent Dirichlet Allocation (LDA) [7] on the caption data. Then, each caption has an inferred topic distribution $Q_T$ from the LDA model which acts as a prior on the latent topic distribution $P_L$. For doing this, we take the average of the $C$-dimensional latent topics at all time steps from $0, \dots, t-1$ as:

$P_{L_{avg}} = \frac{1}{t} \sum_{k=0}^{t-1} P(l \,|\, h_k, v_i)$    (9)

Hence, the KL-divergence objective is defined as:

$D_{KL}(P_{L_{avg}} \,\|\, Q_T) = \sum_{c \in C} P_{L_{avg}}(c) \log\left(\frac{P_{L_{avg}}(c)}{Q_T(c)}\right)$    (10)

This learnt latent topic distribution captures the semantic relations between the visual and textual features in the form of visual topics, and therefore we also use this latent posterior $P_L$ as a source of meaningful information during the generation of the next hidden state. The modified hidden state $h_t$ in eq. (2) is now given by:

$h_t = \mathrm{LSTM}([v_t^a \,|\, E y_{t-1} \,|\, P_L], h_{t-1}; \theta)$    (11)

We visualize the distribution of latent topics in Fig. 2.

[Fig. 2 examples. Image 1 - Up-Down: "A dirty bathroom with a toilet and a sink." / CLTA: "A bathroom with a toilet and a roll of toilet paper." / Top-20 topic words: toilet, bathroom, white, floor, small, wall, next, sitting, tiled, tile, seat, urinal, public, restroom, stall, room, paper roll, lid, dirty. Image 2 - Up-Down: "A kitchen with a refrigerator and a stove." / CLTA: "A kitchen with wooden cabinets and stainless steel appliances." / Top-20 topic words: kitchen, refrigerator, cabinet, white, sink, appliance, counter, fridge, small, wood, stove, wooden, steel, large, floor, stainless, area, top, clean, island.] Fig. 2: Image-caption pairs generated from our CLTA module with 128 dimensions, and visualization of the top-20 words from the latent topics.

While traditional "soft-max" attention exploits simple correlations among textual and visual information, we make use of latent topics to model the associations between them.
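A minimal PyTorch sketch of eqs. (4)-(8) may help; again this is our own reading with hypothetical names, not the authors' code. The KL helper additionally averages the per-region topic posteriors, which is one plausible interpretation of eq. (9), where the region index is left implicit in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CLTA(nn.Module):
    """Conditional Latent Topic Attention, eqs. (4)-(8) (illustrative sketch)."""
    def __init__(self, feat_dim=2048, hidden_dim=512, num_topics=128):
        super().__init__()
        self.W_sc = nn.Linear(feat_dim, num_topics, bias=False)    # W_sc in eq. (5)
        self.W_hc = nn.Linear(hidden_dim, num_topics, bias=False)  # W_hc in eq. (5)
        self.W_sr = nn.Linear(feat_dim, num_topics, bias=False)    # W_sr in eq. (7)
        self.W_hr = nn.Linear(hidden_dim, num_topics, bias=False)  # W_hr in eq. (7)

    def forward(self, v, h_prev):
        # v: (B, R, D) region features; h_prev: (B, n) previous LSTM state
        q = self.W_sc(v) + self.W_hc(h_prev).unsqueeze(1)  # eq. (5): (B, R, C)
        p_topic = F.softmax(q, dim=-1)                     # eq. (6): softmax over topics
        r = self.W_sr(v) + self.W_hr(h_prev).unsqueeze(1)  # eq. (7): (B, R, C)
        p_region = F.softmax(r, dim=1)                     # eq. (8): softmax over regions, per topic
        alpha = (p_region * p_topic).sum(dim=-1)           # eq. (4): marginalize over topics
        return alpha, p_topic

def topic_kl(p_topic_steps, q_lda, eps=1e-8):
    """Eqs. (9)-(10): KL between the time-averaged topic posterior and the LDA prior Q_T.
    p_topic_steps: list of (B, R, C) posteriors, one per time step; q_lda: (B, C)."""
    p_avg = torch.stack(p_topic_steps).mean(dim=(0, 2))  # average over time steps and regions
    return (p_avg * (torch.log(p_avg + eps) - torch.log(q_lda + eps))).sum(-1).mean()
```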
3.3 SAE Regularizer

Encoder-decoder methods are widely used for translating one language to another [10, 35, 4]. When the input and target sentences are the same, these models function as auto-encoders by first encoding an entire sentence into a fixed (low) dimensional vector in a latent space, and then reconstructing it. Autoencoders are commonly employed for unsupervised training in text classification [13] and machine translation [28].

In this paper, our SAE regularizer has two advantages: i) it acts as a soft constraint on the image captioning model to regularize the syntactic and semantic space of the captions for better generalization and, ii) it encourages the image captioning model to extract more context information for better modelling of long-term memory. These two properties of the SAE regularizer lead to semantically meaningful captions for an image with syntactic generalization, and prevent the generation of naive and template-like captions.

Our SAE model uses the network architecture of [35] with Gated Recurrent Units (GRU) [12]. Let us denote the parameters of the decoder GRU by $\theta_D$. A stochastic variation of the vanilla sentence auto-encoder is the de-noising auto-encoder [38], which is trained to "de-noise" corrupted versions of its inputs. To inject such input noise, we drop each word in the input sentence with a probability of 50% to reduce the contribution of a single word to the semantics of a sentence. We train the SAE model in an offline stage on the training set of the captioning dataset. After the SAE model is trained, we discard its encoder and integrate only its decoder to regularize the captioning model.

As depicted in Fig. 3, the pretrained SAE decoder takes the last hidden state vector of the captioning LSTM $h$ as input and generates an extra caption (denoted as $y_{sae}$) in addition to the output of the captioning model (denoted as $y_{lstm}$). We use the output of the SAE decoder only at training time to regulate the captioning model, implicitly transferring the latent structure previously learned by the SAE decoder.

[Fig. 3 diagram: the image captioning model's LSTM hidden state feeds the SAE decoder; the model outputs $y_{lstm}$ and the auxiliary branch outputs $y_{sae}$, with losses $L(y^*, y_{lstm})$ and $L(y^*, y_{sae})$.] Fig. 3: Illustration of our proposed Sentence Auto-Encoder (SAE) regularizer with the image captioning decoder. The captioning model is trained by adding the SAE decoder as an auxiliary branch, thus acting as a regularizer.

Our integrated model is optimized to generate two accurate captions (i.e. $y_{sae}$ and $y_{lstm}$) by minimizing a weighted average of the two loss values:

$\arg\min_{\Theta} \; \lambda L(y, y_{lstm}) + (1-\lambda) L(y, y_{sae})$    (12)

where $L$ is the cross-entropy loss computed for each caption, word by word, against the ground truth caption $y$, $\lambda$ is the trade-off parameter, and $\Theta$ are the parameters of our model. We consider two scenarios that we use during our experimentation.

- First, we set the parameters of the SAE decoder $\theta_D$ to be the weights of the pre-trained SAE decoder and freeze them while optimizing Equation (12) in terms of $\Theta = \{\theta, \phi, E\}$.
- Second, we initialize $\theta_D$ with the weights of the pre-trained SAE decoder and fine-tune them along with the LSTM parameters, i.e. $\Theta = \{\theta, \phi, E, \theta_D\}$.

As discussed in section 3.2, we also minimize the KL divergence in eq. (10) along with the final regularized objective in eq. (12) as:

$\arg\min_{\Theta} \; \lambda L(y, y_{lstm}) + (1-\lambda) L(y, y_{sae}) + \gamma\, D_{KL}(P_{L_{avg}} \,\|\, Q_T)$    (13)

where $\gamma$ is the weight of the KL divergence loss.
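A short sketch of the joint objective in eq. (13), assuming teacher-forced logits from both decoder branches; the function name is ours, and the default values of lambda and gamma follow the schedule and weight given later in Sec. 4.1.

```python
import torch.nn.functional as F

def joint_loss(logits_lstm, logits_sae, targets, kl_term, lam=0.7, gamma=0.1, pad_idx=0):
    """Eq. (13): weighted captioning + SAE cross-entropy losses plus the KL prior term.
    logits_*: (B, T, V) word scores; targets: (B, T) ground-truth word ids."""
    def ce(logits):
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               targets.reshape(-1), ignore_index=pad_idx)
    return lam * ce(logits_lstm) + (1.0 - lam) * ce(logits_sae) + gamma * kl_term
```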
Discussion. An alternative way of exploiting the information from the pre-trained SAE model is to bring the representations from the captioning decoder closer to the encodings of the SAE encoder by minimizing the Euclidean distance between the hidden state from the SAE encoder and the hidden state from the captioning decoder at each time step. However, we found this setting to be too restrictive on the learned hidden state of the LSTM.

4 Experiments

Dataset. Our models are evaluated on the standard MSCOCO 2014 image captioning dataset [26]. For fair comparison, we use the same data splits for training, validation and testing as in [22], which have been used extensively in prior work. This split has 113,287 images for training, and 5k images each for validation and testing, with 5 captions per image. We perform evaluation on all relevant metrics for generated sentence evaluation: CIDEr [37], Bleu [31], METEOR [14], ROUGE-L [25] and SPICE [2].

Implementation Details. For training our image captioning model, we compute the image features based on the Bottom-Up architecture proposed by [3], where the model is trained using a Faster-RCNN model [32] on the Visual Genome dataset [24] with object and attribute information. These features are extracted from $R$ regions and each region feature has $D$ dimensions, where $R$ and $D$ are 36 and 2048, respectively, as proposed in [3]. We use these 36 × 2048 image features in all our experiments.

4.1 Experimental Setup

LDA Topic Models. The LDA [7] model is learned in an offline manner to generate a $C$-dimensional topic distribution for each caption. Briefly, the LDA model treats the captions as word-documents and groups these words to form $C$ topics (clusters of words), learns the word distribution for each topic ($C \times V$, where $V$ is the vocabulary size), and also generates a topic distribution for each input caption, $Q_T$, where each of the $C$ dimensions denotes the probability for that topic.

Sentence Auto-Encoder. The Sentence Auto-Encoder is trained offline on the MSCOCO 2014 captioning dataset [26] with the same splits as discussed above. For the architecture, we have a single-layer GRU for both the encoder and the decoder. The word embeddings are learned with the network using an embedding layer, and the dimension of both the hidden state and the word embeddings is 1024. During training, the decoder is trained with teacher forcing [6] with a probability of 0.5. For inference, the decoder decodes until it reaches the end-of-caption token. The learning rate for this network is 2e-3 and it is trained using the ADAM [23] optimizer.

Image Captioning Decoder with SAE Regularizer. The architecture of our image captioning decoder is the same as the Up-Down model [3], with their "soft-attention" replaced by our CLTA module, and trained with the SAE regularizer. We also retrain the AoANet model proposed by Huang et al. [19] by incorporating our CLTA module and the SAE regularizer. In the results section, we show improvements over the Up-Down and AoANet models using our proposed approaches. Note that the parameters for training the Up-Down and AoANet baselines are the same as in the original setting.
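As an illustration of the offline LDA step described above, here is a sketch that produces the C-dimensional prior Q_T for each caption. The use of gensim is an assumption on our part (the paper does not name its LDA implementation), and the toy corpus stands in for the full set of MSCOCO training captions.

```python
from gensim import corpora, models

# Tokenized training captions (toy example; in practice, all MSCOCO captions).
captions = [["a", "man", "riding", "a", "horse"],
            ["a", "kitchen", "with", "a", "stove", "and", "a", "sink"]]
dictionary = corpora.Dictionary(captions)
bows = [dictionary.doc2bow(c) for c in captions]
lda = models.LdaModel(corpus=bows, id2word=dictionary, num_topics=128,
                      passes=10, random_state=0)

def topic_prior(bow, num_topics=128):
    """Dense C-dimensional topic distribution Q_T for one caption."""
    q_t = [0.0] * num_topics
    for topic_id, prob in lda.get_document_topics(bow, minimum_probability=0.0):
        q_t[topic_id] = float(prob)
    return q_t
```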
While training the captioning models together with the SAE decoder, we jointly learn an affine embedding layer (dimension 1024) by combining the embeddings from the image captioning decoder and the SAE decoder. During inference, we use beam search to generate captions from the captioning decoder, with a beam size of 5 for Up-Down and a beam size of 2 for AoANet. For training the overall objective function as given in Equation (13), the value of $\lambda$ is initialized to 0.7 and increased at a rate of 1.1 every 5 epochs until it reaches a value of 0.9, and $\gamma$ is fixed to 0.1. We use the ADAM optimizer with a learning rate of 2e-4. Our code is implemented using PyTorch [1] and will be made publicly available.
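Our reading of that lambda schedule, as a one-liner; the function name and the multiplicative form of the step are our interpretation of "increased at a rate of 1.1 every 5 epochs".

```python
def lambda_schedule(epoch, lam0=0.7, rate=1.1, step=5, cap=0.9):
    """Trade-off weight for eq. (13): start at 0.7, grow by 1.1x every 5 epochs, clip at 0.9."""
    return min(cap, lam0 * rate ** (epoch // step))
```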
5 Results and Analysis

First, we study the caption reconstruction performance of the vanilla and denoising SAE, then report our model's image captioning performance on the MS-COCO dataset with full and limited data, investigate multiple design decisions, and analyze our results qualitatively.

5.1 Sentence Auto-Encoder Results

An ideal SAE must learn to map its input to a fixed low-dimensional space such that a whole sentence can be summarized and reconstructed accurately. To this end, we experiment with two SAEs, a Vanilla SAE and a Denoising SAE, and report their reconstruction performance in terms of Bleu-4 and cross-entropy (CE) loss in Fig. 4. The vanilla model, whose input words are not corrupted, outperforms the denoising one in both metrics. This is expected, as the denoising model is only trained with corrupted input sequences. The loss for the Vanilla and Denoising SAE starts from a relatively high value of approximately 0.8 and 0.4, respectively, and converges to a significantly low error of 0.1 and 0.2. For a better analysis, we also compute the Bleu-4 metric on our decoded captions against the 5 ground-truth captions. As reported in Table 1, both models obtain significantly high Bleu-4 scores. This indicates that an entire caption can be compressed into a low-dimensional vector (1024) and successfully reconstructed.

[Fig. 4 plot: reconstruction loss vs. epochs for the Vanilla SAE and Denoising SAE.] Fig. 4: Error curve for the Sentence Auto-Encoder on the Karpathy test split. The error starts increasing approximately after 20 epochs.

Models          Bleu-4 (↑)  CE-Loss (↓)
Vanilla SAE     96.33       0.12
Denoising SAE   89.79       0.23

Table 1: Bleu-4 evaluation and reconstruction cross-entropy loss for the Sentence Auto-Encoder on the Karpathy test split of the MSCOCO 2014 caption dataset [26].

5.2 Image Captioning Results

Here we incorporate the proposed CLTA and SAE regularizer into recent image-captioning models, including Up-Down [3] and AoANet [19], and report their performance on the MS-COCO dataset in multiple metrics (see Table 2). The table reports the original results of these methods from their publications in the top block, and the rows below show the relative improvement of our models compared to the baselines. The baseline models are trained in two settings: 1) Up-Down†, the model re-trained on the architecture of Anderson et al. [3], and 2) AoANet†, the Attention-on-Attention model re-trained as in Huang et al. [19]. Note that for both Up-Down and AoANet, we use the original source code to train them on our own hardware. We replace the "soft-attention" module in our Up-Down baseline by CLTA directly. The AoANet model is based on the powerful Transformer [36] architecture with multi-head dot attention in both the encoder and decoder. For AoANet, we replace the dot attention in the decoder of AoANet at each head by the CLTA, which results in multi-head CLTA. The SAE decoder is added as a regularizer on top of these models, as discussed in section 4.1. As discussed later in section 5.5, we train all our models with 128 dimensions for the CLTA and with the Denoising SAE decoder (initialized with h_last).

Models                       | cross-entropy loss               | CIDEr optimization
                             | B-1  B-4  M    R    C     S      | B-1  B-4  M    R    C     S
LSTM-A [44]                  | 75.4 35.2 26.9 55.8 108.8 20.0   | 78.6 35.5 27.3 56.8 118.3 20.8
RFNet [20]                   | 76.4 35.8 27.4 56.8 112.5 20.5   | 79.1 36.5 27.7 57.3 121.9 21.2
Up-Down [3]                  | 77.2 36.2 27.0 56.4 113.5 20.3   | 79.8 36.3 27.7 56.9 120.1 21.4
GCN-LSTM [43]                | 77.3 36.8 27.9 57.0 116.3 20.9   | 80.5 38.2 28.5 58.3 127.6 22.0
AoANet [19]                  | 77.4 37.2 28.4 57.5 119.8 21.3   | 80.2 38.9 29.2 58.8 129.8 22.4
Up-Down†                     | 75.9 36.0 27.3 56.1 113.3 20.1   | 79.2 36.3 27.7 57.3 120.8 21.2
Up-Down† + CLTA + SAE-Reg    | 76.7 37.1 28.1 57.1 116.2 21.0   | 80.2 37.4 28.4 58.1 127.4 22.0
Relative Improvement         | +0.8 +1.1 +0.8 +1.0 +2.9  +0.9   | +1.0 +1.1 +0.7 +0.8 +6.6  +0.8
AoANet*                      | 77.3 36.9 28.5 57.3 118.4 21.6   | 80.5 39.1 29.0 58.9 128.9 22.7
AoANet† + CLTA + SAE-Reg     | 78.1 37.9 28.4 57.5 119.9 21.7   | 80.8 39.3 29.1 59.1 130.1 22.9
Relative Improvement         | +0.8 +1.0 -0.1 +0.2 +1.5  +0.1   | +0.3 +0.2 +0.1 +0.2 +1.2  +0.2

Table 2: Image captioning performance on the "Karpathy" test split of the MSCOCO 2014 caption dataset [26], for other state-of-the-art methods and our models. Our Conditional Latent Topic Attention with the SAE regularizer significantly improves across all metrics, using both cross-entropy loss and CIDEr optimization. † denotes our trained models and * indicates results obtained from the publicly available pre-trained model.

We evaluate our models with cross-entropy loss training and also with CIDEr score optimization [33] after the cross-entropy pre-training stage (Table 2). For the cross-entropy setting, our combined approach consistently improves over the baseline performance across all metrics. It is clear from the results that the improvements in CIDEr and Bleu-4 are quite significant, which shows that our approach generates more human-like and accurate sentences. It is interesting to note that AoANet with CLTA and the SAE regularizer also gives consistent improvements despite having a strong Transformer language model. We show in section 5.4 the differences between our captions and the captions generated by Up-Down and AoANet. Our method is modular and improves on state-of-the-art models despite the architectural differences. Moreover, the SAE decoder is discarded after training, and hence it brings no additional computational load at test time, yet with a significant performance boost.
For CIDEr optimization, our models based on Up-Down and AoANet also show significant improvements in all metrics for our proposed approach.

Models                  | 50% data      | 75% data      | 100% data
                        | Bleu-4 CIDEr  | Bleu-4 CIDEr  | Bleu-4 CIDEr
Up-Down                 | 35.4   112.0  | 35.8   112.7  | 36.0   113.3
Up-Down+CLTA            | 36.3   113.7  | 36.3   114.5  | 36.5   115.0
Up-Down+CLTA+SAE-Reg    | 36.6   114.8  | 36.8   115.6  | 37.1   116.2
AoANet                  | 36.6   116.1  | 36.8   118.1  | 36.9   118.4
AoANet+CLTA             | 36.9   116.7  | 37.1   118.4  | 37.4   119.1
AoANet+CLTA+SAE-Reg     | 37.2   117.5  | 37.6   118.9  | 37.9   119.9

Table 3: Evaluation of our CLTA and SAE-Regularizer methods when training on a subset of the MSCOCO "Karpathy" training split.

5.3 Learning to Caption with Less Data

Table 3 evaluates the performance of our proposed models on subsets of the training data, where x% is the percentage of the total data used for training. All these subsets of the training samples are chosen randomly. Our CLTA module is trained with 128 dimensions for the latent topics, along with the Denoising SAE regularizer initialized with the last hidden state of the LSTM (Up-Down+CLTA+SAE-Reg). Regardless of the number of training samples, our average improvement with CLTA and the SAE regularizer is around 1% in Bleu-4 and 2.9% in CIDEr for the Up-Down model, and 0.8% in Bleu-4 and 1.2% in CIDEr for the AoANet model. The significant improvements in Bleu-4 and CIDEr scores with only 50% and 75% of the data, compared to the baseline, validate our proposed methods as a form of rich prior.

5.4 Qualitative Results

In Fig. 5, we show examples of images and captions generated by the baselines Up-Down and AoANet, along with our proposed methods, CLTA and the SAE regularizer. The baseline models produce repetitive words and errors while generating captions (in front of a mirror, a dog in the rear view mirror). Our models correct these mistakes by finding relevant words according to the context and putting them together in a human-like caption format (a rear view mirror shows a dog has the same meaning as a rear view mirror shows a dog in the rear view mirror, which is efficiently corrected by our models by bringing in the correct meaning). From all the examples shown, we can see that our model overcomes the limitation of overfitting in current methods by completing a caption with more semantic and syntactic generalization (e.g., different flavoured donuts and several trains on the tracks).

5.5 Ablation Study

Conditional Latent Topic Attention (CLTA). Table 4a depicts the results for the CLTA module that is described in section 3.2.
[Fig. 5 example captions]
- Up-Down: "A dog laying on the floor in front of a mirror." / +CLTA: "a black and white dog laying on the floor." / +CLTA+SAE-Reg: "a black and white dog laying on a wooden floor." / GT: "a black and white dog wearing a santa claus hat lying on the floor."
- Up-Down: "A box of doughnuts with donuts in it." / +CLTA: "a box filled with different types of donuts." / +CLTA+SAE-Reg: "a box filled with lots of different flavoured donuts." / GT: "a box that contains multiple kinds of doughnuts."
- Up-Down: "A yellow bike with a bicycle on the street." / +CLTA: "a yellow bike with a yellow umbrella attached to it." / +CLTA+SAE-Reg: "a bicycle with an umbrella attached to it." / GT: "a bicycle with an umbrella and a basket."
- AoANet: "a train station with a train station with trains." / +CLTA: "a train station with several trains parked in it." / +CLTA+SAE-Reg: "a train station with several trains on the tracks." / GT: "a train station with several trains in the station."
- AoANet: "a rear view mirror shows a dog in the rear view mirror." / +CLTA: "a rear view mirror with a dog hanging out the window." / +CLTA+SAE-Reg: "a rear view mirror showing a dog looking out the window." / GT: "dog looking out the window of a car in rearview mirror."
- AoANet: "a bench sitting under a tree in a park." / +CLTA: "a park bench sitting in the middle of a forest." / +CLTA+SAE-Reg: "a park bench sitting in the middle of a forest." / GT: "a park bench surrounded by a green forest of trees."
Fig. 5: Examples of generated captions from the baseline Up-Down and AoANet models, our proposed CLTA, and our final models with both CLTA and the SAE regularizer.

Soft-attention is used as a baseline and corresponds to the attention mechanism in [41], which is the main attention module in the Up-Down image captioning model by Anderson et al. [3]. We replace this attention with the CLTA and evaluate its performance for different numbers of latent dimensions, i.e. topics (C). The models trained with latent topic dimensions of 128, 256 and 512 all outperform the baseline significantly. The higher CIDEr and Bleu-4 scores for these latent topics show the model's capability to generate more descriptive and accurate human-like sentences. As we increase the dimension of latent topics from 128 to 512, we predict more relevant keywords, as the new topics learnt by the CLTA module with 512 dimensions are useful for encoding more information and hence generating meaningful captions.

Models   Baseline        CLTA
         Soft-Attention  128    256    512
Bleu-4   36.0            36.5   36.6   36.7
CIDEr    113.3           115.0  115.2  115.3

(a) Evaluation scores for the Up-Down model with soft-attention and ablations of our CLTA module.

Models     SAE-Decoder  h      Bleu-4  CIDEr
Baseline   No           -      36.0    113.3
CLTA-128   Vanilla      First  36.9    115.8
           Vanilla      Last   36.8    115.3
           Denoising    First  36.8    116.1
           Denoising    Last   37.1    116.2
CLTA-512   Denoising    Last   37.2    115.9

(b) Additional quantitative evaluation results for different settings of the SAE decoder when trained with the image captioning decoder. h denotes the hidden state.

Table 4: Ablative analysis of different settings for our (a) CLTA module and (b) SAE regularizer training.

Image Captioning Decoder with SAE Regularizer. Table 4b reports ablations for our full image captioning model (Up-Down with CLTA) and the SAE regularizer. As discussed in section 3.3, the SAE decoder (parameterized by $\theta_D$) is initialized with the hidden state of the image captioning decoder. During training, we test different settings of how the SAE decoder is trained with the image captioning decoder: (1) Vanilla vs. Denoising SAE and, (2) $h_{first}$ vs. $h_{last}$, i.e. whether the SAE decoder is initialized with the first or last hidden state of the LSTM decoder.
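The denoising variant's input corruption (dropping each word with probability 0.5, Sec. 3.3) is straightforward; a sketch, under the assumption that special tokens are always kept:

```python
import random

def corrupt_caption(tokens, drop_prob=0.5, keep=("<bos>", "<eos>")):
    """Denoising-SAE input corruption: drop each word independently with p=0.5."""
    kept = [t for t in tokens if t in keep or random.random() >= drop_prob]
    return kept if kept else tokens[:1]  # avoid an empty input sequence
```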
For all the settings, we fine-tune the parameters of the GRU $\theta_D$ when training with the image captioning model (the parameters are initialized with the weights of the pre-trained Vanilla or Denoising SAE decoder). The results in Table 4b are reported for different combinations of the settings described above, with the CLTA having 128 or 512 dimensions in the image captioning decoder. Adding the auxiliary branch of the SAE decoder significantly improves over the baseline model with CLTA; in the best setting, the Denoising SAE with $h_{last}$ improves the CIDEr and Bleu-4 scores by 1.2 and 0.6, respectively. As the SAE decoder is trained for the task of reconstruction, fine-tuning it for the task of captioning improves the image captioning decoder. Initializing the Vanilla SAE decoder with $h_{last}$ does not provide enough gradient during training and quickly converges to a lower error; hence this brings lower generalization capacity to the image captioning decoder. As $h_{first}$ is less representative of an entire caption compared to $h_{last}$, the Vanilla SAE with $h_{first}$ is more helpful for improving the captioning decoder training. On the other hand, the Denoising SAE, being robust to noisy summary vectors, provides enough training signal to improve the image captioning decoder when initialized with either $h_{first}$ or $h_{last}$, with slightly better Bleu-4 and CIDEr performance for $h_{last}$, as it forces $h_{last}$ to have an accurate lower-dimensional representation for the SAE and hence better generalization. It is clear from the results in Table 4b that the Denoising SAE with $h_{last}$ helps to generate accurate and generalizable captions. From our experiments, we found that CLTA with 128 topics and the Denoising SAE (with $h_{last}$) has better performance than even its counterpart with 512 topics. Hence, for all our experiments in section 5.2 and section 5.3, our topic dimension is 128, with the Denoising SAE initialized with $h_{last}$.

6 Conclusion

In this paper, we have introduced two novel methods for image captioning that exploit prior knowledge and hence help to improve state-of-the-art models even when the data is limited. The first method exploits the association between visual and textual features by learning latent topics via an LDA topic prior and obtains robust attention weights for each image region. The second one is an SAE regularizer that is pre-trained in an autoencoder framework to learn the structure of the captions and is plugged into the image captioning model to regulate its training. Using these modules, we obtain consistent improvements on two investigated models, the Bottom-Up Top-Down and the AoANet image captioning models, indicating the usefulness of our two modules as a strong prior. In future work, we plan to further investigate the potential use of label space structure learning for other challenging vision tasks with limited data and to improve generalization.

References
1. PyTorch. https://pytorch.org/
2. Anderson, P., Fernando, B., Johnson, M., Gould, S.: SPICE: Semantic propositional image caption evaluation. In: European Conference on Computer Vision. pp. 382-398. Springer (2016)
3. Anderson, P., He, X., Buehler, C., Teney, D., Johnson, M., Gould, S., Zhang, L.: Bottom-up and top-down attention for image captioning and visual question answering. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6077-6086 (2018)
4. Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014)
5. Bao, Y., Zhou, H., Huang, S., Li, L., Mou, L., Vechtomova, O., Dai, X., Chen, J.: Generating sentences from disentangled syntactic and semantic spaces. arXiv preprint arXiv:1907.05789 (2019)
6. Bengio, S., Vinyals, O., Jaitly, N., Shazeer, N.: Scheduled sampling for sequence prediction with recurrent neural networks. In: Advances in Neural Information Processing Systems. pp. 1171-1179 (2015)
7. Blei, D.M., Ng, A.Y., Jordan, M.I.: Latent Dirichlet allocation. Journal of Machine Learning Research 3(Jan), 993-1022 (2003)
8. Chang, J., Gerrish, S., Wang, C., Boyd-Graber, J.L., Blei, D.M.: Reading tea leaves: How humans interpret topic models. In: Advances in Neural Information Processing Systems. pp. 288-296 (2009)
9. Cho, K., van Merriënboer, B., Bahdanau, D., Bengio, Y.: On the properties of neural machine translation: Encoder-decoder approaches. In: Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation. pp. 103-111 (2014)
10. Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., Bengio, Y.: Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 (2014)
11. Chomsky, N.: Aspects of the Theory of Syntax, vol. 11. MIT Press (2014)
12. Chung, J., Gulcehre, C., Cho, K., Bengio, Y.: Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 (2014)
13. Dai, A.M., Le, Q.V.: Semi-supervised sequence learning. In: Advances in Neural Information Processing Systems. pp. 3079-3087 (2015)
14. Denkowski, M., Lavie, A.: Meteor universal: Language specific translation evaluation for any target language. In: Proceedings of the Ninth Workshop on Statistical Machine Translation. pp. 376-380 (2014)
15. Fang, H., Gupta, S., Iandola, F., Srivastava, R.K., Deng, L., Dollár, P., Gao, J., He, X., Mitchell, M., Platt, J.C., et al.: From captions to visual concepts and back. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1473-1482 (2015)
16. Gehring, J., Auli, M., Grangier, D., Yarats, D., Dauphin, Y.N.: Convolutional sequence to sequence learning. In: Proceedings of the 34th International Conference on Machine Learning, Volume 70. pp. 1243-1252. JMLR.org (2017)
17. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 770-778 (2016)
18. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation 9(8), 1735-1780 (1997)
19. Huang, L., Wang, W., Chen, J., Wei, X.Y.: Attention on attention for image captioning. In: The IEEE International Conference on Computer Vision (ICCV) (October 2019)
20. Jiang, W., Ma, L., Jiang, Y.G., Liu, W., Zhang, T.: Recurrent fusion network for image captioning. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 499-515 (2018)
21. Joon Oh, S., Benenson, R., Fritz, M., Schiele, B.: Person recognition in personal photo collections. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 3862-3870 (2015)
22. Karpathy, A., Fei-Fei, L.: Deep visual-semantic alignments for generating image descriptions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3128-3137 (2015)
23. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
24. Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., Chen, S., Kalantidis, Y., Li, L.J., Shamma, D.A., et al.: Visual Genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision 123(1), 32-73 (2017)
25. Lin, C.Y., Och, F.J.: Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. In: Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics. p. 605. Association for Computational Linguistics (2004)
26. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: European Conference on Computer Vision. pp. 740-755. Springer (2014)
27. Liu, H., Singh, P.: ConceptNet - a practical commonsense reasoning tool-kit. BT Technology Journal 22(4), 211-226 (2004)
28. Luong, M.T., Le, Q.V., Sutskever, I., Vinyals, O., Kaiser, L.: Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114 (2015)
29. Luong, M.T., Pham, H., Manning, C.D.: Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025 (2015)
30. Marcheggiani, D., Bastings, J., Titov, I.: Exploiting semantics in neural machine translation with graph convolutional networks. arXiv preprint arXiv:1804.08313 (2018)
31. Papineni, K., Roukos, S., Ward, T., Zhu, W.J.: BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. pp. 311-318. Association for Computational Linguistics (2002)
32. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: Towards real-time object detection with region proposal networks. In: Advances in Neural Information Processing Systems. pp. 91-99 (2015)
33. Rennie, S.J., Marcheret, E., Mroueh, Y., Ross, J., Goel, V.: Self-critical sequence training for image captioning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 7008-7024 (2017)
34. Sharma, P., Ding, N., Goodman, S., Soricut, R.: Conceptual Captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). pp. 2556-2565 (2018)
35. Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. In: Advances in Neural Information Processing Systems. pp. 3104-3112 (2014)
36. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Advances in Neural Information Processing Systems. pp. 5998-6008 (2017)
37. Vedantam, R., Lawrence Zitnick, C., Parikh, D.: CIDEr: Consensus-based image description evaluation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4566-4575 (2015)
38. Vincent, P., Larochelle, H., Bengio, Y., Manzagol, P.A.: Extracting and composing robust features with denoising autoencoders. In: Proceedings of the 25th International Conference on Machine Learning. pp. 1096-1103. ACM (2008)
39. Vinyals, O., Toshev, A., Bengio, S., Erhan, D.: Show and tell: A neural image caption generator. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3156-3164 (2015)
40. Wiseman, S., Rush, A.M.: Sequence-to-sequence learning as beam-search optimization. arXiv preprint arXiv:1606.02960 (2016)
41. Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R., Zemel, R., Bengio, Y.: Show, attend and tell: Neural image caption generation with visual attention. In: International Conference on Machine Learning. pp. 2048-2057 (2015)
42. Yang, X., Tang, K., Zhang, H., Cai, J.: Auto-encoding scene graphs for image captioning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 10685-10694 (2019)
43. Yao, T., Pan, Y., Li, Y., Mei, T.: Exploring visual relationship for image captioning. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 684-699 (2018)
44. Yao, T., Pan, Y., Li, Y., Qiu, Z., Mei, T.: Boosting image captioning with attributes. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 4894-4902 (2017)
45. Zaremba, W., Sutskever, I., Vinyals, O.: Recurrent neural network regularization. arXiv preprint arXiv:1409.2329 (2014)
46. Zhou, Y., Sun, Y., Honavar, V.: Improving image captioning by leveraging knowledge graphs. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 283-293. IEEE (2019)<|im_end|> <|im_start|>assistant ### Review Title Injecting Prior Knowledge into Image Caption Generation ### Review Text 1. [Summary] In 2-3 sentences, describe the key ideas, experiments, and their significance. The paper tries to mitigate overfitting and the generation of overly simple captions by introducing prior knowledge from the dataset during training. To this end, the authors propose to add visual-semantic relational prior knowledge by defining a series of latent topics, and semantic prior knowledge by training a seq2seq module on the text. While the former is introduced into the training procedure as a self-attention over image region features, the latter is utilized to remove visual bias from semantic structures. Apart from improving the results of state-of-the-art approaches, they demonstrate that with their approach, image captioning models can rely on less data when training. 2. [Strengths] What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable. - The paper is easy to read. Ideas are easy to follow. - It is very well motivated. - The benefits of both modules (CLTA and SAE Regularizer) are clearly demonstrated in the experiments. - The implementation is explained in detail. - The benefit of adding prior knowledge (visual and semantic) is shown. - Additionally, the authors demonstrate the relevance of prior knowledge, as it allows models to be trained with less data. 3. [Weaknesses] What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak. - Although the improvement exists, in some situations it is marginal. 4. [Overall rating] Paper rating. 9
[Justification of rating] Please explain how the strengths and weaknesses aforementioned were weighed in for the rating.
Good paper. Well written, well motivated, simple method and positive results. On top of that, very much in line with the workshop.
6. [Detailed comments] Additional comments regarding the paper (e.g. typos or other possible improvements you would like to see for the camera-ready version of the paper, if any.)
### Review Rating 9: Top 15% of accepted papers, strong accept ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|>
32B5lOqZUiO
ICLR.cc/2021/Conference
2021
Pareto-Frontier-aware Neural Architecture Search
["Yong Guo", "Yaofo Chen", "Yin Zheng", "Peilin Zhao", "Jian Chen", "Junzhou Huang", "Mingkui Tan"]
Designing feasible and effective architectures is essential for deploying deep models to real-world scenarios. In practice, one has to consider multiple objectives (e.g., model performance and computational cost) and diverse constraints incurred by different computation resources. To address this, most methods seek to find promising architectures via optimizing a well pre-defined utility function. However, it is often non-trivial to design an ideal function that could well trade-off different objectives. More critically, in many real scenarios, even for the same platform, we may have different applications with various latency budgets. To find promising architectures under different budgets, existing methods may have to perform an independent search for each budget, which is very inefficient and unnecessary. Nevertheless, it would be fantastic if we could produce multiple promising architectures to fulfill each budget in the same search process. In this paper, we propose a Pareto-Frontier-aware Neural Architecture Search (PFNAS) method which seeks to learn the Pareto frontier (i.e., the set of Pareto optimal architectures) w.r.t. multiple objectives. Here, we formulate the Pareto frontier learning problem as a Markov decision process (MDP). Relying on the MDP, we transform and absorb the objectives other than model performance into the constraints. To learn the whole Pareto frontier, we propose to find a set of Pareto optimal architectures which are uniformly distributed over the range of budgets to form a frontier. Based on the learned frontier, we are able to easily find multiple promising architectures to fulfill all considered constraints in the same search process. Extensive experiments on three hardware platforms (i.e., mobile, CPU, and GPU) show that the architectures searched by our PFNAS outperform the ones obtained by existing methods under different budgets.
["Neural Architecture Search", "Pareto Frontier Learning", "Resource Constraint"]
ABSTRACT
Designing feasible and effective architectures is essential for deploying deep models to real-world scenarios. In practice, one has to consider multiple objectives (e.g., model performance and computational cost) and diverse constraints incurred by different computation resources. To address this, most methods seek to find promising architectures via optimizing a well pre-defined utility function. However, it is often non-trivial to design an ideal function that could well trade-off different objectives. More critically, in many real scenarios, even for the same platform, we may have different applications with various latency budgets. It would be fantastic if we could produce multiple promising architectures to fulfill each budget in the same search process. However, existing methods may have to perform an independent search for each budget, which is very inefficient and unnecessary. In this paper, we propose a Pareto-Frontier-aware Neural Architecture Search (PFNAS) method which seeks to learn the Pareto frontier (i.e., the set of Pareto optimal architectures) w.r.t. multiple objectives. Here, we formulate the Pareto frontier learning problem as a Markov decision process (MDP). Relying on the MDP, we transform and absorb the objectives other than model performance into the constraints. To learn the whole Pareto frontier, we propose to find a set of Pareto optimal architectures which are uniformly distributed over the range of budgets to form a frontier. Based on the learned frontier, we are able to easily find multiple promising architectures to fulfill all considered constraints in the same search process. Extensive experiments on three hardware platforms (i.e., mobile, CPU, and GPU) show that the architectures searched by our PFNAS outperform the ones obtained by existing methods under different budgets.

1 INTRODUCTION
Deep neural networks (DNNs) (LeCun et al., 1989) have been the workhorse of many challenging tasks, including image classification (Krizhevsky et al., 2012; Srivastava et al., 2015) and semantic segmentation (Long et al., 2015; Noh et al., 2015). However, designing effective architectures is often labor-intensive and relies heavily on human expertise. To alleviate the computation burden of architecture design, neural architecture search (NAS) methods have been proposed to automatically design architectures (Zoph & Le, 2017; Liu et al., 2019). Existing studies show that the automatically discovered architectures often outperform the manually designed architectures in both image classification and language modeling tasks (Pham et al., 2018; Liu et al., 2019).

However, deep models often contain a large number of parameters and thus come with a very high computational cost. As a result, it is hard to deploy deep models to hardware devices or application scenarios with limited computation resources. To obtain promising architectures that fulfill the computation constraint, we have to consider multiple kinds of objectives (e.g., accuracy and computational cost). Thus, we seek to solve a Pareto optimization problem w.r.t. multiple objectives to perform architecture search (Tan et al., 2019). To solve this problem, one can design a utility function by computing a weighted sum/product of different objectives to find the desired architectures (Tan et al., 2019; Stamoulis et al., 2019). However, it is hard to design a utility function that could trade-off different kinds of objectives (Miettinen, 2012). As a result, the searched architectures do not necessarily satisfy the constraints (See results in Figure 3(b)). Thus, how to find feasible and effective architectures that satisfy the constraint becomes an important problem.
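To make the difficulty concrete, a common instance of such a utility function is the weighted-product reward popularized by Tan et al. (2019). The sketch below is illustrative only (the function name and the exponent value are our own assumptions for this example); it shows how a single hand-picked exponent is asked to trade accuracy against latency across all budgets at once:

```python
def weighted_product_utility(acc, latency, budget, w=-0.07):
    # Scalarized objective in the spirit of Tan et al. (2019):
    # utility = ACC * (LAT / T)^w.  A single exponent w must balance
    # accuracy against latency for every budget T, so a model that
    # violates c(alpha) <= T can still score well if its accuracy is
    # high enough.
    return acc * (latency / budget) ** w

# e.g., a model 20 ms over a 110 ms budget loses only ~1.2% utility:
# weighted_product_utility(0.78, 130.0, 110.0) ~= 0.771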
Figure 1: Illustration of the search strategy and applications of PFNAS. (a) Comparisons of search strategies between NAS and PFNAS (mobile latency vs. validation accuracy): instead of finding a single Pareto optimal architecture, PFNAS seeks to learn the Pareto frontier. (b) Model deployment under diverse budget constraints: PFNAS takes an arbitrary budget constraint as input and outputs an architecture that satisfies the budget constraint.

More critically, even for the same hardware platform, we may have different applications which result in diverse deployment budgets/requirements in terms of a specific objective. For example, a company may develop/maintain multiple applications on the same hardware device, and each of them has a specific requirement of latency. To handle multiple application scenarios, we need to find a set of Pareto optimal architectures (i.e., the Pareto frontier) over multiple objectives. To this end, existing NAS methods may have to perform an independent search for each scenario (Tan et al., 2019), which is very inefficient yet unnecessary. To address this issue, we propose to directly learn the Pareto frontier over multiple objectives rather than find a single optimal architecture (See Figure 1(a)). Based on the learned Pareto frontier, one can easily find the desired architecture to fulfill an arbitrary budget. However, it is still unknown how to learn the Pareto frontier to produce effective architectures for multiple scenarios with diverse budgets.

In this paper, we propose a Pareto-Frontier-aware Neural Architecture Search (PFNAS) method to learn the Pareto frontier over two kinds of objectives. As shown in Figure 1(a), unlike existing methods that find a single optimal architecture, we seek to learn the Pareto frontier (i.e., improving the blue curve to the red curve). To this end, we formulate the Pareto frontier learning problem as a Markov decision process (MDP). Based on the MDP, we transform and absorb the objectives other than model performance into constraints and make decisions to find promising architectures satisfying them. To learn the whole Pareto frontier, we uniformly sample budgets from their distribution and find a set of Pareto optimal architectures satisfying these budgets to form a frontier. Then, we exploit policy gradient to maximize the expected reward over different budgets. Based on the learned frontier, we may easily obtain the desired architectures given arbitrary budgets. To provide an accurate reward, we propose an architecture evaluator to learn a Pareto dominance rule, which judges whether an architecture is better than another w.r.t. multiple objectives. By taking such a rule as the reward, we are able to iteratively find better frontiers during training. More critically, since our PFNAS exploits the shared knowledge across the search processes with multiple budgets, we find better architectures than those searched by an independent search for each budget (See results in Table 1).
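For intuition about what "learning the frontier" targets, the frontier itself can be extracted from any set of evaluated architectures by non-dominated filtering over (cost, accuracy) pairs. The small helper below is a generic sketch (the names are ours, not from the paper):

```python
def pareto_frontier(points):
    # Non-dominated filtering over (cost, accuracy) pairs: keep a point
    # only if no other point is at most as costly and strictly more
    # accurate.  Sorting by (cost, -accuracy) lets a single sweep suffice.
    frontier, best_acc = [], float("-inf")
    for cost, acc in sorted(points, key=lambda p: (p[0], -p[1])):
        if acc > best_acc:
            frontier.append((cost, acc))
            best_acc = acc
    return frontier

print(pareto_frontier([(90, 72.0), (80, 77.5), (110, 78.4), (100, 76.0)]))
# [(80, 77.5), (110, 78.4)]
```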
We summarize the contributions of our paper as follows.
- We propose a Pareto-Frontier-aware Neural Architecture Search (PFNAS) method that simultaneously finds multiple Pareto optimal architectures (i.e., the Pareto frontier) over the whole range of computational cost (e.g., latency). Based on the learned frontier, PFNAS takes an arbitrary latency as the budget and automatically finds feasible architectures.
- We propose a Pareto dominance rule to judge whether an architecture is better than another under diverse computation budgets. By taking such a rule as the reward, our PFNAS is able to iteratively find better frontiers to approach the ground-truth Pareto frontier.
- Extensive experiments on three hardware platforms show that the proposed method is able to find architectures that not only satisfy diverse computation budgets but also outperform the architectures searched by existing methods.

2 RELATED WORK
Neural Architecture Search. Neural architecture search (NAS) has been proposed to automatically design effective architectures. Zoph & Le (2017) use reinforcement learning to discover the optimal configuration of each layer. Real et al. (2019) employ evolution algorithms and propose a new regularization method. Liu et al. (2019) propose DARTS, a differentiable NAS method that relaxes the search space to be continuous. However, these methods only search for architectures with high accuracy but ignore the resource constraints of real-world applications.

Architecture Design with Resource Constraints. There is a growing interest in automatically designing architectures under a resource constraint. OFA (Cai et al., 2020) trains a powerful super network, from which we can directly obtain a specialized sub-network without additional training. Recently, PONAS (Huang & Chu, 2020) has been proposed to build an accuracy table to find architectures under a single constraint. However, given various resource budgets, these methods need to repeat the architecture search process for each budget. By contrast, our PFNAS only needs to search once to produce multiple architectures that satisfy diverse resource budgets simultaneously.

Pareto Frontier Learning. Pareto frontier learning aims to find a set of Pareto optimal solutions by solving a multi-objective optimization problem. Most methods convert the problem into a single-objective problem by constructing a weighted sum/product utility function (Wierzbicki, 1982; Miettinen, 2012). To simultaneously find multiple Pareto optimal solutions (i.e., the Pareto frontier), many methods exploit evolutionary algorithms (Deb et al., 2002; Kim et al., 2004) to perform a parallel search. Recently, some NAS methods aim to find a single Pareto optimal architecture by making a trade-off between accuracy and computational cost (Cheng et al., 2018; Dong et al., 2018). However, it is still unknown how to learn the Pareto frontier in NAS.

3 PROPOSED METHOD
3.1 PROBLEM DEFINITION
Notations. Let $\mathcal{T}$ be the distribution of the discrete random variable $T \sim \mathcal{T}$, where $T$ denotes the upper bound of a budget constraint, such as latency, the number of multiply-adds (MAdds), or memory consumption. Given an architecture search space and an architecture $\alpha$, we use $c(\alpha)$ and $Acc(\alpha)$ to measure the cost and the validation accuracy of $\alpha$, respectively.
We compute the reward of $\alpha$ using a function $R(\alpha; w)$ parameterized by $w$. Without loss of generality, we use $\alpha^{(i)}_T$ to denote the $i$-th searched architecture under the budget constraint $c(\alpha^{(i)}_T) \le T$. We use $\mathbb{1}[A]$ to denote an indicator function, where $\mathbb{1}[A] = 1$ if $A$ is true and $\mathbb{1}[A] = 0$ otherwise.

In this paper, we focus on the neural architecture search problem with multiple objectives (e.g., model performance and latency) and seek to find promising architectures under arbitrary constraints in the same search process. This problem, however, is non-trivial since it is hard to design a utility function that well trades off the multiple kinds of objectives (Miettinen, 2012). Moreover, in real-world applications where we should consider diverse application scenarios with different budget constraints, performing an independent search for each scenario (Tan et al., 2019) would be very inefficient yet unnecessary. To address these issues, we propose to learn the Pareto frontier (i.e., the set of Pareto optimal architectures) w.r.t. different objectives instead of finding a single optimal architecture. Based on the learned frontier, it would be easy to select an architecture that fulfills arbitrary latency budgets.

3.2 PARETO-FRONTIER-AWARE NEURAL ARCHITECTURE SEARCH
In this paper, we propose a Pareto-Frontier-aware Neural Architecture Search (PFNAS) method that simultaneously finds multiple Pareto optimal architectures (i.e., the Pareto frontier). To this end, we formulate the optimization problem as a Markov decision process (MDP). Specifically, we transform and absorb the objectives other than model performance into a constraint and make decisions to find promising architectures that fulfill this budget. To cover the whole range of budgets, we uniformly sample different budgets and maximize the expected reward over all the decisions conditioned on them. Formally, a typical MDP is defined by a tuple $(\mathcal{S}, \mathcal{A}, P, R)$, where $\mathcal{S}$ is a finite set of states, $\mathcal{A}$ is a finite set of actions, $P: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \mathbb{R}$ is the state transition distribution, and $R: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is the reward function. Here, we define the budget as a state and the decision to find an architecture satisfying any budget as an action. To find Pareto optimal (also called non-dominated) solutions, we develop a Pareto dominance rule to compute the reward (See Section 3.3). Here, we exploit the policy gradient method (Williams, 1992) to solve the MDP problem.

Figure 2: The overview of the proposed PFNAS: a Pareto-frontier-aware controller samples architectures $\alpha_T \sim \pi(\cdot|T; \theta)$ under the input constraint $c(\alpha) \le T$, $T \sim \mathcal{T}$, and an architecture evaluator computes $R(\alpha_T|T; w)$ to update the controller; an LSTM selects the depth, width, and kernel size of each block. Our PFNAS takes a budget constraint as input and produces promising architectures that satisfy the budget constraints. The orange and green boxes in (c) denote the embeddings of the architecture $\alpha_T$ and the budget w.r.t. $T$.

As shown in Figure 2, to find promising architectures under diverse budget constraints, we develop a conditional model that takes a budget $T$ as input and outputs an architecture $\alpha_T$ satisfying the budget constraint $c(\alpha_T) \le T$. Formally, the PFNAS model can be represented by $\alpha_T = f(T; \theta)$, where $\alpha_T$ denotes the architecture under this budget constraint and $\theta$ denotes the learnable parameters. Based on the searched architecture $\alpha_T$, we further feed it together with the considered budget $T$ into an architecture evaluator to compute the reward.
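As a rough illustration of such a budget-conditioned model, the sketch below replaces the paper's LSTM controller of Figure 2 with a simple MLP over a budget embedding; the class name, the sizes, and the per-block categorical heads are our own simplifying assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class BudgetConditionedController(nn.Module):
    # A simplified stand-in for the LSTM controller in Figure 2: a
    # learned budget embedding conditions per-block categorical choices
    # of depth, width, and kernel size.
    def __init__(self, num_budgets=5, num_decisions=20, num_choices=3, hidden=64):
        super().__init__()
        self.budget_embed = nn.Embedding(num_budgets, hidden)
        self.body = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, num_choices) for _ in range(num_decisions)]
        )

    def sample(self, budget_idx):
        # Draw one architecture alpha_T ~ pi(.|T; theta) and return the
        # choices together with log pi(alpha_T|T) and the policy entropy.
        h = self.body(self.budget_embed(budget_idx))
        dists = [torch.distributions.Categorical(logits=head(h)) for head in self.heads]
        choices = [dist.sample() for dist in dists]
        log_prob = torch.stack([d.log_prob(c) for d, c in zip(dists, choices)]).sum()
        entropy = torch.stack([d.entropy() for d in dists]).sum()
        return choices, log_prob, entropy

controller = BudgetConditionedController()
arch, log_prob, entropy = controller.sample(torch.tensor(2))  # budget index 2
```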
To illustrate our method, we first revisit the NAS problem with a single budget and then generalize it to the problem with diverse budgets. Note that it is non-trivial to directly find the optimal architecture (Zoph & Le, 2017). By contrast, one can first learn a policy $\pi(\cdot; \theta)$ and then conduct sampling from it to find promising architectures, i.e., $\alpha \sim \pi(\cdot; \theta)$. Given a budget $T$, the optimization problem becomes

$\max_{\theta} \; \mathbb{E}_{\alpha \sim \pi(\cdot; \theta)}\left[ R(\alpha; w) \right], \quad \text{s.t.} \;\; c(\alpha) \le T. \qquad (1)$

Here, $\pi(\cdot; \theta)$ is the learned policy parameterized by $\theta$, and $R(\alpha; w)$ is the reward function parameterized by $w$ that measures the joint performance of both the accuracy and the latency of $\alpha$. We use $\mathbb{E}_{\alpha \sim \pi(\cdot; \theta)}[\cdot]$ to denote the expectation over the searched architectures.

However, Problem (1) only focuses on one specific budget constraint. In fact, we seek to learn the Pareto frontier over the whole range of budgets (e.g., latency). However, this problem is hard to solve since there exist infinitely many Pareto optimal architectures. To address this, one can learn an approximated Pareto frontier by finding a set of uniformly distributed Pareto optimal points (Grosan & Abraham, 2008). In this paper, we uniformly sample latencies as the budgets and maximize the expected reward over them. Thus, the optimization problem can be formulated as

$\max_{\theta} \; \mathbb{E}_{T \sim \mathcal{T}} \Big[ \mathbb{E}_{\alpha_T \sim \pi(\cdot|T; \theta)}\left[ R(\alpha_T|T; w) \right] \Big], \quad \text{s.t.} \;\; c(\alpha_T) \le T, \;\; T \sim \mathcal{T}, \qquad (2)$

where $\mathbb{E}_{T \sim \mathcal{T}}[\cdot]$ denotes the expectation over the budget. Unlike Eqn. (1), $\pi(\cdot|T; \theta)$ is the learned policy conditioned on the budget $T$. To find the architectures satisfying the budget constraint, we take $T$ into account to compute the reward $R(\alpha|T; w)$. We will illustrate this in Section 3.3.

From Eqn. (2), we aim to improve the overall ability to find promising architectures under an arbitrary latency budget, i.e., to learn the Pareto frontier. It is worth noting that simultaneously finding multiple Pareto optimal architectures would benefit the search process for each scenario with a specific latency constraint due to the shared knowledge across them (See results in Table 1). To be specific, if we find a good architecture w.r.t. one budget, we can slightly change the width or depth of some modules to obtain promising architectures that satisfy the adjacent budgets.
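In practice, the nested expectation in Eqn. (2) is estimated by sampling. The rough sketch below illustrates this (the function and argument names are ours, and the controller interface follows the earlier sketch):

```python
import random
import torch

def estimate_objective(controller, reward_fn, num_budgets=5, K=5, N=8):
    # Monte-Carlo estimate of Eqn. (2): average the learned reward
    # R(alpha_T|T; w) over K sampled budgets and, for each budget,
    # N architectures drawn from the conditional policy pi(.|T; theta).
    total = 0.0
    for _ in range(K):
        T = torch.tensor(random.randrange(num_budgets))  # T ~ Tau (uniform here)
        for _ in range(N):
            arch, _, _ = controller.sample(T)            # alpha_T ~ pi(.|T; theta)
            total += float(reward_fn(arch, T))
    return total / (K * N)
```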
3.3 REWARD DESIGN FOR PARETO FRONTIER LEARNING
In this section, we propose a Pareto dominance reward to train PFNAS. Specifically, to obtain the Pareto optimal architectures, we have to find the Pareto improvement direction to iteratively find better architectures. Here, a Pareto improvement is a situation where some objectives increase and no objectives decrease. This situation is also called Pareto dominance, where the better solution dominates the worse one. In this sense, an architecture is defined to be Pareto optimal when it is not dominated by any architecture in the search space. Thus, since the Pareto frontier is the set of Pareto optimal architectures, the key challenge of Pareto frontier learning becomes how to find Pareto optimal architectures by judging whether one architecture dominates another.

To address this, we define a Pareto dominance rule for the NAS problem. In practice, the quality of an architecture should depend on both the satisfaction of the budget and the accuracy. Specifically, given a specific budget $T$, a good architecture should be one with a cost lower than or equal to $T$ and with high accuracy. Motivated by this, we devise a rule to compare two architectures and judge which one is dominative. Given any two architectures $\alpha_1, \alpha_2$: 1) if $c(\alpha_1) \le T$ and $c(\alpha_2) \le T$, the architecture with higher accuracy is dominative; 2) if at least one architecture has a latency higher than $T$, the architecture with lower latency is dominative. Formally, we use a function $d(\cdot)$ to represent the above rule:

$d(\alpha_1, \alpha_2; T) = \begin{cases} \mathbb{1}[Acc(\alpha_1) \ge Acc(\alpha_2)], & \text{if } c(\alpha_1) \le T \text{ and } c(\alpha_2) \le T, \\ \mathbb{1}[c(\alpha_1) \le c(\alpha_2)], & \text{otherwise.} \end{cases} \qquad (3)$

Here, $d(\alpha_1, \alpha_2; T) = 1$ if $\alpha_1$ dominates $\alpha_2$ and $d(\alpha_1, \alpha_2; T) = 0$ otherwise. Similar rules are also found in conventional constrained optimization problems (Deb et al., 2002). Note that Eqn. (3) can be considered a hard threshold function which helps to guide the controller to find an architecture satisfying the budget constraints (See results in Section 4).

From Eqn. (3), the Pareto dominance rule requires architecture pairs to find the Pareto optimal architectures. However, the controller model only finds one architecture at a time, and thus Eqn. (3) cannot be directly used to compute the reward. To address this issue, we propose to train an architecture evaluator $R(\cdot|T; w)$ to learn the proposed Pareto dominance rule and output a scalar as the reward for the scenario with $c(\alpha) \le T$. Since the proposed rule is built on architecture comparisons, we train the architecture evaluator using a pairwise ranking loss, which has been widely used in ranking problems (Freund et al., 2003; Burges et al., 2005; Chen et al., 2009). Given $M$ architectures, there are $M(M-1)$ architecture pairs in total after omitting the pairs with the same architecture. Assuming that there are $K$ budgets, the pairwise ranking loss becomes

$\mathcal{L}(w) = \frac{1}{K M (M-1)} \sum_{k=1}^{K} \sum_{i=1}^{M} \sum_{j=1, j \neq i}^{M} \ell\big( R(\alpha_i|T_k; w) - R(\alpha_j|T_k; w) \big) \, d(\alpha_i, \alpha_j; T_k), \qquad (4)$

where $\ell(z) = \max(0, 1-z)$ is the hinge loss function. Due to the page limit, we put more discussions on Eqn. (4) in the supplementary.
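The rule of Eqn. (3) and the loss of Eqn. (4) translate almost line-for-line into code. The sketch below is a plain-Python rendering under our own data layout (a small container for the triplets $(\alpha, c(\alpha), Acc(\alpha))$); it is not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class ArchStats:
    arch: tuple    # encoded architecture alpha
    cost: float    # measured cost c(alpha), e.g. latency in ms
    acc: float     # validation accuracy Acc(alpha)

def d(a1, a2, T):
    # Pareto dominance rule of Eqn. (3).
    if a1.cost <= T and a2.cost <= T:
        return 1.0 if a1.acc >= a2.acc else 0.0   # both feasible: higher accuracy wins
    return 1.0 if a1.cost <= a2.cost else 0.0     # otherwise: lower cost wins

def ranking_loss(R, archs, budgets):
    # Pairwise hinge ranking loss of Eqn. (4): the evaluator R should
    # score the dominating architecture of each ordered pair higher by
    # a margin of at least 1.
    total = 0.0
    M, K = len(archs), len(budgets)
    for T in budgets:
        for i, ai in enumerate(archs):
            for j, aj in enumerate(archs):
                if i != j:
                    z = R(ai, T) - R(aj, T)
                    total += max(0.0, 1.0 - z) * d(ai, aj, T)
    return total / (K * M * (M - 1))
```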
Data preparation. We first train a super network with the progressive shrinking strategy. Then, we randomly sample $M{=}16{,}000$ architectures from the architecture space and measure their accuracy $Acc(\alpha)$ on 10,000 validation images sampled from the training set of ImageNet (Deng et al., 2009) using the super network. We also measure their latency $c(\alpha)$ on hardware devices. We record the results using a set of triplets $\{(\alpha_i, c(\alpha_i), Acc(\alpha_i))\}_{i=1}^{M}$.

3.4 TRAINING AND INFERENCE METHODS
As shown in Algorithm 1, we first train the super network using the progressive shrinking technique (Cai et al., 2020). Then, we sequentially train the architecture evaluator and the controller. We detail the training methods of the architecture evaluator and the controller below.

Learning the architecture evaluator $R(\cdot|T; w)$. To learn the Pareto frontier, we propose an architecture evaluator to compute the reward based on the proposed Pareto dominance rule. According to Eqn. (4), based on the collected training data and $\{T_k\}_{k=1}^{K}$, the gradient w.r.t. $w$ becomes

$\nabla_w \mathcal{L}(w) = \frac{1}{K M (M-1)} \sum_{k=1}^{K} \sum_{i=1}^{M} \sum_{j=1, j \neq i}^{M} \nabla_w \ell\big( R(\alpha_i|T_k; w) - R(\alpha_j|T_k; w) \big) \, d(\alpha_i, \alpha_j; T_k). \qquad (5)$

Algorithm 1 Training method of PFNAS.
Require: Latency distribution $\mathcal{T}$, learning rate $\eta$, parameters $M$, $N$ and $K$.
1: Initialize model parameters $\theta$ for the controller and $w$ for the architecture evaluator.
2: // Collect the architectures with the validation accuracy and the latency
3: Train the super network on the training set with the progressive shrinking strategy (Cai et al., 2020).
4: Randomly sample a set of architectures $\{\alpha_i\}_{i=1}^{M}$ from the search space.
5: Measure the cost and the accuracy on the validation data set to construct the set $\{(\alpha_i, c(\alpha_i), Acc(\alpha_i))\}_{i=1}^{M}$.
6: // Train the architecture evaluator
7: while not converged do
8:   Sample a set of latencies $\{T_k\}_{k=1}^{K}$ from $\mathcal{T}$.
9:   Update the architecture evaluator parameters $w$ by descending the gradient:
10:   $w \leftarrow w - \eta \frac{1}{K M (M-1)} \sum_{k=1}^{K} \sum_{i=1}^{M} \sum_{j=1, j \neq i}^{M} \nabla_w \ell( R(\alpha_i|T_k; w) - R(\alpha_j|T_k; w) ) \, d(\alpha_i, \alpha_j; T_k)$.
11: end while
12: // Train the controller
13: while not converged do
14:   Sample a set of latencies $\{T_k\}_{k=1}^{K}$ from $\mathcal{T}$.
15:   Obtain $\{\alpha^{(i)}_{T_k}\}_{i=1}^{N}$ according to the policy $\pi(\cdot|T_k; \theta)$ for each $k \in \{1, \dots, K\}$.
16:   Update the controller parameters $\theta$ via policy gradient by ascending the gradient:
17:   $\theta \leftarrow \theta + \eta \frac{1}{KN} \sum_{k=1}^{K} \sum_{i=1}^{N} \big[ \nabla_\theta \log \pi(\alpha^{(i)}_{T_k}|T_k; \theta) R(\alpha^{(i)}_{T_k}|T_k; w) + \lambda \nabla_\theta H(\pi(\cdot|T_k; \theta)) \big]$.
18: end while

Learning the controller $f(T; \theta)$. The controller model $f(T; \theta)$ takes any given latency budget as input and outputs promising architectures that fulfill the budget. We learn the controller with policy gradient and use an entropy regularization term to encourage exploration. The objective becomes

$J(\theta) = \mathbb{E}_{T \sim \mathcal{T}} \Big[ \mathbb{E}_{\alpha_T \sim \pi(\cdot|T; \theta)}\left[ R(\alpha_T|T; w) \right] + \lambda H(\pi(\cdot|T; \theta)) \Big], \qquad (6)$

where $H(\cdot)$ evaluates the entropy of the policy and $\lambda$ is a hyper-parameter. In each iteration, we first sample $\{T_k\}_{k=1}^{K}$ from the distribution $\mathcal{T}$, and then sample $N$ architectures $\{\alpha^{(i)}_{T_k}\}_{i=1}^{N}$ for each budget $T_k$. Thus, the gradient of Eqn. (6) w.r.t. $\theta$ becomes

$\nabla_\theta J(\theta) \approx \frac{1}{KN} \sum_{k=1}^{K} \sum_{i=1}^{N} \Big[ \nabla_\theta \log \pi(\alpha^{(i)}_{T_k}|T_k; \theta) R(\alpha^{(i)}_{T_k}|T_k; w) + \lambda \nabla_\theta H(\pi(\cdot|T_k; \theta)) \Big]. \qquad (7)$

We put the derivations of Eqn. (7) in the supplementary.

Inferring architectures under diverse budgets. Based on the learned policy $\pi(\cdot|T; \theta)$, we conduct sampling to find promising architectures. Specifically, given an arbitrary latency $T$, we first sample several candidate architectures from $\pi(\cdot|T; \theta)$ and then select the architecture with the highest validation accuracy. Note that we train PFNAS using a finite number of discrete latencies. During inference, to enable $T$ to be any value, we perform a linear interpolation between the embeddings of two adjacent discrete latencies to represent the considered latency (See more details in the supplementary).
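A compact PyTorch-style rendering of the controller update in Eqn. (7) (line 17 of Algorithm 1) might look as follows; it assumes the `controller.sample` interface from the earlier sketch and a frozen evaluator `R`, and is meant as a sketch rather than the authors' code:

```python
import random
import torch

def controller_step(controller, optimizer, R, num_budgets=5, K=5, N=8, lam=1e-2):
    # One policy-gradient step for Eqn. (7): sample K budgets, draw N
    # architectures per budget, score them with the learned evaluator,
    # and ascend the entropy-regularized expected reward.
    loss = torch.zeros(())
    for _ in range(K):
        T = torch.tensor(random.randrange(num_budgets))
        for _ in range(N):
            arch, log_prob, entropy = controller.sample(T)
            with torch.no_grad():
                reward = torch.as_tensor(R(arch, T), dtype=torch.float32)
            # minimizing the negative objective ascends Eqn. (7)
            loss = loss - (log_prob * reward + lam * entropy)
    loss = loss / (K * N)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```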
Training cost of PFNAS. The training of the super network takes about 1.5 days on 32 GPUs. The data preparation process takes about 40 GPU hours. We train the architecture evaluator for about 15 minutes. The training of the controller takes about 2 GPU hours, which is more efficient than the search process of most methods. Given $K$ budgets, PFNAS only searches once and thus takes approximately $1/K$ of the total cost of training a controller model for each budget.

4 EXPERIMENTS
We apply PFNAS to search for architectures under diverse latency budgets on three kinds of hardware platforms, including mobile devices (Google Pixel1), CPU devices (Intel Core i5-7400), and GPU devices (NVIDIA TITAN X). For convenience, we use "Architecture-T" to represent the searched architecture that satisfies the latency budget w.r.t. T, e.g., PFNAS-80. Due to the page limit, we put the visualizations of the searched architectures in the supplementary.

Table 1: Comparisons with state-of-the-art architectures on the Google Pixel1 phone. † denotes the best architecture reported in the original paper. "-" denotes results that are not reported. All models are evaluated on 224x224 images of ImageNet.

Architecture | Latency (ms) | Top-1 Acc. (%) | Top-5 Acc. (%) | #Params (M) | #MAdds (M)
MobileNetV3-Large (0.75x) (Howard et al., 2019) | 93.0 | 73.3 | - | 4.0 | 155
MobileNetV2 (1.0) (Sandler et al., 2018) | 90.3 | 72.0 | - | 3.4 | 300
OFA-S-80 | 76.8 | 76.8 | 93.3 | 6.1 | 350
OFA-MO-80 | 77.6 | 76.6 | 93.2 | 7.9 | 340
PFNAS-80 (Ours) | 79.9 | 77.5 | 93.7 | 7.3 | 349
ProxylessNAS-Mobile (Cai et al., 2019) | 97.7 | 74.6 | - | 4.1 | 319
MobileNetV3-Large (1.0x) (Howard et al., 2019) | 107.7 | 75.2 | - | 5.4 | 219
OFA† (Cai et al., 2020) | 109.3 | 78.1 | 94.0 | 8.2 | 354
OFA-S-110 | 109.2 | 77.5 | 93.6 | 6.4 | 406
OFA-MO-110 | 106.3 | 78.0 | 93.8 | 8.4 | 478
PFNAS-110 (Ours) | 106.8 | 78.4 | 94.2 | 9.9 | 451
MnasNet-A1 (1.0) (Tan et al., 2019) | 120.7 | 75.2 | 92.5 | 3.4 | 300
FBNet-C (Wu et al., 2019) | 135.2 | 74.9 | - | 5.5 | 375
OFA† (Cai et al., 2020) | 133.7 | 78.4 | 94.1 | 8.4 | 388
OFA-S-140 | 130.0 | 77.7 | 93.7 | 6.6 | 428
OFA-MO-140 | 139.0 | 78.4 | 94.0 | 9.5 | 486
PFNAS-140 (Ours) | 127.8 | 78.7 | 94.3 | 9.2 | 492
PONAS-C (Huang & Chu, 2020) | 145.1 | 75.2 | - | 5.6 | 376
OFA† (Cai et al., 2020) | 150.9 | 78.9 | 94.4 | 9.1 | 511
OFA-S-170 | 163.6 | 78.2 | 94.1 | 7.8 | 534
OFA-MO-170 | 165.0 | 78.8 | 94.4 | 8.5 | 584
PFNAS-170 (Ours) | 167.1 | 79.0 | 94.5 | 10.0 | 606
DARTS (Liu et al., 2019) | 176.6 | 73.1 | 91.0 | 4.7 | 574
PC-DARTS (Xu et al., 2020) | 194.1 | 75.8 | 92.7 | 5.3 | 597
MnasNet-A1 (1.4) (Tan et al., 2019) | 205.5 | 77.2 | 93.5 | 6.1 | 592
EfficientNet B0 (Tan & Le, 2019) | 237.7 | 77.3 | 93.5 | 5.3 | 390
OFA-S-200 | 197.5 | 78.3 | 94.2 | 8.4 | 629
OFA-MO-200 | 187.4 | 78.9 | 94.4 | 9.1 | 630
PFNAS-200 (Ours) | 193.9 | 79.2 | 94.7 | 10.4 | 724

Figure 3: Latency histograms and Pareto curves of the architectures on mobile devices. (a) Ground-truth latency histogram of 16,000 architectures that are uniformly sampled from the search space. (b) The latency histogram of 1,000 architectures sampled by different methods given T=110 ms. (c) The Pareto frontier of the architectures sampled by different methods.

Implementation details. We use MobileNetV3 (Howard et al., 2019) as the backbone to build the search space (Cai et al., 2020). We first obtain the range of latency by randomly sampling M=16,000 architectures from the search space (See Figure 3(a)). Then, we select K=5 latency budgets by evenly dividing the range (e.g., {80, 110, 140, 170, 200} on mobile devices). We put the discussion on the impact of K on the search performance of PFNAS and more implementation details in the supplementary materials.
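The budget grid in the implementation details can be reproduced with a small helper; the sketch below is our own, not from the paper, and assumes the extreme tails of the latency histogram are excluded before dividing the range:

```python
def evenly_divided_budgets(latencies, K=5):
    # Evenly divide the observed latency range of the sampled
    # architectures into K budgets, e.g. roughly {80, 110, 140, 170,
    # 200} ms on the mobile device after trimming the histogram tails.
    lo, hi = min(latencies), max(latencies)
    step = (hi - lo) / (K - 1)
    return [round(lo + i * step) for i in range(K)]
```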
4.1 COMPARISONS ON HARDWARE DEVICES
We compare PFNAS with state-of-the-art methods on the Google Pixel1 phone. We also consider the following baselines: 1) OFA-MO conducts architecture search based on the OFA super network by exploiting the multi-objective reward (Tan et al., 2019). 2) OFA-S finds the best one from 16,000 architectures sampled from the learned super network. From Table 1 and Figure 4(a), our PFNAS consistently achieves higher accuracy than other methods. Moreover, compared with the methods searching for different constrained architectures independently (e.g., OFA and OFA-MO), our PFNAS only needs to search once to find the Pareto frontier instead of a single Pareto optimal solution. The learned Pareto frontier benefits from the shared knowledge across the search process under different budgets, helping decision makers to further select their preferred architectures.

Figure 4: Comparisons of the architectures obtained by different methods on three hardware devices (latency vs. Top-1 ImageNet accuracy). (a) Results on Google Pixel1. (b) Results on Intel Core i5 CPU. (c) Results on TITAN X GPU.

Table 2: Comparisons with different reward functions and search strategies on ImageNet. Ps (%) denotes the proportion of the searched architectures that satisfy the corresponding budget.

Reward | Pareto Frontier Learning | T=80ms (Acc. / Ps) | T=110ms (Acc. / Ps) | T=140ms (Acc. / Ps) | T=170ms (Acc. / Ps) | T=200ms (Acc. / Ps)
Multi-objective Reward (Tan et al., 2019) | no | 76.6 / 1.4 | 78.0 / 33.2 | 78.4 / 54.0 | 78.8 / 90.5 | 78.9 / 99.9
Multi-objective Reward (Tan et al., 2019) | yes | 77.0 / 3.2 | 78.1 / 43.9 | 78.5 / 88.4 | 78.9 / 99.1 | 78.9 / 99.9
Pareto Dominance Reward | no | 76.3 / 92.8 | 78.1 / 92.5 | 78.6 / 93.2 | 78.9 / 94.3 | 79.0 / 99.9
Pareto Dominance Reward | yes | 77.2 / 30.2 | 78.4 / 76.9 | 78.7 / 82.5 | 79.0 / 88.1 | 79.2 / 99.5

We also visualize the latency histograms of the architectures searched on mobile devices in Figure 3(b). Given a latency budget of 110 ms, only a few architectures produced by OFA-MO satisfy the budget, which demonstrates that the multi-objective reward is hard to design so as to obtain the preferred architectures. By contrast, PFNAS uses the Pareto dominance reward to encourage the architectures to satisfy the desired budget constraints. Moreover, we compare the searched frontiers of different methods. Here, we combine the architectures searched by 5 independent OFA-MO runs under different budgets and select all the best architectures to generate an entire Pareto frontier. From Figure 3(c), our PFNAS finds a better frontier than OFA-MO due to the shared knowledge across the search processes under different budgets. We also evaluate PFNAS on an Intel Core i5-7400 CPU and an NVIDIA TITAN X GPU. From Figure 4, PFNAS consistently finds better architectures than existing methods for each latency budget on all considered devices (See more results in the supplementary).

4.2 ABLATION STUDIES OF THE PROPOSED METHOD
In this experiment, we investigate the effectiveness of the Pareto frontier search strategy and the Pareto dominance reward. From Table 2, the Pareto frontier search strategy tends to find better architectures than the independent search process due to the shared knowledge across the search processes under different budgets. Moreover, compared with the multi-objective reward, the Pareto dominance reward encourages the controller to find more architectures that satisfy the considered budget constraints. For example, even if only a few architectures have a latency lower than 80 ms (See Figure 3(a)), we still achieve a Ps of 92.8% with the Pareto dominance reward. With both the Pareto frontier search strategy and the Pareto dominance reward, we yield the best search results under all budgets.
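The Ps metric in Table 2 is straightforward to compute once architectures can be sampled under a budget; a rough sketch with our own naming, mirroring the 1,000-sample evaluation of Figure 3(b):

```python
def budget_satisfaction_rate(sample_arch, measure_latency, T, n=1000):
    # Ps: percentage of architectures sampled under budget T whose
    # measured latency actually satisfies c(alpha) <= T.
    ok = sum(measure_latency(sample_arch(T)) <= T for _ in range(n))
    return 100.0 * ok / n
```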
5 CONCLUSION
In this paper, we have proposed a novel Pareto-Frontier-aware Neural Architecture Search (PFNAS) method to find the Pareto frontier under diverse computation budgets (i.e., latency). Specifically, we train the PFNAS model to learn the Pareto frontier by maximizing the expected reward over a set of budgets. To provide accurate rewards under diverse budgets, we propose a Pareto dominance rule to judge whether an architecture is better than another and devise an architecture evaluator to learn this rule. In this way, PFNAS is able to learn the Pareto frontier and find promising architectures under diverse budgets. Extensive experiments on three platforms (i.e., mobile, CPU, and GPU devices) demonstrate the effectiveness of the proposed method.
Sdp9_TojDx5
Official Review
4: Ok but not good enough - rejection
Paper Summary
The paper considers the problem of Neural Architecture Search (NAS) when multiple objectives need to be optimized jointly. An approach called Pareto-Frontier-aware Neural Architecture Search (PF-NAS) is proposed for optimizing over two objectives (specifically latency and accuracy). The approach consists of sampling multiple latency budgets uniformly and finding a Pareto set of architectures that satisfies the budget constraints. To compute a single objective value for a proposed architecture with a given budget, a model (named 'architecture evaluator') is learned with a pairwise ranking loss. Experiments are performed on three platforms of different latencies.
Detailed Comments
- The paper considers an important problem relevant in practice (where multi-objective optimization is the usual norm).
- One key part of the proposed approach is 'formulating the optimization problem into a Markov Decision Process (MDP)'. However, the write-up is confusing and many details are not described properly. For example, the state and action space description is given as follows: 'Here, we define the budget as a state, the decision to find an architecture satisfying any budget as an action'. Is the action space binary (whether we take the decision or not)? Similarly, the state transition function is not clear. Given a state and an action, which state does the agent land in? These are basic questions that need to be addressed clearly and explicitly.
- There is a big assumption in the paper that, by learning a Pareto set of architectures for a set 'L' of sampled latency constraints, we can capture any latency by considering a simple interpolation of the latencies from set 'L'. This assumes that the entire space of latencies can be captured by a small sampled set used in training. It is an important assumption that needs to be discussed and tested in much more detail. This is also at odds with the main motivation of the paper that a single utility function cannot be utilized for multi-objective optimization problems.
- It is suggested that the main reason for the good performance of PF-NAS is that the 'learned Pareto frontier benefits from the shared knowledge across the search process under different budgets'. However, the performance drops significantly and monotonically as the number of sampled budgets (the variable K) increases (Section G in the supplementary). This is in contrast with the former statement. If the method leverages shared knowledge across the search processes under different budgets, the performance should ideally increase (or at least remain the same).
- The writing of the paper comes across as if the proposed approach is general enough for multiple objectives. However, the proposed solution is specific to two objectives. Please let me know if there is a straightforward extension of the approach to more than two objectives.
- Please consider adding a comparison of the proposed approach with other baselines on training time as well. Although PF-NAS does better than the baselines on the accuracy metric, the improvement is within single decimal points, and even that might not be statistically significant. Therefore, a training time comparison is very important, because a practitioner will prefer a method that requires less training time if the accuracy gain of a superior but slower approach is limited.
- The writing of the paper can be substantially improved. For example, it is not clear what 'learning the whole Pareto frontier' means.
- Some specific questions on the experimental section: What embedding is used for the budget part in the architecture evaluator?
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Pareto-Frontier-aware Neural Architecture Search ### Paper Abstract Designing feasible and effective architectures is essential for deploying deep models to real-world scenarios. In practice, one has to consider multiple objectives (e.g., model performance and computational cost) and diverse constraints incurred by different computation resources. To address this, most methods seek to find promising architectures via optimizing a well pre-defined utility function. However, it is often non-trivial to design an ideal function that could well trade-off different objectives. More critically, in many real scenarios, even for the same platform, we may have different applications with various latency budgets. To find promising architectures under different budgets, existing methods may have to perform an independent search for each budget, which is very inefficient and unnecessary. Nevertheless, it would be fantastic if we can produce multiple promising architectures to fulfill each budget in the same search process. In this paper, we propose a Pareto-Frontier-aware Neural Architecture Search (PFNAS) method which seeks to learn the Pareto frontier (i.e., the set of Pareto optimal architectures) w.r.t. multiple objectives. Here, we formulate the Pareto frontier learning problem as a Markov decision process (MDP). Relied on the MDP, we transform and absorb the objectives other than model performance into the constraints. To learn the whole Pareto frontier, we propose to find a set of Pareto optimal architectures which are uniformly distributed on the range of budget to form a frontier. Based on the learned frontier, we are able to easily find multiple promising architectures to fulfill all considered constraints in the same search process. Extensive experiments on three hardware platforms (i.e., mobile, CPU, and GPU) show that the searched architectures by our PFNAS outperform the ones obtained by existing methods under different budgets. ### Paper Keywords ["Neural Architecture Search", "Pareto Frontier Learning", "Resource Constraint"] ### Paper Content ABSTRACTDesigning feasible and effective architectures is essential for deploying deep modelsto real-world scenarios. In practice, one has to consider multiple objectives ( e.g.,model performance and computational cost) and diverse constraints incurred bydifferent computation resources. To address this, most methods seek to findpromising architectures via optimizing a well pre-defined utility function. However,it is often non-trivial to design an ideal function that could well trade-off differentobjectives. More critically, in many real scenarios, even for the same platform, wemay have different applications with various latency budgets. It would be fantasticif we can produce multiple promising architectures to fulfill each budget in the samesearch process. However, existing methods may have to perform an independentsearch for each budget, which is very inefficient and unnecessary. In this paper,we propose a Pareto-Frontier-aware Neural Architecture Search (PFNAS) methodwhich seeks to learn the Pareto frontier ( i.e., the set of Pareto optimal architectures)w.r.t. multiple objectives. Here, we formulate the Pareto frontier learning problemas a Markov decision process (MDP). Relied on the MDP, we transform and absorbthe objectives other than model performance into the constraints. 
To learn thewhole Pareto frontier, we propose to find a set of Pareto optimal architectureswhich are uniformly distributed on the range of budget to form a frontier. Based onthe learned frontier, we are able to easily find multiple promising architectures tofulfill all considered constraints in the same search process. Extensive experimentson three hardware platforms ( i.e., mobile, CPU, and GPU) show that the searchedarchitectures by our PFNAS outperform the ones obtained by existing methodsunder different budgets.1 I NTRODUCTIONDeep neural networks (DNNs) (LeCun et al., 1989) have been the workhorse of many challengingtasks, including image classification (Krizhevsky et al., 2012; Srivastava et al., 2015) and semanticsegmentation (Long et al., 2015; Noh et al., 2015). However, designing effective architectures isoften labor-intensive and relies heavily on human expertise. To alleviate the computation burden ofarchitecture design, neural architecture search (NAS) methods have been proposed to automaticallydesign architectures (Zoph & Le, 2017; Liu et al., 2019). Existing studies show that the automati-cally discovered architectures often outperform the manually designed architectures in both imageclassification and language modeling tasks (Pham et al., 2018; Liu et al., 2019).However, deep models often contain a large number of parameters and thus come with a veryhigh computational cost. As a result, it is hard to deploy deep models to the hardware devices orapplication scenarios with limited computation resources. To obtain promising architectures thatfulfill the computation constraint, we have to consider multiple kinds of objectives ( e.g., accuracy andcomputational cost). Thus, we seek to solve a Pareto optimization problem w.r.t. multiple objectives toperform architecture search (Tan et al., 2019). To solve this problem, one can design a utility functionby computing a weighted sum/product of different objectives to find the desired architectures (Tanet al., 2019; Stamoulis et al., 2019). However, it is hard to design a utility function that couldtrade-off different kinds of objectives (Miettinen, 2012). As a result, the searched architectures donot necessarily satisfy the constraints (See results in Figure 3(b)). Thus, how to find feasible andeffective architectures to satisfy the constraint becomes an important problem.1Under review as a conference paper at ICLR 202160 80 100 120 140 160 180 200Mobile Latency (ms)747678808284Validation Acc. (%)Initial ArchitectureSearched ArchitectureInitial FrontierSearched FrontierInitial ArchitectureSearchedArchitectureInitial FrontierSearched FrontierSearch Direction of NASSearch Direction of PFNAS(a) Comparisons of search strategies be-tween NAS and PFNAS.c(α%&)≤T*c(α%,)≤T-Pareto-Frontier-awareNeural Architecture Searchα%=fT;θBudgetConstraintsc(α%2)≤T3α%&α%,α%2(b) Model deployment under diverse budget constraints.Figure 1: Illustration of the search strategy and applications of PFNAS. (a) Instead of finding a singlePareto optimal architecture, PFNAS seeks to learn the Pareto frontier. (b) PFNAS takes arbitrarybudget constraint as input and output the architecture satisfied the budget constraint.More critically, even for the same hardware platform, we may have different applications which resultin diverse deployment budgets/requirements in terms of a specific objective. For example, a companymay develop/maintain multiple applications on the same hardware device and each of them has aspecific requirement of latency. 
To handle multiple application scenarios, we need to find a set ofPareto optimal architectures ( i.e., Pareto frontier) over multiple objectives. To this end, existing NASmethods may have to perform an independent search for each scenario (Tan et al., 2019), which isvery inefficient yet unnecessary. To address this issue, we propose to directly learn the Pareto frontierover multiple objectives rather than find a single optimal architecture (See Figure 1(a)). Based onthe learned Pareto frontier, one can easily find the desired architecture to fulfill an arbitrary budget.However, it is still unknown how to learn the Pareto frontier to produce effective architectures formultiple scenarios with diverse budgets.In this paper, we propose a Pareto-Frontier-aware Neural Architecture Search (PFNAS) methodto learn the Pareto frontier over two kinds of objectives. As shown in Figure 1(a), unlike existingmethods that find the optimal architectures, we seek to learn the Pareto frontier ( i.e., improving theblue curve to the red curve). To this end, we formulate the Pareto frontier learning problem as aMarkov decision process (MDP). Based on MDP, we transform and absorb the objectives other thanmodel performance as the constraints and make decisions to find promising architectures satisfyingthem. To learn the whole Pareto frontier, we uniformly sampling budgets from its distribution andfind a set of Pareto optimal architectures satisfying these budgets to form a frontier. Then, we exploitpolicy gradient to maximize the expected reward over different budgets. Based on the learnedfrontier, we may easily obtain the desired architectures given arbitrary budgets. To provide accuratereward, we propose an architecture evaluator to learn a Pareto dominance rule, which judges whetheran architecture is better than another w.r.t. multiple objectives. By taking such a rule as the reward, weare able to iteratively find better frontiers during training. More critically, since our PFNAS exploitsthe shared knowledge across the search processes with multiple budgets, we find better architecturesthan those searched by an independent search for each budget (See results in Table 1).We summarize the contributions of our paper as follows.We propose a Pareto-Frontier-aware Neural Architecture Search (PFNAS) method thatsimultaneously finds multiple Pareto optimal architectures ( i.e., Pareto frontier) over thewhole range of computational cost ( e.g., latency). Based on the learned frontier, PFNAStakes arbitrary latency as the budget and automatically finds feasible architectures.We propose a Pareto dominance rule to judge whether an architecture is better than anotherunder diverse computation budgets. By taking such a rule as the reward, our PFNAS is ableto iteratively find better frontiers to approach the ground-truth Pareto frontier.Extensive experiments on three hardware platforms show the proposed method is able tofind the architectures that not only satisfy diverse computation budgets but also outperformthe architectures searched by existing methods.2Under review as a conference paper at ICLR 20212 R ELATED WORKNeural Architecture Search. Neural architecture search (NAS) has been proposed to automaticallydesign effective architectures. Zoph & Le (2017) use reinforcement learning to discover the optimalconfiguration of each layer. Real et al. (2019) employ evolution algorithms and propose a newregularization method. Liu et al. (2019) propose DARTS, a differentiable NAS method by relaxingthe search space to be continuous. 
However, these methods only search for the architectures withhigh accuracy but ignore the resource constraints of real-world applications.Architecture Design with Resource Constraints. There is a growing interest in designing architec-tures under a resource constraint automatically. OFA (Cai et al., 2020) trains a powerful super network,from which we can directly get a specialized sub-network without additional training. Recently,PONAS (Huang & Chu, 2020) has been proposed to build an accuracy table to find architecturesunder a single constraint. However, given various resource budgets, these methods need to repeat thearchitecture search process for each budget. By contrast, our PFNAS only needs to search once toproduce multiple architectures that satisfy diverse resource budgets simultaneously.Pareto Frontier Learning. Pareto frontier learning aims to find a set of Pareto optimal solutionsby solving a multi-objective optimization problem. Most methods convert the problem into asingle-objective problem by constructing a weighted sum/product utility function (Wierzbicki, 1982;Miettinen, 2012). To simultaneously find multiple Pareto optimal solutions ( i.e., Pareto frontier),many methods exploit evolutionary algorithms (Deb et al., 2002; Kim et al., 2004) to perform aparallel search. Recently, some NAS methods aim to find a single Pareto optimal architecture bymaking a trade-off between accuracy and computational cost (Cheng et al., 2018; Dong et al., 2018).However, it is still unknown how to learn the Pareto frontier in NAS.3 P ROPOSED METHOD3.1 P ROBLEM DEFINITIONNotations. LetTbe the distribution of discrete random variable TT, whereTdenotes the upperbound of any budget constraint, such as latency, the number of multiply-adds (MAdds) and memoryconsumption. Given an architecture space and an architecture , we usec()andAcc()tomeasure the cost and the validation accuracy of , respectively. We compute the reward of usinga functionR(;w)parameterized by w. Without loss of generality, we use (i)Tto denote the i-thsearched architecture under the budget constraint c((i)T)T. We use 1[A]to denote an indicatorfunction, where 1[A] = 1 ifAis true and 1[A] = 0 otherwise.In this paper, we focus on the neural architecture search problem with multiple objectives ( e.g., themodel performance and latency) and seek to find promising architectures under arbitrary constraintsin the same search process. This problem, however, is non-trivial since it is hard to design a utilityfunction to well trade-off the multiple kinds of objectives (Miettinen, 2012). Moreover, in real-worldapplications where we should consider diverse application scenarios with different budget constraints,performing an independent search for each scenario (Tan et al., 2019) would very inefficient yetunnecessary. To address these issues, we propose to learn the Pareto frontier ( i.e., the set of Paretooptimal architectures) w.r.t. different objectives instead of finding a single optimal architecture. Basedon the learned frontier, it would be easy to select an architecture that fulfills arbitrary latency budgets.3.2 P ARETO -FRONTIER -AWARE NEURAL ARCHITECTURE SEARCHIn this paper, we propose a Pareto-Frontier-aware Neural Architecture Search (PFNAS) method thatsimultaneously finds multiple Pareto optimal architectures ( i.e., Pareto frontier). To this end, weformulate the optimization problem into a Markov decision process (MDP). 
Specifically, we transformand absorb the objectives other than model performance as a constraint and make decisions to findpromising architectures to fulfill this budget. To cover the whole range of budgets, we uniformlysample different budgets and maximize the expected reward over all the decisions conditioned onthem. Formally, a typical MDP is defined by a tuple (S;A;P;R ), whereSis a finite set of states, Ais a finite set of actions, P:SAS! Ris the state transition distribution, and R:SA! Ris the reward function. Here, we define the budget as a state, the decision to find an architecture3Under review as a conference paper at ICLR 2021Sample ArchitecturesαT~π(∙|T;θ)Pareto -Frontier -aware Controllercα≤T,T~TInput ConstraintArchitecture Evaluator Sampled Architecture αTUpdate ControllerR(αT|T;w)TFC + ReLUFC + ReLUFC + ReLUαTSelectDepthLSTM LSTM LSTMSelectWidthSelectKernel Size (a)(c)(b)(d)Figure 2: The overview of the proposed PFNAS. Our PFNAS takes a budget constraint as input andproduces promising architectures that satisfy the budget constraints. The orange and green boxes in(c) denote the embeddings of the architecture Tand the budget w.r.t. T.satisfying any budget as an action. To find Pareto optimal (also called non-dominated) solutions, wedevelop a Pareto dominance rule to compute the reward (See Section 3.3). Here, we exploit the policygradient method (Williams, 1992) to the MDP problem.As shown in Figure 2, to find promising architectures under diverse budget constraints, we develop aconditional model that takes a budget Tas input and outputs an architecture Tsatisfying the budgetconstraintc(T)T. Formally, the PFNAS model can be represented by T=f(T;), whereTdenotes the architecture under this budget constraint and denotes the learnable parameters.Based on the searhed architecture T, we further feed it together with the considered budget Tintoan architecture evaluator to compute the reward. To illustrate our method, we first revisit the NASproblem with a single budget and then generalize it to the problem with diverse budgets. Note thatit is non-trivial to directly find the optimal architecture (Zoph & Le, 2017). By contrast, one canfirst learn a policy (;)and then conduct sampling from it to find promising architectures, i.e.,(;). Given a budget T, the optimization problem becomesmaxE(;)[R(;w)];s.t.c()T: (1)Here,(;)is the learned policy parameterized by , andR(;w)is the reward function parame-terized bywthat measures the joint performance of both the accuracy and the latency of . We useE(;)[]to denote the expectation over the searched architectures.However, Problem (1) only focuses on one specific budget constraint. In fact, we seek to learn thePareto frontier over the whole range of budgets ( e.g., latency). However, this problem is hard to solvesince there exist infinite Pareto optimal architectures. To address this, one can learn the approximatedPareto frontier by finding a set of uniformly distributed Pareto optimal points (Grosan & Abraham,2008). In this paper, we uniformly sample latencies as the budgets and maximize the expected rewardover them. Thus, the optimization problem can be formulated asmaxETThET(jT;)[R(TjT;w)]i;s.t.c(T)T; TT; (2)where ETT[]denotes the expectation over the budget. Unlike Eqn. (1), (jT;)is the learnedpolicy conditioned on the budget of T. To find the architectures satisfying the budget constraint, wetakeTinto account to compute the reward R(jT;w). We will illustrate this in Section 3.3.From Eqn. 
(2), we aim to improve the overall ability to find promising architectures under arbitrarylatency budget, i.e., learning the Pareto frontier. It is worth noting that simultaneously finding multiplePareto optimal architectures would benefit the search process for each scenario with a specific latencyconstraint due to the shared knowledge across them (See results in Table 1). To be specific, if we finda good architecture w.r.t. one budget, we can slightly change the width or depth of some modules toobtain promising architectures that satisfy the adjacent budgets.4Under review as a conference paper at ICLR 20213.3 R EWARD DESIGN FOR PARETO FRONTIER LEARNINGIn this section, we propose a Pareto dominance reward to train PFNAS. Specifically, to obtain thePareto optimal architectures, we have to find the Pareto improvement direction to iteratively findbetter architectures. Here, Pareto improvement is a situation where some objectives will increase andno objectives will decrease. This situation is also called Pareto dominance where the better solutiondominates the worse one. In this sense, an architecture is defined to be Pareto optimal when it isnot dominated by any architectures in the search space. Thus, since the Pareto frontier is the set ofPareto optimal architectures, the key challenge to Pareto frontier learning becomes how to find Paretooptimal architectures by judging whether an architecture dominates another architecture.To address this, we define a Pareto dominance rule for the NAS problem. In practice, the quality ofan architecture should depend on both the satisfaction of the budget and the accuracy. Specifically,given a specific budget T, a good architecture should be the one with the cost lower than or equaltoTand with high accuracy. Motivated by this, we devise a rule to compare two architecture andjudge which one is dominative. Given any two architectures 1;2, 1) ifc(1)Tandc(2)T,the architecture with higher accuracy is dominative; 2) if at least one architecture has the latencyhigher thanT, the architecture with lower latency is dominative. Formally, we use a function d()torepresent the above rule:d1;2;T=(1[Acc(1)Acc(2)];ifc(1)Tandc(2)T;1[c(1)c(2)]; otherwise:(3)Here,d(1;2;T) = 1 if1dominates2andd(1;2;T) = 0 otherwise. Similar rules are alsofound in the conventional constrained optimization problems (Deb et al., 2002). Note that Eqn. (3) canbe considered as a hard threshold function which helps to guide the controller to find an architecturesatisfying the budget constraints (See results in Sections 4).From Eqn. (3), the Pareto dominance rule requires architecture pairs to find the Pareto optimalarchitectures. However, the controller model only finds an architecture at a time and thus Eqn. (3)cannot be directly used to compute the reward. To address this issue, we propose to train anarchitecture evaluator R(jT;w)to learn the proposed Pareto dominance rule and output a scalar asthe reward for the scenario with c()T. Since the proposed rules are built on the architecturecomparisons, we train the architecture evaluator using a pairwise ranking loss, which has been widelyused in ranking problems (Freund et al., 2003; Burges et al., 2005; Chen et al., 2009). Given Marchitectures, there are M(M1)architecture pairs in total after omitting the pairs with the samearchitecture. Assuming that there are Kbudgets, the pairwise ranking loss becomesL(w) =1KM(M1)KXk=1MXi=1MXj=1;j6=i(R(ijTk;w)R(jjTk;w))d(i;j;Tk); (4)where(z) = max(0;1z)is the hinge loss function. 
Due to the page limit, we put more discussions on Eqn. (4) in the supplementary.

Data preparation. We first train a super network with the progressive shrinking strategy. Then, we randomly sample $M = 16{,}000$ architectures from the architecture space and measure their accuracy $\mathrm{Acc}(\alpha)$ on $10{,}000$ validation images sampled from the training set of ImageNet (Deng et al., 2009) using the super network. We also measure their latency $c(\alpha)$ on hardware devices. We record the results using a set of triplets $\{(\alpha_i, c(\alpha_i), \mathrm{Acc}(\alpha_i))\}_{i=1}^{M}$.

3.4 TRAINING AND INFERENCE METHODS

As shown in Algorithm 1, we first train the super network using the progressive shrinking technique (Cai et al., 2020). Then, we sequentially train the architecture evaluator and the controller. We detail the training methods of the architecture evaluator and the controller below.

Algorithm 1: Training method of PFNAS.
Require: Latency distribution $\mathcal{T}$, learning rate $\eta$, parameters $M$, $N$ and $K$.
1: Initialize model parameters $\theta$ for the controller and $w$ for the architecture evaluator.
2: // Collect the architectures with the validation accuracy and the latency
3: Train the super network on the training set with the progressive shrinking strategy (Cai et al., 2020).
4: Randomly sample a set of architectures $\{\alpha_i\}_{i=1}^{M}$ from the search space.
5: Measure the cost and the accuracy on the validation data set to construct the set $\{(\alpha_i, c(\alpha_i), \mathrm{Acc}(\alpha_i))\}_{i=1}^{M}$.
6: // Train the architecture evaluator
7: while not converged do
8:   Sample a set of latencies $\{T_k\}_{k=1}^{K}$ from $\mathcal{T}$.
9:   Update the architecture evaluator parameters $w$ by descending the gradient:
10:  $w \leftarrow w - \eta \nabla_w \mathcal{L}(w)$, with $\nabla_w \mathcal{L}(w)$ given in Eqn. (5).
11: end while
12: // Train the controller
13: while not converged do
14:  Sample a set of latencies $\{T_k\}_{k=1}^{K}$ from $\mathcal{T}$.
15:  Obtain $\{\alpha_{T_k}^{(i)}\}_{i=1}^{N}$ according to the policy $\pi(\cdot \mid T_k; \theta)$ for each $k \in \{1, \ldots, K\}$.
16:  Update the controller parameters $\theta$ via policy gradient by ascending the gradient of Eqn. (7).
17: end while

Learning the architecture evaluator $R(\alpha \mid T; w)$. To learn the Pareto frontier, we propose an architecture evaluator to compute the reward based on the proposed Pareto dominance rule. According to Eqn. (4), based on the collected training data and $\{T_k\}_{k=1}^{K}$, the gradient w.r.t. $w$ becomes

$\nabla_w \mathcal{L}(w) = \frac{1}{K M (M-1)} \sum_{k=1}^{K} \sum_{i=1}^{M} \sum_{j=1, j \ne i}^{M} \nabla_w \phi\left(R(\alpha_i \mid T_k; w) - R(\alpha_j \mid T_k; w)\right) d(\alpha_i, \alpha_j; T_k). \qquad (5)$

Learning the controller $f(T; \theta)$. The controller model $f(T; \theta)$ takes any given latency budget as input and outputs promising architectures to fulfill the budget. We learn the controller with policy gradient and use an entropy regularization term to encourage exploration. The objective becomes

$J(\theta) = \mathbb{E}_{T \sim \mathcal{T}} \left[ \mathbb{E}_{\alpha_T \sim \pi(\cdot \mid T;\theta)} \left[ R(\alpha_T \mid T; w) \right] + \lambda H\left(\pi(\cdot \mid T; \theta)\right) \right], \qquad (6)$

where $H(\cdot)$ evaluates the entropy of the policy and $\lambda$ is a hyper-parameter. In each iteration, we first sample $\{T_k\}_{k=1}^{K}$ from the distribution $\mathcal{T}$, and then sample $N$ architectures $\{\alpha_{T_k}^{(i)}\}_{i=1}^{N}$ for each budget $T_k$. Thus, the gradient of Eqn. (6) w.r.t. $\theta$ becomes

$\nabla_\theta J(\theta) \approx \frac{1}{KN} \sum_{k=1}^{K} \sum_{i=1}^{N} \left[ \nabla_\theta \log \pi\left(\alpha_{T_k}^{(i)} \mid T_k; \theta\right) R\left(\alpha_{T_k}^{(i)} \mid T_k; w\right) + \lambda \nabla_\theta H\left(\pi(\cdot \mid T_k; \theta)\right) \right]. \qquad (7)$

We put the derivations of Eqn. (7) in the supplementary.
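The update of Eqn. (7) is a standard REINFORCE estimator with an entropy bonus. The following is a minimal sketch of one controller update in PyTorch; the `controller.sample` and `evaluator` interfaces are illustrative assumptions (e.g., matching the controller sketch above), not the paper's released code.

import torch

def controller_step(controller, evaluator, budgets, optimizer, N=4, lam=0.01):
    # One policy-gradient step for Eqn. (7); budgets is a (K, 1) tensor of sampled latencies.
    optimizer.zero_grad()
    K = budgets.size(0)
    loss = 0.0
    for k in range(K):
        T = budgets[k:k + 1]
        for _ in range(N):
            alpha, logp, entropy = controller.sample(T)  # assumed API: decisions, log-prob, entropy
            with torch.no_grad():
                r = evaluator(alpha, T)  # reward R(alpha | T; w); the evaluator is held fixed here
            # Ascending E[log pi * R] + lam * H equals descending its negative:
            loss = loss - (logp * r + lam * entropy)
    (loss / (K * N)).backward()
    optimizer.step()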
Inferring architectures under diverse budgets. Based on the learned policy $\pi(\cdot \mid T; \theta)$, we conduct sampling to find promising architectures. Specifically, given any arbitrary latency $T$, we first sample several candidate architectures from $\pi(\cdot \mid T; \theta)$ and then select the architecture with the highest validation accuracy. Note that we train PFNAS using a finite number of discrete latencies. During inference, to enable $T$ to be any value, we perform a linear interpolation between the embeddings of two adjacent discrete latencies to represent the considered latency (see more details in the supplementary).

Training cost of PFNAS. The training of the super network takes about 1.5 days on 32 GPUs. The data preparation process takes about 40 GPU hours. We train the architecture evaluator for about 15 minutes. The training of the controller takes about 2 GPU hours, which is more efficient than the search process of most methods. Given $K$ budgets, PFNAS only searches once and thus takes approximately $1/K$ of the search cost of training a separate controller model for each budget.

4 EXPERIMENTS

We apply PFNAS to search for architectures under diverse latency budgets on three kinds of hardware platforms, including mobile devices (Google Pixel1), CPU devices (Intel Core i5-7400), and GPU devices (NVIDIA TITAN X). For convenience, we use "Architecture-T" to represent the searched architecture that satisfies the latency budget w.r.t. $T$, e.g., PFNAS-80. Due to the page limit, we put the visualizations of the searched architectures in the supplementary.

Table 1: Comparisons with state-of-the-art architectures on Google Pixel1 phone. * denotes the best architecture reported in the original paper. "-" denotes results that are not reported. All the models are evaluated on 224×224 images of ImageNet.

Architecture | Latency (ms) | Top-1 Acc. (%) | Top-5 Acc. (%) | #Params (M) | #MAdds (M)
MobileNetV3-Large (0.75×) (Howard et al., 2019) | 93.0 | 73.3 | - | 4.0 | 155
MobileNetV2 (1.0×) (Sandler et al., 2018) | 90.3 | 72.0 | - | 3.4 | 300
OFA-S-80 | 76.8 | 76.8 | 93.3 | 6.1 | 350
OFA-MO-80 | 77.6 | 76.6 | 93.2 | 7.9 | 340
PFNAS-80 (Ours) | 79.9 | 77.5 | 93.7 | 7.3 | 349
ProxylessNAS-Mobile (Cai et al., 2019) | 97.7 | 74.6 | - | 4.1 | 319
MobileNetV3-Large (1.0×) (Howard et al., 2019) | 107.7 | 75.2 | - | 5.4 | 219
OFA (Cai et al., 2020) | 109.3 | 78.1 | 94.0 | 8.2 | 354
OFA-S-110 | 109.2 | 77.5 | 93.6 | 6.4 | 406
OFA-MO-110 | 106.3 | 78.0 | 93.8 | 8.4 | 478
PFNAS-110 (Ours) | 106.8 | 78.4 | 94.2 | 9.9 | 451
MnasNet-A1 (1.0×) (Tan et al., 2019) | 120.7 | 75.2 | 92.5 | 3.4 | 300
FBNet-C (Wu et al., 2019) | 135.2 | 74.9 | - | 5.5 | 375
OFA (Cai et al., 2020) | 133.7 | 78.4 | 94.1 | 8.4 | 388
OFA-S-140 | 130.0 | 77.7 | 93.7 | 6.6 | 428
OFA-MO-140 | 139.0 | 78.4 | 94.0 | 9.5 | 486
PFNAS-140 (Ours) | 127.8 | 78.7 | 94.3 | 9.2 | 492
PONAS-C (Huang & Chu, 2020) | 145.1 | 75.2 | - | 5.6 | 376
OFA* (Cai et al., 2020) | 150.9 | 78.9 | 94.4 | 9.1 | 511
OFA-S-170 | 163.6 | 78.2 | 94.1 | 7.8 | 534
OFA-MO-170 | 165.0 | 78.8 | 94.4 | 8.5 | 584
PFNAS-170 (Ours) | 167.1 | 79.0 | 94.5 | 10.0 | 606
DARTS (Liu et al., 2019) | 176.6 | 73.1 | 91.0 | 4.7 | 574
PC-DARTS (Xu et al., 2020) | 194.1 | 75.8 | 92.7 | 5.3 | 597
MnasNet-A1 (1.4×) (Tan et al., 2019) | 205.5 | 77.2 | 93.5 | 6.1 | 592
EfficientNet B0 (Tan & Le, 2019) | 237.7 | 77.3 | 93.5 | 5.3 | 390
OFA-S-200 | 197.5 | 78.3 | 94.2 | 8.4 | 629
OFA-MO-200 | 187.4 | 78.9 | 94.4 | 9.1 | 630
PFNAS-200 (Ours) | 193.9 | 79.2 | 94.7 | 10.4 | 724

[Figure 3: Latency histograms and Pareto curves of the architectures on mobile devices. (a) Ground-truth latency histogram of 16,000 architectures that are uniformly sampled from the search space. (b) The latency histogram of 1,000 architectures sampled by different methods given T = 110 ms. (c) The Pareto frontier of the architectures sampled by different methods.]
Implementation details. We use MobileNetV3 (Howard et al., 2019) as the backbone to build the search space (Cai et al., 2020). We first obtain the range of latency by randomly sampling $M = 16{,}000$ architectures from the search space (see Figure 3(a)). Then, we select $K = 5$ latency budgets by evenly dividing the range (e.g., $\{80, 110, 140, 170, 200\}$ on mobile devices). We put the discussion on the impact of $K$ on the search performance of PFNAS and more implementation details in the supplementary materials.

4.1 COMPARISONS ON HARDWARE DEVICES

We compare PFNAS with state-of-the-art methods on the Google Pixel1 phone. We also consider the following baselines: 1) OFA-MO conducts architecture search based on the OFA super network by exploiting the multi-objective reward (Tan et al., 2019). 2) OFA-S finds the best one from 16,000 architectures sampled from the learned super network. From Table 1 and Figure 4(a), our PFNAS consistently achieves higher accuracy than other methods. Moreover, compared with the methods searching for different constrained architectures independently (e.g., OFA and OFA-MO), our PFNAS only needs to search once to find the Pareto frontier instead of a single Pareto optimal solution. The learned Pareto frontier benefits from the shared knowledge across the search process under different budgets, helping decision makers to further select their preferred architectures.

[Figure 4: Comparisons of the architectures obtained by different methods on three hardware devices. (a) Results on Google Pixel1. (b) Results on Intel Core i5 CPU. (c) Results on TITAN X GPU. Each panel plots Top-1 ImageNet accuracy (%) against latency (ms) for MnasNet-A1, MobileNetV2, MobileNetV3, EfficientNet, OFA, OFA-S, OFA-MO, and PFNAS (Ours).]

We also visualize the latency histograms of the architectures searched on mobile devices in Figure 3(b). Given a latency budget of 110 ms, only a few architectures produced by OFA-MO satisfy the budget, which demonstrates that the multi-objective reward is hard to design to obtain the preferred architectures. By contrast, PFNAS uses the Pareto dominance reward to encourage the architectures to satisfy the desired budget constraints. Moreover, we compare the searched frontiers of different methods. Here, we combine the architectures searched by 5 independent OFA-MO runs under different budgets and select all the best architectures to generate an entire Pareto frontier. From Figure 3(c), our PFNAS finds a better frontier than OFA-MO due to the shared knowledge across the search process under different budgets.
We also evaluate PFNAS on an Intel Core i5-7400 CPU and an NVIDIA TITAN X GPU. From Figure 4, PFNAS consistently finds better architectures than existing methods for each latency budget on all considered devices (see more results in the supplementary).

4.2 ABLATION STUDIES OF THE PROPOSED METHOD

In this experiment, we investigate the effectiveness of the Pareto frontier search strategy and the Pareto dominance reward.

Table 2: Comparisons with different reward functions and search strategies on ImageNet. Ps (%) denotes the proportion of the searched architectures that satisfy the corresponding budget. Each cell reports Acc. / Ps.

Reward | Pareto Frontier Learning | T=80ms | T=110ms | T=140ms | T=170ms | T=200ms
Multi-objective Reward (Tan et al., 2019) | ✗ | 76.6 / 1.4 | 78.0 / 33.2 | 78.4 / 54.0 | 78.8 / 90.5 | 78.9 / 99.9
Multi-objective Reward (Tan et al., 2019) | ✓ | 77.0 / 3.2 | 78.1 / 43.9 | 78.5 / 88.4 | 78.9 / 99.1 | 78.9 / 99.9
Pareto Dominance Reward | ✗ | 76.3 / 92.8 | 78.1 / 92.5 | 78.6 / 93.2 | 78.9 / 94.3 | 79.0 / 99.9
Pareto Dominance Reward | ✓ | 77.2 / 30.2 | 78.4 / 76.9 | 78.7 / 82.5 | 79.0 / 88.1 | 79.2 / 99.5

From Table 2, the Pareto frontier search strategy tends to find better architectures than the independent search process due to the shared knowledge across the search processes under different budgets. Moreover, compared with the multi-objective reward, the Pareto dominance reward encourages the controller to find more architectures that satisfy the considered budget constraints. For example, even though only a few architectures have latency lower than 80 ms (see Figure 3(a)), we still achieve a Ps of 92.8% with the Pareto dominance reward. With both the Pareto frontier search strategy and the Pareto dominance reward, we yield the best search results under all budgets.

5 CONCLUSION

In this paper, we have proposed a novel Pareto-Frontier-aware Neural Architecture Search (PFNAS) method to find the Pareto frontier under diverse computation budgets (i.e., latency). Specifically, we train the PFNAS model to learn the Pareto frontier by maximizing the expected reward over a set of budgets. To provide accurate rewards under diverse budgets, we propose a Pareto dominance rule to judge whether an architecture is better than another and devise an architecture evaluator to learn this rule. In this way, PFNAS is able to learn the Pareto frontier and find promising architectures under diverse budgets. Extensive experiments on three platforms (i.e., mobile, CPU, and GPU devices) demonstrate the effectiveness of the proposed method.<|im_end|> <|im_start|>assistant

### Review Title
Official Review

### Review Text
Paper Summary

The paper considers the problem of Neural Architecture Search (NAS) when multiple objectives need to be optimized jointly. An approach called Pareto-Frontier-aware Neural Architecture Search (PF-NAS) is proposed for optimizing over two objectives (specifically latency and accuracy). The approach consists of sampling multiple latency budgets uniformly and finding a Pareto set of architectures that satisfies the budget constraints. To compute a single objective value for a proposed architecture with a given budget, a model (named 'architecture evaluator') is learned with a pairwise ranking loss. Experiments are performed on three platforms of different latencies.

Detailed Comments

- The paper considers an important problem relevant in practice (where multi-objective optimization is the usual norm).
- One key part of the proposed approach is 'formulating the optimization problem into a Markov Decision Process (MDP)'. However, the write-up is confusing and many details are not described properly. For example, the state and action space description is given as follows: 'Here, we define the budget as a state, the decision to find an architecture satisfying any budget as an action'. Is the action space binary (whether we take the decision or not)? Similarly, the state transition function is not clear. Given a state and an action, which state does the agent land in? These are basic questions that need to be addressed clearly and explicitly.
- There is a big assumption in the paper that by learning a Pareto set of architectures for a set 'L' of sampled latency constraints, we can capture any latency by considering a simple interpolation of the latencies from set 'L'. This assumes that the entire space of latencies can be captured by a small sampled set used in training. It is an important assumption that needs to be discussed and tested in much more detail. This is also at odds with the main motivation of the paper that a single utility function cannot be utilized for multi-objective optimization problems.
- It is suggested that the main reason for the good performance of PF-NAS is that the 'learned Pareto frontier benefits from the shared knowledge across the search process under different budgets'. However, the performance drops significantly and monotonically when increasing the number of sampled budgets (the K variable) (Section G in the supplementary). This is in contrast with the former statement. If the method leverages shared knowledge across the search processes under different budgets, the performance should ideally increase (or at least remain the same).
- The writing of the paper comes across as if the proposed approach is general enough for multiple objectives. However, the proposed solution is specific to two objectives. Please let me know if there is a straightforward extension of the approach to more than two objectives.
- Please consider adding a comparison of the proposed approach with other baselines on training time as well. Although PF-NAS does better than the baselines on the accuracy metric, the improvement is within a single decimal point and even that might not be statistically significant. Therefore, a training time comparison is very important because a practitioner will prefer a method that requires less training time over a superior approach whose accuracy gain is limited but whose training time is large.
- The writing of the paper can be substantially improved. For example, it is not clear what 'learning the whole Pareto frontier' means.
- Some specific questions on the experimental section: What embedding is used for the budget part in the architecture evaluator?

### Review Rating
4: Ok but not good enough - rejection

### Review Confidence
3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
u4WfreuXxnk
ICLR.cc/2021/Conference
2021
Single-Node Attack for Fooling Graph Neural Networks
["Ben Finkelshtein", "Chaim Baskin", "Evgenii Zheltonozhskii", "Uri Alon"]
Graph neural networks (GNNs) have shown broad applicability in a variety of domains. Some of these domains, such as social networks and product recommendations, are fertile ground for malicious users and behavior. In this paper, we show that GNNs are vulnerable to the extremely limited scenario of a single-node adversarial example, where the node cannot be picked by the attacker. That is, an attacker can force the GNN to classify any target node to a chosen label by only slightly perturbing another single arbitrary node in the graph, even when not being able to pick that specific attacker node. When the adversary is allowed to pick a specific attacker node, the attack is even more effective. We show that this attack is effective across various GNN types (e.g., GraphSAGE, GCN, GAT, and GIN), across a variety of real-world datasets, and as a targeted and non-targeted attack. Our code is available anonymously at https://github.com/gnnattack/SINGLE .
["graphs", "GNN", "adversarial", "attack"]
SINGLE-NODE ATTACK FOR FOOLING GRAPH NEURAL NETWORKS

Anonymous authors
Paper under double-blind review

ABSTRACT

Graph neural networks (GNNs) have shown broad applicability in a variety of domains. Some of these domains, such as social networks and product recommendations, are fertile ground for malicious users and behavior. In this paper, we show that GNNs are vulnerable to the extremely limited scenario of a single-node adversarial example, where the node cannot be picked by the attacker. That is, an attacker can force the GNN to classify any target node to a chosen label by only slightly perturbing another single arbitrary node in the graph, even when not being able to pick that specific attacker node. When the adversary is allowed to pick a specific attacker node, the attack is even more effective. We show that this attack is effective across various GNN types (e.g., GraphSAGE, GCN, GAT, and GIN), across a variety of real-world datasets, and as a targeted and non-targeted attack. Our code is available anonymously at https://github.com/gnnattack/SINGLE.

1 INTRODUCTION

Graph neural networks (GNNs) (Scarselli et al., 2008; Micheli, 2009) have recently shown sharply increasing popularity due to their generality and computation-efficiency (Duvenaud et al., 2015; Li et al., 2016; Kipf & Welling, 2017; Hamilton et al., 2017; Veličković et al., 2018; Xu et al., 2019b). Graph-structured data underlie a plethora of domains such as citation networks (Sen et al., 2008), social networks (Leskovec & Mcauley, 2012; Ribeiro et al., 2017; 2018), knowledge graphs (Wang et al., 2018; Trivedi et al., 2017; Schlichtkrull et al., 2018), and product recommendations (Shchur et al., 2018). Therefore, GNNs are applicable to a variety of real-world structured data.

While most work in this field has focused on improving the accuracy of GNNs and applying them to a growing number of domains, only a few past works have explored the vulnerability of GNNs to adversarial examples. Consider the following scenario: a malicious user joins a social network such as Twitter or Facebook. The malicious user mocks the behavior of a benign user, establishes connections with other users, and submits benign posts. After some time, the user submits a new adversarially crafted post, which might seem irregular but overall benign. Since the GNN represents every user according to all the user's posts, this new post perturbs the representation of the user as seen by a GNN. As a result, another, specific benign user gets blocked from the network; alternatively, another malicious user submits a hateful post – but does not get blocked. This scenario is illustrated in Figure 1. In this paper, we show the feasibility of such a troublesome scenario: a single attacker node can perturb its own representation, such that another node will be misclassified as a label of the attacker's choice.

Most previous work on adversarial examples in GNNs required the perturbation to span multiple nodes, which in reality requires the cooperation of multiple attackers. For example, the pioneering work of Zügner et al. (2018) perturbed a set of attacker nodes; Bojchevski & Günnemann (2019a) perturb edges that are covered by a set of nodes. Further, and in contrast with existing work, we show that perturbing a single node is more harmful than perturbing a single edge.

In this paper, we present the first single-node adversarial attack on graph neural networks.
If the adversary is allowed to choose the attacker node, for example, by hacking into an existing account, the efficiency of the attack significantly increases. We present two approaches for choosing the attacker: a white-box gradient-based approach, and a black-box, model-free approach that relies on graph topology. Finally, we perform a comprehensive experimental evaluation of our approach on multiple datasets and GNN architectures.

[Figure 1: A partial adversarial example from the test set of the Twitter dataset. (a) Before attacking: the victim node (v) is classified as valid. (b) After attacking: the victim node (v) is classified as invalid. An adversarially-crafted post perturbs the representation of the attacker node. This perturbation causes a misclassification of the target victim node, although they are not even direct neighbors.]

2 PRELIMINARIES

Let $\mathcal{G} = \{G_i\}_{i=1}^{N_G}$ be a set of graphs. Each graph $G = (V, E, X) \in \mathcal{G}$ has a set of nodes $V$ and a set of edges $E \subseteq V \times V$, where $(u, v) \in E$ denotes an edge from a node $u \in V$ to a node $v \in V$. $X \in \mathbb{R}^{N \times D}$ is a matrix of $D$-dimensional node features. The $i$-th row of $X$ is the feature vector of the node $v_i \in V$ and is denoted as $x_i = X_{i,:} \in \mathbb{R}^D$.

Graph neural networks. GNNs operate by iteratively propagating neural messages between neighboring nodes. Every GNN layer updates the representation of every node by aggregating its current representation with the current representations of its neighbors.

Formally, each node is associated with an initial representation $x_v^{(0)} = h_v^{(0)} \in \mathbb{R}^D$. This representation is considered as the given features of the node. Then, a GNN layer updates each node's representation given its neighbors, yielding $h_v^{(1)} \in \mathbb{R}^{d_1}$ for every $v \in V$. In general, the $\ell$-th layer of a GNN is a function that updates a node's representation by combining it with its neighbors:

$h_v^{(\ell)} = \mathrm{COMBINE}\left(h_v^{(\ell-1)}, \{h_u^{(\ell-1)} \mid u \in \mathcal{N}_v\}; \ell\right), \qquad (1)$

where $\mathcal{N}_v$ is the set of direct neighbors of $v$: $\mathcal{N}_v = \{u \in V \mid (u, v) \in E\}$.

The COMBINE function is what mostly distinguishes GNN types. For example, graph convolutional networks (GCN) (Kipf & Welling, 2017) define a layer as:

$h_v^{(\ell)} = \mathrm{ReLU}\left(\sum_{u \in \mathcal{N}_v \cup \{v\}} \frac{1}{c_{u,v}} W^{(\ell)} h_u^{(\ell-1)}\right), \qquad (2)$

where $c_{u,v}$ is a normalization factor usually set to $\sqrt{|\mathcal{N}_v| \, |\mathcal{N}_u|}$. After $\ell$ such aggregation iterations, every node representation captures information from all nodes within its $\ell$-hop neighborhood. The total number of layers $L$ is usually determined empirically as a hyperparameter. In the node classification scenario, we use the final representation $h_v^{(L)}$ to classify $v$.
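To make Eq. (2) concrete, here is a minimal dense-matrix sketch of a single GCN layer in PyTorch. It is an illustrative reimplementation under our own naming, not the code released with this paper; for clarity, it operates on a full (N, N) adjacency matrix rather than a sparse edge list, and the degrees include the added self-loop.

import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W = nn.Linear(d_in, d_out, bias=False)  # the weight W^(l) of Eq. (2)

    def forward(self, X, A):
        # A: (N, N) adjacency; add self-loops, since the sum in Eq. (2) includes v itself
        A_hat = A + torch.eye(A.size(0))
        deg = A_hat.sum(dim=1)
        norm = deg.rsqrt()  # 1 / sqrt(|N_v|), so 1 / c_{u,v} = 1 / sqrt(|N_v| |N_u|)
        A_norm = norm.unsqueeze(1) * A_hat * norm.unsqueeze(0)
        return torch.relu(A_norm @ self.W(X))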
For brevity, we focus our definitions on the semi-supervised transductive node classification goal, where the dataset contains a single graph $G$, and the split into training and test sets is across nodes in the same graph. Nonetheless, these definitions can be trivially generalized to the inductive setting, where the dataset contains multiple graphs, the split into training and test sets is between graphs, and the test nodes are unseen during training.

We associate each node $v \in V$ with a class $y_v \in \mathcal{Y} = \{1, \ldots, Y\}$. The labels of the training nodes are given during training; the test nodes are seen during training – without their labels. The training subset is represented as $\mathcal{D} = \left(G, \{(v_i, y_i)\}_{i=0}^{N_D}\right)$. Given the training set, the goal is to learn a model $f_\theta: (G, V) \to \mathcal{Y}$ that will classify the rest of the nodes correctly. During training, the model $f_\theta$ thus minimizes the loss over the given labels, using $J(\cdot, \cdot)$, which typically is the cross-entropy loss:

$\theta^{*} = \arg\min_\theta \mathcal{L}(f_\theta; \mathcal{D}) = \arg\min_\theta \frac{1}{N_D} \sum_{i=0}^{N_D} J\left(f_\theta(G, v_i), y_i\right). \qquad (3)$

3 SINGLE-NODE GNN ATTACK

In this section, we describe our Single-node INdirect Gradient adversariaL Evasion (SINGLE) attack. While our attack is simple, it is the first attack that focuses on perturbing nodes (in contrast to edges (Dai et al., 2018)), that works with an arbitrary single attacker node (in contrast to multiple nodes (Zügner et al., 2018)), and where the attacker is not the node under attack (in contrast to "direct" attacks where the attacker perturbs the node under attack directly (Zügner et al., 2018; Li et al., 2020)).

3.1 PROBLEM DEFINITION

Given a graph $G$, a trained model $f_\theta$, and a "victim" node $v$ from the test set along with its classification by the model $\hat{y}_v = f_\theta(G, v)$, we assume that an adversary controls another node $a$ in the graph. The goal of the adversary is to modify its own feature vector $x_a$ by adding a perturbation vector $\Delta \in \mathbb{R}^D$ of its choice, such that the model's classification of $v$ will change.

We denote by $G_{x_a+\Delta}$ the graph $G$ where the vector $\Delta$ was added to the row of $X$ that corresponds to the node $a$. In a non-targeted attack, the goal of the attacker is to find a perturbation vector $\Delta$ that will change the classification to any other class, i.e., $f_\theta(G_{x_a+\Delta}, v) \neq f_\theta(G, v)$. In a targeted attack, the adversary chooses a specific label $y_{adv} \in \mathcal{Y}$ and the adversary's goal is to force $f_\theta(G_{x_a+\Delta}, v) = y_{adv}$.

Generally, the classification of a node $v$ depends only on nodes whose distance to $v$ in the graph is lower than or equal to $L$ – the number of GNN layers. Thus, a modification of the features of $a$ will affect the classification of $v$ only if the distance between $a$ and $v$ is lower than or equal to $L$. Otherwise, $a$ will not be contained in the receptive field of $v$, and the attack will result in "under-reaching" (Alon & Yahav, 2020) – any perturbation of $a$ will not affect the prediction of $v$ (Barceló et al., 2020). Therefore, we require that $\mathrm{distance}_G(a, v) \le L$.

In this work, we focus on gradient-based attacks. These kinds of attacks assume that the attacker can access a model similar to the model under attack and compute gradients. As recently shown by Wallace et al. (2020), this is a reasonable assumption: an attacker can query the original model; using these queries, imitate the model under attack by training an imitation model; find adversarial examples using the imitation model; and transfer these adversarial examples back to the original model. Under this assumption, these attacks are general and applicable to any GNN and dataset.

3.2 CHALLENGES

Unnoticeable perturbations. Our first challenge is to find an adversarial example that allows an imperceptible perturbation of the input. This objective is attainable in continuous domains such as images (Szegedy et al., 2013; Goodfellow et al., 2014) and audio (Carlini & Wagner, 2018) if we constrain the $\ell_\infty$-norm of the perturbation vector $\Delta$. It is, however, unclear what imperceptibility means in graphs. In most GNN datasets, a node's features are a bag-of-words representation of the words that are associated with the node.
For example, in Cora (McCallum et al., 2000; Sen et al., 2008), every node is annotated by a many-hot feature vector of words that appear in the paper; in PubMed (Namata et al., 2012), node vectors are TF-IDF word frequencies; in Twitter (Ribeiro et al., 2017), node features are averages of GloVe embeddings, which can be viewed as word frequency vectors multiplied by a (frozen) embedding matrix. We argue that an attack would be unnoticeable in an academic paper or in a set of Tweets if the frequency of some words is slightly modified. For example, a particular word may be repeated a few times throughout the text or remain unused.

To constrain the vector $\Delta$, we require that $\|\Delta\|_\infty$ – the maximal absolute value of the elements in the perturbation vector – is bounded by $\epsilon_\infty \in \mathbb{R}_+$.

Perturbing nodes instead of edges. Previous work mostly focused on perturbing graph edges. Zügner et al. (2018) perturb both edges and node features, but conclude that "perturbations in the structure lead to a stronger change in the surrogate loss compared to feature attacks"; Wu et al. (2019b) also conclude that "perturbing edges is more effective than modifying the features". In this paper, we counter these conclusions and show that small node feature perturbations are stronger: (i) First, removing all the edges of a particular node is a special case of node feature perturbation. There exists a perturbation $\Delta$ such that $W^{(1)}(x_a + \Delta) = 0$, i.e., the modified feature vector $x_a + \Delta$ is in the null space of the first GNN layer (this equation demonstrates GCN, but similar equations hold for other GNN types like GAT and GIN). Such a feature perturbation is equivalent to removing all the edges of the node $a$. (ii) Second, we argue that perturbing the graph structure is not realistic, because a single attacker controls only its own edges, and cannot control the global graph structure as in previous work (Dai et al., 2018; Bojchevski & Günnemann, 2019b; Zhang & Zitnik, 2020). (iii) Finally, when a successful attack is caused by removing edges, it is unclear whether the misclassification is caused by sensitivity to non-robust features in the data (Ilyas et al., 2019), or simply by a smaller amount of information. Similarly, when a successful attack is caused by inserting edges, it is unclear whether this is simply due to incorrect or unrealistic added information.

3.3 FINDING THE PERTURBATION VECTOR

To find the perturbation, we iteratively differentiate the desired loss of $v$ with respect to the perturbation vector $\Delta$, update $\Delta$ according to the gradient, and add it to the feature vector. In non-targeted attacks, we take the positive gradient of the loss of the undesired label to increase the loss; in targeted attacks, we take the negative gradient of the loss of the adversarial label $y_{adv}$:

$\Delta_{t+1} = \begin{cases} \Delta_t + \eta \nabla_\Delta J\left(f_\theta(G_{x_a+\Delta_t}, v), \hat{y}_v\right) & \text{non-targeted attack} \\ \Delta_t - \eta \nabla_\Delta J\left(f_\theta(G_{x_a+\Delta_t}, v), y_{adv}\right) & \text{targeted attack} \end{cases} \qquad (4)$

where $\eta \in \mathbb{R}_+$ is a learning rate. We repeat this process for a predefined number of $K$ iterations, or until the model predicts the desired label.
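The update of Eq. (4) is, in essence, projected gradient ascent on a single feature row. The following is a minimal PyTorch sketch under our own assumptions: `model` is any GNN taking a feature matrix and an edge list, and the function and argument names are ours, not the paper's released code. The post-hoc sparsification of $\Delta$ described in the next paragraph is omitted for brevity.

import torch
import torch.nn.functional as F

def single_attack(model, X, edge_index, a, v, label, eps=0.1, lr=0.01, K=20, targeted=False):
    # Sketch of the SINGLE update in Eq. (4). For a non-targeted attack, `label`
    # is the current prediction y_hat_v; for a targeted attack, it is y_adv.
    delta = torch.zeros_like(X[a], requires_grad=True)
    for _ in range(K):
        X_pert = X.clone()
        X_pert[a] = X[a] + delta  # perturb only the attacker's feature row
        logits = model(X_pert, edge_index)  # assumed interface: (X, edge_index) -> (N, Y) logits
        pred = int(logits[v].argmax())
        if (pred == label) if targeted else (pred != label):
            break  # the model already predicts the desired label
        loss = F.cross_entropy(logits[v].unsqueeze(0), torch.tensor([label]))
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += (lr * grad) if not targeted else (-lr * grad)  # ascend / descend, Eq. (4)
            delta.clamp_(-eps, eps)  # enforce ||delta||_inf <= eps_inf
    return delta.detach()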
Enforcing the constraints. We treat the node features as continuous throughout the attack iterations, whether they are discrete or continuous. Once the attack succeeds, we try to reset to zero as many perturbation vector elements as possible. We sort the perturbation vector elements in decreasing order according to their absolute value: $i_1, \ldots, i_D$. We start with the index of $\Delta$ whose absolute value is the largest, $i_1$, and reset the rest of the elements $\{i_2, \ldots, i_D\}$ to zero. We then check whether perturbing only the $i_1$ index is sufficient. If the attack succeeds, we stop. If the attack fails (because of the large number of perturbation vector elements set to zero), we continue perturbing the rest of the elements of $\Delta$. In the worst case, we perturb all $D$ vector elements of $\Delta$. In most cases, we stop much earlier, practically perturbing only a small fraction of the vector elements. If the original node features are discrete, we discretize the features after the optimization.

Differentiate by frequencies, not by embeddings. When taking the gradient with respect to the perturbation vector, $\nabla_\Delta$, there is a subtle, but crucial, difference in the way that node representations are given in the dataset: (a) indicative datasets provide initial node representations $X = [x_1, x_2, \ldots]$ that are word indicator vectors (many-hot) or frequencies such as (weighted) bag-of-words (Sen et al., 2008; Shchur et al., 2018); (b) in encoded datasets, initial node representations are given encoded, e.g., as an average of word2vec vectors (Hamilton et al., 2017; Hu et al., 2020). Indicative datasets can be converted to encoded by multiplying every vector by an embedding matrix; encoded datasets cannot be converted to indicative without the authors releasing the textual data that was used to create the encoded dataset.

In indicative datasets, a perturbation of a node vector can be realized as a perturbation of the original text from which the indicative vector was derived. That is, adding or removing words in the text can result in the perturbed node vector. In contrast, a few-indices perturbation in encoded datasets might be an effective attack, but will not be realistic, because there is no perturbation of the original text that will result in that perturbation of the vector. That is, when perturbing nodes, it is crucial to use indicative datasets, or to convert encoded datasets to the indicative representation from which they were derived (as we do in Section 4) using their original text.

4 EVALUATION

We evaluate and analyze the effectiveness of our SINGLE attack. In Section 4.1, we show that SINGLE is more effective than alternatives such as single-edge attacks. In Section 4.2, we show that if we are allowed to choose the attacker node, SINGLE is significantly more effective.

Setup. Our implementation is based on PyTorch Geometric (Fey & Lenssen, 2019) and its provided datasets. We trained each GNN type with two layers ($L = 2$), using the Adam optimizer, early stopped according to the validation set, and applied a dropout of 0.5 between layers. We used up to $K = 20$ attack iterations. All experiments in this section were performed with GCN, except for Section 4.5, where additional GNN types (GAT, GIN, and GraphSAGE) are shown. In Appendix A.2, we show consistent results across additional GNN types: GAT (Veličković et al., 2018), GIN (Xu et al., 2019b), GraphSAGE (Hamilton et al., 2017), SGC (Wu et al., 2019a), and RobustGCN (Zügner & Günnemann, 2019).
Data. We used Cora and CiteSeer (Sen et al., 2008), which are discrete datasets, i.e., the given node feature vectors are many-hot vectors. Thus, we set $\epsilon_\infty = 1$, the minimal possible perturbation. We also used PubMed (Sen et al., 2008) and the Twitter-Hateful-Users (Ribeiro et al., 2017) datasets, which are continuous, and node features represent frequencies of words. Continuous datasets allow a much more subtle perturbation, and we set $\epsilon_\infty = 0.1$. An analysis of these values is presented in Section 4.5.

The Twitter-Hateful-Users dataset is originally provided as an encoded dataset, where every node is an average of GloVe vectors (Pennington et al., 2014). We reconstructed this dataset using the original text from Ribeiro et al. (2017), to be able to compute gradients with respect to the weighted histogram of words, rather than the embeddings. We took the most frequent 10,000 words as node features, and used GloVe-Twitter embeddings to multiply by the node features. We thus converted this dataset to indicative rather than encoded. Statistics of all datasets are provided in the supplementary material.

Baselines. In SINGLE (Section 3.3), the attacker node is selected randomly for each victim node, and the attack perturbs this node's features according to $\epsilon_\infty$. SINGLE-hops is a modification of SINGLE where the attacker node is sampled only among nodes that are not neighbors, i.e., the attacker and the victim are not directly connected ($(a, v) \notin E$). We compare to additional approaches from the literature: EdgeGrad follows most previous work (Xu et al., 2019a; Li et al., 2020; Zügner & Günnemann, 2020): EdgeGrad randomly samples an attacker node as in SINGLE, and either inserts or removes a single edge from or to the attacker node, according to the gradient (this can be implemented easily using edge weights: training the GNN with weights of 1 for existing edges, adding all possible edges with weights of 0, and taking the gradient with respect to the vector of weights). If both use a randomly selected attacker node, EdgeGrad is strictly stronger than the GradArgmax attack of Dai et al. (2018), which only removes edges. We ran each approach 5 times with different random seeds for each dataset, and report the mean and standard deviation.

4.1 MAIN RESULTS

Table 1: Test accuracy (lower is better) under different types of attacks, when the attacker node is chosen randomly. Performed using GCN, with $\epsilon_\infty = 1$ for the discrete datasets (Cora and CiteSeer), and $\epsilon_\infty = 0.1$ for the continuous datasets (PubMed and Twitter).

Attack | Cora | CiteSeer | PubMed | Twitter
Clean (no attack) | 80.5±0.8 | 68.5±0.7 | 78.5±0.6 | 89.1±0.2
EdgeGrad | 65.1±1.3 | 48.15±0.9 | 59.7±0.7 | 82.7±0.0
SINGLE | 60.1±0.1 | 34.0±3.6 | 45.5±0.5 | 72.1±7.2
SINGLE-hops | 69.3±0.9 | 45.1±5.2 | 48.7±0.9 | 74.5±6.7

Table 1 shows our main results for non-targeted attacks across various datasets. As shown, SINGLE is more effective than EdgeGrad across all datasets. SINGLE-hops, which is more unnoticeable than attacking with a neighbor node, performs almost as well as SINGLE, and better than EdgeGrad. On Twitter, SINGLE reduces the test accuracy significantly better than EdgeGrad: 72.1% compared to 82.7%. Results for targeted attacks are shown in Appendix A.3.

Surprisingly, Table A.5 shows that RobustGCN (Zügner & Günnemann, 2019) is as vulnerable to the SINGLE attack as a standard GCN, showing that there is still much room for novel ideas and improvements to the robustness of current GNNs.

As we explain in Section 3.3, SINGLE tries to find a perturbation vector in which the number of perturbed elements is minimal. We measured the number of vector elements that the attack had perturbed in practice.
In PubMed, SINGLE used 76 vector elements on average, which are 15% of the elements in the feature vector. In Cora, SINGLE perturbed 717 elements on average, which are 50%. In CiteSeer, SINGLE used 1165 attributes on average, which are 31% of the features. In Twitter, SINGLE used 892 attributes on average, which are 9% of the features. In the experiments shown in Table 1, we used $\epsilon_\infty = 0.1$ in the continuous datasets (PubMed and Twitter). If we allow larger values of $\epsilon_\infty$, we can reduce the number of perturbed vector elements: using $\epsilon_\infty = 0.5$ requires perturbing only 3% of the attributes on average to achieve the same effectiveness; using $\epsilon_\infty = 1$ requires perturbing only 1.6% of the attributes on average to achieve the same effectiveness (in PubMed, where varying $\epsilon_\infty$ is meaningful).

4.2 ATTACKER CHOICE

If the attacker could choose its node, e.g., by hijacking an existing account in a social network, could they increase the effectiveness of the attack? We examine the effectiveness of two approaches for choosing the attacker node.

Gradient Attacker Choice (GradChoice) chooses the attacker node according to the largest gradient with respect to the node representations (for a non-targeted attack): $a^{*} = \arg\max_{a_i \in V} \|\nabla_{x_i} J(f_\theta(G, v), \hat{y}_v)\|_1$. The chosen attacker node is never the victim node itself.

Topological Attacker Choice (Topology) chooses the attacker node according to topological properties of the graph. As an example, we choose the neighbor of the victim node $v$ with the smallest number of neighbors: $a^{*} = \arg\min_{a \in \mathcal{N}_v} |\mathcal{N}_a|$. The advantage of this approach is that the attacker choice is model-free: if the attacker cannot compute gradients, they can at least choose the most harmful attacker node, and then perform the perturbation itself using other non-gradient approaches such as the ones proposed by Waniek et al. (2018) and Chang et al. (2020).
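Both selection rules are short to state in code. This is a minimal sketch under our own assumptions: `edge_index` is a (2, E) tensor of (source, target) pairs, assumed symmetric for undirected graphs, the victim is assumed to have at least one neighbor, and the function names are ours.

import torch
import torch.nn.functional as F

def grad_choice(model, X, edge_index, v, y_hat):
    # GradChoice sketch: pick the node whose features have the largest l1 gradient
    # of the victim's loss (excluding the victim itself).
    Xg = X.clone().requires_grad_(True)
    loss = F.cross_entropy(model(Xg, edge_index)[v].unsqueeze(0), torch.tensor([y_hat]))
    g = torch.autograd.grad(loss, Xg)[0].abs().sum(dim=1)  # per-node l1 norm
    g[v] = float('-inf')  # never pick the victim node itself
    return int(g.argmax())

def topology_choice(edge_index, v):
    # Topology sketch (model-free): the victim's neighbor with the fewest neighbors.
    src, dst = edge_index
    deg = torch.bincount(src, minlength=int(edge_index.max()) + 1)  # degree per node
    neighbors = src[dst == v]  # all u such that (u, v) is an edge
    return int(neighbors[deg[neighbors].argmin()])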
To perform a fair comparison, we compare these approaches with GlobalEdgeGrad, which is similar to EdgeGrad in that it can insert or remove an edge, with the difference that the edge can be chosen from the entire graph.

Results. Results for these attacker choice approaches are shown in Table 2.

Table 2: Test accuracy when the adversary can choose the attacker node.

Attack | Cora | CiteSeer | PubMed | Twitter
GlobalEdgeGrad | 29.7±2.4 | 11.9±0.8 | 15.3±0.4 | 82.7±0.0
SINGLE+GradChoice | 31.0±1.9 | 19.0±4.2 | 8.5±1.2 | 7.0±1.1
SINGLE+Topology | 31.1±1.2 | 18.1±3.4 | 5.2±0.1 | 6.6±0.5

The main results are that choosing the attacker node significantly increases the effectiveness of the SINGLE attack: for example, on Twitter, from 72.1% (Table 1) to 6.6% test accuracy (Table 2).

In datasets where the given initial node features are continuous (PubMed and Twitter), SINGLE+Topology and SINGLE+GradChoice show similar results: on Twitter, the accuracy difference is less than 0.5%; on PubMed, SINGLE+Topology outperforms SINGLE+GradChoice by 3%, even though SINGLE+Topology is model-free. Both of those attacks are more efficient than GlobalEdgeGrad, showing the superiority of node perturbation over edge perturbation in the global view. In Appendix A.4, we show that allowing GlobalEdgeGrad to insert and remove multiple edges that belong to the same attacker node does not lead to a significant improvement.

Interestingly, GradChoice and Topology agree on the choice of attacker node for 50.3% of the nodes in Cora, 78.7% of the nodes in CiteSeer, 51.0% of the nodes in PubMed, and 55.0% of the nodes in Twitter, showing that the node selection can sometimes be performed model-free.

In datasets where the initial node features are discrete (Cora and CiteSeer), i.e., many-hot vectors, GlobalEdgeGrad reduces the test accuracy more than GradChoice and Topology. We believe that the reason is the difficulty of two-step optimization in discrete datasets: for example, GradChoice needs to choose the node, and then find the perturbation afterwards. Finding a perturbation for a discrete vector is more difficult than in continuous datasets, and the choice of the attacker node may not be optimal.

4.3 SCENARIO ABLATION

The main scenario that we focus on in this paper is the SINGLE approach that always perturbs a single node, which is not the victim node ($a \neq v$). We now examine our SINGLE attack in other, easier but less realistic, scenarios: SINGLE-two attackers follows Zügner et al. (2018) and Zang et al. (2020), and randomly samples two attacker nodes and perturbs their features using the same approach as SINGLE. SINGLE-direct perturbs the victim node directly (i.e., $a = v$), an approach that was found to be the most efficient by Zügner et al. (2018). Table 3 shows the test accuracy of these ablations. In Appendix A.5.3, we additionally experiment with more than two attacker nodes.

Table 3: Scenario ablation: test accuracy under different attacking scenarios.

Attack | Cora | CiteSeer | PubMed | Twitter
SINGLE-two attackers | 7.1±0.5 | 8.2±0.2 | 27.7±0.2 | –
SINGLE-direct | 21.2±2.5 | 13.8±2.1 | 0.3±0.1 | 57.6±8.7
SINGLE | 60.1±0.1 | 18.1±3.4 | 45.5±0.5 | 72.1±7.2

4.4 ADVERSARIAL TRAINING

In the previous sections, we studied the effectiveness of the SINGLE attack. In this section, we investigate to what extent adversarial training (Madry et al., 2018) can defend against SINGLE. For each training step and labeled training node, we perform $K_{train}$ adversarial steps to adversarially perturb another randomly sampled node, exactly as in SINGLE, but at training time. The model is then trained to minimize the original cross-entropy loss and the adversarial loss:

$\mathcal{L}(f_\theta; \mathcal{D}) = \frac{1}{2 N_D} \sum_{i=0}^{N_D} \left[ J\left(f_\theta(G, v_i), y_i\right) + J\left(f_\theta(G_{x_{a_i}+\Delta_i}, v_i), y_i\right) \right]. \qquad (5)$

The main difference from Equation (3) is the adversarial term $J\left(f_\theta(G_{x_{a_i}+\Delta_i}, v_i), y_i\right)$, where $a_i$ is the randomly sampled attacker for the node $v_i$. In every training step, we randomly sample a new attacker for each victim node and compute new $\Delta_i$ vectors. After the model is trained, we attack the model with $K_{test}$ SINGLE adversarial steps. This is similar to Feng et al. (2019) and Deng et al. (2019), except that they used adversarial training as a regularizer, to improve the accuracy of a model while not under attack. In contrast, we use adversarial training to defend a model against an attack at test time. We used $K_{train} = 5$, as we found it to be the maximal value for which the model's accuracy is not significantly hurt while not under attack ("clean"), and $K_{test} = 20$ as in the previous experiments.

Table 4: Test accuracy while attacking a model that was adversarially trained on PubMed, with different types of attacks.

Attack | Standard training | Adversarial training
Clean (no attack) | 78.5±0.6 | 76.9±0.6
SINGLE | 45.5±0.5 | 58.5±2.7
SINGLE-hops | 48.7±0.9 | 62.1±2.5
SINGLE+GradChoice | 8.5±1.2 | 30.6±6.8
SINGLE+Topology | 5.2±0.1 | 21.1±2.1
SINGLE-two attackers | 27.7±0.2 | 40.7±3.4
SINGLE-direct | 0.3±0.1 | 4.6±1.1

As shown in Table 4, adversarial training indeed improves the model's robustness against the different SINGLE attacks. However, the main result of this section is that SINGLE, SINGLE+GradChoice and SINGLE+Topology are still very effective attacks, as they succeed in attacking the adversarially trained model, reducing its test accuracy to 58.5%, 30.6% and 21.1%, respectively.
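One training step under the objective of Eq. (5) can be sketched as follows. This is a minimal illustration under our own assumptions; it reuses the `single_attack` sketch from Section 3.3 as the inner maximization, and the function and argument names are ours, not the paper's released code.

import torch
import torch.nn.functional as F

def adversarial_training_step(model, X, edge_index, train_nodes, labels,
                              optimizer, eps=0.1, K_train=5):
    # One step of the adversarial training objective in Eq. (5): for every
    # labeled node, a randomly sampled attacker is perturbed with K_train SINGLE steps.
    optimizer.zero_grad()
    clean_logits = model(X, edge_index)
    clean_loss = F.cross_entropy(clean_logits[train_nodes], labels)

    adv_loss = 0.0
    for v, y in zip(train_nodes.tolist(), labels.tolist()):
        a = v
        while a == v:  # sample an attacker node other than the victim
            a = int(torch.randint(X.size(0), (1,)))
        delta = single_attack(model, X, edge_index, a, v, y, eps=eps, K=K_train)
        X_pert = X.clone()
        X_pert[a] = X[a] + delta
        adv_logits = model(X_pert, edge_index)
        adv_loss = adv_loss + F.cross_entropy(adv_logits[v].unsqueeze(0), torch.tensor([y]))

    loss = 0.5 * (clean_loss + adv_loss / len(train_nodes))  # the 1/(2 N_D) weighting of Eq. (5)
    loss.backward()
    optimizer.step()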
4.5 SENSITIVITY TO $\epsilon_\infty$

How does the intensity of the adversarial perturbation affect the performance of the attack? Intuitively, we expect that the less we restrict the perturbation (i.e., larger values of $\epsilon_\infty$), the more powerful the attack. We examine whether this holds in practice.

[Figure 2: Effectiveness of the attack compared to the allowed $\epsilon_\infty$ (performed on PubMed, because its features are continuous); curves shown for GCN, GAT, GIN, and GraphSAGE.]

[Figure 3: Test accuracy compared to the distance between the attacker and the victim, when the GCN was trained with $L = 8$ on PubMed; curves shown for PubMed, Cora, and CiteSeer.]

In our experiments in Sections 4.1 to 4.4, we used $\epsilon_\infty = 0.1$ for the continuous datasets (PubMed and Twitter). In this section, we vary the value of $\epsilon_\infty$ across different GNN types and observe the effectiveness of the attack. Figure 2 shows the results on PubMed. We used this dataset because it is larger than Cora and CiteSeer (Appendix A.1), and, most importantly, its features are continuous, so real-valued perturbations are feasible. As shown in Figure 2, the most significant difference is between performing the perturbation ($\epsilon_\infty = 0.1$) and not attacking at all ($\epsilon_\infty = 0$). As we increase the value of $\epsilon_\infty$, GCN and GraphSAGE (Hamilton et al., 2017) show a natural descent in test accuracy. Contrarily, GAT (Veličković et al., 2018) and GIN (Xu et al., 2019b) are more robust to increased absolute values of perturbations, while GAT is also the most robust compared to the other GNN types.

4.6 DISTANCE BETWEEN ATTACKER AND VICTIM

In Section 4.1, we found that SINGLE performs similarly to SINGLE-hops, although SINGLE-hops samples an attacker node $a$ whose distance from the victim node $v$ is at least 2. We further question whether the effectiveness of the attack depends on the distance in the graph between the attacker and the victim. We trained a new model for each dataset using $L = 8$ layers. Then, for each test victim node, we sampled attackers according to their distance to the test node.

As shown in Figure 3, the effectiveness of the attack increases as the distance between the attacker and the victim decreases. At a distance of 5, the curve seems to saturate. A possible explanation for this is that apparently more than a few layers (e.g., $L = 2$ in Kipf & Welling (2017)) are not needed in most datasets. Thus, the rest of the layers can theoretically learn not to pass much of their input starting from the redundant layers, excluding adversarial signals as well.

5 RELATED WORK

Works on adversarial attacks on GNNs differ in several main aspects. In this section, we discuss the main criteria, to clarify the settings that we address.

Single vs. multiple attackers. All previous works allowed perturbing multiple nodes, or edges that are covered by multiple nodes: Zügner et al. (2018) perturb features of a set of attacker nodes; Zang et al. (2020) assume "a few bad actors"; other works perturb edges whose perturbation, in realistic settings, would require controlling multiple nodes (Bojchevski & Günnemann, 2019a; Sun et al., 2020; Chen et al., 2018).

Node vs. edge perturbations. Most adversarial attacks on GNNs perturb the input graph by modifying the graph structure (Zügner & Günnemann, 2019; Wang et al., 2020; Xu et al., 2019a). For example, Dai et al. (2018) iteratively remove edges, yet their attack manages to reduce the accuracy by about 10% at most when perturbing a single edge. Li et al. (2020) also allow the insertion of edges; Waniek et al. (2018) and Chang et al.
(2020) allow insertion and deletion of edges, using attacks that are based on correlations and eigenvalues, and not on gradients. Yefet et al. (2019) perturb one-hot node vectors, in the restricted domain of computer programs. Zügner et al. (2018) and Wu et al. (2019b) perturb both edges and nodes, but they concluded that perturbing edges is more effective than perturbing nodes. In this work, we counter these conclusions and show that perturbing node features is more effective than perturbing edges.

Direct vs. influence attacks. Another difference between prior works lies in the difference between direct attacks and influence attacks. In direct attacks, the attacker perturbs the target node itself. For example, the attack of Zügner et al. (2018) is the most effective when the attacker and the target are the same node. In influence attacks, the perturbed nodes are at least one hop away from the victim node. In this paper, we show that the strong direct assumption is not required (SINGLE-direct in Section 4.2), and that our attack is effective when the attacker and the target are not even direct neighbors, i.e., they are at least two hops away (SINGLE-hops in Section 4.1).

Poisoning vs. evasion attacks. In a related scenario, some work (Zügner & Günnemann, 2019; Bojchevski & Günnemann, 2019a; Li et al., 2020; Zhang & Zitnik, 2020) focuses on poisoning attacks that perturb examples before training. Contrarily, we focus on the standard evasion scenario of adversarial examples in neural networks (Szegedy et al., 2013; Goodfellow et al., 2014), where the attack operates at test time, after the model was trained, as in Dai et al. (2018).

Attacking vs. certifying. Zügner & Günnemann (2020) focus on certifying the robustness of GNNs against adversarial perturbations, and Bojchevski & Günnemann (2019b) certified PageRank-style models. In contrast, we study the effectiveness of the adversarial attack itself.

6 CONCLUSION

We demonstrate that GNNs are susceptible even to the extremely limited scenario of a single-node indirect adversarial example (SINGLE). The practical consequence of these findings is that a single attacker in a network can force a GNN to classify any other target node as the attacker's chosen label, by slightly perturbing some of the attacker's features. We further show that if the attacker can choose its attacker node, the effectiveness of the attack increases significantly. We study the effectiveness of these attacks across various GNN types and datasets.

We believe that this work will drive research in this field toward exploring novel defense approaches for GNNs. Such defenses can be crucial for real-world systems that are modeled using GNNs. Furthermore, we believe that the surprising results of this work motivate a better theoretical understanding of the expressiveness and generalization of GNNs. To these ends, we make all our code and trained models publicly available.

REFERENCES

Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. arXiv preprint arXiv:2006.05205, 2020.

Pablo Barceló, Egor V. Kostylev, Mikael Monet, Jorge Pérez, Juan Reutter, and Juan Pablo Silva. The logical expressiveness of graph neural networks. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=r1lZ7AEKvB.

Aleksandar Bojchevski and Stephan Günnemann. Adversarial attacks on node embeddings via graph poisoning. In International Conference on Machine Learning, pp.
695–704, 2019a.

Aleksandar Bojchevski and Stephan Günnemann. Certifiable robustness to graph perturbations. In Advances in Neural Information Processing Systems, pp. 8319–8330, 2019b.

Nicholas Carlini and David Wagner. Audio adversarial examples: Targeted attacks on speech-to-text. In 2018 IEEE Security and Privacy Workshops (SPW), pp. 1–7. IEEE, 2018.

Heng Chang, Yu Rong, Tingyang Xu, Wenbing Huang, Honglei Zhang, Peng Cui, Wenwu Zhu, and Junzhou Huang. A restricted black-box adversarial framework towards attacking graph embedding models. In AAAI, pp. 3389–3396, 2020.

Jinyin Chen, Yangyang Wu, Xuanheng Xu, Yixian Chen, Haibin Zheng, and Qi Xuan. Fast gradient attack on network embedding. arXiv preprint arXiv:1809.02797, 2018.

Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, and Le Song. Adversarial attack on graph structured data. In International Conference on Machine Learning, pp. 1115–1124, 2018.

Zhijie Deng, Yinpeng Dong, and Jun Zhu. Latent adversarial training of graph convolution networks. In ICML Workshop on Learning and Reasoning with Graph-Structured Representations, 2019. URL https://graphreason.github.io/papers/3.pdf.

David K. Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems, pp. 2224–2232, 2015.

Fuli Feng, Xiangnan He, Jie Tang, and Tat-Seng Chua. Graph adversarial training: Dynamically regularizing based on graph structure. IEEE Transactions on Knowledge and Data Engineering, 2019.

Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019.

Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.

Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pp. 1024–1034, 2017.

Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. arXiv preprint arXiv:2005.00687, 2020.

Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. In Advances in Neural Information Processing Systems, pp. 125–136, 2019.

Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=SJU4ayYgl.

Jure Leskovec and Julian J. Mcauley. Learning to discover social circles in ego networks. In Advances in Neural Information Processing Systems, pp. 539–547, 2012.

Jintang Li, Tau Xie, Liang Chen, Fentang Xie, Xiangnan He, and Zibin Zheng. Adversarial attack on large scale graph. arXiv preprint arXiv:2009.03488, 2020.

Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. In International Conference on Learning Representations, 2016.

Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks.
In International Conference on Learning Representations, 2018.

Andrew Kachites McCallum, Kamal Nigam, Jason Rennie, and Kristie Seymore. Automating the construction of internet portals with machine learning. Information Retrieval, 3(2):127–163, 2000.

Alessio Micheli. Neural network for graphs: A contextual constructive approach. IEEE Transactions on Neural Networks, 20(3):498–511, 2009.

Galileo Mark Namata, Ben London, Lise Getoor, and Bert Huang. Query-driven active surveying for collective classification. In Workshop on Mining and Learning with Graphs, 2012.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543, 2014. URL http://www.aclweb.org/anthology/D14-1162.

Manoel Horta Ribeiro, Pedro H. Calais, Yuri A. Santos, Virgílio A. F. Almeida, and Wagner Meira Jr. "Like sheep among wolves": Characterizing hateful users on Twitter. arXiv preprint arXiv:1801.00317, 2017.

Manoel Horta Ribeiro, Pedro H. Calais, Yuri A. Santos, Virgílio A. F. Almeida, and Wagner Meira Jr. Characterizing and detecting hateful users on Twitter. arXiv preprint arXiv:1803.08977, 2018.

Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2008.

Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In European Semantic Web Conference, pp. 593–607. Springer, 2018.

Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93–93, 2008.

Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan Günnemann. Pitfalls of graph neural network evaluation. Relational Representation Learning Workshop, NeurIPS 2018, 2018.

Yiwei Sun, Suhang Wang, Xianfeng Tang, Tsung-Yu Hsieh, and Vasant Honavar. Non-target-specific node injection attacks on graph neural networks: A hierarchical reinforcement learning approach. In Proc. WWW, volume 3, 2020.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

Rakshit Trivedi, Hanjun Dai, Yichen Wang, and Le Song. Know-evolve: Deep temporal reasoning for dynamic knowledge graphs. In International Conference on Machine Learning, pp. 3462–3471, 2017.

Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rJXMpikCZ.

Eric Wallace, Mitchell Stern, and Dawn Song. Imitation attacks and defenses for black-box machine translation systems. arXiv preprint arXiv:2004.15015, 2020.

Binghui Wang, Jinyuan Jia, Xiaoyu Cao, and Neil Zhenqiang Gong. Certified robustness of graph neural networks against adversarial structural perturbation. arXiv preprint arXiv:2008.10715, 2020.

Zhichun Wang, Qingsong Lv, Xiaohan Lan, and Yu Zhang. Cross-lingual knowledge graph alignment via graph convolutional networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 349–357, 2018.

Marcin Waniek, Tomasz P. Michalak, Michael J. Wooldridge, and Talal Rahwan.
Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. Simplifying graph convolutional networks. In International Conference on Machine Learning, pp. 6861–6871, 2019a.
Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, and Liming Zhu. Adversarial examples for graph data: deep insights into attack and defense. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pp. 4816–4823. AAAI Press, 2019b.
Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, and Xue Lin. Topology attack and defense for graph neural networks: an optimization perspective. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pp. 3961–3967. AAAI Press, 2019a.
Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019b. URL https://openreview.net/forum?id=ryGs6iA5Km.
Noam Yefet, Uri Alon, and Eran Yahav. Adversarial examples for models of code. arXiv preprint arXiv:1910.07517, 2019.
Xiao Zang, Yi Xie, Jie Chen, and Bo Yuan. Graph universal adversarial attacks: A few bad actors ruin graph learning models. arXiv preprint arXiv:2002.04784, 2020.
Xiang Zhang and Marinka Zitnik. GNNGuard: Defending graph neural networks against adversarial attacks. arXiv preprint arXiv:2006.08149, 2020.
Daniel Zügner and Stephan Günnemann. Certifiable robustness and robust training for graph convolutional networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 246–256, 2019.
Daniel Zügner and Stephan Günnemann. Certifiable robustness of graph convolutional networks under structure perturbations. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1656–1665, 2020.
Daniel Zügner and Stephan Günnemann. Adversarial attacks on graph neural networks via meta learning. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bylnx209YX.
Daniel Zügner, Amir Akbarnejad, and Stephan Günnemann. Adversarial attacks on neural networks for graph data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2847–2856, 2018.

A SUPPLEMENTARY MATERIAL

A.1 DATASET STATISTICS

Statistics of the datasets are shown in Table A.1.

            #Training   #Val   #Test   #Unlabeled Nodes   #Classes   Avg. Node Degree
Cora        140         500    1000    2708               7          3.9
CiteSeer    120         500    1000    3327               6          2.7
PubMed      60          500    1000    19717              3          4.5
Twitter     4474        248    249     95415              2          45.6

Table A.1: Dataset statistics.
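For readers who want to reproduce the citation-network rows of Table A.1, the statistics can be recomputed from the standard PyTorch Geometric loaders. The snippet below is a minimal sketch, not part of the original evaluation code: the root path is a placeholder, and the Twitter dataset (reconstructed by the authors from the raw text of Ribeiro et al. (2017)) is not available through this loader.

```python
from torch_geometric.datasets import Planetoid

# Recompute the Cora / CiteSeer / PubMed rows of Table A.1. Planetoid
# stores undirected graphs with both edge directions, so
# num_edges / num_nodes gives the average node degree.
for name in ["Cora", "CiteSeer", "PubMed"]:
    data = Planetoid(root="/tmp/planetoid", name=name)[0]
    print(f"{name}: train={int(data.train_mask.sum())}, "
          f"val={int(data.val_mask.sum())}, test={int(data.test_mask.sum())}, "
          f"nodes={data.num_nodes}, classes={int(data.y.max()) + 1}, "
          f"avg_degree={data.num_edges / data.num_nodes:.1f}")
```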
A.2 ADDITIONAL GNN TYPES

Tables A.2 to A.5 present the test accuracy of different attacks applied to GAT (Veličković et al., 2018), GIN (Xu et al., 2019b), GraphSAGE (Hamilton et al., 2017), RobustGCN (Zügner & Günnemann, 2019), and SGC (Wu et al., 2019a), showing the effectiveness of SINGLE across different GNN types.

                        Cora          CiteSeer      PubMed
EdgeGrad                66.4 ± 1.2    49.4 ± 1.4    64.9 ± 1.0
SINGLE                  40.0 ± 12.5   33.2 ± 6.7    35.7 ± 13.3
SINGLE-hops             42.0 ± 11.5   41.7 ± 5.8    35.5 ± 13.6
GlobalEdgeGrad          67.8 ± 4.9    48.3 ± 5.1    63.5 ± 4.6
SINGLE+GradChoice       43.1 ± 4.9    32.4 ± 4.7    36.4 ± 8.0
SINGLE+Topology         32.2 ± 6.4    25.5 ± 8.0    27.8 ± 5.7
SINGLE-two attackers    12.7 ± 6.3    11.0 ± 1.4    26.8 ± 11.8
SINGLE-direct           23.6 ± 1.5    14.8 ± 4.3    21.8 ± 2.5

Table A.2: Test accuracy of GAT under different non-targeted attacks.

                        Cora          CiteSeer      PubMed
EdgeGrad                32.9 ± 3.1    18.5 ± 3.0    33.3 ± 1.7
SINGLE                  27.1 ± 1.3    12.3 ± 2.9    12.9 ± 1.0
SINGLE-hops             32.6 ± 0.7    18.5 ± 3.1    14.0 ± 0.6
GlobalEdgeGrad          10.7 ± 2.8    4.8 ± 2.1     10.3 ± 1.0
SINGLE+GradChoice       15.9 ± 2.0    8.1 ± 1.7     10.0 ± 1.6
SINGLE+Topology         16.1 ± 1.7    7.6 ± 1.6     6.3 ± 1.7
SINGLE-two attackers    2.7 ± 0.7     5.4 ± 2.0     6.2 ± 1.8
SINGLE-direct           5.7 ± 1.4     4.7 ± 1.6     3.1 ± 3.1

Table A.3: Test accuracy of GIN under different non-targeted attacks.

                        Cora          CiteSeer      PubMed
EdgeGrad                62.9 ± 1.9    45.9 ± 3.4    64.2 ± 1.6
SINGLE                  62.7 ± 2.4    32.3 ± 4.3    57.1 ± 0.8
SINGLE-hops             70.0 ± 3.3    45.5 ± 4.3    60.9 ± 0.8
GlobalEdgeGrad          48.9 ± 2.7    40.4 ± 3.3    64.7 ± 1.1
SINGLE+GradChoice       37.3 ± 3.4    18.0 ± 3.2    8.2 ± 0.7
SINGLE+Topology         37.4 ± 3.6    19.2 ± 4.2    6.6 ± 0.3
SINGLE-two attackers    14.4 ± 0.9    11.1 ± 0.1    45.4 ± 0.8
SINGLE-direct           19.6 ± 2.1    13.5 ± 3.9    0.0 ± 0.1

Table A.4: Test accuracy of GraphSAGE under different non-targeted attacks.

                        GCN           RobustGCN     SGC
Clean                   78.5 ± 0.6    73.9 ± 1.6    78.9 ± 0.5
EdgeGrad                59.7 ± 0.7    –             65.1 ± 1.3
SINGLE                  45.5 ± 0.5    34.3 ± 1.4    47.3 ± 1.2
SINGLE-hops             48.7 ± 0.9    29.7 ± 1.1    49.6 ± 1.2
GlobalEdgeGrad          15.3 ± 0.4    –             15.3 ± 0.4
SINGLE+GradChoice       8.5 ± 1.2     19.6 ± 0.9    11.3 ± 1.2
SINGLE+Topology         5.2 ± 0.1     72.5 ± 1.9    5.6 ± 0.5
SINGLE-two attackers    27.7 ± 0.2    20.0 ± 1.1    30.0 ± 1.8
SINGLE-direct           0.3 ± 0.1     15.8 ± 1.1    0.5 ± 0.2

Table A.5: Test accuracy of GCN, RobustGCN (Zügner & Günnemann, 2019), and SGC (Wu et al., 2019a) under different non-targeted attacks, on PubMed.

Surprisingly, Table A.5 shows that RobustGCN (Zügner & Günnemann, 2019) is as vulnerable to the SINGLE attack as a standard GCN, showing that there is still much room for novel ideas and improvements to the robustness of current GNNs.
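All of the non-targeted numbers in Tables A.2 to A.5 come from the same gradient loop described in Section 3.3, applied to different GNN types. The following is one possible minimal sketch of that loop, not the authors' exact implementation: it assumes a trained PyTorch Geometric-style model whose forward takes (x, edge_index), and the function name and the eps, lr, and steps defaults are illustrative.

```python
import torch
import torch.nn.functional as F

def single_attack(model, x, edge_index, attacker, victim,
                  eps=0.1, lr=0.01, steps=20):
    # Non-targeted SINGLE step: ascend the loss of the victim's current
    # prediction w.r.t. a perturbation of the attacker's feature row,
    # keeping the perturbation within the l_inf budget eps.
    model.eval()
    with torch.no_grad():
        y_hat = model(x, edge_index)[victim].argmax()
    delta = torch.zeros_like(x[attacker], requires_grad=True)
    for _ in range(steps):
        x_pert = x.clone()
        x_pert[attacker] = x[attacker] + delta
        loss = F.cross_entropy(model(x_pert, edge_index)[[victim]],
                               y_hat.view(1))
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad      # gradient ascent on the loss; for
            delta.clamp_(-eps, eps)       # the targeted variant, descend on
            delta.grad.zero_()            # the loss of y_adv instead
            x_pert = x.clone()
            x_pert[attacker] += delta
            if model(x_pert, edge_index)[victim].argmax() != y_hat:
                break                     # the prediction flipped
    return delta.detach()
```

The paper's experiments use up to K = 20 attack iterations; the learning rate above is a placeholder.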
A.3 TARGETED ATTACKS

Tables A.6 to A.9 show the results of targeted attacks across datasets and approaches. Differently from the other tables, which show test accuracy, Tables A.6 to A.9 present the targeted attack's success rate: the fraction of test examples for which the attack managed to force a specific label prediction (in these results, higher is better). These results suggest that in targeted attack settings, node-based attacks (such as SINGLE) have an even bigger advantage over edge-based attacks (such as EdgeGrad).

                        Cora          CiteSeer      PubMed        Twitter
EdgeGrad                8.0 ± 0.7     14.8 ± 0.5    20.1 ± 0.6    12.6 ± 2.5
SINGLE                  36.6 ± 2.4    60.7 ± 2.2    38.2 ± 0.6    14.6 ± 4.7
SINGLE-hops             33.6 ± 2.3    50.0 ± 2.5    35.2 ± 1.6    12.6 ± 3.7
GlobalEdgeGrad          59.4 ± 0.9    78.7 ± 0.9    80.1 ± 0.6    13.0 ± 2.2
SINGLE+GradChoice       65.8 ± 1.5    67.2 ± 2.2    43.5 ± 1.6    42.2 ± 11.6
SINGLE+Topology         57.3 ± 2.1    66.3 ± 3.0    90.4 ± 0.3    55.4 ± 9.4

Table A.6: Success rate (higher is better) of different targeted attacks on a GCN network.

                        Cora          CiteSeer      PubMed
EdgeGrad                6.1 ± 0.4     12.5 ± 1.2    17.9 ± 1.5
SINGLE                  33.7 ± 8.6    43.5 ± 11.1   50.7 ± 15.8
SINGLE-indirect         26.6 ± 7.3    29.8 ± 8.6    50.3 ± 15.7
GlobalEdgeGrad          6.0 ± 1.4     14.6 ± 2.8    22.3 ± 3.6
SINGLE+GradChoice       25.8 ± 5.3    38.6 ± 8.5    50.5 ± 13.5
SINGLE+Topology         41.3 ± 5.3    52.5 ± 11.3   63.0 ± 10.2

Table A.7: Success rate (higher is better) of different targeted attacks on a GAT network.

                        Cora          CiteSeer      PubMed
EdgeGrad                16.8 ± 1.2    25.6 ± 1.0    37.9 ± 2.6
SINGLE                  31.1 ± 1.7    49.0 ± 5.4    58.8 ± 7.9
SINGLE-hops             24.5 ± 1.2    37.4 ± 4.1    57.8 ± 5.7
GlobalEdgeGrad          44.7 ± 4.7    55.0 ± 7.0    64.9 ± 11.8
SINGLE+GradChoice       44.3 ± 5.0    59.0 ± 4.5    63.5 ± 9.9
SINGLE+Topology         45.1 ± 2.3    58.7 ± 5.1    73.2 ± 13.3

Table A.8: Success rate (higher is better) of different targeted attacks on a GIN network.

                        Cora          CiteSeer      PubMed
EdgeGrad                7.6 ± 0.3     16.3 ± 1.7    19.1 ± 1.4
SINGLE                  24.3 ± 1.9    50.0 ± 2.5    27.9 ± 1.0
SINGLE-hops             15.4 ± 3.8    34.2 ± 2.6    24.1 ± 1.0
GlobalEdgeGrad          9.3 ± 0.9     14.7 ± 1.1    19.6 ± 0.8
SINGLE+GradChoice       49.1 ± 3.0    63.7 ± 4.0    36.3 ± 2.1
SINGLE+Topology         54.1 ± 1.3    69.3 ± 3.2    89.8 ± 0.3

Table A.9: Success rate (higher is better) of different targeted attacks on a GraphSAGE network.

A.4 MULTI-EDGE ATTACKS

We strengthened the EdgeGrad attack by allowing it to add and remove multiple edges that are connected to the attacker node – MultiEdgeGrad. Accordingly, MultiGlobalEdgeGrad is equivalent to GlobalEdgeGrad, except that MultiGlobalEdgeGrad can choose the attacker node.

                        PubMed
Clean                   78.5 ± 0.6
EdgeGrad                65.1 ± 1.3
MultiEdgeGrad           64.5 ± 0.2
SINGLE                  45.5 ± 0.5
SINGLE-hops             48.7 ± 0.9
SINGLE+Topology         5.2 ± 0.1
SINGLE+GradChoice       8.5 ± 1.2
GlobalEdgeGrad          15.3 ± 0.4
MultiGlobalEdgeGrad     15.3 ± 0.5

Table A.10: Test accuracy of GCN using multi-edge attacks.

As shown in Table A.10, allowing the attacker node to add and remove multiple edges (MultiEdgeGrad and MultiGlobalEdgeGrad) results in a very minor improvement compared to EdgeGrad and GlobalEdgeGrad, while SINGLE, SINGLE+Topology, and SINGLE+GradChoice are much more effective.

A.5 ADDITIONAL BASELINES

A.5.1 ZERO-FEATURES APPROACH

We experimented with a baseline where we set δ = −x_a as the feature perturbation. The objective of this attack is to illustrate that SINGLE can find better perturbations than simply canceling the node feature vector, i.e., making the new vector a vector of zeros (which effectively removes the edges of the attacker node in a GCN).

                        PubMed
Clean                   78.5 ± 0.6
SINGLE                  45.5 ± 0.5
Zero features           76.6 ± 0.3

Table A.11: Test accuracy of the zero-features attack on a GCN network.

As shown, zero features is barely effective (compared to "Clean"), and SINGLE can find much better perturbations.

A.5.2 INJECTION ATTACKS

We also study an additional type of realistic attack that is based on node injection. In this approach, we insert a new node into the graph with a single edge attached to our victim node. The attack is performed by perturbing the injected node's attributes. Since there is no initial node feature vector to measure the ℓ∞ distance to, the injected node is allowed to take any realistic representation (e.g., without choosing negative frequencies). This attack is very powerful, reducing the test accuracy down to 0.02% on PubMed.
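One possible realization of the injection setup above, under the same (x, edge_index) model interface as before and with illustrative names: the helper only builds the augmented graph, and the injected node's features are then optimized with the same gradient loop as SINGLE, with the ℓ∞ clamp relaxed since there is no original vector to stay close to. Setting delta = -x[attacker] in that loop instead reproduces the zero-features baseline of Section A.5.1.

```python
import torch

def inject_attacker_node(x, edge_index, victim):
    # Append one fresh node (all-zero features) and connect it to the
    # victim with a single undirected edge, i.e., both directions,
    # matching PyTorch Geometric's edge_index convention.
    new_id = x.size(0)
    x_new = torch.cat([x, torch.zeros(1, x.size(1), dtype=x.dtype)], dim=0)
    new_edges = torch.tensor([[new_id, victim],
                              [victim, new_id]], dtype=edge_index.dtype)
    edge_index_new = torch.cat([edge_index, new_edges], dim=1)
    return x_new, edge_index_new, new_id
```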
A.5.3 LARGER NUMBER OF ATTACKERS

Number of attackers    PubMed
1                      45.5
2                      27.7
3                      19.0
4                      15.3
5                      12.9

Table A.12: Test accuracy for different numbers of attackers on PubMed.

We performed additional experiments with up to five randomly sampled attacker nodes acting simultaneously (Table A.12). As expected, allowing a larger number of attackers reduces the test accuracy. However, the main observation of this paper is that even a single attacker node is surprisingly effective.

A.6 LIMITING THE ALLOWED ε0

In Section 4.5, we analyzed the effect of the value of ε∞, that is, the maximal allowed perturbation of each vector attribute, on the performance of the attack. However, in datasets such as Cora and CiteSeer, the input features are binary (i.e., the input node vector is many-hot), so the only possible perturbation of a vector element is "flipping" its value from zero to one, or vice versa. Thus, in these datasets, it is interesting to analyze the effect of ε0, the maximal number of allowed perturbed vector elements, on the performance of the attack. In this case, measuring the ℓ0 norm is equivalent to measuring the ℓ1 norm, ‖δ‖0 = ‖δ‖1, and it is proportional to the ℓ2 norm.

We performed experiments where we measured the test accuracy of the model while limiting the number of allowed perturbed vector elements. The results are shown in Figure A.1.

As shown, when ε0 = 0, no attack is allowed, and the test accuracy is equal to the "Clean" value of Table 1. When ε0 = 100%, the results are equal to the SINGLE values of Table 1 – resulting in flipping 50% of the features on average in Cora, and 31% of the features on average in CiteSeer.

It is important to note that, in practice, the average number of perturbed features is much lower than the maximal number allowed. For example, in CiteSeer, allowing 100% of the features results in actually perturbing only 31% on average.

[Figure A.1 shows test accuracy (y-axis) as a function of the maximal fraction of perturbed features (x-axis, from 0 to 1), with curves for SINGLE and SINGLE+GradChoice on Cora and CiteSeer.]
Figure A.1: Test accuracy compared to the maximal allowed ε0, the number of perturbed features (divided by the total number of features in the dataset). In practice, the average number of perturbed features is much lower than the maximal number of allowed features.
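For concreteness, the sparsification procedure of Section 3.3, which also yields the "maximal fraction of perturbed features" axis of Figure A.1, can be sketched as follows (same assumptions and illustrative names as in the earlier sketches): sort the entries of the converged perturbation by magnitude and keep the smallest prefix that still changes the prediction.

```python
import torch

def sparsify_perturbation(model, x, edge_index, attacker, victim,
                          y_hat, delta, max_frac=1.0):
    # Keep only the largest-magnitude entries of delta that still change
    # the victim's prediction away from y_hat; max_frac caps the allowed
    # fraction of perturbed features (the x-axis of Figure A.1).
    order = delta.abs().argsort(descending=True)
    budget = int(max_frac * delta.numel())
    with torch.no_grad():
        for k in range(1, budget + 1):
            sparse = torch.zeros_like(delta)
            sparse[order[:k]] = delta[order[:k]]
            x_pert = x.clone()
            x_pert[attacker] += sparse
            if model(x_pert, edge_index)[victim].argmax() != y_hat:
                return sparse, k          # success with k perturbed entries
    return None, budget                   # no sparse attack within budget
```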
JVikAF-wTG
the paper lacks novelty
5: Marginally below acceptance threshold
In this paper, the authors mainly show that the adversary can force the GNN to classify any target node to a chosen label by perturbing another single arbitrary node's features in the graph. The paper is well written and easy to understand. However, there are several concerns about the paper:

1. The novelty of the paper is rather limited. The paper simply uses a gradient-based attack to add continuous perturbations to the node attributes. What is the novelty here compared to other gradient-based attacks on data without graph structure, such as images? I don't see any novelty or contribution from the methodology perspective.

2. The problem setting needs to be better discussed. In the introduction, the authors use the example of crafting adversarial posts to motivate the problem. However, in the problem definition, it becomes adding perturbations to the node features. In my view, modifying a node's features in realistic settings is even harder than modifying the graph structure. (The attacker could revise the words or sentences in the post, but they cannot set the feature vector directly, as the feature vectors are usually preprocessed by other models.)

3. The authors should include some robust GNNs, such as [1][2], as baselines to test how effective the proposed method is. The authors should also consider adding more attack methods as baselines.

[1] Zügner, Daniel, and Stephan Günnemann. "Certifiable robustness and robust training for graph convolutional networks." In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 246-256. 2019.

[2] Jin, Hongwei, and Xinhua Zhang. "Robust Training of Graph Convolutional Networks via Latent Perturbation." ECML-PKDD 2020.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Single-Node Attack for Fooling Graph Neural Networks ### Paper Abstract Graph neural networks (GNNs) have shown broad applicability in a variety of domains. Some of these domains, such as social networks and product recommendations, are fertile ground for malicious users and behavior. In this paper, we show that GNNs are vulnerable to the extremely limited scenario of a single-node adversarial example, where the node cannot be picked by the attacker. That is, an attacker can force the GNN to classify any target node to a chosen label by only slightly perturbing another single arbitrary node in the graph, even when not being able to pick that specific attacker node. When the adversary is allowed to pick a specific attacker node, the attack is even more effective. We show that this attack is effective across various GNN types (e.g., GraphSAGE, GCN, GAT, and GIN), across a variety of real-world datasets, and as a targeted and non-targeted attack. Our code is available anonymously at https://github.com/gnnattack/SINGLE . ### Paper Keywords ["graphs", "GNN", "adversarial", "attack"] ### Paper Content Under review as a conference paper at ICLR 2021SI N G L E - NO D E AT TA C K F O R FO O L I N GGR A P H NE U R A L NE T W O R K SAnonymous authorsPaper under double-blind reviewAB S T R A C TGraph neural networks (GNNs) have shown broad applicability in a variety ofdomains. Some of these domains, such as social networks and product recommen-dations, are fertile ground for malicious users and behavior. In this paper, we showthat GNNs are vulnerable to the extremely limited scenario of a single-node adver-sarial example, where the node cannot be picked by the attacker. That is, an attackercan force the GNN to classify any target node to a chosen label by only slightlyperturbing another single arbitrary node in the graph, even when not being able topick that specific attacker node . When the adversary is allowed to pick a specificattacker node , the attack is even more effective. We show that this attack is effec-tive across various GNN types (e.g., GraphSAGE, GCN, GAT, and GIN), across avariety of real-world datasets, and as a targeted and non-targeted attack. Our codeis available anonymously at https://github.com/gnnattack/SINGLE .1 I N T R O D U C T I O NGraph neural networks (GNNs) (Scarselli et al., 2008; Micheli, 2009) have recently shown sharplyincreasing popularity due to their generality and computation-efficiency (Duvenaud et al., 2015; Liet al., 2016; Kipf & Welling, 2017; Hamilton et al., 2017; Veli ˇckovi ́c et al., 2018; Xu et al., 2019b).Graph-structured data underlie a plethora of domains such as citation networks (Sen et al., 2008),social networks (Leskovec & Mcauley, 2012; Ribeiro et al., 2017; 2018), knowledge graphs (Wanget al., 2018; Trivedi et al., 2017; Schlichtkrull et al., 2018), and product recommendations (Shchuret al., 2018). Therefore, GNNs are applicable for a variety of real-world structured data.While most work in this field has focused on improving the accuracy of GNNs and applying themto a growing number of domains, only a few past works have explored the vulnerability of GNNsto adversarial examples. Consider the following scenario: a malicious user joins a social networksuch as Twitter or Facebook. The malicious user mocks the behavior of a benign user, establishesconnections with other users, and submits benign posts. 
After some time, the user submits a newadversarially crafted post, which might seem irregular but overall benign. Since the GNN representsevery user according to all the user’s posts, this new post perturbs the representation of the user asseen by a GNN. As a result, another, specific benign user gets blocked from the network; alternatively,another malicious user submits a hateful post – but does not get blocked. This scenario is illustratedin Figure 1. In this paper, we show the feasibility of such a troublesome scenario: a single attackernode can perturb its own representation, such that another node will be misclassified as a label of theattacker’s choice.Most previous work on adversarial examples in GNNs required the perturbation to span multiplenodes, which in reality requires the cooperation of multiple attackers. For example, the pioneeringwork of Z ̈ugner et al. (2018) perturbed a setof attacker nodes; Bojchevski & G ̈unnemann (2019a)perturb edges that are covered by a setof nodes. Further and in contrast with existing work, we showthat perturbing a single node is more harmful than perturbing a single edge .In this paper, we present a first a single-node adversarial attack on graph neural networks. If theadversary is allowed to choose the attacker node, for example, by hacking into an existing account,the efficiency of the attack significantly increases. We present two approaches for choosing theattacker: a white-box gradient-based approach, and a black-box, model-free approach that relies ongraph topology. Finally, we perform a comprehensive experimental evaluation of our approach onmultiple datasets and GNN architectures.1Under review as a conference paper at ICLR 2021GNN: validOur vision..Attacker node a Victim node v(a) Before attacking: the victim node ( v) is classifiedas valid.GNN: invalidnot racist Our vision..Attacker node a Victim node v(b) After attacking: the victim node ( v) is classified asinvalid.Figure 1: An partial adversarial example from the test set of the Twitter dataset. An adversarially-crafted post perturbs the representation of the attacker node. This perturbation causes a misclassifica-tion of the target victim node, although they are not even direct neighbors.2 P R E L I M I N A R I E SLetG=fGigNGi=1be a set of graphs. Each graph G= (V;E;X)2G has a set of nodes Vand aset of edgesEVV , where (u;v)2E denotes an edge from a node u2V to a nodev2V.X2RNDis a matrix of D-dimensional node features. The i-th row ofXis the feature vector ofthe nodevi2V and is denoted as xi=Xi;:2RD.Graph neural networks GNNs operate by iteratively propagating neural messages between neigh-boring nodes. Every GNN layer updates the representation of every node by aggregating its currentrepresentation with the current representations of its neighbors.Formally, each node is associated with an initial representation x(0)v=h(0)v2RD. This repre-sentation is considered as the given features of the node. Then, a GNN layer updates each node’srepresentation given its neighbors, yielding h(1)v2Rd1for everyv2V. In general, the `-th layer ofa GNN is a function that updates a node’s representation by combining it with its neighbors:h(`)v=COMBINEh(`1)v;fh(`1)uju2Nvg;`; (1)whereNvis the set of direct neighbors of v:Nv=fu2Vj (u;v)2Eg .The COMBINE function is what mostly distinguishes GNN types. For example, graph convolutionalnetworks (GCN) (Kipf & Welling, 2017) define a layer as:h(`)v= ReLUXu2Nv[fvg1cu;vW(`)h(`1)u(2)wherecu;vis a normalization factor usually set topjNvjjNuj. 
After`such aggregation itera-tions, every node representation captures information from all nodes within its `-hop neighborhood.The total number of layers Lis usually determined empirically as a hyperparameter. In the nodeclassification scenario, we use the final representation hLvto classifyv.For brevity, we focus our definitions on the semi-supervised transductive node classification goal,where the dataset contains a single graph G, and the split into training and test sets is across nodes inthe same graph. Nonetheless, these definitions can be trivially generalized to the inductive setting,where the dataset contains multiple graphs, the split into training and test sets is between graphs, andthe test nodes are unseen during training.We associate each node v2V with a class yv2Y=f1;:::;Yg. The labels of the training nodesare given during training; the test nodes are seen during training – without their labels. The trainingsubset is represented as D=G;f(vi;yi)gNDi=0. Given the training set, the goal is to learn a modelf: (G;V)!Y that will classify the rest of the nodes correctly. During training, the model fthusminimizes the loss over the given labels, using J(;), which typically is the cross-entropy loss:=argminL(f;D) =argmin1NDXNDi=0J(f(G;vi);yi) (3)2Under review as a conference paper at ICLR 20213 S I N G L E - NO D E G N N A T TA C KIn this section, we describe our Single-node INdirect Gradient adversariaL Evasion (SINGLE ) attack.While our attack is simple, it is the first attack that focuses on perturbing nodes (in contrast to edges(Dai et al., 2018)), which works with an arbitrary single attacker node (in contrast to multiple nodes(Z ̈ugner et al., 2018)) that is not the node under attack (in contrast to “direct” attacks where theattacker perturbs the node under attack directly (Z ̈ugner et al., 2018; Li et al., 2020)).3 . 1 P R O B L E M DE F I N I T I O NGiven a graph G, a trained model f, a “victim” node vfrom the test set along with its classificationby the model ^yv=f(G;v), we assume that an adversary controls another node ain the graph. Thegoal of the adversary is to modify its own feature vector xaby adding a perturbation vector 2RDof its choice, such that the model’s classification of vwillchange .We denote by Gxa+the graphGwhere the row of Xthat corresponds to the node awas added withthe vector. In a non-targeted attack, the goal of the attacker is to find a perturbation vector that willchange the classification to anyother class, i.e., f(Gxa+;v)6=f(G;v). In a targeted attack, theadversary chooses a specific label yadv2Y and the adversary’s goal is to force f(Gxa+;v) =yadv.Generally, the classification of a node vdepends only on nodes whose distance tovin the graph islower than or equal L– the number of GNN layers. Thus, a modification of the features of awillaffect the classification of vonly if the distance betweenaandvis lower than or equal L. Otherwise,awill not be contained in the receptive field of v, and the attack will result in “under-reaching” (Alon& Yahav, 2020) – any perturbation of awill not affect the prediction of v(Barcel ́o et al., 2020).Therefore, we require that distanceG(a;v)L.In this work, we focus on gradient-based attacks. These kinds of attacks assume that the attackercan access a similar model to the model under attack and compute gradients. As recently shownby Wallace et al. 
(2020), this is reasonable assumption: an attacker can query the original model;using these queries, imitate the model under attack by training an imitation model; find adversarialexamples using the imitation model; and transfer these adversarial examples back to the originalmodel. Under this assumption, these attacks are general and are applicable to any GNN and dataset.3 . 2 C H A L L E N G E SUnnoticeable Perturbations. Our first challenge is to find an adversarial example that will allowan imperceptible perturbation of the input. This objective is attainable in continuous domains suchas images (Szegedy et al., 2013; Goodfellow et al., 2014) and audio (Carlini & Wagner, 2018) ifwe constrain l1-norm of the perturbation vector . It is, however, unclear what imperceptibilitymeans in graphs. In most GNN datasets, a node’s features are a bag-of-words representation of thewords that are associated with the node. For example, in Cora (McCallum et al., 2000; Sen et al.,2008), every node is annotated by a many-hot feature vector of words that appear in the paper; inPubMed (Namata et al., 2012), node vectors are TF-IDF word frequencies; in Twitter (Ribeiro et al.,2017), node features are averages of GloVe embeddings, which can be viewed as word frequencyvectors multiplied by a (frozen) embedding matrix. We argue that an attack would be unnoticeablein an academic paper or in a set of Tweets if the frequency of some words is slightly modified. Forexample, a particular word may be repeated a few times throughout the text or remain unused.To constrain the vector, we require that kk11– the maximal absolute value of the elementsin the perturbation vector – is bounded by 12R+.Perturbing nodes instead of edges. Previous work mostly focused on perturbing graph edges .Z ̈ugner et al. (2018) perturb both edges and node features, but conclude that “perturbations in thestructure lead to a stronger change in the surrogate loss compared to feature attacks”; Wu et al. (2019b)also conclude that “perturbing edges is more effective than modifying the features”. In this paper,we counter these conclusions and show that small node feature perturbations are stronger: (i) first,removing all the edges of a particular node is a special case of node feature perturbation. There existsa perturbation such thatW1(xa+) =0, i.e., the modified feature vector xa+is in the null3Under review as a conference paper at ICLR 2021space of the first GNN layer.1Such a feature perturbation is equivalent to removing all the edges ofthe nodea. (ii) Second, we argue that perturbing the graph structure is not realistic, because a singleattacker controls only its own edges, and cannot control the global graph structure as in previouswork (Dai et al., 2018; Bojchevski & G ̈unnemann, 2019b; Zhang & Zitnik, 2020). (iii) Finally, whena successful attack is caused by removing edges, it is unclear whether the misclassification is causedby sensitivity to non-robust features in the data (Ilyas et al., 2019), or simply due to smaller amountof information. Similarly, when a successful attack is caused by inserting edges, it is unclear whetherthis is simply due to incorrect or unrealistic added information.3 . 3 F I N D I N G T H E PE R T U R B AT I O N VE C T O RTo find the perturbation, we iteratively differentiate the desired loss of vwith respect to the perturba-tion vector, updateaccording to the gradient, and add it to the feature vector. 
In non-targetedattacks, we take the positive gradient of the loss of the undesired label to increase the loss; in targetedattacks, we take the negative gradient of the loss of the adversarial label yadv:t+1=t+rJ(f(Gxa+t;v);^yv) non-targeted attacktrJ(f(Gxa+t;v);yadv)targeted attack(4)where2R+is a learning rate. We repeat this process for a predefined number of Kiterations, oruntil the model predicts the desired label.Enforcing the constraints. We treat the node features as continuous throughout the attack iterations,whether they are discrete or continuous. Once the attack succeeds, we try to reset to zero as manyperturbation vector elements as possible. We sort the perturbation vector elements in a decreasingorder, according to their absolute value: i1;:::;iD. We start with the index of whose absolute valueis the largest, i1, and reset the rest of the fi2;:::;iDgelements to zero. We then check whetherperturbing only the i1index is sufficient. If the attack succeeds, we stop. If the attack fails (becauseof the large number of perturbation vector elements set to zero), we continue perturbing the rest ofthe elements of . In the worst case, we perturb all Dvector elements of . In most cases, we stopmuch earlier, practically perturbing only a small fraction of the vector elements. If the original nodefeatures are discrete, we discretized features after the optimization.Differentiate by frequencies, not by embeddings. When taking the gradient with respect to theperturbation vector r, there is a subtle, but crucial, difference between the way that node rep-resentations are given in the dataset: (a) indicative datasets provide initial node representationsX= [x1;x2;:::]that are word indicator vectors (many-hot) or frequencies such as (weighted) bag-of-words (Sen et al., 2008; Shchur et al., 2018); (b) in encoded datasets, initial node representationsare given encoded, e.g., as an average of word2vec vectors (Hamilton et al., 2017; Hu et al., 2020).Indicative datasets can be converted to encoded by multiplying every vector by an embedding matrix;encoded datasets cannot be converted to indicative , without the authors releasing the textual data thatwas used to create the encoded dataset.Inindicative datasets, a perturbation of a node vector canbe realized as a perturbation of the originaltext from which the indicative vector was derived. That is, adding or removing words in the text canresult in the perturbed node vector. In contrast, a few-indices perturbation in encoded datasets mightbe an effective attack, but will notbe realistic because there is no perturbation of the original textthat will result in that perturbation of the vector. That is, when perturbing nodes, it is crucial to useindicative datasets, or convert encoded datasets to the indicative representation from which they werederived (as we do in Section 4) using their original text.4 E VA L U AT I O NWe evaluate and analyze the effectiveness of our SINGLE attack. In Section 4.1, we show thatSINGLE is more effective than alternatives such as single-edge attacks. In Section 4.2, we show thatif we are allowed to choose the attacker node, SINGLE is significantly more effective.Setup. Our implementation is based on PyTorch Geometric (Fey & Lenssen, 2019) and its provideddatasets. 
We trained each GNN type with two layers ( L= 2), using the Adam optimizer, early1This equation demonstrates GCN, but similar equations hold for other GNN types like GAT and GIN.4Under review as a conference paper at ICLR 2021Cora CiteSeer PubMed TwitterClean (no attack) 80.5 0.8 68.50.7 78.50.6 89.10.2EdgeGrad 65.1 1.3 48.150.9 59.70.7 82.70.0SINGLE 60.10.1 34.03.6 45.50.5 72.17.2SINGLE -hops 69.3 0.9 45.15.2 48.70.9 74.56.7Table 1: Test accuracy (lower is better) under different types of attacks, when the attacker node ischosen randomly . Performed using GCN, 1= 1for the discrete datasets (Cora and CiteSeer), and1= 0:1for the continuous datasets (PubMed and Twitter).stopped according to the validation set, and applied a dropout of 0:5between layers. We used uptoK= 20 attack iterations. All experiments in this section were performed with GCN, except forSection 4.5, where additional GNN types (GAT, GIN, and GraphSAGE) are shown. In Appendix A.2,we show consistent results across additional GNN types: GAT (Veli ˇckovi ́c et al., 2018), GIN (Xuet al., 2019b), GraphSAGE (Hamilton et al., 2017), SGC (Wu et al., 2019a), and RobustGCN (Z ̈ugner& G ̈unnemann, 2019).Data. We used Cora and CiteSeer (Sen et al., 2008) which are discrete datasets, i.e., the given nodefeature vectors are many-hot vectors. Thus, we set 1= 1, the minimal possible perturbation. Wealso used PubMed (Sen et al., 2008) and the Twitter-Hateful-Users (Ribeiro et al., 2017) datasets,which are continuous , and node features represent frequencies of words. Continuous datasets allowa much more subtle perturbation, and we set 1= 0:1. An analysis of these values is presented inSection 4.5.The Twitter-Hateful-Users dataset is originally provided as an encoded dataset, where every node isan average of GloVe vectors (Pennington et al., 2014). We reconstructed this dataset using the originaltext from Ribeiro et al. (2017), to be able to compute gradients with respect to the weighted histogramof words, rather than the embeddings. We took the most frequent 10,000 words as node features, andused GloVe-Twitter embeddings to multiply by the node features. We thus converted this dataset toindicative rather than encoded . Statistics of all dataset are provided in the supplementary material.Baselines. InSINGLE (Section 3.3) the attacker node is selected randomly for each victim node,and the attack perturbs this node’s features according to 1. SINGLE -hops is a modification ofSINGLE where the attacker node is sampled only among nodes that are not neighbors , i.e., theattacker and the victim are not directly connected ( (a;v)=2E). We compare to additional approachesfrom the literature: EdgeGrad follows most previous work (Xu et al., 2019a; Li et al., 2020; Z ̈ugner& G ̈unnemann, 2020): EdgeGrad randomly samples an attacker node as in SINGLE , and eitherinserts or removes a single edge from or to the attacker node, according to the gradient.2If both use arandomly selected attacker node, EdgeGrad is strictly stronger than the GradArgmax attack of Daiet al. (2018), which only removes edges. We ran each approach 5 times with different random seedsfor each dataset, and report the mean and standard deviation.4 . 1 M A I N RE S U LT STable 1 shows our main results for non-targeted attacks across various datasets. As shown, SINGLEis more effective than EdgeGrad across all datasets. 
SINGLE -hops , which is more unnoticeablethan attacking with a neighbor node, performs almost as good as SINGLE which attacks using anon-neighboring node, and better than EdgeGrad . On Twitter, SINGLE reduces the test accuracysignificantly better than EdgeGrad : 72.1% compared to 82.7%. Results for targeted attacks areshown in Appendix A.3.Surprisingly, Table A.5 shows that Robust GCN (Z ̈ugner & G ̈unnemann, 2019) is as vulnerable tothe SINGLE attack as a standard GCN, showing that there is still much room for novel ideas andimprovements to the robustness of current GNNs.As we explain in Section 3.3, SINGLE tries to find a perturbation vector in which the number ofperturbed elements is minimal. We measured the number of vector elements that the attack had2This can be implemented easily using edge weights : training the GNN with weights of 1for existing edges,adding all possible edges with weights of 0, and taking the gradient with respect to the vector of weights.5Under review as a conference paper at ICLR 2021Cora CiteSeer PubMed TwitterGlobalEdgeGrad 29.72.4 11.90.8 15.30.4 82.70.0SINGLE +GradChoice 31.01.9 19.04.2 8.51.2 7.01.1SINGLE +Topology 31.11.2 18.13.4 5.20.1 6.60.5Table 2: Test accuracy when the adversary can choose the attacker node.perturbed in practice. In PubMed, SINGLE used 76 vector elements on average, which are 15% ofthe elements in the feature vector. In Cora, SINGLE perturbed 717 elements on average, which are50%. In CiteSeer, SINGLE used 1165 attributes on average, which are 31% of the features. In Twitter,SINGLE used 892 attributes on average, which are 9% of the features. In the experiments shownin Table 1, we used 1= 0:1in the continuous datasets (PubMed and Twitter). If we allow largervalues of1, we can reduce the number of perturbed vector elements: using 1= 0:5requiresperturbing only 3% of the attributes on average to achieve the same effectiveness; using 1= 1requires perturbing only 1.6% of the attributes on average to achieve the same effectiveness (inPubMed, where varying 1is meaningful).4 . 2 A T TA C K E R CH O I C EIf the attacker could choose its node, e.g., by hijacking an existing account in a social network, couldthey increase the effectiveness of the attack? We examine the effectiveness of two approaches forchoosing the attacker node.Gradient Attacker Choice (GradChoice ) chooses the attacker node according to the largestgradient with respect to the node representations (for a non-targeted attack): a=argmaxai2VkrxiJ(f(G;v);^yv)k1. The chosen attacker node is never the victim node itself.Topological Attacker Choice (Topology ) chooses the attacker node according to topological propertiesof the graph. As an example, we choose the neighbor of the victim node vwith the smallest numberof neighbors: a=argmina2NvjNaj. The advantage of this approach is that the attacker choiceismodel-free : if the attacker cannot compute gradients, they can at least choose the most harmfulattacker node, and then perform the perturbation itself using other non-gradient approaches such asones proposed by Waniek et al. (2018) and Chang et al. (2020).To perform a fair comparison, we compare these approaches with GlobalEdgeGrad , which is similartoEdgeGrad that can insert or remove an edge, with the difference that the chosen edge can bechosen from the entire graph .Results. Results for these attacker choice approaches are shown in Table 2. 
The main results arethat choosing the attacker node significantly increases the effectiveness of the SINGLE attack: forexample, in Twitter, from 72.1% (Table 1) to 6.6% test accuracy (Table 2).In datasets where the given initial node features are continuous (PubMed and Twitter), SIN-GLE +Topology andSINGLE +GradChoice show similar results: on Twitter accuracy differenceis less than 0.5%; on PubMed SINGLE +Topology outperforms SINGLE +GradChoice by3%, eventhough SINGLE +Topology is model-free. Both of those attacks are more efficient than GlobalEdge-Grad , showing the superiority of node perturbation over edge perturbation in the global view. InAppendix A.4, we show that allowing GlobalEdgeGrad to insert and remove multiple edges thatbelong to the same attacker node does notlead to a significant improvement.Interestingly, GradChoice andTopology agree on the choice of attacker node for 50.3% of the nodesin Cora, 78.7% of the nodes in CiteSeer, 51.0% of the nodes in PubMed, and on 55.0% of the nodesin Twitter, showing that the node selection can sometimes be performed model-free.In datasets where the initial node features are discrete (Cora and CiteSeer), i.e., many-hot vectors,GlobalEdgeGrad reduces the test accuracy more than GradChoice andTopology . We believe that thereason is the difficulty of two-step optimization in discrete datasets: for example, GradChoice needsto choose the node, and find the perturbation afterwards. Finding a perturbation for a discrete vectoris more difficult than in continuous datasets, and the choice of the attacker node may not be optimal.6Under review as a conference paper at ICLR 2021Cora CiteSeer PubMed TwitterSINGLE -two attackers 7.10.5 8.20.2 27.70.2 –SINGLE -direct 21.22.5 13.82.1 0.30.1 57.68.7SINGLE 60.10.1 18.13.4 45.50.5 72.17.2Table 3: Scenario ablation: test accuracy under different attacking scenarios.Standard training Adversarial trainingClean (no attack) 78.5 0.6 76.90.6SINGLE 45.50.5 58.52.7SINGLE -hops 48.7 0.9 62.12.5SINGLE +GradChoice 8.51.2 30.66.8SINGLE +Topology 5.20.1 21.12.1SINGLE -two attackers 27.70.2 40.73.4SINGLE -direct 0.30.1 4.61.1Table 4: Test accuracy while attacking a model that was adversarially trained on PubMed, withdifferent types of attacks.4 . 3 S C E N A R I O AB L AT I O NThe main scenario that we focus on in this paper is a SINGLE approach that always perturbs a singlenode, which is notthe victim node ( a6=v). We now examine our SINGLE attack in other, easier butless realistic, scenarios: SINGLE -two attackers follows Z ̈ugner et al. (2018) and Zang et al. (2020),randomly samples twoattacker nodes and perturbs their features using the same approach as SINGLE .SINGLE -direct perturbs the victim node directly (i.e., a=v), an approach that was found to bethe most efficient by Z ̈ugner et al. (2018). Table 3 shows the test accuracy of these ablations. InAppendix A.5.3, we additionally experiment with more than two attacker nodes.4 . 4 A D V E R S A R I A L TR A I N I N GIn the previous sections, we studied the effectiveness of the SINGLE attack. In this section, weinvestigate to what extent can adversarial training (Madry et al., 2018) defend against SINGLE . Foreach training step and labeled training node, we perform Ktrainadversarial steps to adversariallyperturb another randomly sampled node, exactly as in SINGLE , but at training time. 
The model isthen trained to minimize the original cross-entropy loss and the adversarial loss:L(f;D) =12NDXNDi=0J(f(G;vi);yi) +JfGxai+i;vi;yi: (5)The main difference from Equation (3) is the adversarial term JfGxai+i;vi;yi, whereaiisthe randomly sampled attacker for the node vi. In every training step, we randomly sample a newattacker for each victim node and compute new ivectors. After the model is trained, we attack themodel withKtestSINGLE adversarial steps. This is similar to Feng et al. (2019) and Deng et al. (2019),except that they used adversarial training as a regularizer, to improve the accuracy of a model whilenot under attack. In contrast, we use adversarial training to defend a model against an attack at testtime. We used Ktrain= 5, as we found it to be the maximal value for which the model’s accuracy isnot significantly hurt while not under attack (“clean”), and Ktest= 20 as in the previous experiments.As shown in Table 4, adversarial training indeed improves the model’s robustness against the differentSINGLE attacks. However, the main result of this section is that SINGLE ,SINGLE +GradChoiceandSINGLE +Topology are still very effective attacks, as they succeed in attacking the adversariallytrained model, reducing its test accuracy to 58.5%, 30.6% and 21.1%, respectively.4 . 5 S E N S I T I V I T Y T O 1How does the intensity of the adversarial perturbation affect the performance of the attack? Intuitively,we say that the less we restrict the perturbation (i.e., larger values of 1), the more powerful theattack. We examine whether this holds in practice.7Under review as a conference paper at ICLR 202100:10:20:30:40:50:60:70:80:9110203040506070801AccGCN GAT GIN GraphSageFigure 2: Effectiveness of the attack comparedto the allowed 1(performed on PubMed, be-cause its features are continuous).1 2 3 4 5 6 7 81020304050607080distance (a;v)AccPubMedCoraCiteSeerFigure 3: Test accuracy compared to the dis-tance between the attacker and the victim, whenthe GCN was trained with L= 8on PubMed.In our experiments in Sections 4.1 to 4.4, we used 1= 0:1for the continuous datasets (PubMedand Twitter). In this section, we vary the value of 1across different GNN types and observe theeffectiveness of the attack. Figure 2 shows the results on PubMed. We used this dataset because itis larger than Cora and CiteSeer (Appendix A.1), and most importantly, its features are continuous,thus real-valued perturbations are feasible. As shown in Figure 2, the most significant difference isbetween performing the perturbation ( 1= 0:1) and not attacking at all ( 1= 0). As we increasethe value of1, GCN and GraphSage (Hamilton et al., 2017) show a natural descent in test accuracy.Contrarily, GAT (Veli ˇckovi ́c et al., 2018) and GIN (Xu et al., 2019b) are more robust to increasedabsolute values of perturbations, while GAT is also the most robust compared to the other GNN types.4 . 6 D I S TA N C E BE T W E E N AT TA C K E R A N D VI C T I MIn Section 4.1, we found that SINGLE performs similarly to SINGLE -hops, although SINGLE -hopssamples an attacker node awhose distance from the victim node vis at least 2. We further questionwhether the effectiveness of the attack depend on the distance in the graph between the attacker andthe victim. We trained a new model for each dataset using L= 8layers. Then, for each test victimnode, we sampled attackers according to their distance to the test node.As shown in Figure 3, the effectiveness of the attack increases as the distance between the attackerand the victim decreases. 
At distance of 5, the curve seems to saturate. A possible explanation forthis is that apparently more than few layers (e.g., L= 2in Kipf & Welling (2017)) are not neededin most datasets. Thus, the rest of the layers can theoretically learn notto pass much of their inputstarting from the redundant layers, excluding adversarial signals as well.5 R E L AT E D W O R KWorks on adversarial attacks on GNN differ in several main aspects. In this section, we discuss themain criteria, to clarify the settings that we address.Single vs. multiple attackers All previous works allowed perturbing multiple nodes, or edges thatare covered by multiple nodes: Z ̈ugner et al. (2018) perturb features of a setof attacker nodes; Zanget al. (2020) assume “a few bad actors”; other works perturb edges that in realistic settings theirperturbation would require controlling multiple nodes (Bojchevski & G ̈unnemann, 2019a; Sun et al.,2020; Chen et al., 2018).Node vs. edge perturbations Most adversarial attacks on GNNs perturb the input graph by modifyingthe graph structure (Z ̈ugner & G ̈unnemann, 2019; Wang et al., 2020; Xu et al., 2019a). For example,Dai et al. (2018) iteratively remove edges, yet their attack manages to reduce the accuracy by about10% at most when perturbing a single edge. Li et al. (2020) also allow the insertion of edges; Waniek8Under review as a conference paper at ICLR 2021et al. (2018) and Chang et al. (2020) allow insertion and deletion of edges, using attacks that arebased on correlations and eigenvalues, and not on gradients. Yefet et al. (2019) perturb one-hotnode vectors, in the restricted domain of computer programs. Z ̈ugner et al. (2018) and Wu et al.(2019b) perturb both edges and nodes; but they concluded that perturbing edges is more effectivethan perturbing nodes. In this work, we counter these conclusions and show that perturbing nodefeatures is more effective than perturbing edges.Direct vs. influence attacks Another difference between prior works lies in the difference betweendirect attacks andinfluence attacks . In direct attacks, the attacker perturbs the target node itself . Forexample, the attack of Z ̈ugner et al. (2018) is the most effective when the attacker and the target arethe same node . In influence attacks, the perturbed nodes are at least one hop away from the victimnode. In this paper, we show that the strong direct assumption is not required ( SINGLE -direct inSection 4.2), and that our attack is effective when the attacker and the target are not even directneighbors , i.e., they are at least twohops away ( SINGLE -hops in Section 4.1).Poisoning vs. evasion attacks In a related scenario, some work (Z ̈ugner & G ̈unnemann, 2019;Bojchevski & G ̈unnemann, 2019a; Li et al., 2020; Zhang & Zitnik, 2020) focuses on poisoningattacks that perturb examples before training. Contrarily, we focus on the standard evasion scenarioof adversarial examples in neural networks (Szegedy et al., 2013; Goodfellow et al., 2014), where theattack operates at test time, after the model was trained, as Dai et al. (2018).Attacking vs. certifying Z ̈ugner & G ̈unnemann (2020) focus on certifying the robustness of GNNsagainst adversarial perturbations; and Bojchevski & G ̈unnemann (2019b) certified PageRank-stylemodels. In contrast, we study the effectiveness of the adversarial attack itself.6 C O N C L U S I O NWe demonstrate that GNNs are susceptible even to the extremely limited scenario of a single-nodeindirect adversarial example ( SINGLE ). 
The practical consequences of these findings are that a singleattacker in a network can force a GNN to classify any other target node as the attacker’s chosen label,by slightly perturbing some of the attacker’s features. We further show that if the attacker can chooseits attacker node – the effectiveness of the attack increases significantly. We study the effectivenessof these attacks across various GNN types and datasets.We believe that this work will drive research in this field toward exploring novel defense approachesfor GNNs. Such defenses can be crucial for real-world systems that are modeled using GNNs. Fur-thermore, we believe that the surprising results of this work motivate better theoretical understandingof the expressiveness and generalization of GNNs. To these ends, we make all our code and trainedmodels publicly available.RE F E R E N C E SUri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications.arXiv preprint arXiv:2006.05205 , 2020.Pablo Barcel ́o, Egor V . Kostylev, Mikael Monet, Jorge P ́erez, Juan Reutter, and Juan Pablo Silva.The logical expressiveness of graph neural networks. In International Conference on LearningRepresentations , 2020. URL https://openreview.net/forum?id=r1lZ7AEKvB .Aleksandar Bojchevski and Stephan G ̈unnemann. Adversarial attacks on node embeddings via graphpoisoning. In International Conference on Machine Learning , pp. 695–704, 2019a.Aleksandar Bojchevski and Stephan G ̈unnemann. Certifiable robustness to graph perturbations. InAdvances in Neural Information Processing Systems , pp. 8319–8330, 2019b.Nicholas Carlini and David Wagner. Audio adversarial examples: Targeted attacks on speech-to-text.In2018 IEEE Security and Privacy Workshops (SPW) , pp. 1–7. IEEE, 2018.Heng Chang, Yu Rong, Tingyang Xu, Wenbing Huang, Honglei Zhang, Peng Cui, Wenwu Zhu, andJunzhou Huang. A restricted black-box adversarial framework towards attacking graph embeddingmodels. In AAAI , pp. 3389–3396, 2020.9Under review as a conference paper at ICLR 2021Jinyin Chen, Yangyang Wu, Xuanheng Xu, Yixian Chen, Haibin Zheng, and Qi Xuan. Fast gradientattack on network embedding. arXiv preprint arXiv:1809.02797 , 2018.Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, and Le Song. Adversarial attack ongraph structured data. In International Conference on Machine Learning , pp. 1115–1124, 2018.Zhijie Deng, Yinpeng Dong, and Jun Zhu. Latent adversarial training of graph convolution networks.InICML Workshop on Learning and Reasoning with Graph-Structured Representations , 2019.URLhttps://graphreason.github.io/papers/3.pdf .David K. Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Al ́anAspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecularfingerprints. In Advances in neural information processing systems , pp. 2224–2232, 2015.Fuli Feng, Xiangnan He, Jie Tang, and Tat-Seng Chua. Graph adversarial training: Dynamicallyregularizing based on graph structure. IEEE Transactions on Knowledge and Data Engineering ,2019.Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. InICLR Workshop on Representation Learning on Graphs and Manifolds , 2019.Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarialexamples. arXiv preprint arXiv:1412.6572 , 2014.Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. 
InAdvances in neural information processing systems , pp. 1024–1034, 2017.Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta,and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. arXivpreprint arXiv:2005.00687 , 2020.Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and AleksanderMadry. Adversarial examples are not bugs, they are features. In Advances in Neural InformationProcessing Systems , pp. 125–136, 2019.Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks.InInternational Conference on Learning Representations , 2017. URL https://openreview.net/forum?id=SJU4ayYgl .Jure Leskovec and Julian J. Mcauley. Learning to discover social circles in ego networks. In Advancesin neural information processing systems , pp. 539–547, 2012.Jintang Li, Tau Xie, Liang Chen, Fentang Xie, Xiangnan He, and Zibin Zheng. Adversarial attack onlarge scale graph. arXiv preprint arXiv:2009.03488 , 2020.Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neuralnetworks. In International Conference on Learning Representations , 2016.Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu.Towards deep learning models resistant to adversarial attacks. In International Conference onLearning Representations , 2018.Andrew Kachites McCallum, Kamal Nigam, Jason Rennie, and Kristie Seymore. Automating theconstruction of internet portals with machine learning. Information Retrieval , 3(2):127–163, 2000.Alessio Micheli. Neural network for graphs: A contextual constructive approach. IEEE Transactionson Neural Networks , 20(3):498–511, 2009.Galileo Mark Namata, Ben London, Lise Getoor, and Bert Huang. Query-driven active surveying forcollective classification. In Workshop on Mining and Learning with Graphs , 2012.Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for wordrepresentation. In Empirical Methods in Natural Language Processing (EMNLP) , pp. 1532–1543,2014. URL http://www.aclweb.org/anthology/D14-1162 .10Under review as a conference paper at ICLR 2021Manoel Horta Ribeiro, Pedro H. Calais, Yuri A. Santos, Virg ́ılio A. F. Almeida, and WagnerMeira Jr. “Like sheep among wolves”: Characterizing hateful users on twitter. arXiv preprintarXiv:1801.00317 , 2017.Manoel Horta Ribeiro, Pedro H. Calais, Yuri A. Santos, Virg ́ılio A. F. Almeida, and Wagner Meira Jr.Characterizing and detecting hateful users on twitter. arXiv preprint arXiv:1803.08977 , 2018.Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. Thegraph neural network model. IEEE Transactions on Neural Networks , 20(1):61–80, 2008.Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and MaxWelling. Modeling relational data with graph convolutional networks. In European Semantic WebConference , pp. 593–607. Springer, 2018.Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad.Collective classification in network data. AI magazine , 29(3):93–93, 2008.Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan G ̈unnemann. Pitfallsof graph neural network evaluation. Relational Representation Learning Workshop, NeurIPS 2018 ,2018.Yiwei Sun, Suhang Wang, Xianfeng Tang, Tsung-Yu Hsieh, and Vasant Honavar. 
Non-target-specificnode injection attacks on graph neural networks: A hierarchical reinforcement learning approach.InProc. WWW , volume 3, 2020.Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow,and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 , 2013.Rakshit Trivedi, Hanjun Dai, Yichen Wang, and Le Song. Know-evolve: Deep temporal reasoningfor dynamic knowledge graphs. In International Conference on Machine Learning , pp. 3462–3471,2017.Petar Veli ˇckovi ́c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li `o, and YoshuaBengio. Graph attention networks. In International Conference on Learning Representations ,2018. URL https://openreview.net/forum?id=rJXMpikCZ .Eric Wallace, Mitchell Stern, and Dawn Song. Imitation attacks and defenses for black-box machinetranslation systems. arXiv preprint arXiv:2004.15015 , 2020.Binghui Wang, Jinyuan Jia, Xiaoyu Cao, and Neil Zhenqiang Gong. Certified robustness of graphneural networks against adversarial structural perturbation. arXiv preprint arXiv:2008.10715 ,2020.Zhichun Wang, Qingsong Lv, Xiaohan Lan, and Yu Zhang. Cross-lingual knowledge graph alignmentvia graph convolutional networks. In Proceedings of the 2018 Conference on Empirical Methodsin Natural Language Processing , pp. 349–357, 2018.Marcin Waniek, Tomasz P. Michalak, Michael J. Wooldridge, and Talal Rahwan. Hiding individualsand communities in a social network. Nature Human Behaviour , 2(2):139–147, 2018.Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. Sim-plifying graph convolutional networks. In International Conference on Machine Learning , pp.6861–6871, 2019a.Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, and Liming Zhu. Adversarialexamples for graph data: deep insights into attack and defense. In Proceedings of the 28thInternational Joint Conference on Artificial Intelligence , pp. 4816–4823. AAAI Press, 2019b.Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, and Xue Lin. Topol-ogy attack and defense for graph neural networks: an optimization perspective. In Proceedings ofthe 28th International Joint Conference on Artificial Intelligence , pp. 3961–3967. AAAI Press,2019a.11Under review as a conference paper at ICLR 2021Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neuralnetworks? In International Conference on Learning Representations , 2019b. URL https://openreview.net/forum?id=ryGs6iA5Km .Noam Yefet, Uri Alon, and Eran Yahav. Adversarial examples for models of code. arXiv preprintarXiv:1910.07517 , 2019.Xiao Zang, Yi Xie, Jie Chen, and Bo Yuan. Graph universal adversarial attacks: A few bad actorsruin graph learning models. arXiv preprint arXiv:2002.04784 , 2020.Xiang Zhang and Marinka Zitnik. GNNGuard: Defending graph neural networks against adversarialattacks. arXiv preprint arXiv:2006.08149 , 2020.Daniel Z ̈ugner and Stephan G ̈unnemann. Certifiable robustness and robust training for graphconvolutional networks. In Proceedings of the 25th ACM SIGKDD International Conference onKnowledge Discovery & Data Mining , pp. 246–256, 2019.Daniel Z ̈ugner and Stephan G ̈unnemann. Certifiable robustness of graph convolutional networksunder structure perturbations. In Proceedings of the 26th ACM SIGKDD International Conferenceon Knowledge Discovery & Data Mining , pp. 1656–1665, 2020.Daniel Z ̈ugner and Stephan G ̈unnemann. 
Daniel Zügner and Stephan Günnemann. Adversarial attacks on graph neural networks via meta learning. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bylnx209YX.

Daniel Zügner, Amir Akbarnejad, and Stephan Günnemann. Adversarial attacks on neural networks for graph data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2847–2856, 2018.

A SUPPLEMENTARY MATERIAL

A.1 DATASET STATISTICS

Statistics of the datasets are shown in Table A.1.

Table A.1: Dataset statistics.

            #Training  #Val  #Test  #Unlabeled Nodes  #Classes  Avg. Node Degree
Cora              140   500   1000              2708         7               3.9
CiteSeer          120   500   1000              3327         6               2.7
PubMed             60   500   1000             19717         3               4.5
Twitter          4474   248    249             95415         2              45.6

A.2 ADDITIONAL GNN TYPES

Tables A.2 to A.4 present the test accuracy of different attacks applied on GAT (Veličković et al., 2018), GIN (Xu et al., 2019b), GraphSAGE (Hamilton et al., 2017), RobustGCN (Zügner & Günnemann, 2019), and SGC (Wu et al., 2019a), showing the effectiveness of SINGLE across different GNN types.

Table A.2: Test accuracy of GAT under different non-targeted attacks.

                        Cora        CiteSeer    PubMed
EdgeGrad                66.4±1.2    49.4±1.4    64.9±1.0
SINGLE                  40.0±12.5   33.2±6.7    35.7±13.3
SINGLE-hops             42.0±11.5   41.7±5.8    35.5±13.6
GlobalGradEdge          67.8±4.9    48.3±5.1    63.5±4.6
SINGLE+GradChoice       43.1±4.9    32.4±4.7    36.4±8.0
SINGLE+Topology         32.2±6.4    25.5±8.0    27.8±5.7
SINGLE-two attackers    12.7±6.3    11.0±1.4    26.8±11.8
SINGLE-direct           23.6±1.5    14.8±4.3    21.8±2.5

Table A.3: Test accuracy of GIN under different non-targeted attacks.

                        Cora        CiteSeer    PubMed
EdgeGrad                32.9±3.1    18.5±3.0    33.3±1.7
SINGLE                  27.1±1.3    12.3±2.9    12.9±1.0
SINGLE-hops             32.6±0.7    18.5±3.1    14.0±0.6
GlobalGradEdge          10.7±2.8    4.8±2.1     10.3±1.0
SINGLE+GradChoice       15.9±2.0    8.1±1.7     10.0±1.6
SINGLE+Topology         16.1±1.7    7.6±1.6     6.3±1.7
SINGLE-two attackers    2.7±0.7     5.4±2.0     6.2±1.8
SINGLE-direct           5.7±1.4     4.7±1.6     3.1±3.1

Table A.4: Test accuracy of GraphSAGE under different non-targeted attacks.

                        Cora        CiteSeer    PubMed
EdgeGrad                62.9±1.9    45.9±3.4    64.2±1.6
SINGLE                  62.7±2.4    32.3±4.3    57.1±0.8
SINGLE-hops             70.0±3.3    45.5±4.3    60.9±0.8
GlobalGradEdge          48.9±2.7    40.4±3.3    64.7±1.1
SINGLE+GradChoice       37.3±3.4    18.0±3.2    8.2±0.7
SINGLE+Topology         37.4±3.6    19.2±4.2    6.6±0.3
SINGLE-two attackers    14.4±0.9    11.1±0.1    45.4±0.8
SINGLE-direct           19.6±2.1    13.5±3.9    0.0±0.1

Table A.5: Test accuracy of GCN, Robust GCN (Zügner & Günnemann, 2019), and SGC (Wu et al., 2019a) under different non-targeted attacks, on PubMed.

                        GCN         RobustGCN   SGC
Clean                   78.5±0.6    73.9±1.6    78.9±0.5
EdgeGrad                59.7±0.7    –           65.1±1.3
SINGLE                  45.5±0.5    34.3±1.4    47.3±1.2
SINGLE+hops             48.7±0.9    29.7±1.1    49.6±1.2
GlobalGradEdge          15.3±0.4    –           15.3±0.4
SINGLE+GradChoice       8.5±1.2     19.6±0.9    11.3±1.2
SINGLE+Topology         5.2±0.1     72.5±1.9    5.6±0.5
SINGLE+two attackers    27.7±0.2    20.0±1.1    30.0±1.8
SINGLE+direct           0.3±0.1     15.8±1.1    0.5±0.2

Surprisingly, Table A.5 shows that Robust GCN (Zügner & Günnemann, 2019) is as vulnerable to the SINGLE attack as a standard GCN, showing that there is still much room for novel ideas and improvements to the robustness of current GNNs.
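For reference, the kind of single-node feature attack evaluated in these tables can be sketched as an ℓ∞-bounded gradient ascent on the attacker node's features. This is an illustrative sketch only: the GNN forward signature model(x, adj), the optimizer, and the hyperparameter defaults are our assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def single_attack(model, x, adj, attacker, victim, y_victim,
                  eps=0.1, steps=50, lr=0.01):
    # Optimize a perturbation delta on the attacker node's features so as to
    # maximize the victim node's classification loss (non-targeted attack).
    delta = torch.zeros_like(x[attacker], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = x.clone()
        x_adv[attacker] = x[attacker] + delta
        logits = model(x_adv, adj)                  # full-graph forward pass
        loss = -F.cross_entropy(logits[victim].unsqueeze(0), y_victim.view(1))
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                 # project into the L-inf ball
    return (x[attacker] + delta).detach()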
A.3 TARGETED ATTACKS

Tables A.6 to A.9 show the results of targeted attacks across datasets and approaches. Differently from other tables, which show test accuracy, Tables A.6 to A.9 present the targeted attack's success rate, which is the fraction of test examples for which the attack managed to force a specific label prediction (in these results, higher is better). These results suggest that in targeted attack settings, node-based attacks (such as SINGLE) have an even bigger advantage over edge-based attacks (such as EdgeGrad).

Table A.6: Success rate (higher is better) of different targeted attacks on a GCN network.

                     Cora       CiteSeer    PubMed     Twitter
EdgeGrad             8.0±0.7    14.8±0.5    20.1±0.6   12.6±2.5
SINGLE               36.6±2.4   60.7±2.2    38.2±0.6   14.6±4.7
SINGLE-hops          33.6±2.3   50.0±2.5    35.2±1.6   12.6±3.7
GlobalGradEdge       59.4±0.9   78.7±0.9    80.1±0.6   13.0±2.2
SINGLE+GradChoice    65.8±1.5   67.2±2.2    43.5±1.6   42.2±11.6
SINGLE+Topology      57.3±2.1   66.3±3.0    90.4±0.3   55.4±9.4

Table A.7: Success rate (higher is better) of different targeted attacks on a GAT network.

                     Cora        CiteSeer     PubMed
EdgeGrad             6.1±0.4     12.5±1.2     17.9±1.5
SINGLE               33.7±8.6    43.5±11.1    50.7±15.8
SINGLE-indirect      26.6±7.3    29.8±8.6     50.3±15.7
GlobalGradEdge       6.0±1.4     14.6±2.8     22.3±3.6
SINGLE+GradChoice    25.8±5.3    38.6±8.5     50.5±13.5
SINGLE+Topology      41.3±5.3    52.5±11.3    63.0±10.2

Table A.8: Success rate (higher is better) of different targeted attacks on a GIN network.

                     Cora        CiteSeer    PubMed
EdgeGrad             16.8±1.2    25.6±1.0    37.9±2.6
SINGLE               31.1±1.7    49.0±5.4    58.8±7.9
SINGLE-hops          24.5±1.2    37.4±4.1    57.8±5.7
GlobalGradEdge       44.7±4.7    55.0±7.0    64.9±11.8
SINGLE+GradChoice    44.3±5.0    59.0±4.5    63.5±9.9
SINGLE+Topology      45.1±2.3    58.7±5.1    73.2±13.3

Table A.9: Success rate (higher is better) of different targeted attacks on a GraphSAGE network.

                     Cora       CiteSeer    PubMed
EdgeGrad             7.6±0.3    16.3±1.7    19.1±1.4
SINGLE               24.3±1.9   50.0±2.5    27.9±1.0
SINGLE-hops          15.4±3.8   34.2±2.6    24.1±1.0
GlobalGradEdge       9.3±0.9    14.7±1.1    19.6±0.8
SINGLE+GradChoice    49.1±3.0   63.7±4.0    36.3±2.1
SINGLE+Topology      54.1±1.3   69.3±3.2    89.8±0.3
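For clarity, the success-rate metric used in Tables A.6 to A.9 reduces to a one-liner; a minimal sketch, assuming preds holds the model's post-attack predictions for the targeted test nodes:

import numpy as np

def targeted_success_rate(preds, target_label):
    # Fraction of test examples that the attack forced to the chosen label.
    return float(np.mean(preds == target_label))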
A.4 MULTI EDGE ATTACKS

We strengthened the EdgeGrad attack by allowing it to add and remove multiple edges that are connected to the attacker node – MultiEdgeGrad. Accordingly, MultiGlobalEdgeGrad is equivalent to GlobalEdgeGrad, except that MultiGlobalEdgeGrad can choose the attacker node.

Table A.10: Test accuracy of GCN using MultiEdge attacks.

                       PubMed
Clean                  78.5±0.6
EdgeGrad               65.1±1.3
MultiEdgeGrad          64.5±0.2
SINGLE                 45.5±0.5
SINGLE-hops            48.7±0.9
SINGLE+Topology        5.2±0.1
SINGLE+GradChoice      8.5±1.2
GlobalGradEdge         15.3±0.4
MultiGlobalGradEdge    15.3±0.5

As shown in Table A.10, allowing the attacker node to add and remove multiple edges (MultiEdgeGrad and MultiGlobalEdgeGrad) results in a very minor improvement compared to EdgeGrad and GlobalEdgeGrad, while SINGLE, SINGLE+Topology and SINGLE+GradChoice are much more effective.

A.5 ADDITIONAL BASELINES

A.5.1 ZERO-FEATURES APPROACH

We experimented with a baseline where we set the feature perturbation to δ = −x_a, i.e., canceling the attacker node's feature vector. The objective of experimenting with such an attack is to illustrate that SINGLE can find better perturbations than simply canceling the node feature vector, making the new vector a vector of zeros (and thus effectively removing the edges of the attacker node in GCN).

As shown in Table A.11, "Zero features" is barely effective (compared to "Clean"), and SINGLE can find much better perturbations.

Table A.11: Test accuracy of our zero features attack on a GCN network.

                 PubMed
Clean            78.5±0.6
SINGLE           45.5±0.5
Zero features    76.6±0.3

A.5.2 INJECTION ATTACKS

We also study an additional type of realistic attack that is based on node injection. In this approach, we insert a new node to the graph with a single edge attached to our victim node. The attack is performed by perturbing the injected node's attributes. Since there is no initial node feature vector to measure the ℓ∞ distance to, the injected node is allowed to find any realistic representation (e.g., without choosing negative frequencies). This attack is very powerful, reducing the test accuracy down to 0.02% on PubMed.

A.5.3 LARGER NUMBER OF ATTACKERS

We performed additional experiments with up to five randomly sampled attacker nodes simultaneously (Table A.12). As expected, allowing a larger number of attackers reduces the test accuracy. However, the main observation in this paper is that even a single attacker node is surprisingly effective.

Table A.12: Test accuracy for different number of attackers on PubMed.

Number of attackers    PubMed
1                      45.5
2                      27.7
3                      19.0
4                      15.3
5                      12.9

A.6 LIMITING THE ALLOWED ℓ0

In Section 4.5, we analyzed the effect of the ℓ∞ budget, that is, the maximal allowed perturbation in each vector attribute, on the performance of the attack. However, in datasets such as Cora and CiteSeer, the input features are binary (i.e., the input node vector is many-hot), so the only possible perturbation to each vector element is "flipping" its value from zero to one, or vice-versa. Thus, in these datasets, it is interesting to analyze the effect of the ℓ0 budget, the maximal number of allowed perturbed vector elements, on the performance of the attack. In this case, measuring the ℓ0 norm is equivalent to measuring the ℓ1 norm (‖δ‖0 = ‖δ‖1), and the ℓ2 norm is determined by it (‖δ‖2 = √‖δ‖0).
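This norm equivalence for binary flips is easy to verify numerically; the vectors below are arbitrary illustrative examples:

import numpy as np

x = np.array([0, 1, 1, 0, 1])        # a many-hot feature vector
x_adv = np.array([1, 1, 0, 0, 1])    # two attributes flipped
delta = np.abs(x_adv - x)            # every entry is 0 or 1

l0 = np.count_nonzero(delta)         # number of flipped attributes
l1 = delta.sum()                     # sum of absolute changes
l2 = np.linalg.norm(delta)           # sqrt of the number of flips

assert l0 == l1 == 2 and np.isclose(l2, np.sqrt(l0))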
We performed experiments where we measured the test accuracy of the model while limiting the number of allowed perturbed vector elements. The results are shown in Figure A.1.

As shown, when ℓ0 = 0, no attack is allowed, and the test accuracy is equal to the "Clean" value of Table 1. When ℓ0 = 100%, the results are equal to the SINGLE values of Table 1 – resulting in flipping 50% of the features on average in Cora, and 31% of the features on average in CiteSeer. It is important to note that in practice, the average number of perturbed features is much lower than the maximal number of allowed features. For example, in CiteSeer, allowing 100% of the features results in actually using only 31% on average.

[Figure A.1: Test accuracy compared to the maximal allowed ℓ0, the number of perturbed features (divided by the total number of features in the dataset); curves shown for SINGLE and SINGLE+GradChoice on Cora and CiteSeer. In practice, the average number of perturbed features is much lower than the maximal number of allowed features.]
<|im_end|> <|im_start|>assistant ### Review Title the paper lacks novelty ### Review Text In this paper, the authors mainly show that the adversary can force the GNN to classify any target node to a chosen label by perturbing another single arbitrary node's features in the graph. The paper is well written and easy to understand. However, there are several concerns about the paper: 1. The novelty of the paper is rather limited. The paper simply uses a gradient-based attack method to add continuous perturbations to the node attributes. What is the novelty here compared to other gradient-based attacks for data without graph structure, such as images? I don't see any novelty or contribution from the methodology perspective. 2. The problem setting needs to be well discussed. In the introduction, the authors use the example of crafting adversarial posts to motivate the problem. However, in the problem definition, it becomes adding perturbations to the node features. Modifying a node's features in realistic settings seems even harder to me than modifying the graph structure. (They could revise the words or sentences in the post, but they could not add the feature vector directly, as the feature vectors are usually preprocessed by some other models.) 3. The authors should include some robust GNNs such as [1][2] as baselines to test how effective the proposed method is. The authors should also consider adding more baselines as the attack methods. [1] Zügner, Daniel, and Stephan Günnemann. "Certifiable robustness and robust training for graph convolutional networks." In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 246-256. 2019. [2] Jin, Hongwei, and Xinhua Zhang. "Robust Training of Graph Convolutional Networks via Latent Perturbation." ECML-PKDD 2020 ### Review Rating 5: Marginally below acceptance threshold ### Review Confidence 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature<|im_end|> <|im_end|>
Brxlk6EuPW9
ICLR.cc/2022/Workshop/DGM4HSD
2022
Deep Generative Neural Embeddings for High Dimensional Data Visualization
["Anonymous"]
We propose an embedding-based visualization method along with a data generation model. In particular, corresponding locations of data points in the visualization are optimized as embeddings along with a generative network, such that the network can reconstruct the original data. The generalization aspect of the neural network enforces similar data points to be close in the embedding space. Since our method includes the generative part, it allows visualizations that are not possible with neighborhood embedding methods such as TSNE. Compared to parametric methods such as VAE, our method is non-parametric and relaxes the need to optimize the encoder part, allowing us to obtain better optimizations.
["Deep Generative Models", "Embedding"]
ABSTRACT

We propose an embedding-based visualization method along with a data generation model. In particular, corresponding locations of data points in the visualization are optimized as embeddings along with a generative network, such that the network can reconstruct the original data. The generalization aspect of the neural network enforces similar data points to be close in the embedding space. Since our method includes the generative part, it allows visualizations that are not possible with neighborhood embedding methods such as TSNE. Compared to parametric methods such as VAE, our method is non-parametric and relaxes the need to optimize the encoder part, allowing us to obtain better optimizations.

1 INTRODUCTION

Visualization is an essential tool for assessing the quality of feature representations, understanding groups and sub-groups in data, and comparing the difficulty of decision boundaries across different classes. On the other hand, high-dimensional data visualization is challenging due to the curse of dimensionality. Thus, this is still an ongoing research direction in the machine learning community.

TSNE and UMAP are the common choices for high-dimensional data visualization in recent machine learning literature (McInnes et al., 2018; Van der Maaten & Hinton, 2008). They create a neighborhood graph of the high-dimensional data and reconstruct these graph relationships in lower-dimensional space by gradient-based optimization.

Similar to neighborhood graph approaches, fully parametric methods such as the Variational Auto-Encoder (VAE) can provide visualizations as well if the bottleneck dimensions are restricted to two dimensions (Kingma & Welling, 2014). The visualization layer is then completely controlled by the encoder portion of the network, thereby preventing any optimization of the visualization independently for individual data points.

In this paper, we propose a method to create a visualization while obtaining a generative model of the data without an encoder. To achieve this, we relax the encoder part of the Variational Auto-Encoder and let the latent codes be learned as embeddings. We name this class of models Generative Neural Embeddings (GNE). The optimization objective jointly optimizes the embedding locations and the generator (decoder) network. We compare GNE-generated visualizations against existing methods. Additionally, we demonstrate the ability to generate new samples from embeddings.

2 RELATED WORK

Our work is related to Generative Adversarial Networks (Goodfellow et al., 2014). In these methods, the networks are optimized jointly, in pairs, as predictor and generator. The generator tries to map random noise into realistic data such that the discriminator would fail to discriminate between synthetic and real data. In our method, however, the generator tries to reconstruct the original images themselves. Also, embeddings are optimized instead of being sampled from a random variable.

Word2vec (Mikolov et al., 2013b;a) uses a similar idea that models the generation of textual data by hierarchical softmax given the word embeddings. In our case, we use a deep residual network for image generation and give a dummy id to each image data point. Additionally, low-dimensional embeddings are chosen for visualization purposes.

DeepDream also optimizes the input image, to maximize the activation of a selected neuron (Mordvintsev et al., 2015).
In our case, we optimize the input embeddings of the generative model while allowing optimization of the generative model as well.

3 METHOD

Our method can be described as an embedding layer and a generative model on top of this embedding layer. The embedding layer E provides the lookup table of embeddings for data point id i. The generative model G generates the output from a given embedding. Thus the generation process for data point i becomes G(E[i]). The objective of the whole network is to minimize the loss between the generated data and the real data corresponding to id i. The purpose of the embedding is to make the inputs of the generator optimizable. Hence, the name generative neural embeddings (GNE) comes from this connection. Since the number of parameters grows with the number of data points, this method is a non-parametric method.

We have selected a ResNet structure for the generative part of the model (He et al., 2016). In order to have a sufficient number of hidden units, there is a dense expansion layer that increases the number of dimensions after the embeddings. Embeddings are selected to be 2-dimensional so that they can be displayed in a plane; however, any number of dimensions is possible for other purposes. In our model, we have an additional Gaussian noise layer that regularizes the embedding space. A Keras implementation of the model is given in the listing below; the imports and the constants N_HIDDEN, NLAYERS, and OUTPUT_SHAPE are added here so the listing runs as written (x denotes the training data array).

# Assumes TensorFlow 2.x Keras.
from tensorflow.keras.layers import (Input, Embedding, Flatten, GaussianNoise,
                                     Dense, add)

N_HIDDEN = 64       # hidden width (Section 4)
NLAYERS = 4         # number of residual blocks (Section 4)
OUTPUT_SHAPE = 784  # e.g. flattened 28x28 MNIST images

lin = Input(shape=(1,))                              # data-point id
embed = Embedding(x.shape[0], 2)(lin)                # one 2D embedding per point
flatten = Flatten()(embed)
noise = GaussianNoise(1)(flatten)                    # regularizes embedding space
hidden = Dense(N_HIDDEN, activation='elu')(noise)    # dense expansion layer
for l in range(NLAYERS):                             # residual blocks
    relu = Dense(N_HIDDEN, activation='relu')(hidden)
    linear = Dense(N_HIDDEN)(relu)
    hidden = add([hidden, linear])
dense = Dense(OUTPUT_SHAPE, activation='sigmoid')(hidden)
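A minimal training setup for this listing might look as follows. The Model construction, optimizer settings, and epoch count are our additions based on the settings reported in Section 4 (Adam with learning rate 1e-2, batch size 1024, mean squared error), so treat this as a sketch rather than the authors' exact script:

import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

model = Model(lin, dense)
model.compile(optimizer=Adam(1e-2), loss='mse')       # settings from Section 4

ids = np.arange(x.shape[0]).reshape(-1, 1)            # one dummy id per data point
targets = x.reshape(x.shape[0], -1)                   # e.g. 60000 x 784 for MNIST
model.fit(ids, targets, batch_size=1024, epochs=100)  # epoch count is arbitrary

coords = model.layers[1].get_weights()[0]             # learned 2D embedding table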
4 RESULTS AND DISCUSSION

We used the MNIST training dataset to test the proposed method, after normalizing the images to a 0–1 scale. The embedding table has size 60000x2 for this dataset. We used the Adam optimizer for training with an initial learning rate of 1e-2 and a batch size of 1024. We selected 64 hidden dimensions for 4 layers of residual blocks. We used mean squared error as the loss function.

Figure 1 shows five visualizations of this data. In the scatter plots, each color represents a distinct digit class, and the coordinates are the embedding values for the corresponding digit image. Actual digit images are placed at the corresponding coordinates. Figure 1a shows the TSNE visualization. Distinct clusters are created, which could allow discovery of certain classes of images in the absence of labels (colors). However, this visualization alone is not suitable for efficient exploration of images. In order to achieve this, generator networks that map 2D input coordinates to generated data samples can be used, as shown in Figures 1c and 1e. In these plots, the range between the minimum and maximum embedding values is divided into 32 equal grid points, which are sent as inputs to the generator decoders.

Figures 1b and 1c correspond to GNE optimization. In Figures 1d and 1e we show an equivalent network with a 2D bottleneck layer. One difference between these two methods is the relative proximity of the various classes that they achieve. For example, in VAE optimization, the images of digit 1 are adjacent to those of digit 5, which is not the case for GNE optimization. That adjacency between 1 and 5 can be observed in the grid plot in Figure 1e. Another difference is the fact that GNE relaxes the need to optimize an encoder part, which in our case allowed us to obtain a lower loss after training (GNE: 0.0288, VAE: 0.0438).

[Figure 1: (a) TSNE visualization. (b) GNE visualization. (c) Generated digits from the GNE decoder. (d) VAE visualization. (e) Generated digits from the VAE decoder.]

5 FUTURE WORKS

We have demonstrated the utilization of embeddings to generate visualizations of data along with a generative model. There are additional potential aspects for improvement, which we list in this section.

First, the method can model each class using multi-modal distributions, and there is no explicit restriction factor that enforces similar points to appear in similar locations in the visualization space. On the other hand, adding pairwise similarity checks, as in neighborhood graph models, increases the theoretical computational time.

Second, the optimization algorithm is gradient-based in this study. It does not have to be, since the input space is very small. For example, even a global optimization with a simple grid search in 2D space could yield a better grouping of data points. However, this optimization decouples the optimization of the decoder part of the network from the embeddings.

Third, GNE does not describe a direct way of getting embeddings for test data in the case of a train-test split. One way to obtain them is by reusing embedding vectors for the test cases and optimizing only the embeddings part. As expected, this optimization approach would be slower than standard feed-forward architectures.
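A hypothetical sketch of that test-time procedure: freeze the trained generator and fit a fresh 2D embedding for an unseen image x_new by gradient descent. Here decoder stands for a callable that applies the trained post-embedding layers directly to a 2D code (obtained by reusing the trained Dense layers), and the step count is arbitrary:

import tensorflow as tf

z = tf.Variable(tf.zeros([1, 2]))          # fresh embedding for x_new
opt = tf.keras.optimizers.Adam(1e-2)

for _ in range(500):
    with tf.GradientTape() as tape:
        recon = decoder(z)                 # generator weights stay frozen
        loss = tf.reduce_mean(tf.square(recon - x_new))
    opt.apply_gradients(zip(tape.gradient(loss, [z]), [z]))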
rgblTZfOLz5
Existing method, limited evaluation
3: Clear rejection
The authors consider an auto-encoder where we parametrize the latent codes of training points _directly_, instead of parametrizing an encoder. The model is evaluated on data visualization, where we aim to visualize the structure in the data by plotting the two-dimensional latent codes. Unfortunately, the proposed model is not novel, and has been explored in [1], where it is called an auto-_decoder_, and is used to learn a latent space of three-dimensional shapes. I assume the authors were not aware of this prior work. The novelty of this paper is in using the method for data visualization, instead of generation/in-painting as in [1]. The MNIST experiment shows that the method indeed fits _reasonable_ two-dimensional codes. Unfortunately, the reader does not learn much beyond this. It is not clear if the learned codes are in any way more informative/useful than the codes learned by a standard auto-encoder, a VAE, or TSNE/UMAP. In summary, the proposed method already exists in the literature, and the experimental aspect is too limited to constitute a significant contribution. I recommend rejecting the paper, but encourage the authors to continue studying the strengths/drawbacks of this method in the context of data visualization. - [1] Park, J., Florence, P., Straub, J., Newcombe, R., & Lovegrove, S. (2019). _DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation_. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). https://openaccess.thecvf.com/content_CVPR_2019/html/Park_DeepSDF_Learning_Continuous_Signed_Distance_Functions_for_Shape_Representation_CVPR_2019_paper.html
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
SygKyeHKDH
ICLR.cc/2020/Conference
2020
Making Efficient Use of Demonstrations to Solve Hard Exploration Problems
["Caglar Gulcehre", "Tom Le Paine", "Bobak Shahriari", "Misha Denil", "Matt Hoffman", "Hubert Soyer", "Richard Tanburn", "Steven Kapturowski", "Neil Rabinowitz", "Duncan Williams", "Gabriel Barth-Maron", "Ziyu Wang", "Nando de Freitas", "Worlds Team"]
This paper introduces R2D3, an agent that makes efficient use of demonstrations to solve hard exploration problems in partially observable environments with highly variable initial conditions. We also introduce a suite of eight tasks that combine these three properties, and show that R2D3 can solve several of the tasks where other state of the art methods (both with and without demonstrations) fail to see even a single successful trajectory after tens of billions of steps of exploration.
["imitation learning", "deep learning", "reinforcement learning"]
ABSTRACT

This paper introduces R2D3, an agent that makes efficient use of demonstrations to solve hard exploration problems in partially observable environments with highly variable initial conditions. We also introduce a suite of eight tasks that combine these three properties, and show that R2D3 can solve several of the tasks where other state of the art methods (both with and without demonstrations) fail to see even a single successful trajectory after tens of billions of steps of exploration.

1 INTRODUCTION

Reinforcement learning from demonstrations has proven to be an effective strategy for attacking problems that require sample efficiency and involve hard exploration. For example, Aytar et al. (2018), Pohlen et al. (2018) and Salimans and Chen (2018b) have shown that RL with demonstrations can address the hard exploration problem in Montezuma's Revenge. Večerík et al. (2017), Merel et al. (2017) and Paine et al. (2018) have demonstrated similar results in robotics. Many other works have shown that demonstrations can accelerate learning and address hard-exploration tasks (e.g. see Hester et al., 2018; Kim et al., 2013; Nair et al., 2018; Kang et al., 2018).

In this paper, we attack the problem of learning from demonstrations in hard exploration tasks in partially observable environments with highly variable initial conditions. These three aspects together conspire to make learning challenging:

1. Sparse rewards induce a difficult exploration problem, which is a challenge for many state of the art RL methods. An environment has sparse reward when a non-zero reward is only seen after taking a long sequence of correct actions. Our approach is able to solve tasks where standard methods run for billions of steps without seeing a single non-zero reward.

2. Partial observability forces the use of memory, and also reduces the generality of information provided by a single demonstration, since trajectories cannot be broken into isolated transitions using the Markov property. An environment has partial observability if the agent can only observe a part of the environment at each timestep.

3. Highly variable initial conditions (i.e. changes in the starting configuration of the environment in each episode) are a big challenge for learning from demonstrations, because the demonstrations cannot account for all possible configurations. When the initial conditions are fixed, it is possible to be extremely efficient through tracking (Aytar et al., 2018; Peng et al., 2018); however, with a large variety of initial conditions the agent is forced to generalize over environment configurations not present in demonstrations. Generalizing between different initial conditions is known to be difficult (Ghosh et al., 2017; Langlois et al., 2019; Zolna et al., 2019).

Our approach to these problems combines demonstrations with off-policy, recurrent Q-learning in a way that allows us to make very efficient use of the available data. In particular, we vastly outperform behavioral cloning using the same set of demonstrations in all of our experiments.

* Indicates joint first authorship; both authors contributed equally to this project.

Another desirable property of our approach is that our agents are able to learn to outperform the demonstrators, and in some cases even to discover strategies that the demonstrators were not aware of.
In one of our tasks the agent is able to discover and exploit a bug in the environment, in spite of all the demonstrators completing the task in the intended way.

Learning from a small number of demonstrations under highly variable initial conditions is not straightforward. We identify a key parameter of our algorithm, the demo ratio, which controls the proportion of expert demonstrations vs. agent experience in each training batch. This hyperparameter has a dramatic effect on the performance of the algorithm. Surprisingly, we find that the optimal demo ratio is very small (but non-zero) across a wide variety of tasks.

The mechanism our agents use to efficiently extract information from expert demonstrations is to use them in a way that guides (or biases) the agent's own autonomous exploration of the environment. Although this mechanism is not obvious from the algorithm construction, our behavioral analysis confirms the presence of this guided exploration effect.

To demonstrate the effectiveness of our approach we introduce a suite of tasks (which we call the Hard-Eight suite) that exhibit our three targeted properties. The tasks are set in a procedurally-generated 3D world, and require complex behavior (e.g. tool use, long-horizon memory) from the agent to succeed. The tasks are designed to be difficult challenges in our targeted setting, and several state of the art methods (themselves ablations of our approach) fail to solve them.

The main contributions of this paper are as follows. First, we design a new agent that makes efficient use of demonstrations to solve sparse reward tasks in partially observed environments with highly variable initial conditions. Second, we provide an analysis of the mechanism our agents use to exploit information from the demonstrations. Lastly, we introduce a suite of eight tasks that support this line of research.

2 RECURRENT REPLAY DISTRIBUTED DQN FROM DEMONSTRATIONS

[Figure 1 diagram: actor processes (an RNN agent interacting with an environment) stream trajectories and initial priorities to an agent replay buffer; a separate demo replay buffer holds expert demonstrations; the learner draws prioritized training batches mixed from the two buffers in proportions ρ and (1 - ρ), trains with double Q-learning and n-step return targets, and sends updated priorities back to both buffers and network weights to the actors.]

Figure 1: The R2D3 distributed system diagram. The learner samples batches that are a mixture of demonstrations and the experiences the agent generates by interacting with the environment over the course of training. The ratio between demos and agent experiences is a key hyperparameter which must be carefully tuned to achieve good performance.

We propose a new agent, which we refer to as Recurrent Replay Distributed DQN from Demonstrations (R2D3). R2D3 is designed to make efficient use of demonstrations to solve sparse reward tasks in partially observed environments with highly variable initial conditions. This section gives an overview of the agent; detailed pseudocode can be found in Section 2.1.

The architecture of the R2D3 agent is shown in Figure 1. There are several actor processes, each running independent copies of the behavior against an instance of the environment. Each actor streams its experience to a shared agent replay buffer, where experience from all actors is aggregated and globally prioritized (Schaul et al., 2016; Horgan et al., 2018) using a mixture of max and mean of the TD-errors, with priority exponent 1.0 as in Kapturowski et al. (2018).
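This mixed max/mean prioritization can be sketched as follows; the mixture weight eta = 0.9 is an assumption taken from Kapturowski et al. (2018), while the exponent of 1.0 is as stated above:

import numpy as np

def sequence_priority(td_errors, eta=0.9, exponent=1.0):
    # Priority of a replay sequence: a mixture of the max and the mean of the
    # absolute TD-errors along the sequence, raised to the priority exponent.
    abs_td = np.abs(td_errors)
    return (eta * abs_td.max() + (1.0 - eta) * abs_td.mean()) ** exponent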
The actors periodically request the latest network weights from the learner process in order to update their behavior.

In addition to the agent replay, we maintain a second demo replay buffer, which is populated with expert demonstrations of the task to be solved. Expert trajectories are also prioritized using the scheme of Kapturowski et al. (2018). Maintaining separate replay buffers for agent experience and expert demonstrations allows us to prioritize the sampling of agent and expert data separately.

The learner process samples batches of data from both the agent and demo replay buffers simultaneously. A hyperparameter ρ, the demo ratio, controls the proportion of data coming from expert demonstrations versus from the agent's own experience. The demo ratio is implemented at a batch level by randomly choosing whether to sample from the expert replay buffer independently for each element with probability ρ. Using a stochastic demo ratio in this way allows us to target demo ratios smaller than one expert element per batch, which we found to be very important for good performance. The objective optimized by the learner uses n-step double Q-learning (with n = 5) and a dueling architecture (Wang et al., 2016; Hessel et al., 2018). In addition to performing network updates, the learner is also responsible for pushing updated priorities back to the replay buffers.

In each replay buffer, we store fixed-length (m = 80) sequences of (s, a, r) tuples, where adjacent sequences overlap by 40 time-steps. The sequences never cross episode boundaries. Given a single batch of trajectories we unroll both online and target networks (Mnih et al., 2015) on the same sequence of states to generate value estimates, with the recurrent state initialized to zero. Proper initialization of the recurrent state would require always replaying episodes from the beginning, which would add significant complexity to our implementation. As an approximation of this we treat the first 40 steps of each sequence as a burn-in phase, and apply the training objective to the final 40 steps only. An alternative approximation would be to store stale recurrent states in replay, but we did not find this to improve performance over zero initialization with burn-in.

2.1 R2D3 AGENT

In this section, we provide the pseudocode for R2D3. The agent has a single learner process which samples from both the demonstration and agent buffers in order to update its policy parameters; the pseudocode of the R2D3 learner can be found in Algorithm 1.

Algorithm 1 Learner
  Inputs: replay of expert demonstrations D, replay of agent experiences R, batch size B, sequence length m, and number of actors A.
  Initialize policy weights θ.
  Initialize target policy weights θ' ← θ.
  Launch A actors and replicate policy weights θ to each actor.
  for n steps do
    Sample transition sequences (s_{t:t+m}, a_{t:t+m}, r_{t:t+m}) from replay D with probability ρ or from replay R with probability (1 - ρ), to construct a mini-batch of size B.
    Calculate loss using the target network.
    Perform a gradient descent step to update θ.
    If t mod t_target = 0, update the target policy weights θ' ← θ.
    If t mod t_actor = 0, replicate policy weights θ to the actors.
  end for
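The per-element buffer choice in the sampling step of Algorithm 1 can be sketched in a few lines of Python; the buffer interface below is hypothetical and only illustrates how arbitrarily small demo ratios are achieved in expectation:

import random

def sample_batch(demo_replay, agent_replay, batch_size, rho):
    # Each batch element independently comes from the demo buffer with
    # probability rho (the demo ratio) and from the agent buffer otherwise.
    batch = []
    for _ in range(batch_size):
        buf = demo_replay if random.random() < rho else agent_replay
        batch.append(buf.sample_sequence())  # prioritized sampling inside
    return batch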
The R2D3 agent has A parallel actor processes which interact with copies of the environment in order to obtain data, which is then inserted into the agent buffer. The actors periodically update their parameters to match those being updated on the learner. The pseudocode for the actors is provided in Algorithm 2.

Algorithm 2 Actor
  repeat
    Sample action from the behavior policy: a ∼ π_θ(s)
    Execute a and observe s' and r
    Store (s, a, s', r) in R
  until learner finishes
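For concreteness, the n-step double Q-learning target used by the learner can be written as a small function. This sketch ignores the value-function rescaling inherited from R2D2, and the discount value and tensor shapes are assumptions:

import torch

def n_step_double_q_target(rewards, q_online_next, q_target_next,
                           gamma=0.997, n=5):
    # y_t = sum_{k<n} gamma^k r_{t+k}
    #       + gamma^n * Q_target(s_{t+n}, argmax_a Q_online(s_{t+n}, a))
    n_step_return = sum(gamma ** k * rewards[k] for k in range(n))
    a_star = torch.argmax(q_online_next)   # action chosen by the online network
    return n_step_return + gamma ** n * q_target_next[a_star]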
3 BACKGROUND

Exploration remains one of the most fundamental challenges for reinforcement learning. So-called "hard-exploration" domains are those in which rewards are sparse, and optimal solutions typically have long and sparsely-rewarded trajectories. Hard-exploration domains may also have many distracting dead ends that the agent may not be able to recover from once it gets into a certain state. In recent years, the most notable such domains are Atari environments, including Montezuma's Revenge and Pitfall (Bellemare et al., 2013). These domains are particularly tricky for classical RL algorithms because even finding a single non-zero reward to bootstrap from is incredibly challenging.

A common technique used to address the difficulty of exploration is to encourage the agent to visit under-explored areas of the state-space (Schmidhuber, 1991). Such techniques are commonly known as intrinsic motivation (Chentanez et al., 2005) or count-based exploration (Bellemare et al., 2016). However, these approaches do not scale well as the state space grows, as they still require exhaustive search in sparse reward environments. Additionally, recent empirical results suggest that these methods do not consistently outperform ε-greedy exploration (Taïga et al., 2019). The difficulty of exploration is also a consequence of the current inability of our agents to abstract the world and learn scalable, causal models with explanatory power. Instead they often use low-level features or handcrafted heuristics and lack the generalization power necessary to work in a more abstract space.

Hints can be provided to the agent which bias it towards promising regions of the state space, either via reward-shaping (Ng et al., 1999) or by introducing a sequence of curriculum tasks (Bengio et al., 2009; Graves et al., 2017). However, these approaches can be difficult to specify and, in the case of reward shaping, often lead to unexpected behavior where the agent learns to exploit the modified rewards.

Another hallmark of hard-exploration benchmarks is that they tend to be fully-observable and exhibit little variation between episodes. Techniques like random no-ops and "sticky actions" have been proposed to artificially increase episode variance in Atari (Machado et al., 2018); an alternative is to instead consider domains with inherent variability. Other recent work on the Obstacle Tower challenge domain (Juliani et al., 2019) is similar to our task suite in this regard. Reliance on determinism of the environment is one of the chief criticisms of imitation leveled by Juliani (2018), who offers a valuable critique of Aytar et al. (2018), Ecoffet et al. (2019) and Salimans and Chen (2018a). In contrast, our approach is able to solve tasks with substantial per-episode variability.

GAIL (Ho and Ermon, 2016) is another imitation learning method; however, standard GAIL does not work in the following settings: 1) POMDPs (Gangwani et al., 2019; Żołna et al., 2019), 2) from pixels (Li et al., 2017; Reed et al., 2018), 3) off-policy (Kostrikov et al., 2018), and 4) with variable initial conditions (Zolna et al., 2019). Our setting combines all of these, so we leave extending GAIL to this combined setting for future work.

4 HARD-EIGHT TASK SUITE

To address the difficulty of hard exploration in partially observable problems with highly variable initial conditions, we introduce a collection of eight tasks which exhibit these properties. Due to the generated nature of these tasks and the rich form of interaction between the agent and environment, we see greatly increased levels of variability between episodes. From the perspective of the learning process, these tasks are particularly interesting because just memorizing an open-loop sequence of actions is unlikely to achieve even partial success on a new episode. The nature of interaction with the environment combined with a limited field of view also necessitates the use of memory in the agent.

All of the tasks in the Hard-Eight task suite share important common properties that make them hard exploration problems. First, each task emits sparse rewards: in all but one task the only positive instantaneous reward obtained also ends the episode. The visual observations in each task are also first-person, and thus the state of the world is only ever partially observed. Several of the tasks are constructed to ensure that it is not possible to observe all task-relevant information simultaneously.

Finally, each task is subject to highly variable initial conditions. This is accomplished by including several procedural elements, including colors, shapes and configurations of task-relevant objects. The procedural generation ensures that simply copying the actions from a demonstration is not sufficient for successful execution, which is a sharp contrast to the case of Atari (Pohlen et al., 2018). A more detailed discussion of these aspects can be found in Appendix A, and videos of agents and humans performing these tasks can be found at https://bit.ly/2mAAUgg.

Each task makes use of a standardized avatar with a first-person view of the environment, controlled by the same discretized action space consisting of 46 discrete actions. In all tasks the agent is rewarded for collecting apples, and often this is the only reward obtained before the episode ends. A depiction of each task is shown in Figure 2. A description of the procedural elements and a filmstrip of a successful episode for each task is provided in Appendix A.

[Figure 2: Hard-Eight task suite. In each task an agent must interact with objects in its environment in order to gain access to a large apple that provides reward. The 3D environment is also procedurally generated so that every episode the state of the world, including object shapes, colors, and positions, is different. From the point of view of the agent the environment is partially observed. Because it may take hundreds of low-level actions to collect an apple, the reward is sparse, which makes exploration difficult.]

Each of these tasks requires the agent to complete a sequence of high-level steps to complete the task. An example from the task suite is shown in Figure 3.

[Figure 3: High-level steps necessary to solve the Baseball task. Each step in this sequence must be completed in order, and must be implemented by the agent as a sequence of low-level actions (no option structure is available to the agent). The necessity of completing such a long sequence of high-level steps makes it unlikely that the task will ever be solved by random exploration. Note that each step involves interaction with physical objects, shown in bold.]
The agent must: find the bat, pick up the bat, knock the ball off the plinth, pick up the ball, activate the sensor with the ball (opening the door), walk through the door, and collect the large apple.

We are hoping that our release of the Hard-Eight tasks¹ will enable machine learning researchers to try imitation learning or inverse reinforcement learning algorithms on more complicated tasks.

¹ The link for the tasks and the data can be found at deepmind.com/r2d3, once they are officially released.

5 BASELINES

In this section we discuss the baselines and ablations we use to compare against our R2D3 agent in the experiments. We compare to Behavior Cloning (a common baseline for learning from demonstrations) as well as two ablations of our method which individually remove either recurrence or demonstrations from R2D3. The two ablations correspond to two different state of the art methods from the literature.

Behavior Cloning. BC is a simple and common baseline method for learning policies from demonstrations (Pomerleau, 1989; Rahmatizadeh et al., 2018). This algorithm corresponds to a supervised learning approach to imitation learning, which uses only expert trajectories as its training dataset to fit a parameterized policy mapping states to actions. For discrete actions this corresponds to a classification task, which we fit using the cross-entropy loss. If the rewards of trajectories in the training dataset are consistently high, BC is known to outperform recent batch-RL methods (Fujimoto et al., 2018). To enable fair comparison we trained our BC agent using the same recurrent neural network architecture that we used for our R2D3 algorithm (see Figure 4).

No Demonstrations. The first ablation we consider is to remove demonstrations from R2D3. This corresponds to setting the demo ratio (see Figure 1) to ρ = 0. This special case of R2D3 corresponds exactly to the R2D2 agent of Kapturowski et al. (2018), which itself extends DQN (Mnih et al., 2015) to partially observed environments by combining it with recurrence and the distributed training architecture of Ape-X DQN (Horgan et al., 2018). This ablation is itself state of the art on Atari-57 and DMLab-30, making it an extremely strong baseline.

No Recurrence. The second ablation we consider is to replace the recurrent value function of R2D3 with a feed-forward reactive network. We do this separately from the no demonstrations ablation, leaving the full system in Figure 1 intact, with only the structure of the network changed. If we further fix the demo ratio to ρ = 0.25 then this ablation corresponds to the DQfD agent of Hester et al. (2018), which is competitive on hard-exploration Atari environments such as Montezuma's Revenge. However, we do not restrict ourselves to ρ = 0.25, and instead optimize over the demo ratio for the ablation as well as for our main agent.

6 EXPERIMENTS

We evaluate the performance of our R2D3 agent alongside state-of-the-art deep RL baselines. As discussed in Section 5, we compare our R2D3 agent to BC (the standard LfD baseline), R2D2 (off-policy SOTA), and DQfD (LfD SOTA). We use our own implementations for all agents, and we plan to release code for all agents including R2D3.

For each task in the Hard-Eight suite, we trained R2D3, R2D2, and DQfD using 256 ε-greedy CPU-based actors and a single GPU-based learner process. Following Horgan et al. (2018), the i-th actor was assigned a distinct noise parameter εi ∈ [0.4^8, 0.4], where each εi is regularly spaced in log 0.4 space.
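The per-actor noise assignment just described can be reproduced with the Ape-X formula; the exact exponent form below is an assumption consistent with the stated range [0.4^8, 0.4]:

import numpy as np

def actor_epsilons(num_actors=256, eps=0.4, alpha=7):
    # eps_i = eps ** (1 + alpha * i / (num_actors - 1)), regularly spaced in
    # log-0.4 space, running from 0.4 (i = 0) down to 0.4 ** 8 (i = 255).
    i = np.arange(num_actors)
    return eps ** (1 + alpha * i / (num_actors - 1))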
For each of the algorithms, their common hyperparameters were held fixed. Additionally, for R2D3 and DQfD the demo ratio was varied to study its effect. For BC we also varied the learning rate independently, in a vain attempt to find a successful agent.

All agents act in the environment with an action-repeat factor of 2, i.e. the actions received by the environment are repeated twice before the next observation is passed to the agent. Using an action repeat of 4 is common in other domains like Atari (Bellemare et al., 2012; Mnih et al., 2015); however, we found that using an action repeat of 4 made the Hard-Eight tasks too difficult for our demonstrators. Using an action repeat of 2 allowed us to strike a compromise between ease of demonstration (high action repeats prohibit smooth and intuitive motion) and ease of learning for the agents (low action repeats increase the number of steps required to complete the task).

Figure 4 illustrates the neural network architecture of the different agents. As much as possible we use the same network architecture across all agents, deviating only for DQfD, where the recurrent head is replaced with an equally sized feed-forward layer. We briefly outline the training setup below, and give an explicit enumeration of the hyperparameters in Appendix B.

For R2D3, R2D2 and DQfD we use the Adam optimizer (Kingma and Ba, 2014) with a fixed learning rate of 2 × 10^-4. We use hyperparameters that are shown to work well for similar environments. We use distributed training with 256 parallel actors, trained for at least 10 billion actor steps for all tasks.

Figure 4: (a) Recurrent head used by R2D3 agents. (b) Feedforward head used by the DQfD agent. Heads in both (a) and (b) are used to compute the Q values. (c) Architecture used to compute the input feature representations. Frames of size 96x72 are fed into a ResNet; the output is then augmented by concatenating the previous action a_{t-1}, previous reward r_{t-1}, and other proprioceptive features f_t, such as accelerations, whether the avatar hand is holding an object, and the hand's relative distance to the avatar.

For the BC agent the training regime is slightly different, since this agent does not interact with the environment during training. For BC we also use the Adam optimizer, but we additionally perform a hyperparameter sweep over learning rates {10^-5, 10^-4, 10^-3}. Since there is no notion of actor steps in BC, we trained for 500k learner steps instead.

During the course of training, an evaluator process periodically queries the learner process for the latest network weights and runs the resulting policy on an episode, logging both the final return and the total number of steps (actor or learner steps, as appropriate) performed at the time of evaluation.

We collected a total of 100 demonstrations for each task, spread across three different experts (each expert contributed roughly one third of the demonstrations for each task). Demonstrations for the tasks were collected using keyboard and mouse controls mapped to the agent's exact action space, which was necessary to enable both behaviour cloning and learning from demonstrations. We show statistics related to the human demonstration data which we collected from three experts in Table 1.

6.1 LEARNING THE HARD-EIGHT TASKS

In Figure 5, we report the return against the number of actor steps, averaged over five random initializations. We find that none of the baselines succeed in any of the eight environments.
Meanwhile, R2D3 learns six out of the eight tasks, and reaches or exceeds human performance in four of them. The fact that R2D3 learns at all in this setting with only 100 demonstrations per task demonstrates the ability of the agent to make very efficient use of the demonstrations. This is in contrast to BC and DQfD, which use the same demonstrations and both fail to learn a single task from the suite.

All methods, including R2D3, fail to solve two of the tasks: Remember Sensor and Throw Across. These are the two tasks in the suite that are most demanding in terms of memory requirements for the agent, and it is possible that our zero-initialization with burn-in strategy for handling LSTM states in replay does not give R2D3 sufficient context to complete these tasks successfully. Future work should explore better handling of recurrent states as a possible avenue towards success on these tasks. R2D3, BC, and DQfD receive some negative returns on Remember Sensor, which indicates that the agents navigate down the hallway and walk over penalty sensors.

Figure 5: Reward vs actor steps curves for R2D3 and baselines on the Hard-Eight task suite. The curves are computed as the mean performance for the same agent across 5 different seeds per task. Error regions show the 95% confidence interval for the mean reward across seeds. Several curves overlap exactly at zero reward for the full range of the plots. R2D3 can perform at human level or better on Baseball, Drawbridge, Navigate Cubes and Wall Sensor. R2D2 could not obtain any positive reward on any of the tasks. DQfD and BC agents occasionally see rewards on the Drawbridge and Navigate Cubes tasks, but this happens rarely enough that the effect is not visible in the plots. Indicators mark the analysis points discussed in Section 6.3.

Figure 6 | Success rate (see main text) for R2D3 across all tasks with at least one successful seed, as a function of the demo ratio (x-axis: demo ratios 1/256, 1/128, 1/64, 1/32; y-axis: success rate from 0.0 to 0.6). The square markers for each demo ratio denote the mean success rate, and the error bars show a bootstrapped estimate of the [25, 75] percentile interval for the mean estimate. The lower demo ratios consistently outperform the higher demo ratios across the suite of tasks.

Table 1 | Human demonstration statistics. We collected 100 demos for each task from three human demonstrators. We report the mean lengths (in number of frames) and rewards of the episodes, along with their standard deviations, for each task.

Task Name           Reward        Episode Len.
Baseball            7.8 ± 4.1     492 ± 121
Drawbridge          12.3 ± 2.5    641 ± 137
Navigate Cubes      7.9 ± 4.1     638 ± 185
Push Blocks         9.1 ± 2.9     683 ± 270
Remember Sensor     7.7 ± 1.4     853 ± 188
Throw Across        5.4 ± 4.9     464 ± 172
Wall Sensor         9.1 ± 2.8     280 ± 87
Wall Sensor Stack   8.6 ± 3.5     521 ± 107

R2D3 performed better than our average human demonstrator on Baseball, Drawbridge, Navigate Cubes and the Wall Sensor tasks. The behavior on Wall Sensor Stack in particular is quite interesting. On this task R2D3 found a completely different strategy than the human demonstrators by exploiting a bug in the implementation of the environment. The intended strategy for this task is to stack two blocks on top of each other so that one of them can remain in contact with a wall-mounted sensor, and this is the strategy employed by the demonstrators.
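Figure 6 above reports a bootstrapped [25, 75] percentile interval for the mean success rate. The figure does not spell out the resampling details, so the following is only a generic sketch of such an estimate over per-seed 0/1 success indicators:

```python
import numpy as np

def bootstrap_mean_interval(successes, lo=25.0, hi=75.0,
                            n_boot=10_000, seed=0):
    """Percentile bootstrap for the mean of 0/1 success indicators."""
    rng = np.random.default_rng(seed)
    x = np.asarray(successes, dtype=float)
    # Resample the seeds with replacement, n_boot times.
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))
    boot_means = x[idx].mean(axis=1)
    return x.mean(), np.percentile(boot_means, [lo, hi])

# e.g. five seeds, three of which met the success criterion
mean, (p25, p75) = bootstrap_mean_interval([1, 1, 0, 1, 0])
```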
However, due to a bug in the environment, the strategy learned by R2D3 was to trick the sensor into remaining active even when it is not in contact with the key, by pressing the key against it in a precise way.

In light of the uniform failure of our baselines to learn on the Hard-Eight suite, we made several attempts at training other models on the task suite; however, these attempts were all unsuccessful. For example, we tried adding randomized prior functions (Osband et al., 2018) to R2D2, but this approach was still unable to obtain reward on any of the Hard-Eight tasks. We also trained an IMPALA agent with pixel control (Jaderberg et al., 2016) as an auxiliary reward to help with exploration, but this approach also failed to learn on any of the tasks we attempted. We omit these results from Figure 5, keeping only the most relevant baselines.

6.2 EFFECT OF THE DEMO RATIO
In our experiments on the Hard-Eight tasks (see Figure 5), we did a hyperparameter search and chose the best hyperparameters for each method independently. In this section, we look more closely at how the demo ratio (ρ) affects learning in R2D3. To do this we look at how the success rate of R2D3 across the entire Hard-Eight task suite varies as a function of the demo ratio.

The goal of each task in the Hard-Eight suite is to collect a large apple, which ends the episode and gives a large reward. We consider an episode successful if the large apple is collected. An agent that executes many episodes in the environment will either succeed or fail at each one. We consider an agent successful if, after training, at least 75% of its final 25 episodes are successful. Finally, an individual agent with a fixed set of hyperparameters may still succeed or fail depending on the randomness in the environment and the initialization of the agent.

We train several R2D3 agents on each tractable task² in the Hard-Eight suite, varying only the demo ratio while keeping the other hyperparameters fixed at the values used for the learning experiment. We consider four different demo ratios across six tasks, with five seeds for each task (120 trained agents). Figure 6 shows estimates of the success rate for the R2D3 algorithm for each demo ratio, aggregated across all tasks. We observe that tuning the demo ratio has a strong effect on the success rate across the task suite, and that the best demo ratio is quite small. See Appendix C.3 for further results.

²We exclude Remember Sensor and Throw Across from this analysis, since we saw no successful seeds for either of these tasks.

Figure 7: Guided exploration behavior in the Push Blocks task. (a) Spatial pattern of exploration behavior at 5B actor steps (reward-driven learning kicks off for R2D3 only after 20B steps), shown as an overlay of the agents' trajectories over 200 episodes for R2D2 and R2D3; blocks and sensors are not shown for clarity. R2D2 appears to follow a random walk, while R2D3 concentrates on a particular spatial region. (b) Interactions between the agent and blocks during the first 12B steps, measured as the distance crates are pushed against actor steps (in billions). Each line shows a different random seed. R2D2 rarely pushes the blocks. (c) Example trajectory of R2D3 after training (40B steps): the agent pushes the blue block onto the blue sensor, then collects the apple (green star).

6.3 GUIDED EXPLORATION BY DEMONSTRATION
The typical strategy for exploration in RL is to either use a stochastic policy and sample actions, or to use a deterministic policy and take random actions some small fraction of the time.
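For reference, the second of these strategies, the ε-greedy rule used by the actors in our experiments, can be written in a few lines. This is a generic sketch of the rule, not the paper's implementation:

```python
import random

def epsilon_greedy(q_values, epsilon: float) -> int:
    """Greedy action with probability 1 - epsilon, uniform otherwise."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```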
Given sufficient time, both of these approaches will in theory cover the space of possible behaviors, but in practice the amount of time required to achieve this coverage can be prohibitively long. In this experiment, we compare the behavior of R2D3 to the behavior of R2D2 (which is equivalent to R2D3 without demonstrations) on two of the tasks from the Hard-Eight suite. Even very early in training (well before R2D3 is able to reliably complete the tasks) we see many more task-relevant actions from R2D3 than from R2D2, suggesting that the effect of demonstrations is to bias R2D3 towards exploring relevant parts of the environment.

In Figure 7 we begin by examining the Push Blocks task. The task here is to push a particular block onto a sensor to give access to a large apple, and we examine the behavior of both R2D3 and R2D2 after 5B steps, which is long before R2D3 begins to solve the task with any regularity (see Figure 5). Looking at the distribution of spatial locations for the agents, it is clear that R2D2 essentially diffuses randomly around the room, while R2D3 spends much more time in task-relevant parts of the environment (e.g. away from the walls). We also record the total distance traveled by the moveable blocks in the room, and find that R2D3 tends to move the blocks significantly more often than R2D2, even before it has learned to solve the task.

7 CONCLUSION
In this paper, we introduced the R2D3 agent, which is designed to make efficient use of demonstrations to learn in partially observable environments with sparse rewards and highly variable initial conditions. We showed through several experiments on eight very difficult tasks that our approach is able to outperform multiple state-of-the-art baselines, two of which are themselves ablations of R2D3.

We also identified a key parameter of our algorithm, the demo ratio ρ, and showed that careful tuning of this parameter is critical to good performance. Interestingly, we found that the optimal demo ratio is surprisingly small but non-zero, which suggests that there may be a risk of overfitting to the demonstrations at the cost of generalization. For future work, we could investigate how this optimal demo ratio changes with the total number of demonstrations and, more generally, with the distribution of expert trajectories relative to the task variability.

We introduced the Hard-Eight suite of tasks and used them in all of our experiments. These tasks are specifically designed to be partially observable tasks with sparse rewards and highly variable initial conditions, making them an ideal testbed for showcasing the strengths of R2D3 in contrast to existing methods in the literature.

Our behavioral analysis showed that the mechanism R2D3 uses to efficiently extract information from expert demonstrations is to use them in a way that guides (or biases) the agent's own autonomous exploration of the environment. An in-depth analysis of agent behavior on the Hard-Eight task suite is a promising direction for understanding how different RL algorithms make selective use of information.
rJl2wiVt9S
Official Blind Review #4
6: Weak Accept
In this work, R2D3 (Recurrent Replay Distributed DQN from Demonstration), which combines R2D2 [1] with imitation learning (IL), is proposed. Similar to existing works on "reinforcement learning (RL) with demonstration" such as DQfD, DDPGfD, and policy optimization with demonstration (POfD) [2], hard exploration conditions (sparse reward, partial observability, high variance in initial states) are assumed, under which it is generally difficult to achieve good performance with RL without demonstration. Eight tasks in such conditions were devised and used to test the performance of R2D3.

I like the fact that the authors of this work have chosen quite challenging scenarios, but I think the novelty of this submission is a bit weak for acceptance to the conference. I believe "RL with demonstration" becomes meaningful when it beats both RL and IL in some reasonable setting. For example, POfD [2] assumes sparse-reward tasks with *imperfect* demonstrations, where it is difficult to achieve good performance using RL or IL alone. From such a perspective, I have the following concerns:

- Imitation learning baselines: There has been recent advancement in imitation learning. In the submission, it was mentioned that "GAIL has never been successfully applied to complex partially observable environments that require memory", but there's [3] that successfully uses GAIL in such a setting. Also, off-policy imitation learning such as DAC [4] is shown to be highly sample-efficient compared to GAIL in the MuJoCo domain. However, the submission only considers behavioral cloning (BC), which shows poor performance at unseen states due to the covariate shift problem, as a baseline among imitation learning methods.

- Reinforcement learning baselines: The submission adopted R2D2 as an RL baseline, and it seems to me that the R2D2 agent starts from random initialization. For a fair comparison, however, I believe R2D2 with BC (or batch-RL) initialization should be considered.

In addition to the above concerns, it seems to me that most of the features in R2D3 simply combine those in either DQfD or R2D2, and I couldn't find its own algorithmic novelty except for the "demo ratio" parameter. I'll increase my score if I made wrong comments or misunderstood the contribution.

References
[1] Kapturowski, Ostrovski, Quan, Munos, and Dabney, "Recurrent experience replay in distributed reinforcement learning," ICLR 2019.
[2] Kang, Jie, and Feng, "Policy optimization with demonstrations," ICML 2018.
[3] Gangwani, Lehman, Liu, and Peng, "Learning Belief Representations for Imitation Learning in POMDPs," UAI 2019.
[4] Kostrikov, Agrawal, Dwibedi, Levine, and Tompson, "Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning," ICLR 2019.
mo3Uqtnvz_
ICLR.cc/2021/Conference
2021
Multi-scale Network Architecture Search for Object Detection
["Yuxin Yue", "Quanquan Li", "Yujie Wang"]
Many commonly-used detection frameworks aim to handle the multi-scale object detection problem. The input image is always encoded to multi-scale features and objects grouped by scale range are assigned to the corresponding features. However, the design of multi-scale feature production is quite hand-crafted or partially automatic. In this paper, we show that more possible architectures of encoder network and different strategies of feature utilization can lead to superior performance. Specifically, we propose an efficient and effective multi-scale network architecture search method (MSNAS) to improve multi-scale object detection by jointly optimizing network stride search of the encoder and appropriate feature selection for detection heads. We demonstrate the effectiveness of the method on COCO dataset and obtain a remarkable performance gain with respect to the original Feature Pyramid Networks.
["Object Detection", "Neural Architecture Search"]
ABSTRACT
Many commonly-used detection frameworks aim to handle the multi-scale object detection problem. The input image is always encoded to multi-scale features, and objects grouped by scale range are assigned to the corresponding features. However, the design of multi-scale feature production is quite hand-crafted or partially automatic. In this paper, we show that more possible architectures of the encoder network and different strategies of feature utilization can lead to superior performance. Specifically, we propose an efficient and effective multi-scale network architecture search method (MSNAS) to improve multi-scale object detection by jointly optimizing network stride search of the encoder and appropriate feature selection for detection heads. We demonstrate the effectiveness of the method on the COCO dataset and obtain a remarkable performance gain with respect to the original Feature Pyramid Networks.

1 INTRODUCTION
Recognizing and localizing objects at vastly different scales is a fundamental challenge in object detection. Detection performance for objects with different scales is highly related to features with different properties, such as feature resolution, receptive fields, and feature fusion methods. The key to solving the multi-scale problem in object detection is how to build a multi-scale network that has proper high-level semantic features for objects with different scales.

A recent work in object detection, Feature Pyramid Networks (FPN) (Lin et al., 2017), has achieved remarkable success in multi-scale feature design and has been commonly used by many modern object detectors (He et al., 2017; Lin et al., 2020; Lu et al., 2019). FPN extracts multi-scale intermediate features from the encoder network and assigns objects grouped by scales to corresponding features according to a heuristic rule. Another prevalent detection framework, SSD (Liu et al., 2016), conducts feature generation with a lighter encoder network without upsampling operators. The basic idea to deal with the multi-scale detection problem can be summarized as follows. Given the input image, a series of feature maps with various resolutions are generated to detect objects grouped by scale range. We refer to this as multi-scale feature production. In FPN and its variants, multi-scale feature production is split into two steps: feature generation and feature utilization. In terms of feature generation, an encoder network composed of blocks provides features with different scales. The strategy of feature utilization then determines the rule for assigning objects to feature maps. These two steps are closely related to each other.

Although FPN has achieved promising results on multi-scale object detection tasks, the production of multi-scale features is quite hand-crafted and relies heavily on the experience of human experts. More specifically, the network architectures of FPN are based on a downsample-upsample architecture which may not be effective enough. By changing the positions and numbers of downsampling and upsampling operations, we could obtain many other candidates to generate different multi-scale features. Also, the predefined rule of feature utilization is very empirical, and other alternatives may lead to better performance. Therefore we wonder: can we find network architectures that can build better semantic feature representations for multiple scales?
The answer is yes.

Recent advances in neural architecture search have shown promising results compared with architectures hand-crafted by human experts (Zoph et al., 2018; Liu et al., 2019b; Cai et al., 2019; Guo et al., 2019). Several works have also focused on neural architecture search for object detection tasks (Chen et al., 2019; Ghiasi et al., 2019; Du et al., 2019), but the generation and utilization of multi-scale features are still not well explored.

Figure 1: Architecture of ResNet18-FPN and the searched network of MSNAS-R18 (panels: (a) FPN, (b) MSNAS; each panel shows an encoder network feeding detection heads). MSNAS-R18 has different stride values for the blocks in the encoding network and a more flexible feature utilization strategy. For simplification, P6 of FPN is not included in the figure and only three detection heads are presented.

DetNAS (Chen et al., 2019) adopts a method mainly designed for image classification to search the operations of backbone networks in object detectors. NAS-FPN (Ghiasi et al., 2019) focuses on searching for better feature-fusion connections in the neck part of FPN; it does not optimize the whole encoder network and still relies on a predefined backbone architecture. Recently, SpineNet (Du et al., 2019) proposed a search method with scale-permuted features and cross-scale connections found by reinforcement learning, but the search cost is quite large. All these previous works focus on designing better neural network architectures to generate better features given a fixed feature selection strategy; they fail to conduct a completely flexible multi-scale feature production strategy.

In this paper, we propose a new method that takes both aspects into account and builds detection networks with a strong and proper multi-scale feature production strategy via neural architecture search. For feature generation, we put forward a network stride search method to generate multiple feature representations for different scales. Different from the scale-decreasing-increasing architecture of FPN, the scale of our networks can decrease or increase at each block, as illustrated in Figure 1. By searching the stride of each block, we can explore a much wider range of possible feature generation designs for multi-resolution networks. Most backbones of object detectors are originally designed for image classification, without multi-scale problems; in our method, however, the stride configuration of the encoder network is optimized in the context of the multi-scale task. Moreover, more complex cross-scale feature fusions might emerge from more complex internal scale changes. For feature utilization, we change the previous one-to-one mapping strategy into a more flexible feature selection. Since each group of objects of the same scale range owns one detection head, feature utilization is implemented by selecting proper features for the detection heads. Objects of different scale ranges might be assigned to the same feature map, which is not possible in previous methods, as shown in Figure 1(b).

By jointly optimizing the generation and utilization of multi-scale features, we search for flexible but complete multi-scale feature production strategies. Extensive experiments demonstrate that complete multi-scale feature production search is critical to building strong and proper semantic features for object detection at different scales. On the challenging COCO dataset (Lin et al., 2014), our method obtains 2.6%, 1.5%, and 1.2% mAP improvements with similar FLOPs to ResNet18-FPN, ResNet34-FPN, and ResNet50-FPN, respectively.
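To give a feel for how quickly per-block stride search expands the space of feature generation designs, here is a toy enumeration. The three scale choices per block (halve, keep, or double the feature stride), the initial stride of 4, and the [4, 64] stride range are assumptions for illustration, loosely matching the description above and Figure 2(a); they are not the paper's exact search space.

```python
from itertools import product

SCALE_CHOICES = (0.5, 1.0, 2.0)  # assumed: upsample, keep, downsample

def enumerate_stride_paths(depth, stem_stride=4, lo=4, hi=64):
    """All stride trajectories of a `depth`-block encoder whose
    running stride stays within the assumed [lo, hi] range."""
    paths = []
    for choices in product(SCALE_CHOICES, repeat=depth):
        s, trajectory = stem_stride, []
        for c in choices:
            s *= c
            if not (lo <= s <= hi):
                break
            trajectory.append(int(s))
        else:  # loop finished without breaking: trajectory is feasible
            paths.append(trajectory)
    return paths

print(len(enumerate_stride_paths(depth=8)))  # prints 1931
```

Even this toy version yields nearly two thousand feasible stride trajectories from just 8 blocks, compared to the single fixed trajectory of a hand-designed FPN backbone.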
2 RELATED WORK

Neural Architecture Search. Neural architecture search aims to design better network architectures automatically. RL-based methods (Zoph et al., 2018; Zoph & Le, 2017) have achieved great success despite a huge computation cost. In differentiable algorithms (Liu et al., 2019b; Cai et al., 2019), architecture parameters are employed and operators in the search space are treated as the weighted sum of candidate operators; such methods have difficulty handling operators with different strides. One-shot NAS (Guo et al., 2019; Bender et al., 2018) combines super-nets, which act as the collection of weights shared by all sub-architectures, with evolutionary search; there, the difficulty is ensuring a strong correlation between the one-shot and stand-alone performances of the sub-architectures.

Figure 2: (a) The basic architecture of stride selection in the super-net for feature generation, laid out as a stride-versus-depth grid; the path shown represents one of its sub-architectures, and the colored nodes are the corresponding output features of the encoder network. (b) An illustration of the feature utilization search; the formula k = k0 + log2(√(wh)/base_scale) is the rule used by FPN to assign RoIs to multi-scale features, and different ranges of k (k < 2, 2 ≤ k < 3, 3 ≤ k < 4, 4 ≤ k < 5, k ≥ 5) distinguish the object groups as well as their detection heads (Heads 1–5). The solid lines show one example of a feature utilization strategy; the dotted lines indicate that each detection head can select any of the output features.

Multi-scale Object Detection. SSD (Liu et al., 2016) uses multi-scale features generated by different stages of the backbone network to detect objects of different scales. Feature pyramid architectures are utilized in FPN (Lin et al., 2017) and RetinaNet (Lin et al., 2020) to obtain multi-scale features. SNIP (Singh & Davis, 2018) relies on an image pyramid to deal with multi-scale detection. Frameworks with multi-scale features are prevalent because objects of different scales appear in the same image.

Neural Architecture Search for Object Detection. DetNAS (Chen et al., 2019), NAS-FCOS (Wang et al., 2019) and Auto-FPN (Xu et al., 2019) focus on the architecture of the top-down pathway and feature fusion. SM-NAS (Yao et al., 2019) and CR-NAS (Liang et al., 2020) try to adjust the computation occupied by different parts of the detector. Several works also aim to improve FPN using NAS. NAS-FPN (Ghiasi et al., 2019) improves detection performance by searching for better connections within the feature pyramid network; it is limited by leaving the overall encoder architecture unmodified, and it does not take multi-scale feature utilization into account. EfficientDet (Tan et al., 2019) and Auto-FPN (Xu et al., 2019) search for better feature fusion for FPN with differentiable methods, and other related works such as Liu et al. (2019a) conduct similar modifications. Recently, SpineNet (Du et al., 2019) proposed a backbone search method with scale-permuted features and cross-scale connections based on reinforcement learning. Our work has several major differences from it.
First, the search space of MSNAS is designed so that each operator in the network can downsample or upsample, rather than merely permuting scales, which yields a much larger search space than SpineNet's. Second, our work considers the complete multi-scale feature production, while SpineNet only focuses on the architecture of the encoder network. Lastly, our method is based on one-shot search instead of the reinforcement learning used in SpineNet, and is therefore much more efficient, requiring far less computation.

3 METHOD

We start by discussing multi-scale feature production for the object detection network in Section 3.1. Section 3.2 introduces how to build the search space and search for proper strides in detection networks to obtain better multi-scale features. Section 3.3 presents how to search for an appropriate feature utilization strategy. Finally, details of super-net training and the search strategy are described in Section 3.4.

3.1 FEATURE PRODUCTION FOR THE DETECTION NETWORK

In this section, we discuss the production of multi-scale features and define the problem in detail. As noted above, the basic idea of handling the multi-scale detection problem can be summarized as feature production: the input image is first encoded into a series of feature maps with various resolutions, and objects are then detected from these features according to their scales. One option is to produce the feature for each scale range with a separate neural network, like the featurized image pyramid discussed in Lin et al. (2017); variants rely on an image pyramid, like SNIP (Singh & Davis, 2018). We instead employ a single neural network and obtain multi-scale features from its intermediate features, because deep neural networks are adept at encoding an image into features and can encode information of different scales into features with different resolutions. We then face the problem of how to utilize these features, since N features are available for K object groups. It is therefore reasonable to split feature production into feature generation and feature utilization.

More specifically, the problem of multi-scale feature production can be defined as the mapping $\phi$ in Equation 1. When we divide the problem into feature generation and feature utilization, as in Equations 2 and 3, $\phi$ is approximated by $g \circ f$:

$\phi: \mathbb{R}^{3\times W\times H} \to \{\mathbb{R}^{C_i\times W_i\times H_i}\}_{i=1}^{K}$   (1)

$f: \mathbb{R}^{3\times W\times H} \to \{\mathbb{R}^{C_i\times W_i\times H_i}\}_{i=1}^{N}$   (2)

$g: \{\mathbb{R}^{C_i\times W_i\times H_i}\}_{i=1}^{N} \to \{\mathbb{R}^{C_i\times W_i\times H_i}\}_{i=1}^{K}$   (3)

Only optimizing feature generation, as previous works do, is likely suboptimal. So instead of optimizing feature generation under a fixed feature utilization strategy, we jointly optimize feature generation and utilization as a whole.
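As a minimal illustration of this decomposition (our reading of Equations 1–3, not the authors' code), feature generation f can be viewed as a module returning N feature maps, and feature utilization g reduces to an index map that picks one of them for each of the K detection heads; the module names, channel widths, and strides below are illustrative assumptions:

```python
import torch
import torch.nn as nn

class FeatureGeneration(nn.Module):
    """f: image -> N intermediate feature maps (a stand-in for the searched encoder)."""
    def __init__(self, channels=16, n_maps=4):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        # Each block halves the resolution in this sketch; in MSNAS the stride of
        # each block (0.5, 1, or 2) is itself searched.
        self.blocks = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, stride=2, padding=1) for _ in range(n_maps)
        )

    def forward(self, x):
        feats, x = [], self.stem(x)
        for block in self.blocks:
            x = block(x)
            feats.append(x)
        return feats  # the N generated features

def feature_utilization(feats, head_to_feat):
    """g: select one of the N features for each of the K heads; the (searched)
    assignment may route several heads to the same feature map."""
    return [feats[i] for i in head_to_feat]

f = FeatureGeneration()
feats = f(torch.randn(1, 3, 64, 64))
head_feats = feature_utilization(feats, head_to_feat=[0, 1, 1, 3])  # K = 4 heads
```

Note that two heads sharing index 1 above is exactly the flexibility that the FPN-style one-to-one mapping forbids.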
3.2 FEATURE GENERATION

The resolution of feature maps in a network changes through downsampling or upsampling operators. The network architecture of FPN follows a downsampling-then-upsampling style, as Figure 1(a) shows. By allowing a more flexible choice of the positions and numbers of the scale-changing operations, we can obtain many more promising candidates for generating better multi-scale features. We implement this by searching the stride of each block in the network.

Search space. By deconstructing and generalizing the prevalent feature pyramid architecture, the basic search space is built as a stride-variable straight structure. A super-net of MSNAS with depth N consists of N mixed-blocks, MB_1, MB_2, ..., MB_N. For each mixed-block, three possible strides are provided: 0.5, 1, and 2. A block with stride 0.5 is implemented as an upsampling block, i.e., an interpolation operator followed by a convolution, which doubles the width and height of the feature map. Blocks that do not change the resolution of the feature map are referred to as normal blocks. The resolution of the feature output by a mixed-block can thus be twice, half, or the same as that of the input feature, as illustrated in Figure 2(a). To accommodate operators with different strides within a mixed-block, and hence output sizes that vary across the operators of one mixed-block, the super-net is designed as a path-wise structure as in Guo et al. (2019). A path in the super-net is treated as valid if no block outputs a feature larger than a quarter or smaller than 1/64 of the input image. Invalid paths are excluded both during super-net training and during sampling in the evolutionary search.

Lateral connections. Lateral connections are built according to the current sub-architecture, as Figure 1(b) shows. One additional 1×1 lateral convolution, attached after every mixed-block, is available for later cross-block connections. In scale-decreasing architectures, blocks can be grouped by the resolution of their output features, and each group is denoted as one stage. Analogously, we use "stage" to refer to a group of adjacent blocks with the same output resolution, e.g., one downsampling or upsampling block and the following normal blocks. The feature map of the last mixed-block of a stage is merged with the lateral feature by element-wise addition, as Equations 4–6 show:

$x_i = MB_i(x_{i-1}) + lat_i$   (4)

$lat_i = \begin{cases} LateralConv_{r_i}(x_{r_i}), & \text{if } (stride_{i+1} \neq 1) \text{ or } (i = N-1) \\ 0, & \text{otherwise} \end{cases}$   (5)

where

$r_i = \max\{\, k \mid (\textstyle\prod_{k<j\le i} stride_j = 1) \text{ and } (stride_{k+1} \neq 1) \,\}$   (6)

Here $lat_i$ denotes the lateral connection that block i combines with. If block i is the last block of a stage or the end of the encoder network, as described by Equation 5, $lat_i$ is generated by the lateral convolution $LateralConv_{r_i}$. Among the blocks whose output feature has the same resolution as block i's, block $r_i$ is the nearest one belonging to a different stage. This can be regarded as an extended version of the lateral connections in the original FPN structure.
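To make the indexing concrete, the following sketch implements our reading of Equation 6 together with the path-validity check from the search space description; blocks are 0-indexed here, and the stem stride of 4 (i.e., the input to the first mixed-block is already at 1/4 resolution) is an assumption:

```python
def lateral_source(strides, i):
    """r_i from Eq. 6: among earlier blocks whose output resolution equals block i's
    (the product of strides in between is 1), take the nearest one that ends a stage
    (stride_{k+1} != 1). Returns None when the candidate set is empty."""
    candidates = []
    for k in range(i):
        prod = 1
        for j in range(k + 1, i + 1):
            prod *= strides[j]
        if prod == 1 and strides[k + 1] != 1:
            candidates.append(k)
    return max(candidates) if candidates else None

def path_is_valid(strides, stem_stride=4):
    """A path is valid if no block's output is larger than 1/4 or smaller than 1/64
    of the input image, i.e., the running downsampling factor stays in [4, 64]."""
    s = stem_stride
    for st in strides:
        s *= st
        if not (4 <= s <= 64):
            return False
    return True

strides = [2, 1, 0.5, 2, 2, 1]      # one candidate path of mixed-block strides
print(path_is_valid(strides))        # True: running factors are 8, 8, 4, 8, 16, 16
print(lateral_source(strides, 3))    # 1: block 1 matches block 3's resolution and ends a stage
```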
3.3 FEATURE UTILIZATION

In most multi-scale detection frameworks, objects are assigned to feature maps according to their scales under a predefined strategy. In this section, we discuss how to build the search space of feature utilization with respect to the feature generation network. Objects are split into G groups, and there is one detection head for each group, so a feature utilization strategy reduces to selecting, for each detection head, the resolution of its feature map from the generated multi-scale features. Figure 2(b) shows one example of feature utilization: three feature maps of different resolutions are available, and objects in different scale ranges may be assigned to the same feature map, which is very different from previous predefined strategies. The dotted lines represent the possibility of connecting to a feature of any resolution provided by the encoder network.

When searching for feature utilization, the exploration for better features for the object groups is carried out within a reduced search space. For efficiency, several constraints are imposed, as Equation 7 shows:

$s_i \le s_{i+1} \;\; \forall\, 0 \le i < G-1; \qquad \min(s) \le 8; \qquad \max(s) \ge 8; \qquad \min(s) \neq \max(s)$   (7)

Here s is an array of length G, where G equals the number of object groups described above. The i-th item of s, denoted $s_i$, is the selected stride (the downsampling factor with respect to the input image) of the feature for the corresponding object group. For example, s = (4, 8, 16, 32, 64) is the configuration corresponding to FPN. We assume s is a monotonic sequence, based on the insight that smaller objects should be assigned to finer-resolution features, while larger ones are more compatible with coarser ones. In addition, degenerate pyramid structures are excluded from the super-net: we prefer to focus on hierarchical architectures and to avoid the extreme memory consumption of some special architectures.
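Under our reconstruction of Equation 7 (the pivot stride of 8 and the exact inequality directions are inferred from the garbled original and from the FPN example above), the constraint check is a few lines:

```python
def utilization_is_valid(s):
    """Check the feature-utilization constraints of Eq. 7 on the per-head strides s:
    monotone non-decreasing, straddling stride 8, and non-degenerate (min != max)."""
    monotone = all(s[i] <= s[i + 1] for i in range(len(s) - 1))
    return monotone and min(s) <= 8 <= max(s) and min(s) != max(s)

print(utilization_is_valid((4, 8, 16, 32, 64)))  # True: the FPN configuration
print(utilization_is_valid((8, 8, 8, 8, 8)))     # False: a degenerate "pyramid"
print(utilization_is_valid((4, 4, 8, 8, 16)))    # True: heads may share a resolution
```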
3.4 SUPER-NET TRAINING AND SEARCH STRATEGY

Since features with different resolutions are difficult to combine by element-wise addition, one-shot search strategies are highly compatible with our search space. When training the super-net, one valid path fulfilling all the requirements of Sections 3.2 and 3.3 is randomly sampled to optimize the weights in the super-net. Inspired by Zhang et al. (2020), we treat the super-net as a good pre-trained model: although the raw super-net weights do not rank random samples well, a much better ranking can be obtained after a few iterations of individual fine-tuning. Fine-tuning each architecture individually for a few iterations not only adapts the globally optimized shared weights toward more specialized weights but also refreshes the statistics of the batch normalization operators, and its additional computation cost is marginal in the overall pipeline.

Evolutionary search is adopted after super-net training, as Algorithm 1 shows. The function GetValidRandomSample(n) returns n valid random samples as described in the previous paragraph. The evolution process starts from a population of size P. Variation operations are performed both on the encoder and on the strides of the heads' selected features: CrossoverEncoder and MutationEncoder operate on the stride values in the encoder network, while CrossoverFeatureStride and MutationFeatureStride operate on the selected stride values of feature utilization. Because the set of valid children within the computation constraints is not continuous with respect to crossover and mutation, some iterations may fail to generate enough children, and the search would then terminate at a local optimum of the search space. Following Liu et al. (2020), a random set of new children, drawn from R attempts at valid samples with various computation costs, is therefore appended to the population proposals. In this way, both exploitation and exploration of the search space are encouraged.

Algorithm 1: Evolution Process
Input: population size P, total evolution iterations T, max variation attempts M, attempts of random children R, constraints C, number of top samples to return k
Output: the top architectures with the best one-shot performance that meet both the validity requirements and the computation constraints
 1: pop_0 := GetValidRandomSample(P)
 2: for i = 1 : T do
 3:   pop_i := ∅
      // Generate children by crossover and mutation
 4:   j := 0
 5:   while j < M and |pop_i| ≤ P do
 6:     children := CrossoverEncoder(pop_{i-1}) ∪ MutationEncoder(pop_{i-1}) ∪ CrossoverFeatureStride(pop_{i-1}) ∪ MutationFeatureStride(pop_{i-1})
        children := Select(children, C)
 7:     pop_i := pop_i ∪ children
 8:     j := j + 1
 9:   end while
10:   // Add random children
11:   random_children := GetValidRandomSample(R)
12:   random_children := Select(random_children, C)
13:   pop_i := pop_i ∪ random_children
14:   pop_i := Topk(pop_i ∪ pop_{i-1}, P)
15: end for
16: return Topk(pop_T, k)

4 EXPERIMENTS

Experiments are presented in the following sections. Section 4.1 describes the implementation details, Section 4.2 shows the main results of MSNAS along with the FPN baselines, ablation experiments are conducted and discussed in Section 4.3, and the performance of MSNAS compared with other methods is reported in Section 4.4.

4.1 IMPLEMENTATION DETAILS

Dataset. COCO (Lin et al., 2014) is one of the most commonly used datasets for object detection and instance segmentation. It contains a training set of around 118K images, a validation set of around 5K images, and a test-dev set of about 20K images; the annotations cover 80 categories of common objects.

Super-net training details. We train our super-net and retrain the best architectures under the same settings. An input image is resized so that the shorter side is no more than 800 and the longer side is less than 1333; both sides are then padded to be divisible by 64. The models are trained from scratch with a 4x schedule and a batch size of 32 images on 16 GPUs. The learning rate is initialized to 0.00125 and increases to 0.04 after a warm-up epoch; it is then divided by 10 at the 42nd and 47th epochs. The weight decay is set to 1e-4 and the momentum to 0.9. Each architecture sample is fine-tuned for a few iterations (100 iterations) with a batch size of 32 at a learning rate of 0.004 before testing and evaluation. The evolutionary search process runs for 20 iterations with a population size of 50; 50 children are generated to update the population in each iteration, and only those whose computation falls within a 1% gap of the target FLOPs are considered valid children. As we focus on the design of the encoder network and multi-scale feature utilization, only the computation of the encoder and the RPN head is counted during the search, so Table 1 reports the FLOPs of the first stage of the network; for better comparison, we also report results with the FLOPs of the entire network, as shown in Table 6. Since the real input size differs from image to image, as indicated above, a fixed approximate input size is used when computing the FLOPs of the architectures: in both Table 1 and Table 6, the approximate value for architectures with 800×1333 input is set to 832×1280, and that for networks with 600×1000 input is 576×1024. About 10 individuals are appended to the population randomly in every iteration. Finally, five of the top samples after the search procedure are retrained to compute the statistics.
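As a rough illustration of how such FLOPs estimates can be produced (a deliberate simplification, not the authors' counting script), every mixed-block can be approximated by a single 3×3 convolution evaluated at its output resolution; the stem stride of 4 and the fixed input size are assumptions carried over from the sketches above:

```python
def conv_flops(c_in, c_out, k, h_out, w_out):
    """Multiply-accumulate count of one k x k convolution producing an h_out x w_out map."""
    return c_in * c_out * k * k * h_out * w_out

def path_flops(strides, channels, input_hw=(832, 1280), stem_stride=4):
    """Rough FLOPs of one searched path, approximating each mixed-block by a single
    3x3 convolution at its output resolution (real blocks also contain the
    interpolation and lateral convolutions, so this undercounts)."""
    h, w = input_hw[0] // stem_stride, input_hw[1] // stem_stride
    total = 0
    for st in strides:
        h, w = int(h / st), int(w / st)
        total += conv_flops(channels, channels, 3, h, w)
    return total

print(path_flops([2, 1, 0.5, 2, 2, 1], channels=180) / 1e9, "GFLOPs (rough)")
```

Estimates like this are what the 1% FLOPs gap in the child-validity check is applied to.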
Detection network details. Following He et al. (2019), the bounding-box head at the second stage, originally composed of two fully-connected layers, is replaced by a structure with two convolutions and one fully-connected layer. We apply synchronized batch normalization to both the encoder and the bounding-box head. Blocks with different strides share the same number of channels, which we adjust to obtain a proper distribution of computation in the search space: an ideal search space contains a large proportion of architectures with FLOPs close to the target. Following this principle, the numbers of channels for MSNAS-R18, MSNAS-R34, and MSNAS-R50 are set to 180, 160, and 144, respectively.

4.2 RESULTS

Table 1: Experimental results with respect to the FPN baselines.

| Baseline | Baseline mAP | FLOPs (Encoder+RPN) | MSNAS (Ours) | Mean mAP | Var mAP | Max mAP |
|----------|--------------|---------------------|--------------|----------|---------|---------|
| R18-FPN  | 34.6         | 145.45              | MSNAS-R18    | 36.94    | 0.03    | 37.2    |
| R34-FPN  | 37.7         | 182.20              | MSNAS-R34    | 38.86    | 0.0784  | 39.2    |
| R50-FPN  | 38.3         | 197.33              | MSNAS-R50    | 39.3     | 0.028   | 39.5    |

Table 1 shows the main results of MSNAS compared with the FPN counterparts. The best searched architectures of MSNAS achieve 37.2%, 39.2%, and 39.5% mAP at computation comparable to ResNet18-FPN (34.6%), ResNet34-FPN (37.7%), and ResNet50-FPN (38.3%), a remarkable gain of 2.6%, 1.5%, and 1.2% mAP. Moreover, the best samples in all experiments outperform the manually designed baseline networks on average, with relatively small variance. In particular, the best sample of MSNAS-R18 performs comparably to ResNet34-FPN while requiring 20% less computation, and the maximum mAP of the top samples from the MSNAS-R34 search space exceeds that of ResNet50-FPN.

Computation cost. The super-nets are trained with a 4x schedule for around 30 hours on 16 GPUs, and the evolutionary search stage costs around 3 hours per iteration, about 60 hours in total. Around 90 hours are therefore spent searching for the optimal architectures; this could be further reduced if better schedules and strategies for training detectors from scratch are proposed.

4.3 ABLATION STUDY

Effectiveness of feature utilization and feature generation search. To verify the effectiveness of feature utilization, we compare a fixed predefined feature selection against our proposed search-based feature selection. For the fixed predefined FPN-style feature utilization, all sub-networks in the search space extract feature maps following the same strategy as FPN. Results are shown in Table 2. Feature utilization search in MSNAS yields a large improvement over the predefined feature selection of the original FPN, and comparing ResNet18-FPN with the FPN-style searched architectures shows a +0.7% mAP gain from searching the strides of the encoder network alone. We also find that the correlation between the one-shot and stand-alone performances is weaker in the FPN-style feature utilization experiment, according to Kendall's tau listed in Table 2; we attribute this to an intensified discontinuity among the paths inside the super-net.
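Kendall's tau between the one-shot and stand-alone rankings can be computed directly with SciPy; the scores below are made-up placeholders for illustration, not values from the paper:

```python
from scipy.stats import kendalltau

# One-shot scores (super-net, after brief fine-tuning) vs. stand-alone mAP of the
# same candidate architectures.
oneshot    = [27.1, 26.4, 26.9, 25.8, 26.0]
standalone = [37.2, 36.5, 36.9, 36.1, 36.4]

tau, p_value = kendalltau(oneshot, standalone)
print(f"Kendall's tau = {tau:.4f}")  # 1.0 here: the two rankings agree perfectly
```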
Table 2: Ablation of predefined FPN-style versus searched feature utilization.

| Encoder  | Feature Utilization | Mean mAP | Var mAP | Max mAP | Kendall's tau |
|----------|---------------------|----------|---------|---------|---------------|
| ResNet18 | FPN                 | -        | -       | 34.6    | -             |
| Searched | FPN-style           | 34.98    | 0.061   | 35.3    | -0.2247       |
| Searched | Searched            | 36.94    | 0.03    | 37.2    | 0.4495        |

Feature utilization search space constraints. Several constraints on the resolutions of the selected features are applied when searching feature utilization, as described in Section 3.3. Table 3 shows the results with and without the stride range constraints. Although adding the constraints does not yield much performance gain, it reduces the variance of the performance. Moreover, with the constraints the evolutionary process converges faster and the variance of the one-shot performances within the population is reduced, as shown in Figure 3.

Table 3: Ablation of stride range constraints.

| Stride Constraints | Mean mAP | Var mAP | Max mAP |
|--------------------|----------|---------|---------|
| yes                | 36.94    | 0.03    | 37.2    |
| no                 | 36.98    | 0.107   | 37.4    |

Figure 3: Comparison of the evolutionary process (one-shot performance versus evolution iteration) with (a) and without (b) constraints. The evolutionary process clearly converges faster with constraints.

Fine-tuning strategy. In Table 4, Kendall's taus are computed between the one-shot performances from the super-net after fine-tuning and the stand-alone performances of ten random samples with the same computation as the target FLOPs. The rankings clearly improve after a few iterations of fine-tuning for both MSNAS-R18 and MSNAS-R34: MSNAS-R18 with fine-tuning achieves +0.3 mAP in average performance, and MSNAS-R34 obtains a +0.6 mAP gain in both the maximal and the average performance of the top-5 samples.

Table 4: Ablation of the fine-tuning strategy.

| Network   | Fine-tuning | Kendall's tau | Mean mAP | Var mAP | Max mAP |
|-----------|-------------|---------------|----------|---------|---------|
| MSNAS-R18 | yes         | 0.4495        | 37.1     | 0.008   | 37.2    |
| MSNAS-R18 | no          | -0.0899       | 36.84    | 0.1024  | 37.2    |
| MSNAS-R34 | yes         | 0.5683        | 38.86    | 0.0784  | 39.2    |
| MSNAS-R34 | no          | 0.2501        | 38.28    | 0.0936  | 38.6    |

Random children search strategy. As noted in Section 3.4, several random children are added to the population for better exploration. According to Table 5, including random children improves MSNAS-R50, and the average one-shot performance of the top-5 samples increases by more than 0.1 mAP, which is relatively remarkable given the low one-shot values.

Table 5: Ablation of the random children strategy.

| Network   | Random Children | One-shot mAP | Mean mAP | Var mAP | Max mAP |
|-----------|-----------------|--------------|----------|---------|---------|
| MSNAS-R18 | yes             | 27.12        | 37.1     | 0.008   | 37.2    |
| MSNAS-R18 | no              | 27.31        | 36.94    | 0.03    | 37.2    |
| MSNAS-R50 | yes             | 25.99        | 39.3     | 0.028   | 39.5    |
| MSNAS-R50 | no              | 25.85        | 38.84    | 0.0064  | 38.9    |
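A sketch of the brief per-candidate fine-tuning ablated above (100 SGD steps at learning rate 0.004, per Sections 3.4 and 4.1) is given below. It assumes a model whose forward pass returns the total detection loss, which is not stated in the paper; the point of the sketch is that keeping the model in train() mode during these steps also refreshes the BatchNorm running statistics:

```python
from itertools import cycle

import torch

def brief_finetune(model, loader, steps=100, lr=0.004):
    """Specialize the shared super-net weights for one sampled architecture with a
    few SGD steps; momentum-based BatchNorm updates run as a side effect because
    the model stays in train() mode."""
    model.train()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=1e-4)
    batches = cycle(loader)  # reuse the loader if it is shorter than `steps`
    for _ in range(steps):
        images, targets = next(batches)
        loss = model(images, targets)  # assumed: forward returns the total detection loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```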
4.4 COMPARING WITH OTHER METHODS

In Table 6 we compare against other algorithms; an outstanding performance of 40.7% mAP is achieved by MSNAS-R50. The MSNAS-R50-Mask-RCNN and MSNAS-R50 networks are trained with a 6x schedule in order to reach performance comparable to a 2x schedule with pre-trained models. R50-FPN-Faster R-CNN (heavy head) is trained with a 6x schedule with SyncBN, and its second-stage bounding-box head follows the 4conv-1fc format noted in He et al. (2019). For a fairer comparison, we reproduced NAS-FPN, training it with ImageNet pre-trained weights for a 2x schedule. MSNAS-R50 holds an advantage over R50-NAS-FPN (7@256) at comparable computation.

5 CONCLUSION

By analyzing the commonly used detection framework FPN, we find it critical to generate better multi-scale features and to select proper features for the detection heads. Since multi-scale feature production plays an important role in object detection, we propose a one-shot-based method to efficiently search a complete multi-scale feature production strategy within a generalized detection architecture. Instead of only modifying the network architecture for feature generation, we jointly optimize feature generation and feature utilization. The searched architectures achieve outstanding performance compared with state-of-the-art algorithms, and further exploration and improvement are left to future work.
jVcwNQp7MCw
Good results with limited novelty
4: Ok but not good enough - rejection
This paper argues that searching for both the encoder and the anchor assignment (feature utilization) in a unified NAS framework leads to better detector performance. Experiments on the COCO dataset show that the proposed method outperforms the baseline FPN by a significant margin, and that both searching for a better encoder and searching for feature utilization benefit the object detection task.

Pros:
- Dealing with objects of different scales is a fundamental problem in detection, and research in this direction would benefit the vision community.
- Most NAS work on detection focuses on the backbone feature extractor (encoder). This paper brings a new perspective for NAS on the object detection task.
- The experimental results back the authors' claim that searching for feature utilization brings more performance gains than searching for the encoder alone.

Cons:
- The proposed method of searching a path through a super-net across multiple strides is not new; e.g., "Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation" has adopted similar techniques.
- While searching for feature utilization looks novel, from Figure 2 b) it is effectively searching for how to select and fuse features of different resolutions for each detector head. Assigning objects of different scales has been explored by works such as "Scale-Aware Trident Networks for Object Detection" / "Feature Selective Anchor-Free Module for Single-Shot Object Detection" in a non-NAS setting, which it might be interesting to compare the proposed method with. Also, the proposed approach is essentially similar to works that search for FPN connections, e.g. "NAS-FCOS: Fast Neural Architecture Search for Object Detection".
- [minor] Anchor-free object detection has been growing in popularity, which might limit the importance of searching for feature utilization.
- This paper is a bit challenging to follow and would benefit from careful proofreading. See comments below for more details.

Other comments:
- Abstract: "we show that more possible architectures of encoder network and different strategies of feature utilization". By "more possible architectures" does it mean "larger search space"?
- Intro, paragraph 1: "the key to solving" -> "the key to solve"
- Intro, paragraph 2: after introducing FPN / SSD and how they deal with multiple scales, "The basic idea to deal with the multi-scale detection problem can be summarized as below" re-evaluates the FPN architecture; this feels a bit redundant and doesn't read smoothly.
- Intro, paragraph 3: "Also, the predefined rule of feature utilization is very empirical and other alternatives may lead to better performance". I agree with the statement, and there has been quite a bit of work that tries to better align object scales with feature maps (like the papers mentioned above). Please cite and compare.
- Related work: the method proposed in this paper is based on one-shot search and should be simpler than SpineNet. It would be great to have SpineNet in the comparison (especially since SpineNet seems to show better performance).
- Sec 3.1: this section reiterates the motivation of the paper, which has already been stated in the Intro section, and feels a bit redundant.
- Sec 3.2, paragraph 2: "Considering operators with different strides within a mixed-block and the variation of sizes output from different operators in one mixed-block" is very confusing. Does it indicate that there are additional upscaling / downscaling operations within a mixed-block? How is this related to searching through a path in a super-net?
- Sec 3.2, Eq 3.
Should it be $stride_{j+1} \ne 1$?
- Sec 3.3, last paragraph: "descrbied" -> "described"
- Sec 3.3: $s_i$ here indicates the selected feature size w.r.t. the image size (it should actually be the inverse of it; otherwise it would be 1/4, ..., 1/64). It is also unclear how this setup would facilitate using multiple feature maps for one head.
- Sec 3.4: "It's difficult to combine features with different resolutions by element-wise addition, so one-shot based search strategies show great compatibility with our search space." It is not straightforward to understand the connection between combining feature maps at different resolutions, the proposed scheme, and the one-shot search strategy. Please elaborate.
- Sec 3.4: "although the primitive weights in the super-net don't perform well in terms of ranking random samples". This is very confusing. Could the authors clarify?
- Sec 3.4: "statics of batch norm..." — should it be "statistics"?
- Sec 4, first paragraph: the experiment setup (dataset, splits) is mentioned in the appendix but would be better placed here.
- Sec 4.1, first paragraph: "m=in" — what does this mean here?
- Table 1: the table is a bit confusing. It compares ResNet-FPN with MSNAS-ResNet. It might be clearer to indicate the backbone network on each row and set the first column to ResNet-FPN and the others to MSNAS-ResNet.
- Table 1: both the baseline and the proposed method share the same FLOPs. Is this the case? Please clarify.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Multi-scale Network Architecture Search for Object Detection ### Paper Abstract Many commonly-used detection frameworks aim to handle the multi-scale object detection problem. The input image is always encoded to multi-scale features and objects grouped by scale range are assigned to the corresponding features. However, the design of multi-scale feature production is quite hand-crafted or partially automatic. In this paper, we show that more possible architectures of encoder network and different strategies of feature utilization can lead to superior performance. Specifically, we propose an efficient and effective multi-scale network architecture search method (MSNAS) to improve multi-scale object detection by jointly optimizing network stride search of the encoder and appropriate feature selection for detection heads. We demonstrate the effectiveness of the method on COCO dataset and obtain a remarkable performance gain with respect to the original Feature Pyramid Networks. ### Paper Keywords ["Object Detection", "Neural Architecture Search"] ### Paper Content ABSTRACTMany commonly-used detection frameworks aim to handle the multi-scale objectdetection problem. The input image is always encoded to multi-scale features andobjects grouped by scale range are assigned to the corresponding features. How-ever, the design of multi-scale feature production is quite hand-crafted or partiallyautomatic. In this paper, we show that more possible architectures of encodernetwork and different strategies of feature utilization can lead to superior perfor-mance. Specifically, we propose an efficient and effective multi-scale networkarchitecture search method (MSNAS) to improve multi-scale object detection byjointly optimizing network stride search of the encoder and appropriate featureselection for detection heads. We demonstrate the effectiveness of the methodon COCO dataset and obtain a remarkable performance gain with respect to theoriginal Feature Pyramid Networks.1 I NTRODUCTIONRecognizing and localizing objects at vastly different scales is a fundamental challenge in objectdetection. Detection performance for objects with different scales is highly related to features withdifferent properties such as feature resolution, receptive fields, and feature fusion ways. The key tosolving the multi-scale problem in object detection is how to build a multi-scale network that hasproper high-level semantic features for objects with different scales.A recent work in object detection Feature Pyramid Networks(FPN) (Lin et al., 2017) has achieved re-markable success in multi-scale feature design and has been commonly used by many modern objectdetectors (He et al., 2017; Lin et al., 2020; Lu et al., 2019). FPN extracts multi-scale intermediatefeatures from the encoder network and assigns objects grouped by scales to corresponding featuresaccording to a heuristic rule. Another prevalent detection framework, SSD (Liu et al., 2016), con-ducts feature generation by a lighter encoder network without upsampling operators. The basic ideato deal with the multi-scale detection problem can be summarized as below. Given the input image,a series of feature maps with the various resolution are generated to detect objects grouped by scalerange. We note it as multi-scale feature production. 
In FPN and its variants, the multi-scale featureproduction is split into two steps, feature generation and feature utilization. In terms of feature gen-eration, an encoder network composed of blocks provides features with different scales. And thestrategy of feature utilization determines the rule of assigning objects to feature maps. These twosteps are closely related to each other.Although FPN has achieved promising results on multi-scale object detection tasks, the productionof multi-scale features is quite hand-crafted and relies heavily on the experiences of human experts.More specifically, network architectures of FPN are based on a downsample-upsample architecturewhich may not be effective enough. By changing the downsampling and upsampling operation’spositions and numbers, we could obtain many other candidates to generate different multi-scalefeatures. Also, the predefined rule of feature utilization is very empirical and other alternatives maylead to better performance. Therefore we wonder: Can we find network architectures that can buildbetter semantic feature representation for multiple scales? The answer is yes.Recent advances in neural architecture search have shown promising results compared with hand-crafted architecture by human experts (Zoph et al., 2018; Liu et al., 2019b; Cai et al., 2019; Guoet al., 2019). Several works have also focused on neural architecture search in object detectiontasks (Chen et al., 2019; Ghiasi et al., 2019; Du et al., 2019), but generating and utilizing multi-scale1Under review as a conference paper at ICLR 2021EncoderNetworkDetectionHeads(a) FPNEncoderNetworkDetectionHeads (b) MSNASFigure 1: Architecture of ResNet18-FPN and the searched network of MSNAS-R18. MSNAS-R18has different stride values of blocks in the encoding network and a more flexible feature utilizationstrategy. For simplification, P6 of FPN is not included in the figure and only three detection headsare presented.features are still not well explored. DetNAS (Chen et al., 2019) adopts the method mainly designedon image classification to search the operations of backbone networks in object detectors. NAS-FPN (Ghiasi et al., 2019) focuses on searching for better feature-fusion connections in the neckpart of FPN. NAS-FPN doesn’t optimize the whole encoder network and still relies on predefinedbackbone architecture. Recently SpineNet (Du et al., 2019) proposes a search method with scale-permuted features and cross-scale connections by reinforcement learning, but the search cost is quitelarge. All these previous works focus on designing better neural network architectures to generatebetter features given a fixed feature selection strategy. However, they fail to conduct a completeflexible multi-scale feature production strategy.In this paper, we propose a new method to take into account of both aspects and build detectionnetworks with the strong and proper multi-scale feature production strategy by neural architecturesearch. For feature generation, we put forward a network stride search method to generate multiplefeature representations for different scales. Different from the scale-decreasing-increasing archi-tecture of FPN, the scale of our networks can decrease or increase at each block, as illustrated inFigure 1. By stride search for each block, we could significantly explore a wide range of possiblefeature generation designs of multi-resolution networks. Most backbones of object detectors areoriginally designed on image classification without multi-scale problems. 
However, stride configu-ration in the encoder network would be optimized in the context of the multi-scale task. Moreover,more complex cross-scale feature fusions might appear according to more complex internal scalechanges. For feature utilization, we change the previous one-to-one mapping strategy into a moreflexible feature selection. Since each group with objects of the same scale range owns one detectionhead, feature utilization is implemented by selecting proper features for detection heads. Objectsof different scale ranges might be assigned to the same feature map. It is not possible in previousmethods, as shown in Figure 1(b).By jointly optimizing feature generation and utilization of multi-scale features, we search for flexiblebut complete multi-scale feature production strategies. Extensive experiments demonstrate completemulti-scale feature production search is critical to building strong and proper semantic features forobject detection with different scales. On challenging COCO dataset (Lin et al., 2014), our methodobtains a 2.6%, 1.5%, 1.2% mAP improvement with similar FLOPs as ResNet18-FPN, ResNet34-FPN, ResNet50-FPN.2 R ELATED WORKNeural Architecture Search Neural Architecture Search aims to design better network architec-tures automatically. RL-based methods (Zoph et al., 2018; Zoph & Le, 2017) have achieved greatsuccess despite a huge computation cost. In differentiable algorithms (Liu et al., 2019b; Cai et al.,2019), architecture parameters are employed and operators in the search space are considered as2Under review as a conference paper at ICLR 202148163264stridedepthN 1(a) Super-net and one of the paths of feature generationk=k0+log2(wh/base_scale)Head1Head2Head3Head4Head5k<22≤k<33≤k<44≤k<5k>5 (b) One example of feature utiliza-tionFigure 2: Figure (a) shows the basic architecture of stride selection in the super-net for featuregeneration. The path represents one of its sub-architectures. The colored nodes are correspondingoutput features of the encoder network. Figure (b) illustrates the feature utilization search. Theformula in Figure (b) represents the rule to assign RoIs to multi-scale features in FPN. We usedifferent ranges of kto distinguish different object groups as well as their detection heads. The solidlines show an example of feature utilization strategies. The dotted lines imply each detection headcan select any of the output features.the weighted sum of candidate operators. There exist difficulties to deal with operators with dif-ferent strides. Super-nets, acting as the collection of weights shared by all the sub-architectures,and evolutionary search are involved in one-shot NAS (Guo et al., 2019; Bender et al., 2018). Butit’s difficult to ensure strong correlations between one-shot and stand-alone performances of thesub-architectures.Multi-scale Object Detection SSD (Liu et al., 2016) uses multi-scale features generated by differentstages of the backbone network to detect objects of different scales. Feature pyramid architecturesare utilized in FPN (Lin et al., 2017) and RetinaNet (Lin et al., 2020) to obtain multi-scale features.SNIP (Singh & Davis, 2018) includes the image pyramid architecture to deal with multi-scale de-tection. Frameworks with multi-scale features are prevalent as objects of different scales appear inone image.Neural Architecture Search on Object Detection DetNAS (Chen et al., 2019), NAS-FCOS (Wanget al., 2019) and Auto-FPN (Xu et al., 2019) focus on the architecture of the top-down pathway andfeature fusion. 
SM-NAS (Yao et al., 2019) and CR-NAS (Liang et al., 2020) try to adjust the compu-tation occupied by different parts of detectors. Also, there are several works aiming to improve FPNusing NAS. NAS-FPN (Ghiasi et al., 2019) improves detection performance by searching for betterconnections within the feature pyramid network. It is limited without modification to the overallencoder architecture. And it fails to take the multi-scale feature utilization into account. Efficient-Det (Tan et al., 2019) and Auto-FPN (Xu et al., 2019) search better feature fusion for FPN withdifferentiable methods. Other relative works like Liu et al. (2019a) conduct similar modifications.Recently, SpineNet (Du et al., 2019) proposes a backbone search method with scale-permuted fea-tures and cross-scale connections by reinforcement learning. Our work has several major differencesfrom it. First, the search space of MSNAS is designed that each operator in the network can down-sample or upsample instead of permutation, which builds a much larger search space than SpineNet.Second, the complete multi-scale feature production is considered in our work, while SpineNet onlyfocuses on the architecture of the encoder network. Lastly, our method is based on the one-shotsearch method instead of reinforcement learning in SpineNet. Our method is much more efficientand requires much less computation cost than SpineNet.3 M ETHODWe start by discussing multi-scale feature production for the object detection network in Section 3.1.In Section 3.2 we will introduce how to build the search space and search proper stride in detectionnetworks to obtain better multi-scale features. In Section 3.3, how to search the appropriate strategy3Under review as a conference paper at ICLR 2021of feature utilization is presented. Finally, details about super-net training and search strategy aredescribed in Section 3.4.3.1 F EATURE PRODUCTION FOR DETECTION NETWORKIn this section, we’ll discuss the production of multi-scale features and define the problem in detail.As noted above, the basic idea of handling the multi-scale detection problem can be summarized asfeature production. In feature production, the input image is first encoded into a series of featuremaps with various resolutions. Then the objects are detected based on the features according totheir scales. One method is to produce the feature for each scale range by one neural network,like the featurized image pyramid discussed in Lin et al. (2017). Variants include utilizing theimage pyramid, like SNIP (Singh & Davis, 2018). However, we employ only one neural networkand obtain multi-scale features from intermediate features of the network. Because deep neuralnetworks are experts in encoding the image into features. And they are considered to be able toencode information of different scales into features with different resolutions. Then we face theproblem of how to utilize these features since there are Nfeatures available for Kobject groups.Therefore, it is reasonable to split feature production into feature generation and feature utilization.To be more specific, the problem of multi-scale feature production can be defined as Equation 1.When we divide the problem into feature generation and feature utilization, as in Equation 2 andEquation 3,is approximated by gf.:R3WH!fRHiWiHigK (1)f:R3WH!fRHiWiHigN (2)g:fRHiWiHigN!fRHiWiHigK (3)And it is likely that only optimizing feature generation, as previous works do, is not optimal. 
Soinstead of optimizing feature generation for all the feature utilization strategies, we jointly optimizefeature generation and utilization as a whole.3.2 F EATURE GENERATIONResolution of feature maps in one network changes with downsampling or upsampling operators.The network architectures of FPN follow the downsampling-upsampling style, as Figure 1(a) shows.By encouraging a more flexible design of the scale-changing operation’s positions and numbers, wecould obtain many more promising candidates to generate better multi-scale features. We implementthat by searching the stride of each block in the network.Search space By deconstructing and generalizing the prevalent feature pyramid architecture, thebasic search space is built as a stride-variable straight structure. A super-net of MSNAS with thedepth of N consists of N mixed-blocks, MB 1;MB 2;:::;MB N. For each mixed-block, three possi-ble strides are provided, i.e. 0.5, 1, and 2. The block whose stride equals to 0.5 is implemented as anupsampling block with an interpolation operator followed by a convolution to double the width andheight of the feature map. Blocks that don’t change the resolution of the feature map are referred toas normal blocks. The resolution of the feature output by one mixed-block could be twice, half or thesame as the input feature, as illustrated in Figure 2(a). Considering operators with different strideswithin a mixed-block and the variation of sizes output from different operators in one mixed-block,the super-net is designed as a path-wise structure like Guo et al. (2019). One path in the super-net istreated as valid if none of the blocks output feature larger than a quarter or smaller than 1/64 of theinput image. Invalid paths are removed either in the training process of super-net or the samplingduring the evolutionary search.Lateral connections Lateral connections are built according to current sub-architecture, as Fig-ure 1(b) shows. One additional 1 1 lateral convolution attached after every mix-block is availablefor latter cross-block connections. In scale-decreasing architectures, blocks can be grouped by res-olution of output features. Each group is notated as one stage. Similarly, we refer stage to a groupof adjacent blocks with the same output resolution, e.g. one downsampling or upsampling blockand following normal blocks. The feature map of the last mix-block at one stage is merged with thelateral feature by element-wise addition as Equation 4-Equation 6 shows.xi=MBi(xi1) +lati (4)4Under review as a conference paper at ICLR 2021lati=LateralConv ri(xri);if(stride i+16= 1) or(i=N1)0; otherwise(5)whereri= max (fkj(Yk<jistride j= 1) and(stride k+16= 1)g) (6)wherelatimeans the lateral connection of block ito combine with. If block iis the last block ofa stage or the end of the encoder network as described in Equation5, latiis generated by lateralconvolution LateralConv ri. Among blocks with output feature of the same resolution as block i,blockriis the nearest one at a different stage. It can be regarded as an extensive version of lateralconnections in the original FPN structure.3.3 F EATURE UTILIZATIONIn most multi-scale detection frameworks, objects are assigned to feature maps according to theirscales given a predefined strategy. In this section, we will discuss how to build the search space offeature utilization with respect to the feature generation network. Basically, objects are split intoGgroups and there exists one detection head for each group. 
So feature utilization strategy couldbe simplified by selecting the resolution of feature maps from generated multi-scale features foreach detection head. Figure 2(b) shows one example of feature utilization. Three feature maps ofdifferent resolutions are available. In this case, objects in various scale ranges might be assigned tothe same feature map. This is very different from previous predefined strategies. The dotted linesrepresent a possibility of connecting to features of any resolution provided by the encoder network.When searching for feature utilization, the exploration to obtain better features of object groups isimplemented within a lessened search space. For convenience, several constraints are designed formore efficient search, as Equation 7 shows.sisi+180i<Gmins8maxs8mins6= max s(7)Letsbe an array with length of G.Gequals to the number of object groups as described above. Theith item in s, noted as si, represents the selected size with respect to the input image of the feature forthe corresponding object group. For example, s= (4;8;16;32;64)is the configuration counterpartof FPN. sis assumed as a monotonic sequential based on insights to assign multi-scale objects. Thatis to say, smaller objects are considered to be assigned to finer-resolution features, while larger onesare more compatible with coarser ones. Besides, the degraded pyramid structures are excluded inthe super-net. We expect to focus more on hierarchical architectures and avoid extreme memoryconsumption of some special architectures.3.4 S UPER -NET TRAINING AND SEARCH STRATEGYIt’s difficult to combine features with different resolutions by element-wise addition, so one-shotbased search strategies show great compatibility with our search space. During training the super-net, one valid path, fulfilling all the requirements in Section 3.2 and Section 3.3, is randomly sam-pled to optimize weights in the super-net. Inspired by Zhang et al. (2020), we treat the super-netas a good pre-trained model. A better rank could be obtained within a few iterations of individualfine-tuning, although the primitive weights in the super-net don’t perform well in terms of rankingrandom samples. Fine-tuning for each architecture individually for a few iterations not only mod-ulates the globally-optimized shared weights towards more personalized weights but also modifiesthe statistics of batch normalization operators. And the additional computation cost is marginal inthe entire pipeline.Evolutionary search is adopted after the super-net training as Algorithm 1 shows. The functionGetValidRandomSample (n)returns n valid random samples as described in the last paragraph.The evolution process starts from a population with size P. Variation operations are performed onboth the encoder and stride of heads’ selected features. In Algorithm 1, CrossoverEncoder meansdoing crossover concerning the stride values in the encoder network and CrossoverFeatureStridemeans doing a crossover concerning the selected stride values of utilization. MutationEncoder5Under review as a conference paper at ICLR 2021andMutationFeatureStride have similar meanings about doing mutation. Given that the setof valid children within computation constraints is not continuous with respect to crossover andmutation, not enough children could be generated in some iterations and the search process wouldterminate at a local optimum of the search space. Like Liu et al. 
(2020), a random set of newchildren from Rattempts of valid samples with various computations are appended as the proposalsof population. In this way, both exploitation and exploration in the search space are encouraged tobe conducted.Algorithm 1: Evolution ProcessInput: population size P, total evolution iteration T, max variation attempts M, attempts ofrandom children R, Constraints C, return top samples kOutput: the top architectures with the best one-shot performances that meet both the validityrequirements and computation constraints1pop 0:=GetValidRandomSample (P);2fori= 1 :Tdo3popi:=;;// Generate children by crossover and mutation4j:= 0;5 whilej <M andjpopijPdo6children :=CrossoverEncoder (popi1)[MutationEncoder (popi1)[CrossoverFeatureStride (popi1)[MutationFeatureStride (popi1);children :=Select (children;C );7popi:=popi[children ;8j:=j+ 1;9 end10 // Add random children11random _children :=GetValidRandomSample (R);12random _children :=Select (random _children;C );13popi:=popi[random _children ;14popi:=Topk (popi[popi1;P);15end16returnTopk (popT;k)4 E XPERIMENTSExperiments are presented in the following sections. Section 4.1 describes the implementation de-tails. Section 4.2 shows the main results of MSNAS along with their FPN baselines. Ablationexperiments are conducted and discussed in Section 4.3. Finally, the performance of MSNAS andcomparison with other methods are included in Section 4.4.4.1 I MPLEMENTATION DETAILSDataset COCO (Lin et al., 2014) is one of the commonly used dataset for object detection andinstance segmentation. It contains a training set with around 118K images, a validation set witharound 5K images, and a test-dev set with about 20k images. The annotations cover 80 categoriesof common objects.Super-net training details We train our super-net and retrain the best architectures in the samesettings. An input image is resized so that the shorter side is no more than 800 and the longer sideis less than 1333, then both sides will be fulfilled by padding to be divided by 64. The models aretrained from scratch for 4x-long time with a batch size of 32 images on 16 GPUs. The learning rateis initialized as 0.00125 and increases to 0.04 after a warm-up epoch. Then it is divided by 10 at the42nd and 47th epoch. The weight decay is set to 1e-4 and the momentum is 0.9. Each architecturesample is fine-tuned for several iterations(100 iterations) with a batch size of 32 at a learning rate of0.004 before testing and evaluation. The evolutionary search process is repeated for 20 iterations.The population size is 50 and 50 children are generated to update the population in each iteration.Only those with computation within a 1% gap of the target FLOPs will be considered as valid6Under review as a conference paper at ICLR 2021Table 1: Experimental results with respect to their FPN baselines.Baseline Baseline FLOPs MSNAS Mean Var MaxArchitectures mAP (Encoder+RPN) (Ours) mAP mAP mAPR18-FPN 34.6 145.45 MSNAS-R18 36.94 0.03 37.2R34-FPN 37.7 182.20 MSNAS-R34 38.86 0.0784 39.2R50-FPN 38.3 197.33 MSNAS-R50 39.3 0.028 39.5children. As we focus on the design of the encoder network and multi-scale feature utilization, onlythe computations of encoder and RPN head are involved during the search. So in 1, the FLOPs ofthe first stage of the network are used. For better comparison, we report our results with FLOPs ofthe entire network, as shown in 6. 
Since the real image input size differs from image to image asindicated in the previous part, a fixed approximate input size is used when computing FLOPs of thearchitectures. In both 6 and 1, the approximate value for architectures with 800 1333 input is setto 8321280 and that for networks with 600 1000 input is 5761024. About 10 individuals willbe appended to the population randomly in every iteration. Finally, five of the top samples after thesearch procedure are retrained to compute the statistics.Detection network details Following He et al. (2019), the b-box head at the second stage originallycomposed of two fully-connected layers is replaced by a structure with two convolutions and onefully-connected layer. We adopt synchronized batch normalization to both the encoder and the b-box head. Blocks with different strides share the same number of channels, which we adjust to geta proper distribution of computation in the search space. An ideal search space includes a largeproportion of architectures with similar FLOPs as the target FLOPs. Following the principle above,the numbers of channels for MSNAS-R18, MSNAS-R34, and MSNAS-R50 are set to 180, 160, 144respectively.4.2 R ESULTSTable 1 shows the main results of MSNAS comparing with FPN counterparts. As we can see in thetable, best searched architectures of MSNAS achieve mAP at 37.2%, 39.2%, 39.5% at the compara-ble computation with ResNet18-FPN(34.6%), ResNet34-FPN(37.7%) and ResNet50-FPN(38.3%),with a remarkable improvement of 2.6%, 1.5%, 1.2% mAP gain. Moreover, the best samples in allthe experiments outperform the manually-designed baseline networks on average with a relativelysmall variance. In particular, the performance of the best sample of MSNAS-R18 is comparablewith ResNet34-FPN, while the computation of the former one is 20% less than that of the latter one.Also, the maximum of mAP of top samples in the search space of MSNAS-R34 is superior overResNet50-FPN.Computation cost The super-nets are trained with 4x-schedule for around 30 hours on 16 GPUs.And the evolutionary search stage costs around 3 hours per iteration and about 60hours in total.Then around 90 hours are spent to search the optimal architectures. It could be further improved ifbetter schedules and strategies for training detectors from scratch are proposed.4.3 A BLATION STUDYEffectiveness of feature utilization and feature generation search To verify the effectiveness offeature utilization, we conduct experiments to compare fixed predefined feature selection and ourproposed search-based feature selection. For fixed predefined FPN-style feature utilization, all sub-networks in the search space extract feature maps following the same strategies as FPN. Results areshown in Table 2. Feature utilization search in MSNAS shows large improvement compared withthe pre-defined feature selection way in the original FPN. By comparing performances of ResNet18-FPN and FPN-style searched architectures, a +0.7%mAP performance gain is obtained by search-ing stride for encoder network. We find that the correlation between the one-shot performancesand stand-alone samples is weaker for the experiment in FPN-style feature utilization, according toKendall’s tau listed in Table 2. 
We infer that the reason is that the discontinuity among paths inside the super-net intensifies.

Table 2: Ablation experiments of pre-defined FPN-style and searched feature utilization.
Encoder | Feature Utilization | Mean mAP | Var mAP | Max mAP | Kendall's tau
ResNet18 | FPN | - | - | 34.6 | -
Searched | FPN-style | 34.98 | 0.061 | 35.3 | -0.2247
Searched | Searched | 36.94 | 0.03 | 37.2 | 0.4495

Table 3: Ablation experiments of stride range constraints.
Stride Constraints | Mean mAP | Var mAP | Max mAP
yes | 36.94 | 0.03 | 37.2
no | 36.98 | 0.107 | 37.4

Feature utilization search space constraints: Several constraints on the selected features' resolutions are applied when searching feature utilization, as described in Section 3.3. Table 3 shows experimental results with and without the stride range constraints. Although adding the constraints does not bring much performance gain, it reduces the variation in performance. Moreover, the evolutionary process converges faster with the constraints, and the variance of the one-shot performances in the population is reduced when they are added, as shown in Figure 3.

Fine-tuning strategy: In Table 4, Kendall's taus are computed between the one-shot performances from the super-net after fine-tuning and the stand-alone performances of ten random samples with the same computation as the target FLOPs. The ranking clearly improves after fine-tuning for several iterations in both MSNAS-R18 and MSNAS-R34. MSNAS-R18 with fine-tuning achieves +0.3 mAP in average performance, and MSNAS-R34 obtains a +0.6 mAP gain in both the maximal and the average performance of the top-5 samples.

Table 4: Ablation experiments of the fine-tuning strategy.
Network | Fine-tuning | Kendall's tau | Mean mAP | Var mAP | Max mAP
MSNAS-R18 | yes | 0.4495 | 37.1 | 0.008 | 37.2
MSNAS-R18 | no | -0.0899 | 36.84 | 0.1024 | 37.2
MSNAS-R34 | yes | 0.5683 | 38.86 | 0.0784 | 39.2
MSNAS-R34 | no | 0.2501 | 38.28 | 0.0936 | 38.6

Random children search strategy: As noted in Section 3.4, several random children are added to the population for better exploration. According to Table 5, an improvement in MSNAS-R50 can be observed when random children are included. At the same time, the average one-shot performance of the top-5 samples increases by more than 0.1 mAP, which is relatively remarkable given the low absolute values of the one-shot performances.

Table 5: Ablation experiments of the random children strategy.
Network | Random Children | One-shot mAP | Mean mAP | Var mAP | Max mAP
MSNAS-R18 | yes | 27.12 | 37.1 | 0.008 | 37.2
MSNAS-R18 | no | 27.31 | 36.94 | 0.03 | 37.2
MSNAS-R50 | yes | 25.99 | 39.3 | 0.028 | 39.5
MSNAS-R50 | no | 25.85 | 38.84 | 0.0064 | 38.9

4.4 COMPARING WITH OTHER METHODS
In Table 6, a comparison with other algorithms is conducted. An outstanding performance of 40.7% mAP is achieved by MSNAS-R50. The networks of MSNAS-R50-Mask-RCNN and MSNAS-R50 are trained on a 6x-long schedule in order to reach performance comparable to a 2x schedule with pre-trained models. R50-FPN-Faster R-CNN (heavy head) is trained for 6x-long with SyncBN, and its bounding-box head at the second stage follows the 4conv-1fc format noted in He et al. (2019). To perform a fairer comparison, we reproduced NAS-FPN; it is trained with weights pre-trained on ImageNet on a 2x schedule. We can see that MSNAS-R50 has an advantage over R50-NAS-FPN(7@256) at a comparable computation.

[Figure 3 panels: (a) evolutionary process with constraints; (b) evolutionary process without constraints — one-shot performance vs. evolution iteration]
Figure 3: Comparison of the evolutionary process in the ablation of constraints. It is easy to observe that the evolutionary process converges faster with constraints.

5 CONCLUSION
By analyzing the commonly used detection framework FPN, we find it critical to generate better multi-scale features and to select proper features for the detection heads. Given that multi-scale feature production plays an important role in object detection, we propose a one-shot-based method to efficiently search a complete multi-scale feature generation strategy in a generalized detection architecture. Instead of only modifying the network architecture for feature generation, we jointly optimize feature generation and feature utilization. The searched architectures achieve outstanding performance compared with state-of-the-art algorithms. Further exploration and improvement are left to future work.<|im_end|> <|im_start|>assistant ### Review Title Good results with limited novelty ### Review Text This paper argues that searching for both the encoder and the anchor assignment (feature utilization) in a unified NAS framework leads to better detector performance. Experiments on the COCO dataset show that the proposed method outperforms the baseline FPN by a significant margin, and that both searching for a better encoder and searching for feature utilization benefit the object detection task. Pros: - Dealing with objects of different scales is a fundamental problem in detection, and research in this direction would benefit the vision community. - Most NAS work on detection focuses on the backbone feature extractor (encoder). This paper brings a new perspective for NAS on the object detection task. - The experimental results back the authors' claim that searching for feature utilization brings more performance gains than searching for the encoder alone. Cons: - The proposed method for searching a path through a super-net across multiple strides is not new; e.g., "Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation" has adopted similar techniques. - While searching for feature utilization looks novel, Figure 2(b) shows that it is effectively searching for how to select and fuse features of different resolutions for each detector head. Assigning objects of different scales to feature levels has been explored by works such as "Scale-Aware Trident Networks for Object Detection" and "Feature Selective Anchor-Free Module for Single-Shot Object Detection" in a non-NAS setting, which might be interesting to compare the proposed method with. The proposed approach is also essentially similar to works that search for FPN connections, e.g., "NAS-FCOS: Fast Neural Architecture Search for Object Detection". - [minor] Anchor-free object detection has been growing in popularity, which might limit the importance of searching for feature utilization.
- This paper is a bit challenging to follow and would benefit from careful proofreading. See comments below for more details. Other comments: - Abstract: "we show that more possible architectures of encoder network and different strategies of feature utilization". by "more possible architectures" does it mean "larger search space"? - Intro, paragraph 1 "the key to solving" -> "the key to solve" - Intro, paragraph 2 after introducing FPN / SSD and how they deal with multiple scales, "The basic idea to deal with the multi-scale detection problem can be summarized as below" and re-evaluate FPN architecture. this feels a bit redundant and doesn't read smoothly. - Intro, paragraph 3 "Also, the predefined rule of feature utilization is very empirical and other alternatives may lead to better performance". Agree with the statement and there has been quite a few work that tries to better align object scales with feature maps (like the papers mentioned above). Please cite and compare. - Related work. The method proposed in this paper is based on one-shot search and should be simpler than SpineNet. It would be great to have SpineNet in comparison (specially when SpineNet seem to show better performance)? - Sec 3.1: This section reiterate the motivation of this paper, which have been stated in Intro section and feels a bit redundant. - Sec 3.2, paragraph 2. "Considering operators with different strides within a mixed-block and the variation of sizes output from different operators in one mixed-block" this is very confusing. Does it indicate we have additional upscaling / downscaling operations within a mixed-block? How this related to searching through a path in a super-net? - Sec 3.2, Eq 3. Should it be $stride_{j+1} \ne 1$? - Sec 3.3 last paragraph "descrbied" -> "described" - Sec 3.3. $s_i$ here indicates the selected feature size w.r.t. to image size (actually should be the inverse of it? otherwise it would be 1/4, ..., 1/64). It is also unclear how this setup would facilitate using multiple feature maps for one head. - Sec 3.4 "It’s difficult to combine features with different resolutions by element-wise addition, so one-shot based search strategies show great compatibility with our search space." It is not straightforward to understand the connection between combining feature maps at different resolution, proposed scheme and one-shot search strategy. Please elabrate. - Sec 3.4 " although the primitive weights in the super-net don’t perform well in terms of ranking random samples". This is very confusing. Could authors clarify? - Sec 3.4 "statics of batch norm..." it should be "statistics"? - Sec 4: first paragraph. the experiment setup (dataset, splits) is mentioned in appendix but would be better to place it here. - Sec 4.1 first paragraph "m=in" what does this mean here? - Table 1: the table is a bit confusing. It compares ResNet-FPN with MSNAS-Resnet. It might be more clear to indicate the backbone network on each row and set the first column to be Resnet-FPN and others for MSNAS-Resnet - Table 1: Both baseline and proposed method share the same FLOPs. Is this the case? please clarify. ### Review Rating 4: Ok but not good enough - rejection ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
SJGvns0qK7
ICLR.cc/2019/Conference
2019
Bayesian Policy Optimization for Model Uncertainty
["Gilwoo Lee", "Brian Hou", "Aditya Mandalika", "Jeongseok Lee", "Sanjiban Choudhury", "Siddhartha S. Srinivasa"]
Addressing uncertainty is critical for autonomous systems to robustly adapt to the real world. We formulate the problem of model uncertainty as a continuous Bayes-Adaptive Markov Decision Process (BAMDP), where an agent maintains a posterior distribution over latent model parameters given a history of observations and maximizes its expected long-term reward with respect to this belief distribution. Our algorithm, Bayesian Policy Optimization, builds on recent policy optimization algorithms to learn a universal policy that navigates the exploration-exploitation trade-off to maximize the Bayesian value function. To address challenges from discretizing the continuous latent parameter space, we propose a new policy network architecture that encodes the belief distribution independently from the observable state. Our method significantly outperforms algorithms that address model uncertainty without explicitly reasoning about belief distributions and is competitive with state-of-the-art Partially Observable Markov Decision Process solvers.
["Bayes-Adaptive Markov Decision Process", "Model Uncertainty", "Bayes Policy Optimization"]
Published as a conference paper at ICLR 2019Bayesian Policy Optimization forModel UncertaintyGilwoo Lee, Brian Hou, Aditya Mandalika Vamsikrishna, Jeongseok Lee,Sanjiban Choudhury, Siddhartha S. SrinivasaPaul G. Allen School of Computer Science & EngineeringUniversity of Washington{gilwoo,bhou,adityavk,jslee02,sanjibac,siddh}@cs.uw.eduAbstractAddressing uncertainty is critical for autonomous systems to robustly adaptto the real world. We formulate the problem of model uncertainty as acontinuous Bayes-Adaptive Markov Decision Process (BAMDP), where anagent maintains a posterior distribution over latent model parameters givena history of observations and maximizes its expected long-term rewardwith respect to this belief distribution. Our algorithm, Bayesian PolicyOptimization, builds on recent policy optimization algorithms to learn auniversal policy that navigates the exploration-exploitation trade-off to max-imize the Bayesian value function. To address challenges from discretizingthe continuous latent parameter space, we propose a new policy networkarchitecture that encodes the belief distribution independently from theobservable state. Our method significantly outperforms algorithms thataddress model uncertainty without explicitly reasoning about belief distribu-tions and is competitive with state-of-the-art Partially Observable MarkovDecision Process solvers.1 IntroductionAt its core, real-world robotics focuses on operating under uncertainty. An autonomous carmust drive alongside unpredictable human drivers under road conditions that change fromday to day. An assistive home robot must simultaneously infer users’ intended goals as ithelps them. A robot arm must recognize and manipulate varied objects. These examplesshare common themes: (1) an underlying dynamical system with unknown latent parameters(road conditions, human goals, object identities), (2) an agent that can probe the system viaexploration , while ultimately (3) maximizing an expected long-term reward via exploitation .The Bayes-Adaptive Markov Decision Process (BAMDP) framework (Ghavamzadeh et al.,2015) elegantly captures the exploration-exploitation dilemma that the agent faces. Here,the agent maintains a belief, which is a posterior distribution over the latent parameters given a history of observations. A BAMDP can be cast as a Partially Observable MarkovDecision Process (POMDP) (Duff & Barto, 2002) whose state is (s;), wherescorrespondsto the observable world state. By planning in the belief space of this POMDP, the agentbalances explorative and exploitative actions. In this paper, we focus on BAMDP problemsin which the latent parameter space is either a discrete finite set or abounded continuoussetthat can be approximated via discretization . For this class of BAMDPs, the belief is acategorical distribution, allowing us to represent it using a vector of weights.The core problem for BAMDPs with continuous state-action spaces is how to explore thereachable belief space. In particular, discretizing the latent space can result in an arbitrarilylarge belief vector, which causes the belief space to grow exponentially. Approximating thevalue function over the reachable belief space can be challenging: although point-based valueapproximations (Kurniawati et al., 2008; Pineau et al., 2003) have been largely successful forapproximating value functions of discrete POMDP problems, these approaches do not easilyextend to continuous state-action spaces. 
Monte-Carlo Tree Search approaches (Silver & Veness, 2010; Guez et al., 2012) are also prohibitively expensive in continuous state-action spaces: the width of the search tree after a single iteration is too large, preventing an adequate search depth from being reached.

[Figure 1 panels: (a) Training procedure — a Bayes filter feeding batch policy optimization; (b) Network structure — belief b and state s pass through separate encoders into the policy network, which outputs action a]
Figure 1: An overview of Bayesian Policy Optimization. The policy is simulated on multiple latent models. At each timestep of the simulation, a black-box Bayes filter updates the posterior belief and inputs the state-belief to the policy (Figure 1a). Belief (b) and state (s) are independently encoded before being pushed into the policy network (Figure 1b).

Our key insight is that we can bypass learning the value function and directly learn a policy that maps beliefs to actions by leveraging the latest advancements in batch policy optimization algorithms (Schulman et al., 2015; 2017). Inspired by previous approaches that train learning algorithms with an ensemble of models (Rajeswaran et al., 2017; Yu et al., 2017), we examine model uncertainty through a BAMDP lens. Although our approach provides only locally optimal policies, we believe that it offers a practical and scalable solution for continuous BAMDPs.

Our method, Bayesian Policy Optimization (BPO), is a batch policy optimization method which utilizes a black-box Bayesian filter and an augmented state-belief representation. During offline training, BPO simulates the policy on multiple latent models sampled from the source distribution (Figure 1a). At each simulation timestep, it computes the posterior belief using a Bayes filter and inputs the state-belief pair (s, b) to the policy. Our algorithm only needs to update the posterior along the simulated trajectory in each sampled model, rather than branching at each possible action and observation as in MCTS-based approaches.

Our key contribution is the following. We introduce a Bayesian policy optimization algorithm to learn policies that directly reason about model uncertainty while maximizing the expected long-term reward (Section 4). To address the challenge of large belief representations, we introduce two encoder networks that balance the size of belief and state embeddings in the policy network (Figure 1b). In addition, we show that our method, while designed for BAMDPs, can be applied to continuous POMDPs when a compact belief representation is available (Section 4.2). Through experiments on classical POMDP problems and BAMDP variants of OpenAI Gym benchmarks, we show that BPO significantly outperforms algorithms that address model uncertainty without explicitly reasoning about beliefs and is competitive with state-of-the-art POMDP algorithms (Section 5).

2 Preliminaries: Bayesian Reinforcement Learning

The Bayes-Adaptive Markov Decision Process framework (Duff & Barto, 2002; Ross et al., 2008; Kolter & Ng, 2009) was originally proposed to address uncertainty in the transition function of an MDP. The uncertainty is captured by a latent variable φ ∈ Φ, which is either directly the transition function, e.g. φ_{sas'} = T(s, a, s'), or is a parameter of the transition, e.g. physical properties of the system. The latent variable is either fixed or has a known transition function.
We extend the previous formulation of φ to address uncertainty in the reward function as well.

Formally, a BAMDP is defined by a tuple ⟨S, Φ, A, T, R, P₀, γ⟩, where S is the observable state space of the underlying MDP, Φ is the latent space, and A is the action space. T and R are the parameterized transition and reward functions, respectively. The transition function is defined as T(s, φ, a', s', φ') = P(s', φ' | s, φ, a') = P(s' | s, φ, a') P(φ' | s, φ, a', s'). The initial distribution over (s, φ) is given by P₀ : S × Φ → ℝ⁺, and γ is the discount.

Bayesian Reinforcement Learning (BRL) considers the long-term expected reward with respect to the uncertainty over φ rather than the true (unknown) value of φ. The uncertainty is represented as a belief distribution b ∈ B over latent variables φ. BRL maximizes the following Bayesian value function, which is the expected value given the uncertainty:

$$V^{\pi}(s, b) = R(s, b, a') + \gamma \sum_{s' \in S,\, b' \in B} P(s', b' \mid s, b, a')\, V^{\pi}(s', b')$$
$$\qquad\quad\;\; = R(s, b, a') + \gamma \sum_{s' \in S,\, b' \in B} P(s' \mid s, b, a')\, P(b' \mid s, b, a', s')\, V^{\pi}(s', b') \qquad (1)$$

where the action is a' = π(s, b).¹ The Bayesian reward and transition functions are defined in expectation with respect to φ: R(s, b, a') = Σ_{φ∈Φ} b(φ) R(s, φ, a') and P(s' | s, b, a') = Σ_{φ∈Φ} b(φ) P(s' | s, φ, a'). The belief distribution can be maintained recursively, with a black-box Bayes filter performing posterior updates given observations. We describe how to implement such a Bayes filter in Section 4.1.

The use of (s, b) casts the partially observable BAMDP as a fully observable MDP in belief space, which permits the use of any policy gradient method. We highlight that a reactive Bayesian policy in belief space is equivalent to a policy with memory in observable space (Kaelbling et al., 1998). In our work, the complexity of memory is delegated to a Bayes filter that computes a sufficient statistic of the history.

In partially observable MDPs (POMDPs), the states can be observed only via a noisy observation function. Mixed-observability MDPs (MOMDPs) (Ong et al., 2010) are similar to BAMDPs: their states are (s, φ), where s is observable and φ is latent. Although any BAMDP problem can be cast as a POMDP or a MOMDP problem (Duff & Barto, 2002), the source of uncertainty in a BAMDP usually comes from the transition function, not the unobservability of the state as it does with POMDPs and MOMDPs.

3 Related Work

A long history of research addresses belief-space reinforcement learning and robust reinforcement learning. Here, we highlight the most relevant work and refer the reader to Ghavamzadeh et al. (2015), Shani et al. (2013), and Aberdeen (2003) for more comprehensive reviews of the Bayes-Adaptive and Partially Observable MDP literatures.

Belief-Space Reinforcement Learning. Planning in belief space, where part of the state representation is a belief distribution, is intractable (Papadimitriou & Tsitsiklis, 1987). This is a consequence of the curse of dimensionality: the dimensionality of belief space over a finite set of variables equals the size of that set, so the size of belief space grows exponentially. Many approximate solvers focus on one or more of the following: 1) value function approximation, 2) compact, approximate belief representation, or 3) direct mapping of belief to an action. QMDP (Littman et al., 1995) assumes full observability after one step to approximate the Q-value. Point-based solvers, like SARSOP (Kurniawati et al., 2008) and PBVI (Pineau et al., 2003), exploit the piecewise-linear-convex structure of POMDP value functions (under mild assumptions) to approximate the value of a belief state.
Sampling-based approaches, such as BAMCP (Guez et al., 2012) and POMCP (Silver & Veness, 2010),combine Monte Carlo sampling and simple rollout policies to approximate Q-values at theroot node in a search tree. Except for QMDP, these approaches target discrete POMDPsand cannot be easily extended to continuous spaces. Sunberg & Kochenderfer (2018) extendPOMCP to continuous spaces using double progressive widening. Model-based trajectory1The state space Scan be either discrete or continuous. The belief space Bis always continuous,but we usePnotation for simplicity.3Published as a conference paper at ICLR 2019optimization methods (Platt et al., 2010; van den Berg et al., 2012) have also been successfulfor navigation on systems like unmanned aerial vehicles and other mobile robots.Neural network variants of POMDP algorithms are well suited for compressing high-dimensional belief states into compact representations. For example, QMDP-Net (Karkuset al., 2017) jointly trains a Bayes-filter network and a policy network to approximate Q-value.Deep Variational Reinforcement Learning (Igl et al., 2018) learns to approximate the beliefusing variational inference and a particle filter, and it uses the belief to generate actions.Our method is closely related to Exp-GPOMDP (Aberdeen & Baxter, 2002), a model-freepolicy gradient method for POMDPs, but we leverage model knowledge from the BAMDPand revisit the underlying policy optimization method with recent advancements. Peng et al.(2018) use Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) to encodea history of observations to generate an action. The key difference between our method andPeng et al. (2018) is that BPO explicitly utilizes the belief distribution, while in Peng et al.(2018) the LSTM must implicitly learn an embedding for the distribution. We believe thatexplicitly using a Bayes filter improves data efficiency and interpretability.Robust (Adversarial) Reinforcement Learning. One can bypass the burden of main-taining belief and still find a robust policy by maximizing the return for worst-case scenarios.Commonly referred to as Robust Reinforcement Learning (Morimoto & Doya, 2001), thisapproach uses a min-max objective and is conceptually equivalent to H-infinity control (Başar& Bernhard, 2008) from classical robust control theory. Recent works have adapted this ob-jective to train agents against various external disturbances and adversarial scenarios (Pintoet al., 2017; Bansal et al., 2018; Pattanaik et al., 2018). Interestingly, instead of trainingagainst an adversary, an agent can also train to be robust against model uncertainty withan ensemble of models. For example, Ensemble Policy Optimization (EPOpt) (Rajeswaranet al., 2017) trains an agent on multiple MDPs and strives to improve worst-case per-formance by concentrating rollouts on MDPs where the current policy performs poorly.Ensemble-CIO (Mordatch et al., 2015) optimizes trajectories across a finite set of MDPs.Whileadversarialandensemblemodelapproacheshaveproventoberobusteventounmodeledeffects, theymayresultinoverlyconservativebehaviorwhentheworst-casescenarioisextreme.In addition, since these methods do not infer or utilize uncertainty, they perform poorlywhen explicit information-gathering actions are required. Our approach is fundamentallydifferent from them because it internally maintains a belief distribution. As a result, itspolicies outperform robust policies in many scenarios.Adaptive Policy Methods. 
Some approaches can adapt to changing model estimates without operating in belief space. Adaptive-EPOpt (Rajeswaran et al., 2017) retrains an agent with an updated source distribution after real-world interactions. PSRL (Osband et al., 2013) samples from a source distribution, executes an optimal policy for the sample for a fixed horizon, and then re-samples from an updated source distribution. These approaches can work well for scenarios in which the latent MDP is fixed throughout multiple episodes. Universal Policy with Online System Identification (UP-OSI) (Yu et al., 2017) learns to predict the maximum likelihood estimate φ_MLE and trains a universal policy that maps (s, φ_MLE) to an action. However, without a notion of belief, both PSRL and UP-OSI can over-confidently execute policies that are optimal for the single estimate, causing poor performance in expectation over different MDPs.

4 Bayesian Policy Optimization

We propose Bayesian Policy Optimization, a simple policy gradient algorithm for BAMDPs (Algorithm 1). The agent learns a stochastic Bayesian policy that maps a state-belief pair to a probability distribution over actions, π : S × B → P(A). During each training iteration, BPO collects trajectories by simulating the current policy on several MDPs sampled from the prior distribution. During the simulation, the Bayes filter updates the posterior belief distribution at each timestep and sends the updated state-belief pair to the Bayesian policy. By simulating on MDPs with different latent variables, BPO observes the evolution of the state-belief throughout multiple trajectories. Since the state-belief representation makes the partially observable BAMDP a fully observable Belief-MDP, any batch policy optimization algorithm (e.g., Schulman et al. (2015; 2017)) can be used to maximize the Bayesian Bellman equation (Equation 1).

Algorithm 1 Bayesian Policy Optimization
Require: Bayes filter ψ(·), initial belief b₀(φ), P₀, policy π₀, horizon H, n_itr, n_sample
1:  for i = 1, 2, ..., n_itr do
2:      for n = 1, 2, ..., n_sample do
3:          Sample latent MDP M: (s₀, φ₀) ∼ P₀
4:          τ_n ← Simulate(π_{i-1}, b₀, ψ, M, H)
5:      Update policy: π_i ← BatchPolicyOptimization(π_{i-1}, {τ₁, ..., τ_{n_sample}})
6:  return best π
7:  procedure Simulate(π, b₀, ψ, M, H)
8:      for t = 1, ..., H do
9:          a_t ∼ π(s_{t-1}, b_{t-1})
10:         Execute a_t on M, observing r_t, s_t
11:         b_t ← ψ(s_{t-1}, b_{t-1}, a_t, s_t)
12:     return (s₀, b₀, a₁, r₁, s₁, b₁, ..., a_H, r_H, s_H, b_H)

One key challenge is how to represent the belief distribution over the latent state space. To this end, we impose one mild requirement, i.e., that the belief can be represented with a fixed-size vector. For example, if the latent space is discrete, we can represent the belief as a categorical distribution. For continuous latent state spaces, we can use Gaussian or a mixture of Gaussian distributions. When such specific representations are not appropriate, we can choose a more general uniform discretization of the latent space.

Discretizing the latent space introduces the curse of dimensionality. An algorithm must be robust to the size of the belief representation. To address the high-dimensionality of belief space, we introduce a new policy network structure that consists of two separate networks to independently encode state and belief (Figure 1b). These encoders consist of multiple layers of nonlinear (e.g., ReLU) and linear operations, and they output a compact representation of state and belief. We design the encoders to yield outputs of the same size, which we concatenate to form the input to the policy network. The encoder networks and the policy network are jointly trained by the batch policy optimization.
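For illustration, a minimal PyTorch sketch of this two-encoder design follows; the embedding size and the Gaussian-mean policy head are our assumptions (the paper specifies two tanh layers per component and equal-sized encoder outputs, but not these exact dimensions).

```python
import torch
import torch.nn as nn

class BPOPolicy(nn.Module):
    """State and belief are embedded independently, concatenated,
    and fed to the policy head (cf. Figure 1b)."""

    def __init__(self, state_dim, belief_dim, action_dim, hidden=32, embed=32):
        super().__init__()
        # Separate encoders keep the policy robust to large belief vectors.
        self.state_enc = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, embed), nn.Tanh())
        self.belief_enc = nn.Sequential(
            nn.Linear(belief_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, embed), nn.Tanh())
        # Policy head consumes the concatenated, equal-sized embeddings and
        # outputs the mean of a Gaussian action distribution.
        self.policy = nn.Sequential(
            nn.Linear(2 * embed, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim))

    def forward(self, state, belief):
        z = torch.cat([self.state_enc(state), self.belief_enc(belief)], dim=-1)
        return self.policy(z)

# Usage: a batch of 4 state-belief pairs.
pi = BPOPolicy(state_dim=6, belief_dim=25, action_dim=2)
mean_action = pi(torch.randn(4, 6), torch.softmax(torch.randn(4, 25), dim=-1))
print(mean_action.shape)  # torch.Size([4, 2])
```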
Our belief encoder achieves the desired robustness by learning to compactly represent arbitrarily large belief representations. In Section 5, we empirically verify that the separate belief encoder makes our algorithm more robust to large belief representations (Figure 2b).

As with most policy gradient algorithms, BPO provides only a locally optimal solution. Nonetheless, it produces robust policies that scale to problems with high-dimensional observable states and beliefs (see Section 5).

4.1 Bayes Filter for Bayesian Policy Optimization

Given an initial belief b₀, a Bayes filter recursively performs the posterior update:

$$b'(\phi' \mid s, b, a', s') = \eta \sum_{\phi \in \Phi} b(\phi)\, T(s, \phi, a', s', \phi') \qquad (2)$$

where η is the normalizing constant, and the transition function is defined as T(s, φ, a', s', φ') = P(s', φ' | s, φ, a') = P(s' | s, φ, a') P(φ' | s, φ, a', s'). At timestep t, the belief b_t(φ_t) is the posterior distribution over Φ given the history of states and actions, (s₀, a₁, s₁, ..., s_t). When φ corresponds to physical parameters for an autonomous system, we often assume that the latent states are fixed.

Our algorithm utilizes a black-box Bayes filter to produce a posterior distribution over the latent states. Any Bayes filter that outputs a fixed-size belief representation can be used; for example, we use an extended Kalman filter to maintain a Gaussian distribution over continuous latent variables in the LightDark environment in Section 5. When such a specific representation is not appropriate, we can choose a more general discretization of the latent space to obtain a computationally tractable belief update.

For our algorithm, we found that uniformly discretizing the range of each latent parameter into K equal-sized bins is sufficient. From each of the resulting K^{|Φ|} bins, we form an MDP by selecting the mean bin value for each latent parameter. Then, we approximate the belief distribution with a categorical distribution over the resulting MDPs.

We approximate the Bayes update in Equation 2 by computing the probability of observing s' under each discretized φ ∈ {φ₁, ..., φ_{K^{|Φ|}}} as follows:

$$b'(\phi \mid s, b, a', s') = \frac{b(\phi)\, p(s' \mid s, \phi, a')}{\sum_{i=1}^{K^{|\Phi|}} b(\phi_i)\, p(s' \mid s, \phi_i, a')}$$

where the denominator corresponds to η.

As we verify in Section 5, our algorithm is robust to approximate beliefs, which allows the use of computationally efficient approximate Bayes filters without degrading performance. A belief needs only to be accurate enough to inform the agent of its actions.

4.2 Generalization to POMDP

Although BPO is designed for BAMDP problems, it can naturally be applied to POMDPs. In a general POMDP where state is unobservable, we need only b(s), so we can remove the state encoder network.

Knowing the transition and observation functions, we can construct a Bayes filter that computes the belief b over the hidden state:

$$b'(s') = \psi(b, a', o') = \eta \sum_{s \in S} b(s)\, T(s, a', s')\, Z(s, a', o')$$

where η is the normalization constant, and Z is the observation function, Z(s, a', o') = P(o' | s, a'), of observing o' after taking action a' at state s. Then, BPO optimizes the following Bellman equation:

$$V^{\pi}(b) = \sum_{s \in S} b(s)\, R(s, \pi(b)) + \gamma \sum_{b' \in B} P(b' \mid b, \pi(b))\, V^{\pi}(b')$$

For general POMDPs with large state spaces, however, discretizing state space to form the belief state is impractical. We believe that this generalization is best suited for beliefs with conjugate distributions, e.g., Gaussians.

5 Experimental Results

We evaluate BPO on discrete and continuous POMDP benchmarks to highlight its use of information-gathering actions.
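Before detailing the benchmarks, here is a minimal sketch of the discretized belief update from Section 4.1; the Gaussian likelihood helper is an illustrative assumption for near-deterministic latent models, not the paper's prescribed choice.

```python
import numpy as np

def discretized_belief_update(belief, likelihoods):
    """Categorical Bayes update over discretized latent MDPs:
    b'(phi_i) is proportional to b(phi_i) * p(s' | s, phi_i, a')."""
    posterior = belief * likelihoods
    return posterior / posterior.sum()  # the sum plays the role of eta

def gaussian_likelihoods(s, a, s_next, models, predict, sigma=0.1):
    """p(s' | s, phi_i, a') from each latent model's prediction error;
    `predict` stands in for the simulator of each discretized MDP."""
    err = np.array([np.linalg.norm(s_next - predict(s, a, m)) for m in models])
    return np.exp(-0.5 * (err / sigma) ** 2)

belief = np.full(5, 0.2)                      # uniform prior over 5 bins
liks = np.array([0.9, 0.5, 0.1, 0.05, 0.01])  # p(s'|s, phi_i, a') per bin
print(discretized_belief_update(belief, liks))
```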
We also evaluate BPO on BAMDP problems constructed by varying physical model parameters on OpenAI benchmark problems (Brockman et al., 2016). For all BAMDP problems with continuous latent spaces (Chain, MuJoCo), latent parameters are sampled in the continuous space in Step 3 of Algorithm 1, regardless of discretization.

We compare BPO to EPOpt and UP-MLE, robust and adaptive policy gradient algorithms, respectively. We also include BPO-, a version of our algorithm without the belief and state encoders; this version directly feeds the original state and belief to the policy network. Comparing with BPO- allows us to better understand the effect of the encoders. For UP-MLE, we use the maximum likelihood estimate (MLE) from the same Bayes filter used for BPO, instead of learning an additional online system identification (OSI) network as originally proposed by UP-OSI. This lets us directly compare performance when a full belief distribution is used (BPO) rather than a point estimate (UP-MLE). For the OpenAI BAMDP problems, we also compare to a policy trained with TRPO in an environment with the mean values of the latent parameters.

All policy gradient algorithms (BPO, BPO-, EPOpt, UP-MLE) use TRPO as the underlying batch policy optimization subroutine. We refer the reader to Appendix A.1 for parameter details. For all algorithms, we compare the results from the seed with the highest mean reward across multiple random seeds. Although EPOpt and UP-MLE are the most relevant algorithms that use batch policy optimization to address model uncertainty, we emphasize that neither formulates the problems as BAMDPs.

[Figure 2 panels: (a) Benchmark — normalized reward of BPO, BPO-, UP-MLE, and EPOpt on Tiger, LightDark, Chain, Cheetah, Swimmer, and Ant; (b) Discretization — average reward of BPO and BPO- vs. discretization size]
Figure 2: (a) Comparison of BPO with belief-agnostic, robust RL algorithms. BPO significantly outperforms benchmarks when belief-awareness and explicit information gathering are necessary (Tiger, LightDark). It is competitive with UP-MLE when passive estimation or universal robustness is sufficient (Chain, MuJoCo). (b) Scalability of BPO with respect to latent state space discretization for the Chain problem.

As shown in Figure 1b, the BPO network's state and belief encoder components are identical, consisting of two fully connected layers with N_h hidden units each and tanh activations (N_h = 32 for Tiger, Chain, and LightDark; N_h = 64 for MuJoCo). The policy network also consists of two fully connected layers with N_h hidden units each and tanh activations. For discrete action spaces (Tiger, Chain), the output activation is a softmax, resulting in a categorical distribution over the discrete actions. For continuous action spaces (LightDark, MuJoCo), we represent the policy as a Gaussian distribution.

Figure 2a illustrates the normalized performance for all algorithms and experiments. We normalize by dividing the total reward by the reward of BPO. For LightDark, which has negative reward, we first shift the total reward to be positive and then normalize. Appendix A.2 shows the unnormalized rewards.

Tiger (Discrete POMDP). In the Tiger problem, originally proposed by Kaelbling et al. (1998), a tiger is hiding behind one of two doors. An agent must choose among three actions: listen, or open one of the two doors; when the agent listens, it receives a noisy observation of the tiger's position. If the agent opens the door and reveals the tiger, it receives a penalty of -100. Opening the door without the tiger results in a reward of 10. Listening incurs a penalty of -1.
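A minimal sketch of one Tiger step and its belief update follows; the 0.85 listening accuracy is the value from the classic formulation of Kaelbling et al. (1998), assumed here since the text above does not state it.

```python
import random

def tiger_step(tiger_left, action, p_correct=0.85):
    """Return (reward, observation) for one step of the Tiger POMDP."""
    if action == "listen":
        obs = "left" if tiger_left else "right"
        if random.random() > p_correct:            # noisy observation
            obs = "right" if obs == "left" else "left"
        return -1.0, obs
    tiger_found = (action == "open-left") == tiger_left
    return (-100.0 if tiger_found else 10.0), None

def belief_update(b_left, obs, p_correct=0.85):
    """b'(left) is proportional to b(left) * p(obs | tiger on the left)."""
    p_obs_left = p_correct if obs == "left" else 1.0 - p_correct
    num = b_left * p_obs_left
    return num / (num + (1.0 - b_left) * (1.0 - p_obs_left))

b = 0.5
reward, obs = tiger_step(tiger_left=True, action="listen")
b = belief_update(b, obs)
print(reward, obs, round(b, 3))  # belief shifts toward the observed side
```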
In this problem, the optimal agent listens until its belief about which door the tiger is behind is substantially higher for one door vs. the other. Chen et al. (2016) frame Tiger as a BAMDP problem with two latent states, one for each position of the tiger.

Figure 2a demonstrates the benefit of operating in state-belief space when information gathering is required to reduce model uncertainty. Since the EPOpt policy does not maintain a belief distribution, it sees only the most recent observation. Without the full history of observations, EPOpt learns only that opening doors is risky; because it expects the worst-case scenario, it always chooses to listen. UP-MLE leverages all past observations to estimate the tiger's position. However, without the full belief distribution, the policy cannot account for the confidence of the estimate. Once there is a higher probability of the tiger being on one side, the UP-MLE policy prematurely chooses to open the safer door. BPO significantly outperforms both of these algorithms, learning to listen until it is extremely confident about the tiger's location. In fact, BPO achieves close to the approximately optimal return found by SARSOP (19.0 ± 0.6), a state-of-the-art offline POMDP solver that approximates the optimal value function rather than performing policy optimization (Kurniawati et al., 2008).

Table 1: For the Chain problem, a comparison of the 95% confidence intervals of average return for BPO vs. other benchmark algorithms. Values for BEETLE, MCBRL, and Perseus are taken from Wang et al. (2012), which does not report MCBRL performance in the "tied" setting.
Setting | BPO | BEETLE | PERSEUS | MCBRL
Chain-10 (tied) | 364.5 ± 0.5 | 365.0 ± 0.4 | 366.1 ± 0.2 | -
Chain-10 (semitied) | 364.9 ± 0.8 | 364.8 ± 0.3 | 365.1 ± 0.3 | 321.6 ± 6.4

[Figure 3 panels: BPO, EPOpt, and UP-MLE trajectories toward the goal]
Figure 3: Visualization of different algorithms on the LightDark environment. The dashed line indicates the light source. Blue circles are one standard deviation for per-step estimates. The BPO policy moves toward the light to obtain a better state estimate before moving toward the goal.

Chain (Discrete BAMDP). To evaluate the usefulness of the independent encoder networks, we consider a variant of the Chain problem (Strens, 2000). The original problem is a discrete MDP with five states {s_i}, i = 1, ..., 5, and two actions {A, B}. Taking action A in state s_i transitions to s_{i+1} with no reward; taking action A in state s_5 transitions to s_5 with a reward of 10. Action B transitions from any state to s_1 with a reward of 2. However, these actions are noisy: in the canonical version of Chain, the opposite action is taken with slip probability 0.2. In our variant, the slip probability is uniformly sampled from [0, 1.0] at the beginning of each episode.² In this problem, either action provides equal information about the latent parameter. Since active information-gathering actions do not exist, BPO and UP-MLE achieve similar performance.

Figure 2b shows that our algorithm is robust to the size of latent space discretization. We discretize the parameter space with 3, 10, 100, 500, and 1000 uniformly spaced samples. At coarser discretizations (3, 10), we see little difference between BPO and BPO-. However, with a large discretization (500, 1000), the performance of BPO- degrades significantly, while BPO maintains comparable performance. The performance of BPO also slightly degrades when the discretization is too fine, suggesting that this level of discretization makes the problem unnecessarily complex.
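For concreteness, a minimal sketch of this Chain variant with a per-episode latent slip probability follows; the state indexing and the API are our own illustrative choices.

```python
import random

class ChainEnv:
    """Five-state Chain; the slip probability is resampled per episode."""

    def __init__(self):
        self.reset()

    def reset(self):
        self.state = 0                         # s_1 (0-indexed)
        self.slip = random.uniform(0.0, 1.0)   # latent parameter, U[0, 1]
        return self.state

    def step(self, action):                    # action: 'A' or 'B'
        if random.random() < self.slip:        # the opposite action is taken
            action = 'B' if action == 'A' else 'A'
        if action == 'A':
            if self.state == 4:                # s_5: self-loop, reward 10
                return self.state, 10.0
            self.state += 1                    # advance, no reward
            return self.state, 0.0
        self.state = 0                         # 'B': back to s_1, reward 2
        return self.state, 2.0

env = ChainEnv()
print(sum(env.step('A')[1] for _ in range(100)))  # return of an always-A policy
```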
Figure 2a shows the best discretization (10). In this discrete domain, we compare BPO to BEETLE (Poupart et al., 2006) and MCBRL (Wang et al., 2012), state-of-the-art discrete Bayesian reinforcement learning algorithms, as well as Perseus (Spaan & Vlassis, 2005), a discrete POMDP solver. In addition to our variant, we consider a more challenging version where the slip probabilities for both actions must be estimated independently. Poupart et al. (2006) refer to this as the "semi-tied" setting; our variant is "tied." BPO performs comparably to all of these benchmarks (Table 1).

Light-Dark (Continuous POMDP). We consider a variant of the LightDark problem proposed by Platt et al. (2010), where an agent tries to reach a known goal location while being uncertain about its own position. At each timestep, the agent receives a noisy observation of its location. In our problem, the vertical dashed line is a light source; the farther the agent is from the light, the noisier its observations. The agent must decide either to reduce uncertainty by moving closer to the light, or to exploit by moving from its estimated position to the goal. We refer the reader to Appendix A.3 for details about the rewards and observation noise model.²

²A similar variant was introduced in Wang et al. (2012).

[Figure 4 panels: (a) BPO vs. TRPO; (b) average entropy per timestep for Ant, BPO vs. UP-MLE; (c) belief over leg-length bins (80%–120% for legs 1 and 2) at t = 20]
Figure 4: (a) Comparison of BPO and TRPO trained on the nominal environment for a different environment. The task is to move to the right along the x-axis. However, the model at test time differs from the one TRPO trained with: one leg is 20% longer, another is 20% shorter. (b) Comparison of average entropy per timestep by BPO and UP-MLE. The belief distribution collapses more quickly under the BPO policy. (c) Belief distribution at t = 20 during a BPO rollout.

[Figure 5 panels: Cheetah, Swimmer, Ant — BPO (y-axis) vs. EPOpt, UP-MLE, and TRPO (x-axis)]
Figure 5: Pairwise performance comparison of algorithms on MuJoCo BAMDPs. Each point represents an MDP, and its (x, y)-coordinates correspond to the long-term reward by (baseline, BPO). The farther a point is above the line y = x, the more BPO outperforms that baseline. Colors indicate which algorithm achieved higher reward: BPO (red), EPOpt (green), UP-MLE (blue), or TRPO (purple).

This example demonstrates how to apply BPO to general continuous POMDPs (Section 4.2). The latent state is the continuous pose of the agent. For this example, we parameterize the belief as a Gaussian distribution and perform the posterior update with an Extended Kalman Filter, as in Platt et al. (2010).

Figure 3 compares sample trajectories from different algorithms on the LightDark environment. Based on its initial belief, the BPO policy moves toward a light source to acquire less noisy observations. As it becomes more confident in its position estimate, it changes direction toward the light and then moves straight to the goal. Both EPOpt and UP-MLE move straight to the goal without initially reducing uncertainty.

MuJoCo (Continuous BAMDP). Finally, we evaluate the algorithms on three simulated benchmarks from OpenAI Gym (Brockman et al., 2016) using the MuJoCo physics simulator (Todorov et al., 2012): HalfCheetah, Swimmer, and Ant. Each environment has several latent physical parameters that can be changed to form a BAMDP.
We refer thereader to Appendix A.4 for details regarding model variation and belief parameterization.The MuJoCo benchmarks demonstrate the robustness of BPOto model uncertainty. Foreach environment, BPOlearns a universal policy that adapts to the changing belief over thelatent parameters.Figure 4 highlights the performance of BPOonAnt.BPOcan efficiently move to the righteven when the model substantially differs from the nominal model (Figure 4a). It takesactions that reduce entropy more quickly than UP-MLE (Figure 4b). The belief over thepossible MDPs quickly collapses into a single bin (Figure 4c), which allows BPOto adaptthe policy to the identified model.Figure 5 provides a more in-depth comparison of the long-term expected reward achieved byeach algorithm. In particular, for the HalfCheetah environment, BPOhas a higher averagereturn than both EPOptandUP-MLE for most MDPs. Although BPOfares slightly worse9Published as a conference paper at ICLR 2019thanUP-MLE onSwimmer, we believe that this is largely due to random seeds, especiallysince BPO- matches UP-MLE ’s performance (Figure 2a).Qualitatively, all three algorithms produced agents with reasonable gaits in most MDPs. Wepostulate two reasons for this. First, the environments do not require active information-gathering actions to achieve a high reward. Furthermore, for deterministic systems withlittle noise, the belief collapses quickly (Figure 4b); as a result, the MLE is as meaningful asthe belief distribution. As demonstrated by Rajeswaran et al. (2017), a universally robustpolicy for these problems is capable of performing the task. Therefore, even algorithms thatdo not maintain a history of observations can perform well.6 DiscussionBayesian Policy Optimization is a practical and scalable approach for continuous BAMDPproblems. We demonstrate that BPOlearns policies that achieve performance comparable tostate-of-the-art discrete POMDP solvers. They also outperform state-of-the-art robust policygradient algorithms that address model uncertainty without formulating it as a BAMDPproblem. Our network architecture scales well with respect to the degree of latent parameterspace discretization due to its independent encoding of state and belief. We highlightthatBPOis agnostic to the choice of batch policy optimization subroutine. Althoughwe used TRPOin this work, we can also use more recent policy optimization algorithms,such as PPO (Schulman et al., 2017), and leverage improvements in variance-reductiontechniques (Weaver & Tao, 2001).BPOoutperforms algorithms that do not explicitly reason about belief distributions. OurBayesian approach is necessary for environments where uncertainty must actively be reduced,as shown in Figure 2a and Figure 3. If all actions are informative (as with MuJoCo,Chain)and the posterior belief distribution easily collapses into a unimodal distribution, UP-MLEprovides a lightweight alternative.BPOscales to fine-grained discretizations of latent space. However, our experimentsalso suggest that each problem has an optimal discretization level, beyond which furtherdiscretization may degrade performance. 
As a result, it may be preferable to performvariable-resolutiondiscretizationratherthananextremelyfine, single-resolutiondiscretization.Adapting iterative densification ideas previously explored in motion planning (Gammell et al.,2015) and optimal control (Munos & Moore, 1999) to the discretization of latent space mayyield a more compact belief representation while enabling further improved performance.An alternative to the model-based Bayes filter and belief encoder components of BPOislearning to directly map a history of observations to a lower-dimensional belief embedding,analogous to Peng et al. (2018). This would enable a policy to learn a meaningful beliefembedding without losing information from our a priori choice of discretization. Combininga recurrent policy for unidentified parameters with a Bayes filter for identified parametersoffers an intriguing future direction for research efforts.AcknowledgmentsGilwoo Lee is partially supported by Kwanjeong Educational Foundation, and Brian Hou ispartially supported by NASA Space Technology Research Fellowships (NSTRF). This workwas partially funded by the National Institute of Health R01 (#R01EB019335), NationalScience Foundation CPS (#1544797), National Science Foundation NRI (#1637748), theOffice of Naval Research, the RCTA, Amazon, and Honda.10Published as a conference paper at ICLR 2019ReferencesDouglas Aberdeen. A (revised) survey of approximate methods for solving partially observablemarkov decision processes. National ICT Australia, Canberra, Australia , 2003.Douglas Aberdeen and Jonathan Baxter. Scaling internal-state policy-gradient methods forPOMDPs. In International Conference on Machine Learning , 2002.Trapit Bansal, Jakub Pachocki, Szymon Sidor, Ilya Sutskever, and Igor Mordatch. Emer-gent complexity via multi-agent competition. In International Conference on LearningRepresentations , 2018.Tamer Başar and Pierre Bernhard. H-infinity optimal control and related minimax designproblems: a dynamic game approach . Springer Science & Business Media, 2008.Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, JieTang, and Wojciech Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540 , 2016.Min Chen, Emilio Frazzoli, David Hsu, and Wee Sun Lee. POMDP-lite for robust robotplanningunderuncertainty. In IEEE International Conference on Robotics and Automation ,2016.Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deepreinforcement learning for continuous control. In International Conference on MachineLearning , 2016.Michael O’Gordon Duff and Andrew Barto. Optimal Learning: Computational proceduresfor Bayes-adaptive Markov decision processes . PhD thesis, University of Massachusetts atAmherst, 2002.Jonathan Gammell, Siddhartha Srinivasa, and Timothy Barfoot. Batch informed trees(BIT*): Sampling-based optimal planning via the heuristically guided search of implicitrandom geometric graphs. In IEEE International Conference on Robotics and Automation ,2015.Mohammad Ghavamzadeh, Shie Mannor, Joelle Pineau, Aviv Tamar, et al. Bayesianreinforcement learning: A survey. Foundations and Trends Rin Machine Learning , 8(5-6):359–483, 2015.Arthur Guez, David Silver, and Peter Dayan. Efficient Bayes-adaptive reinforcement learningusing sample-based search. In Advances in Neural Information Processing Systems , 2012.Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. 
Neural computation , 9(8):1735–1780, 1997.Maximilian Igl, Luisa Zintgraf, Tuan Anh Le, Frank Wood, and Shimon Whiteson. Deepvariational reinforcement learning for pomdps. arXiv preprint arXiv:1806.02426 , 2018.Leslie Pack Kaelbling, Michael Littman, and Anthony Cassandra. Planning and acting inpartially observable stochastic domains. Artificial Intelligence , 101(1-2):99–134, 1998.Peter Karkus, David Hsu, and Wee Sun Lee. QMDP-Net: Deep learning for planning underpartial observability. In Advances in Neural Information Processing Systems , 2017.Zico Kolter and Andrew Ng. Near-Bayesian exploration in polynomial time. In InternationalConference on Machine Learning , 2009.Hanna Kurniawati, David Hsu, and Wee Sun Lee. SARSOP: Efficient point-based POMDPplanning by approximating optimally reachable belief spaces. In Robotics: Science andSystems, 2008.Michael Littman, Anthony Cassandra, and Leslie Pack Kaelbling. Learning policies forpartially observable environments: Scaling up. In International Conference on MachineLearning , 1995.11Published as a conference paper at ICLR 2019Igor Mordatch, Kendall Lowrey, and Emanuel Todorov. Ensemble-CIO: Full-body dynamicmotion planning that transfers to physical humanoids. In IEEE/RSJ InternationalConference on Intelligent Robots and Systems , 2015.Jun Morimoto and Kenji Doya. Robust reinforcement learning. In Advances in NeuralInformation Processing Systems , 2001.RemiMunosandAndrewMoore. Variableresolutiondiscretizationforhigh-accuracysolutionsof optimal control problems. In International Joint Conference on Artificial Intelligence ,1999.Sylvie CW Ong, Shao Wei Png, David Hsu, and Wee Sun Lee. Planning under uncertaintyfor robotic tasks with mixed observability. The International Journal of Robotics Research ,29(8):1053–1068, 2010.Ian Osband, Daniel Russo, and Benjamin Van Roy. (more) efficient reinforcement learningvia posterior sampling. In Advances in Neural Information Processing Systems , 2013.Christos Papadimitriou and John Tsitsiklis. The complexity of Markov decision processes.Mathematics of Operations Research , 12(3):441–450, 1987.Anay Pattanaik, Zhenyi Tang, Shuijing Liu, Gautham Bommannan, and Girish Chowdhary.Robust deep reinforcement learning with adversarial attacks. In International Conferenceon Autonomous Agents and Multiagent Systems , 2018.Xue Bin Peng, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Sim-to-realtransfer of robotic control with dynamics randomization. In IEEE International Conferenceon Robotics and Automation , 2018.Joelle Pineau, Geoff Gordon, Sebastian Thrun, et al. Point-based value iteration: An anytimealgorithm for POMDPs. In International Joint Conference on Artificial Intelligence , 2003.Lerrel Pinto, James Davidson, Rahul Sukthankar, and Abhinav Gupta. Robust adversarialreinforcement learning. In International Conference on Machine Learning , 2017.Robert Platt, Russ Tedrake, Leslie Pack Kaelbling, and Tomas Lozano-Perez. Belief spaceplanning assuming maximum likelihood observations. In Robotics: Science and Systems ,2010.Pascal Poupart, Nikos Vlassis, Jesse Hoey, and Kevin Regan. An analytic solution to discretebayesian reinforcement learning. In International Conference on Machine Learning , 2006.Aravind Rajeswaran, Sarvjeet Ghotra, Balaraman Ravindran, and Sergey Levine. EPOpt:Learningrobustneuralnetworkpoliciesusingmodelensembles. In International Conferenceon Learning Representations , 2017.Stephane Ross, Brahim Chaib-draa, and Joelle Pineau. Bayes-adaptive POMDPs. 
InAdvances in Neural Information Processing Systems , 2008.John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trustregion policy optimization. In International Conference on Machine Learning , 2015.John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximalpolicy optimization algorithms. arXiv preprint arXiv:1707.06347 , 2017.Guy Shani, Joelle Pineau, and Robert Kaplow. A survey of point-based POMDP solvers.Journal on Autonomous Agents and Multiagent Systems , 27(1):1–51, 2013.David Silver and Joel Veness. Monte-carlo planning in large POMDPs. In Advances inNeural Information Processing Systems , 2010.Matthijs TJ Spaan and Nikos Vlassis. Perseus: Randomized point-based value iteration forPOMDPs. Journal of Artificial Intelligence Research , 24:195–220, 2005.12Published as a conference paper at ICLR 2019Malcolm Strens. A Bayesian framework for reinforcement learning. In International Confer-ence on Machine Learning , 2000.Zachary Sunberg and Mykel Kochenderfer. Online algorithms for POMDPs with continuousstate, action, and observation spaces. In International Conference on Automated Planningand Scheduling , 2018.Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-basedcontrol. In IEEE/RSJ International Conference on Intelligent Robots and Systems , 2012.Jur van den Berg, Sachin Patil, and Ron Alterovitz. Motion planning under uncertaintyusing iterative local optimization in belief space. The International Journal of RoboticsResearch , 31(11):1263–1278, 2012.Yi Wang, Kok Sung Won, David Hsu, and Wee Sun Lee. Monte Carlo Bayesian reinforcementlearning. In International Conference on Machine Learning , 2012.Lex Weaver and Nigel Tao. The optimal reward baseline for gradient-based reinforcementlearning. In Conference on Uncertainty in Artificial Intelligence , 2001.Wenhao Yu, Jie Tan, C. Karen Liu, and Greg Turk. Preparing for the unknown: Learning auniversal policy with online system identification. In Robotics: Science and Systems , 2017.13Published as a conference paper at ICLR 2019AppendixA.1 Training ParametersThe encoder networks and policy network are jointly trained with Trust Region PolicyOptimization (Schulman et al., 2015). We used the implementation provided by Duan et al.(2016) with the parameters listed in Appendix Table 1.Tiger Chain LightDark MuJoCoMax. episode length 100 100 15 200Batch size 500 10000 400 500Training iterations 1000 500 10000 200Discount () 0.95 1.00 1.00 0.99Stepsize (DKL) 0.01 0.01 0.01 0.01GAE 0.96 0.96 0.96 0.96Appendix Table 1: Training parametersA.2 Unnormalized Experimental ResultsHere, we provide unnormalized experimental results for the normalized performance inFigure 2a.BPO BPO - EPOpt UP-MLE TRPOTiger 17.90.6 15.80.6 -19.9 0.0 -9.8 2.0 -Chain-3 260.1 5.6 268.9 5.7 267.9 13.1 242.011.2 -Chain-10 374.0 6.9 355.27.0 267.9 13.1 378.2 15.7 -Chain-1000 360.1 7.1 231.64.2 267.9 13.1 342.4 14.9 -LightDark -166.7 2.4 -867.9 22.1 -1891.2 45.0 -745.9 22.3 -HalfCheetah 115.6 3.5 109.13 3.1 107.0 2.7 108.9 3.3 64.3 6.1Swimmer 36.00.4 36.90.6 27.9 0.4 37.60.4 29.4 0.6Ant 117.0 2.7 115.9 3.9 112.5 3.1 111.7 2.5 116.5 7.5Appendix Table 2: Comparison of the 95%confidence intervals of average return for BPOand other benchmark algorithms across all environments. Algorithms with the highestaverage return on each environment are shown in bold, with multiple algorithms selected ifintervals overlap. BPOachieves the highest return on seven of the eight environments. 
The combined result of BPO and BPO- achieves the highest return on all environments.

A.3 Experimental Detail: LightDark

After each action, an agent receives a noisy observation of its location, which is sampled from a Gaussian distribution, o ∼ N([x, y]ᵀ, w(x)), where [x, y] is the true location. The noise variance is a function of x and is minimized when x = 5: w(x) = ½(x − 5)² + const. There is no process noise.

The reward function is r(s, a) = −½(‖s − g‖² + ‖a‖²), where s is the true agent position and g is the goal position. A large penalty of 5000·‖s_T − g‖² is incurred if the agent does not reach the goal by the end of the time horizon, analogous to the strict equality constraint in the original optimization problem (Platt et al., 2010).

The initial belief is [x, y, σ²] = [2, 2, 2.25]. During training, we randomly sample latent start positions from a rectangular region [2, 2] × [4, 4] and observable goal positions from [0, -2] × [2, 4].

A.4 Experimental Detail: MuJoCo

For ease of analysis, we vary two parameters for each environment. For HalfCheetah, the front and back leg lengths are varied. For Ant, the two front leg lengths are varied. Swimmer has four body links, so the first two link lengths vary together according to the first parameter, and the last two links vary together according to the second parameter. We chose to vary link lengths rather than friction or the damping constant because a policy trained on a single nominal environment can perform well across large variations in those parameters. All link lengths vary by up to 20% of the original length.

To construct a Bayes filter, the 2D-parameter space is discretized into a 5 × 5 grid with a uniform initial belief. We assume Gaussian noise on the observation, i.e. o = f_φ(s, a) + w with w ∼ N(0, σ²), with φ being the parameter corresponding to the center of each grid cell. It typically requires only a few steps for the belief to concentrate in a single cell of the grid, even when a large σ² is assumed.
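For illustration, a minimal sketch of the LightDark observation and reward models above; the value of the variance floor `const` is an assumption, as the text leaves it unspecified.

```python
import numpy as np

def lightdark_observation(pos, const=0.5, rng=np.random):
    """Noisy position observation: variance grows quadratically with the
    distance from the light source at x = 5."""
    x, y = pos
    var = 0.5 * (x - 5.0) ** 2 + const
    return rng.normal(loc=[x, y], scale=np.sqrt(var))

def lightdark_reward(pos, action, goal):
    """r(s, a) = -1/2 (||s - g||^2 + ||a||^2)."""
    return -0.5 * (np.linalg.norm(pos - goal) ** 2 +
                   np.linalg.norm(action) ** 2)

pos, goal = np.array([2.0, 2.0]), np.array([0.0, 0.0])
print(lightdark_observation(pos))  # noisier the farther pos is from x = 5
print(lightdark_reward(pos, np.array([0.1, 0.0]), goal))
```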
Hklm9vec3m
Experimental results are not very convincing.
7: Good paper, accept
Summary: This paper proposes a policy optimization framework for Bayesian RL (BPO). BPO is based on a Bayesian model-based RL formulation. Using a Bayesian approach, it is expected to achieve a better trade-off between exploration and exploitation in RL, and to be able to deal with model uncertainty as well. Experiments are done on multiple domains consisting of both POMDP planning tasks and RL. In general, the paper is well written. Related work is thoroughly discussed. In my opinion, the proposed idea is a solid combination of existing techniques: Monte-Carlo sampling (step 3), Bayes belief updates, and policy gradients in POMDPs (G(PO)MDP). However, this combination is still worth trying and has been shown to scale to larger problems through the use of deep learning. I have the following major concerns about the paper:
- Root sampling (step 3 in Algorithm 1) results in sampled models that are fixed in every simulation. In the pure nature of Bayesian RL, after each update at a new observation (step 11: belief update), the model distribution already changes. Thus, how can this algorithm guarantee an optimal solution for the BAMDP? Can the authors provide more discussion on this point? Does this explain why TRPO (using a mean model) can perform comparably to BPO on Ant?
- The belief representation is based on a Bayes filter which requires discretization. A finely discretized belief would increase the complexity and computation dramatically with the dimension of the latent space. This would result in very slow SIMULATE steps, especially for a long-horizon problem, let alone the further computation for BatchPolicyOptimization.
- I wonder how TRPO using an RNN would perform in this case, instead of using a wrong starting model (an average model)?
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Bayesian Policy Optimization for Model Uncertainty ### Paper Abstract Addressing uncertainty is critical for autonomous systems to robustly adapt to the real world. We formulate the problem of model uncertainty as a continuous Bayes-Adaptive Markov Decision Process (BAMDP), where an agent maintains a posterior distribution over latent model parameters given a history of observations and maximizes its expected long-term reward with respect to this belief distribution. Our algorithm, Bayesian Policy Optimization, builds on recent policy optimization algorithms to learn a universal policy that navigates the exploration-exploitation trade-off to maximize the Bayesian value function. To address challenges from discretizing the continuous latent parameter space, we propose a new policy network architecture that encodes the belief distribution independently from the observable state. Our method significantly outperforms algorithms that address model uncertainty without explicitly reasoning about belief distributions and is competitive with state-of-the-art Partially Observable Markov Decision Process solvers. ### Paper Keywords ["Bayes-Adaptive Markov Decision Process", "Model Uncertainty", "Bayes Policy Optimization"] ### Paper Content Published as a conference paper at ICLR 2019
Bayesian Policy Optimization for Model Uncertainty
Gilwoo Lee, Brian Hou, Aditya Mandalika Vamsikrishna, Jeongseok Lee, Sanjiban Choudhury, Siddhartha S. Srinivasa
Paul G. Allen School of Computer Science & Engineering, University of Washington
{gilwoo,bhou,adityavk,jslee02,sanjibac,siddh}@cs.uw.edu
Abstract
Addressing uncertainty is critical for autonomous systems to robustly adapt to the real world. We formulate the problem of model uncertainty as a continuous Bayes-Adaptive Markov Decision Process (BAMDP), where an agent maintains a posterior distribution over latent model parameters given a history of observations and maximizes its expected long-term reward with respect to this belief distribution. Our algorithm, Bayesian Policy Optimization, builds on recent policy optimization algorithms to learn a universal policy that navigates the exploration-exploitation trade-off to maximize the Bayesian value function. To address challenges from discretizing the continuous latent parameter space, we propose a new policy network architecture that encodes the belief distribution independently from the observable state. Our method significantly outperforms algorithms that address model uncertainty without explicitly reasoning about belief distributions and is competitive with state-of-the-art Partially Observable Markov Decision Process solvers.
1 Introduction
At its core, real-world robotics focuses on operating under uncertainty. An autonomous car must drive alongside unpredictable human drivers under road conditions that change from day to day. An assistive home robot must simultaneously infer users' intended goals as it helps them. A robot arm must recognize and manipulate varied objects.
These examples share common themes: (1) an underlying dynamical system with unknown latent parameters (road conditions, human goals, object identities), (2) an agent that can probe the system via exploration, while ultimately (3) maximizing an expected long-term reward via exploitation. The Bayes-Adaptive Markov Decision Process (BAMDP) framework (Ghavamzadeh et al., 2015) elegantly captures the exploration-exploitation dilemma that the agent faces. Here, the agent maintains a belief, which is a posterior distribution over the latent parameters given a history of observations. A BAMDP can be cast as a Partially Observable Markov Decision Process (POMDP) (Duff & Barto, 2002) whose state is (s, θ), where s corresponds to the observable world state. By planning in the belief space of this POMDP, the agent balances explorative and exploitative actions. In this paper, we focus on BAMDP problems in which the latent parameter space is either a discrete finite set or a bounded continuous set that can be approximated via discretization. For this class of BAMDPs, the belief is a categorical distribution, allowing us to represent it using a vector of weights.
The core problem for BAMDPs with continuous state-action spaces is how to explore the reachable belief space. In particular, discretizing the latent space can result in an arbitrarily large belief vector, which causes the belief space to grow exponentially. Approximating the value function over the reachable belief space can be challenging: although point-based value approximations (Kurniawati et al., 2008; Pineau et al., 2003) have been largely successful for approximating value functions of discrete POMDP problems, these approaches do not easily extend to continuous state-action spaces. Monte-Carlo Tree Search approaches (Silver & Veness, 2010; Guez et al., 2012) are also prohibitively expensive in continuous state-action spaces: the width of the search tree after a single iteration is too large, preventing an adequate search depth from being reached.
[Figure 1: An overview of Bayesian Policy Optimization. The policy is simulated on multiple latent models. At each timestep of the simulation, a black-box Bayes filter updates the posterior belief and inputs the state-belief to the policy (Figure 1a). Belief (b) and state (s) are independently encoded before being pushed into the policy network (Figure 1b). Panels: (a) Training procedure; (b) Network structure.]
Our key insight is that we can bypass learning the value function and directly learn a policy that maps beliefs to actions by leveraging the latest advancements in batch policy optimization algorithms (Schulman et al., 2015; 2017). Inspired by previous approaches that train learning algorithms with an ensemble of models (Rajeswaran et al., 2017; Yu et al., 2017), we examine model uncertainty through a BAMDP lens. Although our approach provides only locally optimal policies, we believe that it offers a practical and scalable solution for continuous BAMDPs.
Our method, Bayesian Policy Optimization (BPO), is a batch policy optimization method which utilizes a black-box Bayesian filter and augmented state-belief representation. During offline training, BPO simulates the policy on multiple latent models sampled from the source distribution (Figure 1a). At each simulation timestep, it computes the posterior belief using a Bayes filter and inputs the state-belief pair (s, b) to the policy.
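To make this training loop concrete, here is a minimal Python sketch of the per-episode simulation described above (the same procedure is made precise in Algorithm 1 later in the paper). The `env`, `policy`, and `bayes_filter` objects are hypothetical placeholders, not the authors' code:

```python
def simulate(policy, bayes_filter, env, b0, horizon):
    """Roll out the Bayesian policy on one latent MDP sampled from the source distribution."""
    s = env.reset()        # observable state of the sampled MDP
    b = b0                 # initial belief over latent parameters
    trajectory = []
    for _ in range(horizon):
        a = policy(s, b)                   # action from the state-belief pair (s, b)
        s_next, r = env.step(a)            # execute on the sampled latent model
        b = bayes_filter(s, b, a, s_next)  # posterior update of the belief
        trajectory.append((s, b, a, r))
        s = s_next
    return trajectory
```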
Our algorithm only needs to update the posterior along the simulated trajectory in each sampled model, rather than branching at each possible action and observation as in MCTS-based approaches.
Our key contribution is the following. We introduce a Bayesian policy optimization algorithm to learn policies that directly reason about model uncertainty while maximizing the expected long-term reward (Section 4). To address the challenge of large belief representations, we introduce two encoder networks that balance the size of belief and state embeddings in the policy network (Figure 1b). In addition, we show that our method, while designed for BAMDPs, can be applied to continuous POMDPs when a compact belief representation is available (Section 4.2). Through experiments on classical POMDP problems and BAMDP variants of OpenAI Gym benchmarks, we show that BPO significantly outperforms algorithms that address model uncertainty without explicitly reasoning about beliefs and is competitive with state-of-the-art POMDP algorithms (Section 5).
2 Preliminaries: Bayesian Reinforcement Learning
The Bayes-Adaptive Markov Decision Process framework (Duff & Barto, 2002; Ross et al., 2008; Kolter & Ng, 2009) was originally proposed to address uncertainty in the transition function of an MDP. The uncertainty is captured by a latent variable, θ ∈ Θ, which is either directly the transition function, e.g. θ_{sas'} = T(s, a, s'), or is a parameter of the transition, e.g. physical properties of the system. The latent variable is either fixed or has a known transition function. We extend the previous formulation of θ to address uncertainty in the reward function as well.
Formally, a BAMDP is defined by a tuple ⟨S, Θ, A, T, R, P0, γ⟩, where S is the observable state space of the underlying MDP, Θ is the latent space, and A is the action space. T and R are the parameterized transition and reward functions, respectively. The transition function is defined as: T(s, θ, a', s', θ') = P(s', θ' | s, θ, a') = P(s' | s, θ, a') P(θ' | s, θ, a', s'). The initial distribution over (s, θ) is given by P0 : S × Θ → R+, and γ is the discount.
Bayesian Reinforcement Learning (BRL) considers the long-term expected reward with respect to the uncertainty over θ rather than the true (unknown) value of θ. The uncertainty is represented as a belief distribution b ∈ B over latent variables θ. BRL maximizes the following Bayesian value function, which is the expected value given the uncertainty:

V(s, b) = R(s, b, a') + γ Σ_{s'∈S, b'∈B} P(s', b' | s, b, a') V(s', b')
        = R(s, b, a') + γ Σ_{s'∈S, b'∈B} P(s' | s, b, a') P(b' | s, b, a', s') V(s', b')    (1)

where the action is a' = π(s, b).[1]
The Bayesian reward and transition functions are defined in expectation with respect to θ: R(s, b, a') = Σ_{θ∈Θ} b(θ) R(s, θ, a'), P(s' | s, b, a') = Σ_{θ∈Θ} b(θ) P(s' | s, θ, a'). The belief distribution can be maintained recursively, with a black-box Bayes filter performing posterior updates given observations. We describe how to implement such a Bayes filter in Section 4.1.
The use of (s, b) casts the partially observable BAMDP as a fully observable MDP in belief space, which permits the use of any policy gradient method. We highlight that a reactive Bayesian policy in belief space is equivalent to a policy with memory in observable space (Kaelbling et al., 1998). In our work, the complexity of memory is delegated to a Bayes filter that computes a sufficient statistic of the history.
In partially observable MDPs (POMDPs), the states can be observed only via a noisy observation function. Mixed-observability MDPs (MOMDPs) (Ong et al., 2010) are similar to BAMDPs: their states are (s, θ), where s is observable and θ is latent.
Although any BAMDP problem can be cast as a POMDP or a MOMDP problem (Duff & Barto, 2002), the source of uncertainty in a BAMDP usually comes from the transition function, not the unobservability of the state as it does with POMDPs and MOMDPs.
3 Related Work
A long history of research addresses belief-space reinforcement learning and robust reinforcement learning. Here, we highlight the most relevant work and refer the reader to Ghavamzadeh et al. (2015), Shani et al. (2013), and Aberdeen (2003) for more comprehensive reviews of the Bayes-Adaptive and Partially Observable MDP literatures.
Belief-Space Reinforcement Learning. Planning in belief space, where part of the state representation is a belief distribution, is intractable (Papadimitriou & Tsitsiklis, 1987). This is a consequence of the curse of dimensionality: the dimensionality of belief space over a finite set of variables equals the size of that set, so the size of belief space grows exponentially. Many approximate solvers focus on one or more of the following: 1) value function approximation, 2) compact, approximate belief representation, or 3) direct mapping of belief to an action. QMDP (Littman et al., 1995) assumes full observability after one step to approximate the Q-value. Point-based solvers, like SARSOP (Kurniawati et al., 2008) and PBVI (Pineau et al., 2003), exploit the piecewise-linear-convex structure of POMDP value functions (under mild assumptions) to approximate the value of a belief state. Sampling-based approaches, such as BAMCP (Guez et al., 2012) and POMCP (Silver & Veness, 2010), combine Monte Carlo sampling and simple rollout policies to approximate Q-values at the root node in a search tree. Except for QMDP, these approaches target discrete POMDPs and cannot be easily extended to continuous spaces. Sunberg & Kochenderfer (2018) extend POMCP to continuous spaces using double progressive widening. Model-based trajectory optimization methods (Platt et al., 2010; van den Berg et al., 2012) have also been successful for navigation on systems like unmanned aerial vehicles and other mobile robots.
[Footnote 1: The state space S can be either discrete or continuous. The belief space B is always continuous, but we use Σ notation for simplicity.]
Neural network variants of POMDP algorithms are well suited for compressing high-dimensional belief states into compact representations. For example, QMDP-Net (Karkus et al., 2017) jointly trains a Bayes-filter network and a policy network to approximate the Q-value. Deep Variational Reinforcement Learning (Igl et al., 2018) learns to approximate the belief using variational inference and a particle filter, and it uses the belief to generate actions. Our method is closely related to Exp-GPOMDP (Aberdeen & Baxter, 2002), a model-free policy gradient method for POMDPs, but we leverage model knowledge from the BAMDP and revisit the underlying policy optimization method with recent advancements. Peng et al. (2018) use Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) to encode a history of observations to generate an action. The key difference between our method and Peng et al. (2018) is that BPO explicitly utilizes the belief distribution, while in Peng et al. (2018) the LSTM must implicitly learn an embedding for the distribution. We believe that explicitly using a Bayes filter improves data efficiency and interpretability.
Robust (Adversarial) Reinforcement Learning.
One can bypass the burden of maintaining a belief and still find a robust policy by maximizing the return for worst-case scenarios. Commonly referred to as Robust Reinforcement Learning (Morimoto & Doya, 2001), this approach uses a min-max objective and is conceptually equivalent to H-infinity control (Başar & Bernhard, 2008) from classical robust control theory. Recent works have adapted this objective to train agents against various external disturbances and adversarial scenarios (Pinto et al., 2017; Bansal et al., 2018; Pattanaik et al., 2018). Interestingly, instead of training against an adversary, an agent can also train to be robust against model uncertainty with an ensemble of models. For example, Ensemble Policy Optimization (EPOpt) (Rajeswaran et al., 2017) trains an agent on multiple MDPs and strives to improve worst-case performance by concentrating rollouts on MDPs where the current policy performs poorly. Ensemble-CIO (Mordatch et al., 2015) optimizes trajectories across a finite set of MDPs. While adversarial and ensemble model approaches have proven to be robust even to unmodeled effects, they may result in overly conservative behavior when the worst-case scenario is extreme. In addition, since these methods do not infer or utilize uncertainty, they perform poorly when explicit information-gathering actions are required. Our approach is fundamentally different from them because it internally maintains a belief distribution. As a result, its policies outperform robust policies in many scenarios.
Adaptive Policy Methods. Some approaches can adapt to changing model estimates without operating in belief space. Adaptive-EPOpt (Rajeswaran et al., 2017) retrains an agent with an updated source distribution after real-world interactions. PSRL (Osband et al., 2013) samples from a source distribution, executes an optimal policy for the sample for a fixed horizon, and then re-samples from an updated source distribution. These approaches can work well for scenarios in which the latent MDP is fixed throughout multiple episodes. Universal Policy with Online System Identification (UP-OSI) (Yu et al., 2017) learns to predict the maximum likelihood estimate θ_MLE and trains a universal policy that maps (s, θ_MLE) to an action. However, without a notion of belief, both PSRL and UP-OSI can over-confidently execute policies that are optimal for the single estimate, causing poor performance in expectation over different MDPs.
4 Bayesian Policy Optimization
We propose Bayesian Policy Optimization, a simple policy gradient algorithm for BAMDPs (Algorithm 1). The agent learns a stochastic Bayesian policy that maps a state-belief pair to a probability distribution over actions, π : S × B → P(A). During each training iteration, BPO collects trajectories by simulating the current policy on several MDPs sampled from the prior distribution. During the simulation, the Bayes filter updates the posterior belief distribution at each timestep and sends the updated state-belief pair to the Bayesian policy. By simulating on MDPs with different latent variables, BPO observes the evolution of the state-belief throughout multiple trajectories.
Since the state-belief representation makes the partially observable BAMDP a fully observable Belief-MDP, any batch policy optimization algorithm (e.g., Schulman et al. (2015; 2017)) can be used to maximize the Bayesian Bellman equation (Equation 1).

Algorithm 1 Bayesian Policy Optimization
Require: Bayes filter ψ(·), initial belief b0(θ), P0, policy π_0, horizon H, n_itr, n_sample
1: for i = 1, 2, ..., n_itr do
2:   for n = 1, 2, ..., n_sample do
3:     Sample latent MDP M: (s0, θ0) ~ P0
4:     τ_n ← Simulate(π_{i-1}, b0, ψ, M, H)
5:   Update policy: π_i ← BatchPolicyOptimization(π_{i-1}, {τ_1, ..., τ_{n_sample}})
6: return best π
7: procedure Simulate(π, b0, ψ, M, H)
8:   for t = 1, ..., H do
9:     a_t ~ π(s_{t-1}, b_{t-1})
10:    Execute a_t on M, observing r_t, s_t
11:    b_t ← ψ(s_{t-1}, b_{t-1}, a_t, s_t)
12:  return (s0, b0, a1, r1, s1, b1, ..., a_H, r_H, s_H, b_H)

One key challenge is how to represent the belief distribution over the latent state space. To this end, we impose one mild requirement, i.e., that the belief can be represented with a fixed-size vector. For example, if the latent space is discrete, we can represent the belief as a categorical distribution. For continuous latent state spaces, we can use Gaussian or mixture of Gaussian distributions. When such specific representations are not appropriate, we can choose a more general uniform discretization of the latent space.
Discretizing the latent space introduces the curse of dimensionality. An algorithm must be robust to the size of the belief representation. To address the high dimensionality of belief space, we introduce a new policy network structure that consists of two separate networks to independently encode state and belief (Figure 1b). These encoders consist of multiple layers of nonlinear (e.g., ReLU) and linear operations, and they output a compact representation of state and belief. We design the encoders to yield outputs of the same size, which we concatenate to form the input to the policy network. The encoder networks and the policy network are jointly trained by the batch policy optimization. Our belief encoder achieves the desired robustness by learning to compactly represent arbitrarily large belief representations. In Section 5, we empirically verify that the separate belief encoder makes our algorithm more robust to large belief representations (Figure 2b).
As with most policy gradient algorithms, BPO provides only a locally optimal solution. Nonetheless, it produces robust policies that scale to problems with high-dimensional observable states and beliefs (see Section 5).
4.1 Bayes Filter for Bayesian Policy Optimization
Given an initial belief b0, a Bayes filter recursively performs the posterior update:

b'(θ' | s, b, a', s') = (1/η) Σ_{θ∈Θ} b(θ) T(s, θ, a', s', θ')    (2)

where η is the normalizing constant, and the transition function is defined as T(s, θ, a', s', θ') = P(s', θ' | s, θ, a') = P(s' | s, θ, a') P(θ' | s, θ, a', s'). At timestep t, the belief b_t(θ_t) is the posterior distribution over Θ given the history of states and actions, (s0, a1, s1, ..., s_t). When θ corresponds to physical parameters for an autonomous system, we often assume that the latent states are fixed.
Our algorithm utilizes a black-box Bayes filter to produce a posterior distribution over the latent states. Any Bayes filter that outputs a fixed-size belief representation can be used; for example, we use an extended Kalman filter to maintain a Gaussian distribution over continuous latent variables in the LightDark environment in Section 5.
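As a concrete illustration of this recursive update in the common case where the latent parameters are fixed (so Equation 2 reduces to b'(θ) ∝ b(θ) p(s' | s, θ, a')), the following is a minimal Python sketch of a categorical Bayes filter over a finite set of candidate parameters. The function name and the `transition_prob` model are hypothetical placeholders, not the authors' implementation:

```python
import numpy as np

def categorical_bayes_update(b, s, a, s_next, thetas, transition_prob):
    """One update b'(theta) ∝ b(theta) * p(s_next | s, theta, a) for fixed latent parameters.

    b: shape-(K,) categorical belief over the candidate parameters in `thetas`;
    transition_prob: known model density p(s' | s, theta, a).
    """
    likelihood = np.array([transition_prob(s_next, s, theta, a) for theta in thetas])
    unnormalized = b * likelihood
    return unnormalized / unnormalized.sum()  # the sum is the normalizer eta
```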
When such a specific representation is not appropriate, we can choose a more general discretization of the latent space to obtain a computationally tractable belief update.
For our algorithm, we found that uniformly discretizing the range of each latent parameter into K equal-sized bins is sufficient. From each of the resulting K^{|Θ|} bins, we form an MDP by selecting the mean bin value for each latent parameter. Then, we approximate the belief distribution with a categorical distribution over the resulting MDPs.
We approximate the Bayes update in Equation 2 by computing the probability of observing s' under each discretized θ ∈ {θ_1, ..., θ_{K^{|Θ|}}} as follows:

b'(θ | s, b, a', s') = b(θ) p(s' | s, θ, a') / Σ_{i=1}^{K^{|Θ|}} b(θ_i) p(s' | s, θ_i, a')

where the denominator corresponds to η. As we verify in Section 5, our algorithm is robust to approximate beliefs, which allows the use of computationally efficient approximate Bayes filters without degrading performance. A belief needs only to be accurate enough to inform the agent of its actions.
4.2 Generalization to POMDP
Although BPO is designed for BAMDP problems, it can naturally be applied to POMDPs. In a general POMDP where the state is unobservable, we need only b(s), so we can remove the state encoder network.
Knowing the transition and observation functions, we can construct a Bayes filter that computes the belief b over the hidden state:

b'(s') = ψ(b, a', o') = (1/η) Σ_{s∈S} b(s) T(s, a', s') Z(s', a', o')

where η is the normalization constant, and Z is the observation function, Z(s', a', o') = P(o' | s', a'), of observing o' after taking action a' and arriving at state s'. Then, BPO optimizes the following Bellman equation:

V(b) = Σ_{s∈S} b(s) R(s, π(b)) + γ Σ_{b'∈B} P(b' | b, π(b)) V(b')

For general POMDPs with large state spaces, however, discretizing the state space to form the belief state is impractical. We believe that this generalization is best suited for beliefs with conjugate distributions, e.g., Gaussians.
5 Experimental Results
We evaluate BPO on discrete and continuous POMDP benchmarks to highlight its use of information-gathering actions. We also evaluate BPO on BAMDP problems constructed by varying physical model parameters on OpenAI benchmark problems (Brockman et al., 2016). For all BAMDP problems with continuous latent spaces (Chain, MuJoCo), latent parameters are sampled in the continuous space in Step 3 of Algorithm 1, regardless of discretization.
We compare BPO to EPOpt and UP-MLE, robust and adaptive policy gradient algorithms, respectively. We also include BPO-, a version of our algorithm without the belief and state encoders; this version directly feeds the original state and belief to the policy network. Comparing with BPO- allows us to better understand the effect of the encoders. For UP-MLE, we use the maximum likelihood estimate (MLE) from the same Bayes filter used for BPO, instead of learning an additional online system identification (OSI) network as originally proposed by UP-OSI. This lets us directly compare performance when a full belief distribution is used (BPO) rather than a point estimate (UP-MLE). For the OpenAI BAMDP problems, we also compare to a policy trained with TRPO in an environment with the mean values of the latent parameters.
All policy gradient algorithms (BPO, BPO-, EPOpt, UP-MLE) use TRPO as the underlying batch policy optimization subroutine. We refer the reader to Appendix A.1 for
parameter details. For all algorithms, we compare the results from the seed with the highest mean reward across multiple random seeds. Although EPOpt and UP-MLE are the most relevant algorithms that use batch policy optimization to address model uncertainty, we emphasize that neither formulates the problems as BAMDPs.
[Figure 2: (a) Comparison of BPO with belief-agnostic, robust RL algorithms (normalized reward on Tiger, LightDark, Chain, Cheetah, Swimmer, and Ant for BPO, BPO-, UP-MLE, and EPOpt). BPO significantly outperforms benchmarks when belief-awareness and explicit information gathering are necessary (Tiger, LightDark). It is competitive with UP-MLE when passive estimation or universal robustness is sufficient (Chain, MuJoCo). (b) Scalability of BPO with respect to latent state space discretization for the Chain problem (average reward vs. discretization size for BPO and BPO-).]
As shown in Figure 1b, the BPO network's state and belief encoder components are identical, consisting of two fully connected layers with N_h hidden units each and tanh activations (N_h = 32 for Tiger, Chain, and LightDark; N_h = 64 for MuJoCo). The policy network also consists of two fully connected layers with N_h hidden units each and tanh activations. For discrete action spaces (Tiger, Chain), the output activation is a softmax, resulting in a categorical distribution over the discrete actions. For continuous action spaces (LightDark, MuJoCo), we represent the policy as a Gaussian distribution.
Figure 2a illustrates the normalized performance for all algorithms and experiments. We normalize by dividing the total reward by the reward of BPO. For LightDark, which has negative reward, we first shift the total reward to be positive and then normalize. Appendix A.2 shows the unnormalized rewards.
Tiger (Discrete POMDP). In the Tiger problem, originally proposed by Kaelbling et al. (1998), a tiger is hiding behind one of two doors. An agent must choose among three actions: listen, or open one of the two doors; when the agent listens, it receives a noisy observation of the tiger's position. If the agent opens the door and reveals the tiger, it receives a penalty of -100. Opening the door without the tiger results in a reward of 10. Listening incurs a penalty of -1. In this problem, the optimal agent listens until its belief about which door the tiger is behind is substantially higher for one door vs. the other. Chen et al. (2016) frame Tiger as a BAMDP problem with two latent states, one for each position of the tiger.
Figure 2a demonstrates the benefit of operating in state-belief space when information gathering is required to reduce model uncertainty. Since the EPOpt policy does not maintain a belief distribution, it sees only the most recent observation. Without the full history of observations, EPOpt learns only that opening doors is risky; because it expects the worst-case scenario, it always chooses to listen. UP-MLE leverages all past observations to estimate the tiger's position. However, without the full belief distribution, the policy cannot account for the confidence of the estimate. Once there is a higher probability of the tiger being on one side, the UP-MLE policy prematurely chooses to open the safer door. BPO significantly outperforms both of these algorithms, learning to listen until it is extremely confident about the tiger's location.
In fact, BPO achieves close to the approximately optimal return found by SARSOP (19.0 ± 0.6), a state-of-the-art offline POMDP solver that approximates the optimal value function rather than performing policy optimization (Kurniawati et al., 2008).

                     BPO           BEETLE        PERSEUS       MCBRL
Chain-10 (tied)      364.5 ± 0.5   365.0 ± 0.4   366.1 ± 0.2   -
Chain-10 (semitied)  364.9 ± 0.8   364.8 ± 0.3   365.1 ± 0.3   321.6 ± 6.4
Table 1: For the Chain problem, a comparison of the 95% confidence intervals of average return for BPO vs. other benchmark algorithms. Values for BEETLE, MCBRL, and Perseus are taken from Wang et al. (2012), which does not report MCBRL performance in the "tied" setting.

[Figure 3: Visualization of different algorithms (BPO, EPOpt, UP-MLE) on the LightDark environment. The dashed line indicates the light source. Blue circles are one standard deviation for per-step estimates. The BPO policy moves toward the light to obtain a better state estimate before moving toward the goal.]
Chain (Discrete BAMDP). To evaluate the usefulness of the independent encoder networks, we consider a variant of the Chain problem (Strens, 2000). The original problem is a discrete MDP with five states {s_i}_{i=1}^{5} and two actions {A, B}. Taking action A in state s_i transitions to s_{i+1} with no reward; taking action A in state s_5 transitions to s_5 with a reward of 10. Action B transitions from any state to s_1 with a reward of 2. However, these actions are noisy: in the canonical version of Chain, the opposite action is taken with slip probability 0.2. In our variant, the slip probability is uniformly sampled from [0, 1.0] at the beginning of each episode.[2] In this problem, either action provides equal information about the latent parameter. Since active information-gathering actions do not exist, BPO and UP-MLE achieve similar performance.
Figure 2b shows that our algorithm is robust to the size of latent space discretization. We discretize the parameter space with 3, 10, 100, 500, and 1000 uniformly spaced samples. At coarser discretizations (3, 10), we see little difference between BPO and BPO-. However, with a large discretization (500, 1000), the performance of BPO- degrades significantly, while BPO maintains comparable performance. The performance of BPO also slightly degrades when the discretization is too fine, suggesting that this level of discretization makes the problem unnecessarily complex. Figure 2a shows the best discretization (10).
In this discrete domain, we compare BPO to BEETLE (Poupart et al., 2006) and MCBRL (Wang et al., 2012), state-of-the-art discrete Bayesian reinforcement learning algorithms, as well as Perseus (Spaan & Vlassis, 2005), a discrete POMDP solver. In addition to our variant, we consider a more challenging version where the slip probabilities for both actions must be estimated independently. Poupart et al. (2006) refer to this as the "semi-tied" setting; our variant is "tied." BPO performs comparably to all of these benchmarks (Table 1).
Light-Dark (Continuous POMDP). We consider a variant of the LightDark problem proposed by Platt et al. (2010), where an agent tries to reach a known goal location while being uncertain about its own position. At each timestep, the agent receives a noisy observation of its location. In our problem, the vertical dashed line is a light source; the farther the agent is from the light, the noisier its observations. The agent must decide either to reduce uncertainty by moving closer to the light, or to exploit by moving from its estimated position to the goal.
We refer the reader to Appendix A.3 for details about the rewards and observation noise model.
[Footnote 2: A similar variant was introduced in Wang et al. (2012).]
[Figure 4: (a) Comparison of BPO and TRPO trained on the nominal environment for a different environment. The task is to move to the right along the x-axis. However, the model at test time differs from the one TRPO trained with: one leg is 20% longer, another is 20% shorter. (b) Comparison of average entropy per timestep by BPO and UP-MLE. (c) Belief distribution over the grid of leg-length parameters at t = 20 during a BPO rollout.]
[Figure 5: Pairwise performance comparison of algorithms on MuJoCo BAMDPs (Cheetah, Swimmer, Ant; BPO vs. EPOpt, UP-MLE, TRPO). Each point represents an MDP, and its (x, y)-coordinates correspond to the long-term reward by (baseline, BPO). The farther a point is above the line y = x, the more BPO outperforms that baseline. Colors indicate which algorithm achieved higher reward: BPO (red), EPOpt (green), UP-MLE (blue), or TRPO (purple).]
This example demonstrates how to apply BPO to general continuous POMDPs (Section 4.2). The latent state is the continuous pose of the agent. For this example, we parameterize the belief as a Gaussian distribution and perform the posterior update with an Extended Kalman Filter, as in Platt et al. (2010).
Figure 3 compares sample trajectories from different algorithms on the LightDark environment. Based on its initial belief, the BPO policy moves toward a light source to acquire less noisy observations. As it becomes more confident in its position estimate, it changes direction toward the light and then moves straight to the goal. Both EPOpt and UP-MLE move straight to the goal without initially reducing uncertainty.
MuJoCo (Continuous BAMDP). Finally, we evaluate the algorithms on three simulated benchmarks from OpenAI Gym (Brockman et al., 2016) using the MuJoCo physics simulator (Todorov et al., 2012): HalfCheetah, Swimmer, and Ant. Each environment has several latent physical parameters that can be changed to form a BAMDP. We refer the reader to Appendix A.4 for details regarding model variation and belief parameterization.
The MuJoCo benchmarks demonstrate the robustness of BPO to model uncertainty. For each environment, BPO learns a universal policy that adapts to the changing belief over the latent parameters.
Figure 4 highlights the performance of BPO on Ant. BPO can efficiently move to the right even when the model substantially differs from the nominal model (Figure 4a). It takes actions that reduce entropy more quickly than UP-MLE (Figure 4b). The belief over the possible MDPs quickly collapses into a single bin (Figure 4c), which allows BPO to adapt the policy to the identified model.
Figure 5 provides a more in-depth comparison of the long-term expected reward achieved by each algorithm. In particular, for the HalfCheetah environment, BPO has a higher average return than both EPOpt and UP-MLE for most MDPs. Although BPO fares slightly worse
First, the environments do not require active information-gathering actions to achieve a high reward. Furthermore, for deterministic systems withlittle noise, the belief collapses quickly (Figure 4b); as a result, the MLE is as meaningful asthe belief distribution. As demonstrated by Rajeswaran et al. (2017), a universally robustpolicy for these problems is capable of performing the task. Therefore, even algorithms thatdo not maintain a history of observations can perform well.6 DiscussionBayesian Policy Optimization is a practical and scalable approach for continuous BAMDPproblems. We demonstrate that BPOlearns policies that achieve performance comparable tostate-of-the-art discrete POMDP solvers. They also outperform state-of-the-art robust policygradient algorithms that address model uncertainty without formulating it as a BAMDPproblem. Our network architecture scales well with respect to the degree of latent parameterspace discretization due to its independent encoding of state and belief. We highlightthatBPOis agnostic to the choice of batch policy optimization subroutine. Althoughwe used TRPOin this work, we can also use more recent policy optimization algorithms,such as PPO (Schulman et al., 2017), and leverage improvements in variance-reductiontechniques (Weaver & Tao, 2001).BPOoutperforms algorithms that do not explicitly reason about belief distributions. OurBayesian approach is necessary for environments where uncertainty must actively be reduced,as shown in Figure 2a and Figure 3. If all actions are informative (as with MuJoCo,Chain)and the posterior belief distribution easily collapses into a unimodal distribution, UP-MLEprovides a lightweight alternative.BPOscales to fine-grained discretizations of latent space. However, our experimentsalso suggest that each problem has an optimal discretization level, beyond which furtherdiscretization may degrade performance. As a result, it may be preferable to performvariable-resolutiondiscretizationratherthananextremelyfine, single-resolutiondiscretization.Adapting iterative densification ideas previously explored in motion planning (Gammell et al.,2015) and optimal control (Munos & Moore, 1999) to the discretization of latent space mayyield a more compact belief representation while enabling further improved performance.An alternative to the model-based Bayes filter and belief encoder components of BPOislearning to directly map a history of observations to a lower-dimensional belief embedding,analogous to Peng et al. (2018). This would enable a policy to learn a meaningful beliefembedding without losing information from our a priori choice of discretization. Combininga recurrent policy for unidentified parameters with a Bayes filter for identified parametersoffers an intriguing future direction for research efforts.AcknowledgmentsGilwoo Lee is partially supported by Kwanjeong Educational Foundation, and Brian Hou ispartially supported by NASA Space Technology Research Fellowships (NSTRF). This workwas partially funded by the National Institute of Health R01 (#R01EB019335), NationalScience Foundation CPS (#1544797), National Science Foundation NRI (#1637748), theOffice of Naval Research, the RCTA, Amazon, and Honda.10Published as a conference paper at ICLR 2019ReferencesDouglas Aberdeen. A (revised) survey of approximate methods for solving partially observablemarkov decision processes. National ICT Australia, Canberra, Australia , 2003.Douglas Aberdeen and Jonathan Baxter. Scaling internal-state policy-gradient methods forPOMDPs. 
In International Conference on Machine Learning, 2002.
Trapit Bansal, Jakub Pachocki, Szymon Sidor, Ilya Sutskever, and Igor Mordatch. Emergent complexity via multi-agent competition. In International Conference on Learning Representations, 2018.
Tamer Başar and Pierre Bernhard. H-infinity optimal control and related minimax design problems: a dynamic game approach. Springer Science & Business Media, 2008.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.
Min Chen, Emilio Frazzoli, David Hsu, and Wee Sun Lee. POMDP-lite for robust robot planning under uncertainty. In IEEE International Conference on Robotics and Automation, 2016.
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. In International Conference on Machine Learning, 2016.
Michael O'Gordon Duff and Andrew Barto. Optimal Learning: Computational procedures for Bayes-adaptive Markov decision processes. PhD thesis, University of Massachusetts at Amherst, 2002.
Jonathan Gammell, Siddhartha Srinivasa, and Timothy Barfoot. Batch informed trees (BIT*): Sampling-based optimal planning via the heuristically guided search of implicit random geometric graphs. In IEEE International Conference on Robotics and Automation, 2015.
Mohammad Ghavamzadeh, Shie Mannor, Joelle Pineau, Aviv Tamar, et al. Bayesian reinforcement learning: A survey. Foundations and Trends in Machine Learning, 8(5-6):359-483, 2015.
Arthur Guez, David Silver, and Peter Dayan. Efficient Bayes-adaptive reinforcement learning using sample-based search. In Advances in Neural Information Processing Systems, 2012.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
Maximilian Igl, Luisa Zintgraf, Tuan Anh Le, Frank Wood, and Shimon Whiteson. Deep variational reinforcement learning for POMDPs. arXiv preprint arXiv:1806.02426, 2018.
Leslie Pack Kaelbling, Michael Littman, and Anthony Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1-2):99-134, 1998.
Peter Karkus, David Hsu, and Wee Sun Lee. QMDP-Net: Deep learning for planning under partial observability. In Advances in Neural Information Processing Systems, 2017.
Zico Kolter and Andrew Ng. Near-Bayesian exploration in polynomial time. In International Conference on Machine Learning, 2009.
Hanna Kurniawati, David Hsu, and Wee Sun Lee. SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In Robotics: Science and Systems, 2008.
Michael Littman, Anthony Cassandra, and Leslie Pack Kaelbling. Learning policies for partially observable environments: Scaling up. In International Conference on Machine Learning, 1995.
Igor Mordatch, Kendall Lowrey, and Emanuel Todorov. Ensemble-CIO: Full-body dynamic motion planning that transfers to physical humanoids. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2015.
Jun Morimoto and Kenji Doya. Robust reinforcement learning. In Advances in Neural Information Processing Systems, 2001.
Remi Munos and Andrew Moore. Variable resolution discretization for high-accuracy solutions of optimal control problems. In International Joint Conference on Artificial Intelligence, 1999.
Sylvie CW Ong, Shao Wei Png, David Hsu, and Wee Sun Lee. Planning under uncertainty for robotic tasks with mixed observability.
The International Journal of Robotics Research, 29(8):1053-1068, 2010.
Ian Osband, Daniel Russo, and Benjamin Van Roy. (More) efficient reinforcement learning via posterior sampling. In Advances in Neural Information Processing Systems, 2013.
Christos Papadimitriou and John Tsitsiklis. The complexity of Markov decision processes. Mathematics of Operations Research, 12(3):441-450, 1987.
Anay Pattanaik, Zhenyi Tang, Shuijing Liu, Gautham Bommannan, and Girish Chowdhary. Robust deep reinforcement learning with adversarial attacks. In International Conference on Autonomous Agents and Multiagent Systems, 2018.
Xue Bin Peng, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Sim-to-real transfer of robotic control with dynamics randomization. In IEEE International Conference on Robotics and Automation, 2018.
Joelle Pineau, Geoff Gordon, Sebastian Thrun, et al. Point-based value iteration: An anytime algorithm for POMDPs. In International Joint Conference on Artificial Intelligence, 2003.
Lerrel Pinto, James Davidson, Rahul Sukthankar, and Abhinav Gupta. Robust adversarial reinforcement learning. In International Conference on Machine Learning, 2017.
Robert Platt, Russ Tedrake, Leslie Pack Kaelbling, and Tomas Lozano-Perez. Belief space planning assuming maximum likelihood observations. In Robotics: Science and Systems, 2010.
Pascal Poupart, Nikos Vlassis, Jesse Hoey, and Kevin Regan. An analytic solution to discrete Bayesian reinforcement learning. In International Conference on Machine Learning, 2006.
Aravind Rajeswaran, Sarvjeet Ghotra, Balaraman Ravindran, and Sergey Levine. EPOpt: Learning robust neural network policies using model ensembles. In International Conference on Learning Representations, 2017.
Stephane Ross, Brahim Chaib-draa, and Joelle Pineau. Bayes-adaptive POMDPs. In Advances in Neural Information Processing Systems, 2008.
John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning, 2015.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Guy Shani, Joelle Pineau, and Robert Kaplow. A survey of point-based POMDP solvers. Journal on Autonomous Agents and Multiagent Systems, 27(1):1-51, 2013.
David Silver and Joel Veness. Monte-Carlo planning in large POMDPs. In Advances in Neural Information Processing Systems, 2010.
Matthijs TJ Spaan and Nikos Vlassis. Perseus: Randomized point-based value iteration for POMDPs. Journal of Artificial Intelligence Research, 24:195-220, 2005.
Malcolm Strens. A Bayesian framework for reinforcement learning. In International Conference on Machine Learning, 2000.
Zachary Sunberg and Mykel Kochenderfer. Online algorithms for POMDPs with continuous state, action, and observation spaces. In International Conference on Automated Planning and Scheduling, 2018.
Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012.
Jur van den Berg, Sachin Patil, and Ron Alterovitz. Motion planning under uncertainty using iterative local optimization in belief space. The International Journal of Robotics Research, 31(11):1263-1278, 2012.
Yi Wang, Kok Sung Won, David Hsu, and Wee Sun Lee. Monte Carlo Bayesian reinforcement learning. In International Conference on Machine Learning, 2012.
Lex Weaver and Nigel Tao.
The optimal reward baseline for gradient-based reinforcement learning. In Conference on Uncertainty in Artificial Intelligence, 2001.
Wenhao Yu, Jie Tan, C. Karen Liu, and Greg Turk. Preparing for the unknown: Learning a universal policy with online system identification. In Robotics: Science and Systems, 2017.
Appendix
A.1 Training Parameters
The encoder networks and policy network are jointly trained with Trust Region Policy Optimization (Schulman et al., 2015). We used the implementation provided by Duan et al. (2016) with the parameters listed in Appendix Table 1.

                      Tiger   Chain   LightDark   MuJoCo
Max. episode length   100     100     15          200
Batch size            500     10000   400         500
Training iterations   1000    500     10000       200
Discount (γ)          0.95    1.00    1.00        0.99
Stepsize (D_KL)       0.01    0.01    0.01        0.01
GAE λ                 0.96    0.96    0.96        0.96
Appendix Table 1: Training parameters

A.2 Unnormalized Experimental Results
Here, we provide unnormalized experimental results for the normalized performance in Figure 2a.

              BPO             BPO-            EPOpt            UP-MLE           TRPO
Tiger         17.9 ± 0.6      15.8 ± 0.6      -19.9 ± 0.0      -9.8 ± 2.0       -
Chain-3       260.1 ± 5.6     268.9 ± 5.7     267.9 ± 13.1     242.0 ± 11.2     -
Chain-10      374.0 ± 6.9     355.2 ± 7.0     267.9 ± 13.1     378.2 ± 15.7     -
Chain-1000    360.1 ± 7.1     231.6 ± 4.2     267.9 ± 13.1     342.4 ± 14.9     -
LightDark     -166.7 ± 2.4    -867.9 ± 22.1   -1891.2 ± 45.0   -745.9 ± 22.3    -
HalfCheetah   115.6 ± 3.5     109.13 ± 3.1    107.0 ± 2.7      108.9 ± 3.3      64.3 ± 6.1
Swimmer       36.0 ± 0.4      36.9 ± 0.6      27.9 ± 0.4       37.6 ± 0.4       29.4 ± 0.6
Ant           117.0 ± 2.7     115.9 ± 3.9     112.5 ± 3.1      111.7 ± 2.5      116.5 ± 7.5
Appendix Table 2: Comparison of the 95% confidence intervals of average return for BPO and other benchmark algorithms across all environments. Algorithms with the highest average return on each environment are shown in bold, with multiple algorithms selected if intervals overlap. BPO achieves the highest return on seven of the eight environments. The combined result of BPO and BPO- achieves the highest return on all environments.
A.3 Experimental Detail: LightDark
After each action, an agent receives a noisy observation of its location, which is sampled from a Gaussian distribution, o ~ N([x, y]^T, w(x)), where [x, y] is the true location. The noise variance is a function of x and is minimized when x = 5: w(x) = (1/2)(x - 5)^2 + const. There is no process noise.
The reward function is r(s, a) = -(1/2)(||s - g||^2 + ||a||^2), where s is the true agent position and g is the goal position. A large penalty of 5000||s_T - g||^2 is incurred if the agent does not reach the goal by the end of the time horizon, analogous to the strict equality constraint in the original optimization problem (Platt et al., 2010).
The initial belief is [x, y, σ²] = [2, 2, 2.25]. During training, we randomly sample latent start positions from a rectangular region [2, 2] × [4, 4] and observable goal positions from [0, -2] × [2, 4].
A.4 Experimental Detail: MuJoCo
For ease of analysis, we vary two parameters for each environment. For HalfCheetah, the front and back leg lengths are varied. For Ant, the two front leg lengths are varied. Swimmer has four body links, so the first two link lengths vary together according to the first parameter, and the last two links vary together according to the second parameter. We chose to vary link lengths rather than friction or the damping constant because a policy trained on a single nominal environment can perform well across large variations in those parameters. All link lengths vary by up to 20% of the original length.
To construct a Bayes filter, the 2D parameter space is discretized into a 5 × 5 grid with a uniform initial belief. We assume Gaussian noise on the observation, i.e.
o = f(s, a; θ) + w with w ~ N(0, σ²), with θ being the parameter corresponding to the center of each grid cell. It typically requires only a few steps for the belief to concentrate in a single cell of the grid, even when a large σ² is assumed.
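To illustrate the grid-based filter just described, the following minimal Python sketch specializes the generic categorical Bayes update to this Gaussian observation model. The observation function `f` and the parameter grid are hypothetical placeholders for the environment model, and the log-space normalization is an implementation choice for numerical stability, not a detail taken from the paper:

```python
import numpy as np

def grid_belief_update(b, s, a, obs, grid_thetas, f, sigma):
    """Bayes update of a categorical belief over a parameter grid, with
    Gaussian observation likelihood N(obs; f(s, a, theta), sigma^2 I)."""
    preds = np.stack([f(s, a, theta) for theta in grid_thetas])  # predicted observation per cell
    log_lik = -0.5 * np.sum((obs - preds) ** 2, axis=-1) / sigma ** 2
    log_post = np.log(b + 1e-12) + log_lik   # unnormalized log posterior
    log_post -= log_post.max()               # subtract the max for numerical stability
    post = np.exp(log_post)
    return post / post.sum()
```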
D_I6trPKwlt
ICLR.cc/2021/Conference
2021
Spectrally Similar Graph Pooling
["Kyoung-Woon On", "Eun-Sol Kim", "Il-Jae Kwon", "Sangwoong Yoon", "Byoung-Tak Zhang"]
We consider the problem of learning compositional hierarchies of graphs. Even though structural characteristics of graphs can be learned by Graph Neural Networks (GNNs), it is difficult to find an overall compositional hierarchy using such flat operators. In this paper, we propose a new graph pooling algorithm, Spectrally Similar Graph Pooling (SSGPool), to learn hierarchical representations of graphs. The main idea of the proposed SSGPool algorithm is to learn a coarsening matrix which maps nodes from an original graph to a smaller number of nodes in a coarsened graph. The coarsening matrix is trained to coarsen the nodes based on their feature vectors while keeping the spectral characteristics of the original graph in the coarsened one. Although existing graph pooling methods adopt either feature-based pooling or structure-preserving pooling, SSGPool considers both properties simultaneously in an end-to-end manner. Experiments on various graph benchmarks show the advantage of our method compared to strong baselines. To further investigate the effectiveness of our proposed method, we evaluate our approach on a real-world problem, image retrieval with visual scene graphs. Quantitative and qualitative analyses on the retrieval problem confirm that the proposed method efficiently captures the hierarchical semantic structure of scene graphs.
["Graph Neural Networks", "Graph Pooling", "Spectral Similarity on Graph"]
ABSTRACT
We consider the problem of learning compositional hierarchies of graphs. Even though structural characteristics of graphs can be learned by Graph Neural Networks (GNNs), it is difficult to find an overall compositional hierarchy using such flat operators. In this paper, we propose a new graph pooling algorithm, Spectrally Similar Graph Pooling (SSGPool), to learn hierarchical representations of graphs. The main idea of the proposed SSGPool algorithm is to learn a coarsening matrix which maps nodes from an original graph to a smaller number of nodes in a coarsened graph. The coarsening matrix is trained to coarsen the nodes based on their feature vectors while keeping the spectral characteristics of the original graph in the coarsened one. Experiments on various graph benchmarks show the advantage of our method compared to strong baselines. To further investigate the effectiveness of our proposed method, we evaluate our approach on a real-world problem, image retrieval with visual scene graphs. Quantitative and qualitative analyses on the retrieval problem confirm that the proposed method efficiently captures the hierarchical semantic structure of scene graphs.
1 INTRODUCTION
By virtue of the recent progress on graph neural networks (GNNs) (Gori et al., 2005; Scarselli et al., 2008; Bruna et al., 2013; Kipf & Welling, 2016; Gilmer et al., 2017; Veličković et al., 2018), various types of data including structural data can be dealt with using neural network algorithms. While conventional neural network algorithms, such as convolutional neural networks and recurrent neural networks, take regular structured inputs (images with grid pixel structure and sound signals with Markovian temporal dependencies), GNNs have been recently suggested as a method for extending the scope of the inputs to graphs having irregular structures, such as molecular data, knowledge graphs, social networks and visual scene graphs. Most GNNs attempt to implicitly reflect the structural information through node (graph) representations. In other words, GNNs assign feature vectors to each node and update the node features by transforming and aggregating information from the neighborhoods. Even though structural characteristics can be learned by applying these message passing steps repeatedly, it is difficult to find an overall compositional hierarchy using such flat operators.
Recent work has proposed using pooling methods, as in CNNs, in order to discover hierarchical structures between nodes in GNNs (Vinyals et al., 2015; Ying et al., 2018; Zhang et al., 2018; Lee et al., 2019; Gao & Ji, 2019; Diehl, 2019; Ma et al., 2019). These studies are divided into two categories depending on what information is mainly used for the pooling operator: structure-based approaches and feature-based approaches. Structure-based approaches learn node features with GNNs; however, the original graph is coarsened by deterministic graph clustering algorithms based on graph theory. Therefore, the resultant coarsened graph reflects the topology of the original graph, but the node features are not used during coarsening. Also, the deterministic clustering methods are not end-to-end trainable. On the other hand, feature-based approaches learn to assign nodes in the original graph to the nodes in the coarsened graph based on the node feature vector.
Even though these approaches can be trained in an end-to-end manner, it is hard to maintain the topology information of the original graph.

In this paper, we propose a new graph pooling method, Spectrally Similar Graph Pooling (SSGPool), which makes use of both node features and structural information between the nodes (Figure 1). The main idea of SSGPool is to learn a coarsening matrix which maps nodes from an original graph to a smaller number of nodes in a coarsened graph. The coarsening matrix is trained to coarsen the nodes based on correlations between their feature vectors while maintaining the topology information using spectral characteristics of the original graph. To utilize the node feature vectors, SSGPool builds upon conventional GNN algorithms. In addition, structural similarities between two different-sized graphs are defined in order to be used as a regularizer during training. By having structural similarities act as a regularizer, SSGPool binds nodes having similar feature vectors while keeping the spectral characteristics of the original graphs in an end-to-end manner.

Figure 1: An illustrative example of compositional hierarchy in a visual scene graph. (a) is an original image and (b) is a hierarchical structure of the visual scene graph for the image.

Experiments on various graph benchmarks show the advantage of our method compared to strong baselines. To further investigate the effectiveness of our proposed method, we evaluate our approach on a real-world problem, image retrieval with visual scene graphs. Quantitative and qualitative analyses on the retrieval problem confirm that the proposed method efficiently captures the hierarchical semantic structures of scene graphs.

The remainder of the paper is organized as follows. In Section 2, we review related work on graph pooling algorithms. Next, we introduce notation for graphs, GNN algorithms and spectral similarity between graphs as preliminaries. After that, the proposed SSGPool method is explained in detail, and experimental results on various datasets, comparing our proposed algorithm with other well-known graph pooling algorithms, are presented.

2 RELATED WORK
Pooling operations in graph neural networks (GNNs) can scale down the size of inputs and enlarge the receptive fields, thus giving rise to better generalization and performance. In this section, we review several recent methods for graph pooling coupled with GNNs. Graph pooling methods can be grouped into the following two categories: structure-based pooling and feature-based pooling.

2.1 STRUCTURE-BASED POOLING
Including earlier works on neural networks for graphs, several proposed GNNs perform pooling with existing graph clustering algorithms. These methods learn the representations of graphs in two steps: first, they build hierarchical structures using a graph clustering algorithm; next, they learn embeddings of nodes in each layer based on GNN modules. Bruna et al. (2013) built a hierarchy of the graph with agglomerative clustering. Defferrard et al. (2016) and Fey et al. (2018) used the Graclus algorithm (Dhillon et al., 2007), which computes graph clustering without eigenvectors. Simonovsky & Komodakis (2017) constructed graph hierarchies through a combined use of spectral polarity and Kron reduction. More recently, Ma et al. (2019) proposed EigenPool, which uses spectral graph clustering methods to produce a coarsened graph.
These methods leverage topological information from graphs in order to produce a coarsened graph. However, they do not use node features, which carry useful information for learning representations of graphs. Furthermore, as the existing graph clustering algorithms are not differentiable, they are incapable of learning in an end-to-end fashion.

2.2 FEATURE-BASED POOLING
In contrast to structure-based pooling, several end-to-end trainable pooling methods have been proposed. Ying et al. (2018) proposed a differentiable graph pooling module (DiffPool) to softly assign nodes to a set of clusters using neural networks, forming fully connected coarsened graphs through a dense cluster assignment matrix. Gao & Ji (2019) and Lee et al. (2019) devised top-K node selection-based pooling methods (gPool and SAGPool) to form an induced subgraph for the next layer. Although efficient, this approach loses the completeness of the graph structure information. In addition, Vinyals et al. (2015) proposed Set2Set, a global pooling operation that aggregates information through RNNs. Zhang et al. (2018) proposed SortPool, which pools graphs according to the feature map values sorted in descending order. Diehl (2019) designed a pooling operation that contracts edges (EdgePool); the contraction scores are calculated from the features of the two incident nodes. These approaches learn hierarchical structures from node features with differentiable parameters. However, they tend not to reflect the topology information of the graph during pooling.

3 PRELIMINARIES
3.1 GRAPH NOTATIONS
A graph $G$ is denoted as a pair $(V, E)$, with $V = \{v_1, \ldots, v_N\}$ the set of nodes (vertices) and $E \subseteq V \times V$ the set of edges. Each node $v_i$ is associated with a feature vector $x_i \in \mathbb{R}^f$. To make the notation more compact, the set of node feature vectors of graph $G$ is denoted as a matrix $X = [x_1, x_2, \ldots, x_N]^\top \in \mathbb{R}^{N \times f}$. Also, a graph has an $N$-by-$N$ weighted adjacency matrix $A$, where $A_{i,j}$ represents the weight of the edge between $v_i$ and $v_j$, and a degree matrix $D$, a diagonal matrix which contains the degree of each node, that is, the sum of the edge weights attached to each node. As usual, we denote the combinatorial Laplacian $L$ of graph $G$ by $L = D - A$, and let $\lambda_k$ and $u_k$ be the $k$-th (smallest) eigenvalue and corresponding eigenvector of $L$, respectively.

3.2 GRAPH NEURAL NETWORKS
Due to an ever-increasing interest in combining deep learning and structured approaches, various graph-based neural networks have been proposed over the years. Based on spectral graph theory (Chung & Graham, 1997), approaches which convert graphs to the spectral domain and apply convolution kernels on the graphs have been proposed (Bruna et al., 2013; Henaff et al., 2015; Kipf & Welling, 2016). Gilmer et al. (2017) suggested the message passing framework, which encompasses a number of previous neural models for graphs under a differentiable message passing interpretation. Xu et al. (2018) analyzed the representational power of various GNN architectures and proposed Graph Isomorphism Networks (GIN), whose representational power is equal to that of the Weisfeiler-Lehman test.

In this paper, we use a simple form of a message passing function similar to GIN:

$M(A, X) = (X + D^{-\frac{1}{2}} A D^{-\frac{1}{2}} X)\, W_m$   (1)

where $W_m \in \mathbb{R}^{f \times f}$. After that, we define a single GNN layer as follows:

$\mathrm{GNN}(A, X) = [\sigma(M_2(A, \sigma(M_1(A, X)))),\ \sigma(M_1(A, X))]\, W_g$   (2)

where $M_1$ and $M_2$ are message passing layers, $\sigma$ is an activation function, $[X, Y]$ denotes row-wise concatenation of two matrices, and $W_g \in \mathbb{R}^{2f \times f'}$ is a learnable parameter for $\mathrm{GNN}(A, X)$.
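As a concrete illustration of Equations 1 and 2, below is a minimal PyTorch sketch of this GIN-like layer. This is not the authors' released code; the class and variable names are ours, and dense adjacency matrices are assumed for simplicity:

```python
import torch
import torch.nn as nn

class MessagePassing(nn.Module):
    # Eq. (1): M(A, X) = (X + D^{-1/2} A D^{-1/2} X) W_m
    def __init__(self, f):
        super().__init__()
        self.W_m = nn.Linear(f, f, bias=False)

    def forward(self, A, X):
        deg = A.sum(dim=-1).clamp(min=1e-12)
        d_inv_sqrt = deg.pow(-0.5)
        # Symmetric normalization D^{-1/2} A D^{-1/2} via broadcasting
        A_norm = d_inv_sqrt.unsqueeze(-1) * A * d_inv_sqrt.unsqueeze(-2)
        return self.W_m(X + A_norm @ X)

class GNNLayer(nn.Module):
    # Eq. (2): concatenate activated one-hop and two-hop messages, then project.
    def __init__(self, f, f_out):
        super().__init__()
        self.M1 = MessagePassing(f)
        self.M2 = MessagePassing(f)
        self.W_g = nn.Linear(2 * f, f_out, bias=False)
        self.act = nn.ReLU()

    def forward(self, A, X):
        h1 = self.act(self.M1(A, X))          # sigma(M_1(A, X))
        h2 = self.act(self.M2(A, h1))         # sigma(M_2(A, sigma(M_1(A, X))))
        return self.W_g(torch.cat([h2, h1], dim=-1))
```

For instance, `GNNLayer(f=32, f_out=64)(A, X)` maps node features in $\mathbb{R}^{N \times 32}$ to $\mathbb{R}^{N \times 64}$.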
In the rest of the paper, we use $\mathrm{GNN}(A, X)$ in Equation 2 as the base GNN module.¹

¹This choice is just for a fair comparison with stable performance across all models. It is non-critical and can be substituted by any GNN architecture.

3.3 SPECTRAL SIMILARITY BETWEEN GRAPHS
Spectral graph theory has been considered a powerful way to describe the structural characteristics of graphs. Therefore, structural similarity between two graphs can be clearly defined by comparing the spectral properties of the graphs.

For two graphs having the same number of nodes, Spielman & Srivastava (2011) and Spielman & Teng (2011) proposed spectral similarity to determine how closely a graph $G_s$ approximates a graph $G$:

$\forall f \in \mathbb{R}^N, \quad (1 - \epsilon)\, f^\top L f \le f^\top L_s f \le (1 + \epsilon)\, f^\top L f$   (3)

where $L$ and $L_s$ are the Laplacian matrices of $G$ and $G_s$. If the inequality holds, we say that $G_s$ is an $\epsilon$-spectral approximation of $G$.

For the graph coarsening problem, in which the original graph and the coarsened graph have different numbers of nodes, Loukas & Vandergheynst (2018) generalized this notion by restricting it to the first $K$-eigenspace: the restricted spectral similarity (RSS). If there is a mapping matrix $P \in \mathbb{R}^{N \times n}$ between the original vertex set $V = \{v_1, \ldots, v_N\}$ and the coarsened vertex set $V_c = \{v'_1, \ldots, v'_n\}$, then RSS is defined as follows:

Restricted Spectral Similarity (RSS). Suppose that there exist an integer $K$ and positive constants $\epsilon_k$ such that, for every $k \le K$,

$(1 - \epsilon_k)\, u_k^\top L u_k \le u_k^\top \tilde{L} u_k \le (1 + \epsilon_k)\, u_k^\top L u_k, \quad \tilde{L} = P^{\mp} L_c P^{+}, \quad L_c = P^\top L P$   (4)

where $u_k$ is the $k$-th eigenvector of $L$, $P^{+}$ and $P^{\mp}$ are the pseudo-inverses of $P$ and of its transpose, and $\tilde{L}$ is the Laplacian matrix of the lifted (reverse of coarsening) graph of $G_c$, mapped from $\mathbb{R}^n$ back to $\mathbb{R}^N$. Then $G_c$ is said to satisfy the restricted spectral similarity property with the RSS constants $\{\epsilon_k\}_{k=1}^{K}$.

4 SPECTRALLY SIMILAR GRAPH POOLING
We suggest a new graph pooling algorithm which learns a coarsening matrix to construct the adjacency matrix and node feature matrix of upper layers while keeping the spectral characteristics of the original graphs. The main idea is to keep the spectral information by maximizing the similarity between the Fiedler vector of the original graph and that of its coarsened one. As the two vectors live in spaces of different dimensions, the vector of the coarsened graph is lifted back to the original space using the inverse of the coarsening matrix. In order to make the whole process end-to-end trainable, we define the coarsening matrix and derive an easy inversion of it. Figure 2 shows the architecture of the proposed method.

Figure 2: The architecture of the SSGPool layer combined with graph neural networks. SSGPool learns coarsening matrices $P$ to minimize a task-specific loss while retaining spectral similarity. To represent the spectral similarity, we use the Fiedler vector of the graph Laplacian.

4.1 GRAPH COARSENING
The coarsening can be expressed with a surjective (i.e., many-to-one) map $\varphi: V_N \to V_n$ between the original vertex set $V_N$ and the smaller vertex set $V_n$. Then, graph coarsening can be defined via a coarsening matrix:

Definition 1 (Coarsening matrix). A matrix $P \in \{0, 1\}^{N \times n}$ is a coarsening matrix with regard to graph $G$ if and only if it is a surjective mapping of the vertex set, meaning that if $P(i, r) = 1$ then $P(i, r') = 0$ for every $r' \ne r$.

Similar to Loukas (2019), the expensive pseudo-inverse computation for $P$ can be substituted by simple transposition and re-scaling:

Proposition 1 (Easy inversion). The pseudo-inverse of a coarsening matrix $P$ is given by $P^{+} = Q^{-2} P^\top$, where $Q \in \mathbb{R}^{n \times n}$ is a diagonal matrix with $Q(r, r) = \lVert P(:, r) \rVert_2$.
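Proposition 1 is straightforward to verify numerically. The snippet below is an illustrative sketch (the function name is ours, not from the paper) that builds a valid coarsening matrix, applies the easy inversion, and checks it against the Moore-Penrose pseudo-inverse:

```python
import numpy as np

def easy_pinv(P):
    """Proposition 1: P^+ = Q^{-2} P^T for a {0,1} coarsening matrix P."""
    col_norms = np.linalg.norm(P, axis=0)      # Q(r, r) = ||P(:, r)||_2
    return (P / col_norms**2).T                # Q^{-2} P^T, applied column-wise

# A toy coarsening of N = 5 nodes into n = 2 supernodes
P = np.array([[1, 0],
              [1, 0],
              [1, 0],
              [0, 1],
              [0, 1]], dtype=float)

assert np.allclose(easy_pinv(P), np.linalg.pinv(P))         # matches Moore-Penrose
assert np.allclose(easy_pinv(P) @ np.ones(5), np.ones(2))   # constant vectors are preserved
```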
4.2 POOLING WITH COARSENING MATRIX
Suppose we have the learned coarsening matrix at the $l$-th layer, $P_l \in \mathbb{R}^{N_l \times N_{l+1}}$. With $P_l$, the SSGPool layer coarsens the graph, generating a new coarsened adjacency matrix $A_{l+1}$ and a new node feature matrix $X_{l+1}$.

Most previous coarsening-based pooling approaches, such as Ying et al. (2018) and Ma et al. (2019), used a quadratic form of the adjacency matrix to obtain the new coarsened adjacency matrix, $A_{l+1} = P_l^\top A_l P_l$. Instead, we use the Laplacian matrix $L_l$ to obtain the new coarsened adjacency matrix $A_{l+1}$:

$L_{l+1} = P_l^\top L_l P_l, \quad A_{l+1} = D_{l+1} - L_{l+1}$   (5)

where $D_{l+1}$ is the degree matrix obtained by keeping only the diagonal terms of $L_{l+1}$.

Utilizing $L_l$ instead of $A_l$ has two noteworthy benefits. First, the obtained coarsened adjacency matrix is not diagonally dominant: the coarsened graph obtained from the quadratic form of $A$ has significantly stronger self-loops than any other connections, and these self-loops might hamper the message passing of GNNs. Second, our coarsening is consistent with regard to the Laplacian form: the Laplacian matrix of the coarsened graph retains the desired spectral properties; e.g., the nullspace of $L$ is preserved both by coarsening and lifting, because $P_l \mathbf{1}_{N_{l+1}} = \mathbf{1}_{N_l}$ and $P_l^{+} \mathbf{1}_{N_l} = \mathbf{1}_{N_{l+1}}$.

Further, the new node feature matrix of the next layer, $X_{l+1}$, is obtained as follows:

$Z_l = \mathrm{GNN}_{l,\mathrm{embed}}(A_l, X_l), \quad X_{l+1} = P^{+}_{l,\mathrm{soft}} Z_l$   (6)

where $P_{\mathrm{soft}}$ is the softmax output of $P$, which will be covered in the next section. It is worthwhile to note that while most previous methods use the transpose of $P_{\mathrm{soft}}$, so that the features of upper nodes are obtained as the sum over the original nodes (sum pooling), we use the pseudo-inverse of $P_{\mathrm{soft}}$ to take a weighted average of the node features (average pooling) to get the features of the supernodes. As the number of nodes in each cluster can vary, this choice stabilizes learning.

4.3 LEARNING THE COARSENING MATRIX
We describe how SSGPool generates the coarsening matrix at the $l$-th layer, $P_l \in \mathbb{R}^{N_l \times N_{l+1}}$. For convenience, we drop the layer notation $l$ and denote $p_i = P(i, :)$. According to Definition 1, $p_i$ can be defined as a categorical random variable with probabilities $\pi_{i1}, \pi_{i2}, \ldots, \pi_{in}$, where $n$ is the number of nodes in the coarsened graph.

It is straightforward to sample from $p_i$, but we cannot backpropagate gradients through the sampling since the variables are discrete. A recently popular approach to handling this difficulty is to sample from a continuous approximation of the discrete distribution (Maddison et al., 2016; Jang et al., 2017) and use the reparameterization trick to get (biased) gradients from this approximation. In this work, we simply borrow the gradient trick of the Straight-Through Gumbel-Softmax estimator (Jang et al., 2017) to ensure end-to-end training. The probability $\Pi$ is estimated via the GNN module followed by a softmax function:

$\Pi = P_{\mathrm{soft}} = \mathrm{softmax}(\mathrm{GNN}_{\mathrm{pool}}(A, X))$   (7)

Finally, $p_i$ can be drawn as the one-hot of the argmax over the softmax output:

$p_i = \mathrm{onehot}\left(\arg\max_j \, [\pi_{ij}]\right)$   (8)

Although the original ST-Gumbel trick utilizes samples drawn from $g \sim \mathrm{Gumbel}(0, 1)$ to give stochasticity, we drop this sampling procedure and choose the argmax deterministically from the probabilities $\Pi$.
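Putting Equations 5-8 together, one SSGPool step can be sketched in PyTorch as follows. This is our illustrative reconstruction, not the authors' released code; `gnn_pool` and `gnn_embed` stand in for the GNN modules of Equations 7 and 6, and dense matrices are assumed:

```python
import torch
import torch.nn.functional as F

def ssgpool_step(A, X, gnn_pool, gnn_embed):
    # Eq. (7): soft assignment P_soft = softmax(GNN_pool(A, X)), shape (N_l, N_{l+1})
    P_soft = F.softmax(gnn_pool(A, X), dim=-1)

    # Eq. (8) with a straight-through estimator: one-hot argmax in the forward
    # pass, gradients of the soft assignment in the backward pass.
    P_hard = F.one_hot(P_soft.argmax(dim=-1), P_soft.size(-1)).type_as(P_soft)
    P = P_hard + P_soft - P_soft.detach()

    # Eq. (5): coarsen the Laplacian, then recover the adjacency matrix.
    D = torch.diag(A.sum(dim=-1))
    L = D - A
    L_next = P.t() @ L @ P
    A_next = torch.diag(torch.diagonal(L_next)) - L_next

    # Eq. (6): average-pool node embeddings using the easy pseudo-inverse
    # P^+ = Q^{-2} P^T of Proposition 1, applied here to the soft assignment.
    Z = gnn_embed(A, X)
    col_norm_sq = (P_soft ** 2).sum(dim=0).clamp(min=1e-12)
    P_pinv = (P_soft / col_norm_sq).t()
    X_next = P_pinv @ Z
    return A_next, X_next, P
```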
4.4 SPECTRAL SIMILARITY OF GRAPHS AS REGULARIZATION
In this section, we propose a spectral regularizer for graph pooling, which enforces the coarsening matrices to keep the coarsened graph spectrally similar to the original graph. To start with, the relationship between the original graph and the final coarsened graph is expressed in compact form:

$L_f = P^\top L_0 P, \quad \tilde{L}_0 = P^{\mp} L_f P^{+}$   (9)

where $L_f$ and $L_0$ are the Laplacian matrices of the final coarsened graph and the original graph, $P = P_0 P_1 \cdots P_f$ and $P^{+} = P_f^{+} \cdots P_0^{+}$. By virtue of Proposition 1, the pseudo-inverse of $P_l$ can be calculated in linear time.

In spectral graph theory, the second smallest eigenvector of the graph Laplacian, also known as the Fiedler vector, captures the overall topology of the graph, as it maps adjacent nodes to similar values: the larger the difference between the values of two nodes, the farther their topological distance.

The Fiedler vector $u$ of the original graph can be coarsened and lifted given a coarsening matrix $P$:

$u_c = P^{+} u, \quad \tilde{u} = P u_c$   (10)

where $\tilde{u}$ is the vector obtained by sequentially coarsening and lifting the Fiedler vector $u$ given the matrix $P$. Then $\tilde{u}$ is the best approximation of $u$ given $P$, because $PP^{+}$ is a projection matrix of smaller rank (see the proof of Proposition 1). Therefore, as the distance between $\tilde{u}$ and $u$ gets smaller, the original graph and the coarsened graph become more similar to each other in terms of global structure. Finally, we propose a spectral regularizer term based on cosine similarity:

$\mathcal{L}_{\mathrm{Total}} = \mathcal{L}_{\mathrm{Task}} + \lambda \left( 1 - \frac{u^\top \tilde{u}}{\lVert u \rVert \, \lVert \tilde{u} \rVert} \right)$   (11)

where $\mathcal{L}_{\mathrm{Task}}$ is the task-specific loss term and $\lambda$ is a hyperparameter for the regularization term.

Connection to Restricted Spectral Similarity (RSS). Following Loukas (2019), the RSS can be reformulated through the following induced semi-norms:

$\lVert u \rVert_L = \sqrt{u^\top L u}, \quad \lVert u_c \rVert_{L_c} = \sqrt{u_c^\top L_c u_c}, \ \text{ where } u_c = P^{+} u;$
$(1 - \epsilon) \lVert u \rVert_L \le \lVert u_c \rVert_{L_c} \le (1 + \epsilon) \lVert u \rVert_L$   (12)

Then, we can obtain an upper bound on the difference between the semi-norms of the original graph and the coarsened graph with a triangle inequality:

$\big|\, \lVert u \rVert_L - \lVert u_c \rVert_{L_c} \,\big| = \big|\, \lVert u \rVert_L - \lVert \tilde{u} \rVert_L \,\big| \le \lVert u - \tilde{u} \rVert_L, \quad \text{where } \tilde{u} = P u_c$   (13)

Therefore, reducing the distance between $u$ and $\tilde{u}$ with our regularization term makes the original graph and the coarsened graph spectrally similar.
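As an illustration of Equations 10-11, the regularizer can be computed in a few lines of numpy. This is a sketch under our assumptions; in actual training, the cosine term would be computed with differentiable operations on the soft assignment so that gradients reach $P$:

```python
import numpy as np

def fiedler_vector(L):
    # Second-smallest eigenvector of the (symmetric) graph Laplacian.
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, 1]

def spectral_penalty(L, P, P_pinv):
    u = fiedler_vector(L)          # Fiedler vector of the original graph
    u_c = P_pinv @ u               # Eq. (10): coarsen ...
    u_tilde = P @ u_c              # ... and lift back to R^N
    cos = u @ u_tilde / (np.linalg.norm(u) * np.linalg.norm(u_tilde) + 1e-12)
    return 1.0 - cos               # the regularizer of Eq. (11), to be scaled by lambda
```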
5 EXPERIMENTS
In this section, we highlight the advantages of SSGPool compared to other competitive graph pooling algorithms on various graph benchmark datasets. Also, we apply our method to a real-world problem, image retrieval with visual scene graphs. For the experiments, we use five competitive baselines recently proposed for differentiable graph pooling. All the experimental details, including baseline models, benchmarks and implementation details, are in Appendix B.

5.1 GRAPH CLASSIFICATION TASK WITH GRAPH BENCHMARK DATASETS
We evaluate SSGPool on a variety of graph datasets from benchmarks commonly used for graph classification tasks. To examine the general ability of our model, four datasets are selected according to their amount of data and graph size: MUTAG (Debnath et al., 1991), ENZYME (Borgwardt et al., 2005), PROTEINS (Feragen et al., 2013) and NCI1 (Shervashidze et al., 2011).

Table 1: Average accuracy and standard deviation for the graph benchmarks. DiffPool* denotes DiffPool with the additional losses originally proposed in Ying et al. (2018), and SSGPool-NoReg denotes SSGPool without regularization. We highlight the best results (bold) and second-best results (blue).

Model                          | MUTAG       | ENZYME      | PROTEINS    | NCI1
GNN w/o Pooling                | .746 ± .007 | .301 ± .023 | .726 ± .007 | .733 ± .005
SortPool (Zhang et al., 2018)  | .832 ± .016 | .277 ± .020 | .730 ± .012 | .734 ± .011
gPool (Gao & Ji, 2019)         | .732 ± .018 | .303 ± .019 | .734 ± .006 | .721 ± .004
SAGPool (Lee et al., 2019)     | .803 ± .015 | .326 ± .028 | .730 ± .006 | .738 ± .009
EdgePool (Diehl, 2019)         | .770 ± .033 | .329 ± .025 | .731 ± .004 | .751 ± .006
DiffPool* (Ying et al., 2018)  | .853 ± .019 | .283 ± .043 | .756 ± .009 | .743 ± .009
SSGPool-NoReg                  | .846 ± .015 | .369 ± .021 | .745 ± .006 | .752 ± .005
SSGPool                        | .852 ± .009 | .382 ± .012 | .750 ± .005 | .753 ± .010

Table 1 shows the overall results on the graph benchmarks compared to other state-of-the-art graph pooling methods. The averages and standard deviations are obtained from 10 runs of a 10-fold cross-validation test. First of all, we highlight that the proposed regularization term significantly improves performance across all datasets. This implies that preserving global structure while pooling graphs has a substantial impact on graph representation learning.

We observed that SSGPool shows the best performance on the ENZYME and NCI1 datasets. Even though SSGPool achieves second-best performance on the MUTAG and PROTEINS datasets, its results remain very competitive with the other methods. Also, it is worthwhile to note that the four selected datasets have distinct statistics in terms of the amount of data and graph size. As a result, all comparative models show considerably different performance depending on the dataset. For example, DiffPool shows the best performance on MUTAG and PROTEINS, but achieves degraded scores on ENZYME and NCI1. In contrast, the proposed method consistently performs well across all datasets. We also report results for the benchmarks with varying hyperparameters (e.g., the number of pooling layers, the pooling ratio and the presence of the regularizer) in Appendix C.

5.2 IMAGE RETRIEVAL TASK WITH VISUAL SCENE GRAPHS
To obtain a more intuitive interpretation of the pooling, we apply SSGPool to image retrieval via visual scene graph matching. A visual scene graph, initially proposed in Johnson et al. (2015), represents the contents of an image in the form of a graph consisting of three kinds of components: objects, their attributes, and relationships between two objects.

Table 2: The results of image retrieval in terms of NDCG@K. The higher the NDCG score, the better the performance.

Model     | K=5  | K=10 | K=20 | K=30 | K=40 | K=50
ResNet152 | .720 | .728 | .742 | .756 | .771 | .786
GNNs      | .785 | .799 | .820 | .836 | .845 | .862
SAGPool   | .789 | .803 | .824 | .839 | .852 | .865
DiffPool  | .790 | .805 | .825 | .840 | .853 | .865
SSGPool   | .796 | .810 | .830 | .844 | .857 | .869

Visual scene graphs can be used to build an image-to-image retrieval system (Gordo & Larlus, 2017), which returns a list of images sorted by relevance with respect to an image query. In an image retrieval system based on visual scene graphs, the relevance measure is defined as a degree of matching between visual scene graphs. The matching between two visual scene graphs, either annotated by humans or algorithmically generated, can be evaluated by embedding each graph into a fixed-length vector and computing the cosine similarity between the embeddings.

To train and evaluate the image retrieval system, we need a ground-truth measure of image relevance. Following prior work (Gordo & Larlus, 2017), which demonstrates that the similarity between image captions is highly correlated with human ratings of image relevance, we utilize caption similarity as a proxy metric in our experiments.
We use S-BERT (Reimers & Gurevych, 2019), a transformer pretrained to generate sentence embeddings, to compute the similarity between captions. A proxy relevance measure between two images is obtained by first computing the S-BERT representations of their captions and then taking the cosine similarity between them. With the proxy relevance score defined, Normalized Discounted Cumulative Gain (NDCG) is used to measure retrieval performance.

The proxy relevance score also provides supervision for learning the graph representation. In every iteration, a batch of training image pairs (and the corresponding visual scene graph pairs) is sampled, and the squared error between the cosine similarity of the embeddings in each pair and their proxy relevance score is minimized. To obtain both captions and scene graphs for images, we use the 48,220 images which belong to both the MS COCO dataset (Lin et al., 2014) and the Visual Genome (VG) dataset (Krishna et al., 2017). Following the Stanford split (Xu et al., 2017), we manually split the VG-COCO dataset into 36,601 train, 1,000 validation and 5,000 test images. ResNet152 (Simonyan & Zisserman, 2014), GNNs without pooling, DiffPool and SAGPool are chosen as comparative baselines. Table 2 shows the performance on the image retrieval task. Among all models, SSGPool achieves the best results across all NDCG scores.

Figure 3: Left: an original image corresponding to the scene graphs on the right. Right: pooling results for each graph at each layer. Nodes of the same color are mapped to the same coarsened node in the pooled layer. Since DiffPool coarsens the graph with a soft assignment matrix, we selected the top-1 coarsened node for each original node for visualization. The grey nodes in layer 2 are left-over coarsened nodes that were not chosen as top-1 by any original node. Some significant node labels are specified to demonstrate the different properties of the methods.

To compare the learned hierarchical structures among the graph pooling methods, we visualize the coarsening results of each model (Figure 3). As shown in the first column, SSGPool coarsens the graph while reflecting the structural information well. Due to this characteristic, the trees and their attributes (leaf-green) are coarsened to a single node, and the deer eating grass and the zebra are coarsened to another node. Furthermore, it can be seen that our method successfully maintains the overall topological structure of the original graph in the upper layer. DiffPool, which takes a coarsening form like our method, instead tends to coarsen nodes with similar features together. Also, as DiffPool has a dense coarsening matrix, the upper-layer graph cannot reflect the original graph structure and takes the form of a fully connected graph. Lastly, SAGPool constructs hierarchies by selecting important nodes. We can see that it selects important nodes (e.g., eating, deer, zebra) but loses a considerable amount of other peripheral information. Additionally, SAGPool's upper-layer graph loses structural information from the original graph because it masks out all nodes that are not selected. We attach more examples of qualitative results in Appendix D.
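The training objective described above can be summarized in a short sketch. This is our paraphrase with hypothetical names: `embed_graph` stands for the GNN+SSGPool encoder, and `proxy_rel` for the precomputed S-BERT caption similarities:

```python
import torch
import torch.nn.functional as F

def retrieval_loss(embed_graph, batch_pairs, proxy_rel):
    """Squared error between graph-embedding cosine similarity and the
    S-BERT caption-similarity proxy, averaged over a batch of image pairs."""
    losses = []
    for (g1, g2), rel in zip(batch_pairs, proxy_rel):
        z1, z2 = embed_graph(g1), embed_graph(g2)       # fixed-length vectors
        sim = F.cosine_similarity(z1, z2, dim=-1)
        losses.append((sim - rel) ** 2)
    return torch.stack(losses).mean()
```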
6 CONCLUSIONS
In this paper, we proposed an end-to-end graph pooling method, Spectrally Similar Graph Pooling. In contrast to previous work, our method learns compositional hierarchies while preserving the global structure of the graph. The proposed method shows competitive results not only on graph benchmark datasets but also on a real-world problem, image retrieval with visual scene graphs. We also show that our proposed method learns meaningful hierarchical structures.
N6stbIy6rGE
This paper tackles an interesting problem on GNNs. However, the current presentation and evaluation of the paper are not good enough
4: Ok but not good enough - rejection
Graph Neural Networks (GNNs) are an increasingly popular topic of research in machine learning on irregular graph-structured data. This paper tackles the problem of graph pooling and presents the Spectrally Similar Graph Pooling (SSGPool) algorithm for learning hierarchical representations of graphs. The main idea of the paper is to learn a coarsening matrix (a surjective mapping of the nodes) to coarsen nodes while keeping the structural and feature information. To keep the structural information, it maximizes the similarity between the Fiedler vectors of the original and coarsened graphs, while using standard GNNs for keeping feature information. The idea of the new pooling algorithm is interesting. However, the current presentation of the paper has several shortcomings. I would suggest the following comments to further improve the quality of the paper.
• The contributions of the paper are not clearly stated in the current draft. I would suggest clearly mentioning the contributions and differentiating them from existing methods.
• The proposed pooling algorithm uses the Laplacian matrix to obtain the adjacency matrix for the coarsened graphs (Eq. 5), instead of the adjacency matrix itself, which is quite interesting and has several benefits compared to the existing approaches. I think it would be interesting to see a comparison in terms of classification accuracy (Table 1) between using the adjacency matrix and the newly proposed Laplacian matrix for obtaining the new coarsened adjacency matrix during pooling.
• My major concern is the limited evaluation of the proposed method. Only four bioinformatics datasets (MUTAG, ENZYMES, PROTEINS, and NCI1) are chosen for the comparison. I would encourage the authors to run experiments on the DD, COLLAB, IMDB-BINARY, IMDB-MULTI, REDDIT-BINARY and REDDIT-MULTI datasets and provide a comparison. Such a comparison would better highlight the performance of the proposed method on different kinds of datasets. Also, the current results are marginally improved on only two datasets, which is not enough for comparison.
• Wang et al. (2020) have recently proposed a closely related graph pooling algorithm. Their results are encouraging, and the source code is publicly available. I would encourage the authors to consider this method for comparison too.
• The authors are encouraged to release their code and pre-trained models to foster reproducibility of the results.
• A proofread is suggested to avoid some minor mistakes, e.g., "Fiedler vector vector" (Section 4.4), etc.
References:
Wang, Z., & Ji, S. (2020). Second-Order Pooling for Graph Neural Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Spectrally Similar Graph Pooling ### Paper Abstract We consider the problem of learning compositional hierarchies of graphs. Even though structural characteristics of graphs can be learned by Graph Neural Networks (GNNs), it is difficult to find an overall compositional hierarchy using such flat operators. In this paper, we propose a new graph pooling algorithm, Spectrally Similar Graph Pooling (SSGPool), to learn hierarchical representations of graphs. The main idea of the proposed SSGPool algorithm is to learn a coarsening matrix which maps nodes from an original graph to a smaller number of nodes in a coarsened graph. The coarsening matrix is trained to coarsen the nodes based on their feature vectors while keeping the spectral characteristics of the original graph in the coarsened one. Although existing graph pooling methods take either feature-based pooling or structure-preserving pooling, SSGPool considers two properties simultaneously in an end-to-end manner. Experiments on various graph benchmarks show the advantage of our method compared to strong baselines. To further investigate the effectiveness of our proposed method, we evaluate our approach on a real-world problem, image retrieval with visual scene graphs. Quantitative and qualitative analyses on the retrieval problem confirm that the proposed method efficiently captures the hierarchical semantic structure of scene graphs. ### Paper Keywords ["Graph Neural Networks", "Graph Pooling", "Spectral Similarity on Graph"] ### Paper Content ABSTRACTWe consider the problem of learning compositional hierarchies of graphs. Eventhough structural characteristics of graphs can be learned by Graph Neural Net-works (GNNs), it is difficult to find an overall compositional hierarchy using suchflat operators. In this paper, we propose a new graph pooling algorithm, SpectrallySimilar Graph Pooling (SSGPool), to learn hierarchical representations of graphs.The main idea of the proposed SSGPool algorithm is to learn a coarsening matrixwhich maps nodes from an original graph to a smaller number of nodes in a coars-ened graph. The coarsening matrix is trained to coarsen the nodes based on theirfeature vectors while keeping the spectral characteristics of the original graph inthe coarsened one. Experiments on various graph benchmarks show the advantageof our method compared to strong baselines. To further investigate the effective-ness of our proposed method, we evaluate our approach on a real-world problem,image retrieval with visual scene graphs. Quantitative and qualitative analyses onthe retrieval problem confirm that the proposed method efficiently captures thehierarchical semantic structure of scene graphs.1 I NTRODUCTIONBy virtue of the recent progress on graph neural networks (GNNs) (Gori et al., 2005; Scarselli et al.,2008; Bruna et al., 2013; Kipf & Welling, 2016; Gilmer et al., 2017; Veli ˇckovi ́c et al., 2018), varioustypes of data including structural data can be dealt with using neural network algorithms. 
Whileconventional neural network algorithms, such as convolutional neural networks and recurrent neuralnetworks, take regular structured inputs (images with grid pixel structure and sound signals withMarkovian temporal dependencies), GNNs have been recently suggested as a method for extendingthe scope of the inputs to graphs having irregular structures, such as molecular data, knowledgegraphs, social networks and visual scene graphs. Most GNNs attempt to implicitly reflect the struc-tural information through node (graph) representations. In other words, GNNs assign feature vec-tors to each node and update the node features by transforming and aggregating information fromthe neighborhoods. Even though structural characteristics can be learned by applying these mes-sage passing steps repeatedly, it is difficult to find an overall compositional hierarchy using such flatoperators.Recent work has proposed using pooling methods such as CNNs in order to discover hierarchi-cal structures between nodes in GNNs (Vinyals et al., 2015; Ying et al., 2018; Zhang et al., 2018;Lee et al., 2019; Gao & Ji, 2019; Diehl, 2019; Ma et al., 2019). These studies are divided intotwo categories depending on what information is mainly used for the pooling operator: structure-based approaches and feature-based approaches. Structure-based approaches learn node featureswith GNNs, however, the original graph is coarsened by deterministic graph clustering algorithmsbased on graph theory. Therefore, the resultant coarsened graph reflects the topology of the originalgraph, but the node features are not used during coarsening. Also the deterministic clustering meth-ods are not end-to-end trainable. On the other hand, feature-based approaches learn to assign nodesin the original graph to the nodes in the coarsened graph based on the node feature vector. Eventhough these approaches can be trained in an end-to-end manner, it is hard to maintain the topologyinformation of the original graph.In this paper, we propose a new graph pooling method, Spectrally Similar Graph Pooling (SSGPool),which makes use of both node features and structural information between the nodes (Figure 1).The main idea of SSGPool is to learn a coarsening matrix which maps nodes from an originalgraph to a smaller number of nodes in a coarsened graph. The coarsening matrix is trained to1Under review as a conference paper at ICLR 2021Figure 1: An illustrative example of compositional hierarchy in a visual scene graph. (a) is anoriginal image and (b) is a hierarchical structure of visual scene graph for the image.coarsen the nodes based on correlations between their feature vectors while maintaining the topologyinformation using spectral characteristics of the original graph. To utilize the node feature vectors,SSGPool basically builds upon conventional GNN algorithms. In addition, structural similaritiesbetween two different sized graphs are defined in order to be used as a regularizer during training.By having structural similarities act as a regularizer, SGGPool binds nodes having similar featurevectors while keeping the spectral characteristics of the original graphs in an end-to-end manner.Experiments on various graph benchmarks show the advantage of our method compared to strongbaselines. To further investigate the effectiveness of our proposed method, we evaluate our approachon a real-world problem, image retrieval with visual scene graphs. 
Quantitative and qualitative anal-yses on the retrieval problem confirm that the proposed method efficiently captures the hierarchicalsemantic structures of scene graphs.The remainder of the paper is organized as follows. In Section 2, we review related work aboutthe graph pooling algorithms. Next, we introduce notations about the graphs, GNN algorithms andspectral similarity between graphs as preliminaries. After that, the proposed SSGPool method isexplained in detail and experimental results on various datasets, comparing our proposed algorithmwith other well-known graph pooling algorithms are presented.2 R ELATED WORKPooling operations in graph neural networks (GNNs) can scale down the size of inputs and enlargethe receptive fields, thus giving rise to better generalization and performance. In this section, wereview several recent methods for graph pooling coupled with GNNs. Graph pooling methods canbe grouped into the following two categories: structure-based pooling and feature-based pooling.2.1 S TRUCTURE -BASED POOLINGIncluding earlier works of neural networks on graph, several proposed GNNs perform pooling withexisting graph clustering algorithm. These methods learn the representations of graphs in 2-steps:First these pooling methods build hierarchical structures using a graph clustering algorithm. Next,they learn embeddings of nodes in each layer based on GNN modules. Bruna et al. (2013) built ahierarchy of the graph with agglomerative clustering. Defferrard et al. (2016) and Fey et al. (2018)used the Graclus algorithm (Dhillon et al., 2007) which computes graph clustering without eigen-vectors. Simonovsky & Komodakis (2017) constructed the graph hierarchies through a combineduse of spectral polarity and Kron reduction. More recently, Ma et al. (2019) proposed EigenPool,which used spectral graph clustering methods to produce a coarsened graph. These methods leveragetopological information from graphs in order to produce coarsened graph. However these methodsdo not use node features which have useful information for learning representations of graphs. Fur-thermore, as the existing graph clustering algorithms are not differentiable, they are incapable oflearning in an end-to-end fashion.2Under review as a conference paper at ICLR 20212.2 F EATURE -BASED POOLINGIn contrast to structure-based pooling, several end-to-end trainable pooling methods are proposed.Ying et al. (2018) proposed a differentiable graph pooling module (DiffPool) to softly assign nodesto a set of clusters using neural networks, forming fully connected coarsened graphs through adense cluster assignment matrix. Gao & Ji (2019) and Lee et al. (2019) devised a top-K nodeselection-based pooling method (gPool and SAGPool) to form an induced subgraph for the nextlayer. Although it is efficient, this method loses the completeness of the graph structure information.In addition, Vinyals et al. (2015) proposed Set2Set, the global pooling operation by aggregatinginformation through RNNs. Zhang et al. (2018) proposed SortPool which pools graphs accordingto the feature map values that are sorted in descending order. Diehl (2019) designed a poolingoperation by contracting the edges (EdgePool). The contracting scores are calculated by featuresfrom the two incident nodes. These approaches learn hierarchical structures from node features withdifferentiable parameters. 
However, they tend not to reflect the topology information of the graphfor pooling.3 P RELIMINARIES3.1 G RAPH NOTATIONSA graphGis denoted as a pair (V;E)withV=fv1;:::;v Ngthe set of nodes (vertices), and E2VV the set of edges. Each node viis associated with a feature vector xi2Rf. To makenotation more compact, the set of node feature vectors of graph Gis denoted as a matrix X=[x1;x2;:::;x N]>2RNf. Also, a graph has a N-by-Nweighted adjacency matrix AwhereAi;jrepresents the weight of the edge between viandvjand a degree matrix D, a diagonal matrix whichcontains information about the degree of each node — that is, the sum of edge weights attached toeach node. As usual, we denote the combinatorial Laplacian Lof graphGwithL=DAand letkandkbe thek-th (smallest) eigenvalue and corresponding eigenvector of Lrespectively.3.2 G RAPH NEURAL NETWORKSDue to an ever increasing interest in combining deep learning and structured approaches, variousgraph-based neural networks have been proposed over the years. Based on spectral graph the-ory (Chung & Graham, 1997), approaches which convert graphs to the spectral domain and ap-ply convolution kernels of the graphs have been proposed (Bruna et al., 2013; Henaff et al., 2015;Kipf & Welling, 2016). Gilmer et al. (2017) suggested the message passing framework, which en-compasses a number of previous neural models for graphs under a differentiable message passinginterpretation. Xu et al. (2018) analyzed the representation power of various GNN architectures andproposed Graph Isomorphism Networks (GIN), where representational power is equal to the powerof the Weisfeiler-Lehman test.In this paper, we use a simple form of a message passing function similar to GIN.M(A;X ) = (X+D12AD12X)Wm (1)whereWm2Rff. After that, we define a single GNNs layer as follows:GNN (A;X ) = [(M2(A;(M1(A;X )))) ;(M1(A;X ))]Wg (2)whereM1andM2are message passing layer, is an activation function, [X;Y]denotes row-wiseconcatenation of two matrix and Wg2R2ff0is a learnable parameter for GNN (A;X ). In the restof the paper, we use GNN (A;X )in Equation equation 2 as a base GNNs module.13.3 S PECTRAL SIMILARITY BETWEEN GRAPHSSpectral graph theory has been considered as a powerful way to describe structural characteristicsof graphs. Therefore, structural similarity between two graphs can be clearly defined by comparingthe spectral properties of graphs.1It is just for fair comparison with stable performances for all models. It is a non-critical choice and it canbe substituted by any GNN architectures.3Under review as a conference paper at ICLR 2021Figure 2: The architecture of SSGPool layer combined with graph neural networks. The SSGPoollearns coarsening matrices Pto minimize task-specific loss while retaining spectral similarity. Torepresent the spectral similarity, we use the Fiedler vector of graph Laplacian.For two graphs having the same number of nodes, Spielman & Srivastava (2011) and Spielman &Teng (2011) proposed spectral similarity to determine how closely a graph Gsapproximates a G:8f2RN;(1)f>Lff>Lsf(1 +)f>Lf (3)whereLis a Laplacian matrix of a graph G. If the equation holds, we can say that Gsis an-spectralapproximation of G.For the graph coarsening problem which has different number of nodes between the original graphsand coarsened graphs, Loukas & Vandergheynst (2018) generalized it by restricting to first K-eigenspace: the restricted spectral similarity (RSS) . 
If there is a mapping matrix P2RNnbe-tween original vertex set V=fv1;:::;v Ngand the coarsened vertex set Vc=fv01;:::;v0ng, then RSSis defined as follows:Restricted Spectral Similarity (RSS) .Suppose that there exist an integer Kand positive constantk, such that for every kK,(1k)u>kLuku>k~Luk(1 +k)u>kLuk;~L=PLcP+; L c=P>LP (4)whereukisk-th eigenvector of L,P+andPare pseudo-inverse of Pand its transpose, and ~LisLaplacian matrix of lifted (reverse of coarsening) graph of GcfromRnback to RN. Then, theGCis said to satisfy the restricted spectral similarity property with the RSS constants fkgKk=1.4 S PECTRALLY SIMILAR GRAPH POOLINGWe suggest a new graph pooling algorithm which learns coarsening matrix to construct adjacencymatrix and node feature matrix of upper layers while keeping spectral characteristics of originalgraphs. The main idea is to keep the spectral information by maximizing the similarity between theFiedler vector of original graphs and its coarsened ones. As two vectors are on different dimensionalspaces, the vector of the coarsened graph is lifted back to the original space using the inverse of thecoarsening matrix. In order to make the whole process end-to-end trainable, we define the coarsen-ing matrix and derive the easy inversion of the coarsening matrix. Figure 2 shows the architectureof proposed method.4.1 G RAPH COARSENINGThe coarsening can be expressed with a surjective map (i.e., many-to-one) ':VN!V nbetweenthe original vertex set VNand the smaller vertex set Vn. Then, graph coarsening can be defined viaa coarsening matrix:4Under review as a conference paper at ICLR 2021Definition 1 (Coarsening matrix). MatrixP2f0;1gNnis a coarsening matrix with regard tographGif and only if it satisfies the condition that it is a surjective mapping of the vertex set,meaning that if P(i;r) = 1 thenP(i;r0) = 0 for everyr06=r.Similar to Loukas (2019), the expensive pseudo-inverse computation for Pcan be substituted bysimple transposition and re-scaling:Proposition 1 (Easy inversion). The pseudo-inverse of a coarsening matrix Pis given byP+=Q2P>, whereQ2Rnnis a diagonal matrix with Q(r;r) =jjP(:;r)jj2.4.2 P OOLING WITH COARSENING MATRIXSuppose we have the learned coarsening matrix at l-th layer,Pl2RNlNl+1. WithPl, SSGPoollayer coarsens the graph, generating a new coarsened adjacency matrix Al+1and a new node featurematrixXl+1.Most previous coarsening based pooling approaches such as Ying et al. (2018) and Ma et al. (2019)used a quadratic form of the adjacency matrix to obtain new coarsened adjacency matrix, Al+1=P>lAlPl. Instead, we use the Laplacian matrix Llto obtain a new coarsened adjacency matrix Al+1:Ll+1=P>lLlPl; A l+1=Dl+1Ll+1 (5)whereDl+1is a degree matrix obtained by leaving only diagonal terms of Ll+1.UtilizingLlinstead ofAlhas two noteworthy benefits. First, the obtained coarsened adjacency ma-trix is not diagonal-dominant: the coarsened graph obtained from the quadratic form of Ahas signif-icantly stronger self-loops than any other connections, and these self-loops might hamper the mes-sage passing of GNNs. Second, our coarsening is consistent with regard to the Laplacian form: theLaplacian matrix of the coarsened graph retains spectral properties as is desired, e.g., the nullspaceofLis preserved both by coarsening and lifting because Pl1Nl+1=1Nl+1andP+l1Nl=1Nl.Further, the new node feature matrix of the next layer Xl+1is obtained as follows:Zl=GNN l;embed (Al;Xl)Xl+1=P+l;softZl (6)wherePsoftis softmax output of P, which will be covered in the next section. 
It is worthwhile tonote that while most of previous methods use the form of transpose of Psoftso that features of uppernodes are obtained by sum of the original nodes (sum pooling), we use pseudoinverse of Psofttoweighted average the node features (average pooling) to get features of supernodes. As the numberof nodes in each cluster can be vary, our method can stabilize the learning.4.3 L EARNING THE COARSENING MATRIXWe describe how SSGPool generates the coarsening matrix at the l-th layer,Pl2RNlNl+1. Forconvenience, we drop the notation of layer land denote pi=P(i;:). According to Definition 1, pican be defined as a categorical random variable with probabilities i1;i2;:::; in, wherenis thenumber of nodes in the coarsened graph.It is straightforward to sample from pi, but we cannot backpropagate gradients though the samplingsince the variables are discrete. A recently popular approach to handle this difficulty is to samplefrom a continuous approximation of the discrete distribution (Maddison et al., 2016; Jang et al.,2017), and use the reparameterization trick to get (biased) gradients from this approximation. In thiswork, we simply borrow the gradient trick of Straight-Through Gumbel-Softmax estimator (Janget al., 2017) to ensure end-to-end training. The probability is estimated via the GNN modulefollowed by softmax function: =Psoft=softmax (GNN pool(A;X )) (7)Finally, the pican be drawn by one-hot of the argmax on the softmax output:pi=onehotarg maxj[ij](8)Although the original ST-Gumbel trick utilizes samples drawn from gGumbel (0;1)to givestochasticity, we drop this sampling procedure and choose the max jonly with the probability .5Under review as a conference paper at ICLR 20214.4 S PECTRAL SIMILARITY OF GRAPHS AS REGULARIZATIONIn this section, we propose the spectral regularizer for a graph pooling, which enforces coarsen-ing matrices to keep coarsened graph spectrally similar to the original graph. To start with, therelationship between the original graph and the final coarsened graph is expressed in compact form:Lf=PL0P>;~L0=P+LfP (9)whereLfandL0are Laplacian matrices of the final coarsened graph and the original graph, P=PfP0andP+=P+0P+f. By virtue of Proposition 1, the pseudo-inverse of Plcan becalculated in linear time.In spectral graph theory, the second smallest eigenvector of graph Laplacian, also known as Fiedlervector, entails the overall topology information of graphs, as it is the function that maps adjacentnodes with similar values: The larger difference between values of nodes has the farther topologicaldistance between nodes is.The Fiedler vector uof the original graph can be coarsened and lifted given a coarsening matrix P:uc=P+u;~u=Puc (10)where ~uis a vector that has been sequentially coarsened and lifted from Fiedler vector vector ugiven the matrix P. Then, the ~uis the best approximation of ugivenP, because the PP+is theprojection matrix with a smaller rank (See the proof of Proposition 1). Therefore, as the distancebetween ~uandugets closer, the original graph and coarsened graph become more similar to eachother in terms of global structure. Finally, we propose the spectral regularizer term based on cosinesimilarity:LTotal=LTask+1u>~ujujj~uj(11)whereLTaskis task-specific loss term and is a hyperparameter for the regularization term.Connection to Restricted Spectral Similarity (RSS). 
Followed by Loukas (2019), The RSS canbe re-formulated through the following induced semi-norm:jjujjL=pu>Lu;jjucjjLc=qu>cLcuc;whereuc=P+u(1)jjucjjLjjucjjLc(1 +)jjujjL (12)Then, we can obtain an upper bound of difference between semi-norms of the original graph and thecoarsened graph with a triangular inequality.jju~ujjLjjujjLjjjujjLjjucjjLcjjjujjL;where ~u=Puc (13)Therefore, reducing the distance between uand~uwith our regularization term makes the originalgraph and coarsened graph to be spectrally similar.5 E XPERIMENTSIn this section, we highlight the advantages of SSGPool compared to other competitive graph poolingalgorithms with various graph benchmark datasets. Also, we apply our method to a real-worldproblem, image retrieval with visual scene graphs. For the experiments, we use five competitivebaselines recently proposed for differentiable graph pooling. All the experimental details includingbaseline models, benchmarks and implementation details are in Appendix B.5.1 G RAPH CLASSIFICATIONS TASK WITH GRAPH BENCHMARK DATASETSWe evaluate SSGPool on a variety of graph datasets from benchmarks commonly used for graphclassification tasks. To examine the general ability of our model, four datasets are selected accordingto their amount of data and graph size: MUTAG (Debnath et al., 1991), ENZYME (Borgwardt et al.,2005), PROTEINS (Feragen et al., 2013) and NCI1 (Shervashidze et al., 2011).6Under review as a conference paper at ICLR 2021Table 1: Average accuracy and standard deviation for graph benchmarks are presented. The Diff-Pool* denotes DiffPool with additional losses originally proposed in Ying et al. (2018). Also,SSGPool-NoReg indicates the SSGPool without regularization. We highlight the best results ( bold )and second best results (blue).Model MUTAG ENZYME PROTEINS NCI1GNN w/o Pooling .746 .007 .301.023 .726.007 .733.005SortPool (Zhang et al., 2018) .832 .016 .277.020 .730.012 .734.011gPool (Gao & Ji, 2019) .732 .018 .303.019 .734.006 .721.004SAGPool (Lee et al., 2019) .803 .015 .326.028 .730.006 .738.009EdgePool (Diehl et al., 2019) .770 .033 .329.025 .731.004 .751.006DiffPool* (Ying et al., 2018) .853.019 .283.043 .756.009 .743.009SSGPool-NoReg .846 .015 .369.021 .745.006 .752.005SSGPool .852 .009 .382.012 .750.005 .753.010Table 1 shows overall results for graph benchmarks compared to other state-of-the-art graph poolingmethods. The average and standard deviation are obtained from 10 times of 10-fold cross valida-tions test. First of all, we highlight that the proposed regularization term significantly improvesperformance across all datasets. This implies that preserving global structures while simultaneouslypooling graphs has a substantial impact on graph representation learning.We observed that, for ENZYME and NCI datasets, the SSGPool showed best performance. Eventhough the SSGPool achieves second best performance in MUTAG and PROTEINS datasets, itshows very competitive results compared to other methods. Also, it is worthwhile to note that the se-lected four datasets have distinct statistics in terms of the number of data and graph size. As a result,all comparative models show considerably different performance depending on the datasets. For ex-ample, The DiffPool shows best performance at MUTAG and PROTEINS but for the ENZYME andNCI1, it achieves degraded scores. However, the proposed method consistently showed good perfor-mance across all datasets. 
We also report the results for benchmarks with varying hyperparameters(e.g., the number of pooling layer, pooling ratio and existence of regularizer) in Appendix C.5.2 I MAGE RETRIEVAL TASK WITH VISUAL SCENE GRAPHTable 2: The results of image retrieval interms of NDCG. Higher the NDCG score is,better the performance.NDCGModel 5 10 20 30 40 50ResNet152 .720 .728 .742 .756 .771 .786GNNs .785 .799 .820 .836 .845 .862SAGPool .789 .803 .824 .839 .852 .865DiffPool .790 .805 .825 .840 .853 .865SSGPool .796 .810 .830 .844 .857 .869To see more intuitive interpretation of the pooling,we apply SSGPool to perform image retrieval viavisual scene graph matching. A visual scene graph,initially proposed in Johnson et al. (2015), representscontents of an image in the form of a graph consist-ing of three kinds of components: objects, their at-tributes, and relationships between two objects.Visual scene graphs can be used to build an image-to-image retrieval system (Gordo & Larlus, 2017),which returns a list of images sorted by relevancewith respect to an image query. In an image retrievalsystem based on a visual scene graph, the relevancemeasure is defined as a degree of matching betweenvisual scene graphs. The matching between two vi-sual scene graphs can be evaluated by computing their cosine similarity between embedded visualscene graphs, either annotated by a human or algorithmically generated, into a fixed-length vector.To train and evaluate the image retrieval system, we need a ground truth measure of image relevance.Following prior work (Gordo & Larlus, 2017) which demonstrates that the similarity between imagecaptions is highly correlated to the human rating of image relevance, we utilize caption similarity asa proxy metric during our experiment. We use S-BERT (Reimers & Gurevych, 2019), a transformerpretrained to generate sentence embeddings, to compute the similarity between captions. A proxyrelevance measure between two images is obtained by first computing S-BERT representations ofthe captions and then obtaining the cosine similarity between them. With the proxy relevance score7Under review as a conference paper at ICLR 2021Figure 3: Left : An original image corresponding to the scene graphs on the right. Right : Poolingresults on each graph in each layer. Same color of nodes are meant to be mapped to the samecoarsened node in the pooled layer. Since DiffPool coarsens the graph with soft assignment matrix,we selected a top-1 coarsened node for each original node for visualization. The grey colored nodesin layer-2 are left-over coarsened nodes that were not chosen as top-1 by any original nodes. Somesignificant node labels are specified to demonstrate different properties between the methods.defined, Normalized Discounted Cumulative Gain (NDCG) is used to measure the performance ofretrieval.The proxy relevance score also provides supervision for learning graph representation. In every iter-ation, a batch of training image pairs (and corresponding visual scene graph pairs) are sampled, andthe squared error between the cosine similarity of embeddings in each pair and their proxy relevancescore is minimized. To obtain both captions and scene-graphs for images, we use 48,220 imageswhich belongs to both MS COCO dataset (Lin et al., 2014) and Visual Genome (VG) dataset (Kr-ishna et al., 2017). Following the Stanford split (Xu et al., 2017), we manually split the VG-COCOdataset with 36,601 train, 1,000 validation and 5,000 test images. 
We use ResNet152 (Simonyan &Zisserman, 2014), GNNs without pooling, DiffPool and SAGPool are chosen as comparative base-lines. Table 2 shows the performance on the image retrieval task. Among the overall models, theSSGPool achieves the best results over all NDCG scores.To compare the learned hierarchical structure among the graph pooling methods, we visualize thecoarsening results of each model (Figure 3). As shown in the first column, SSGPool coarsens thegraph by reflecting the structural information well. Due to this characteristic, the trees and their at-tributes (leaf-green) are coarsened to a single node, and deer eating grass and zebra are coarsened toanother node. Furthermore, it can be seen that our method successfully maintains the overall topo-logical structure of the original graph in the upper layer. In the case of DiffPool taking the coarsen-ing form like our method, however, nodes with similar features tend to be coarsened together. Also,as DiffPool has a dense coarsening matrix, the upper layer graph cannot reflect the original graphstructure and has the form of a fully connected graph. Lastly, the SAGPool constitutes hierarchies byselecting important nodes. We can see that it selects important nodes (e.g., eating, deer, zebra) butloses considerable amounts of other peripheral information. Additionally, SAGPool’s upper layergraph loses structural information from the original graph due to it is masking out all other nodesnot selected. We attach more examples of qualitative results in Appendix D.6 C ONCLUSIONSIn this paper, we proposed the end-to-end graph pooling method, Spectrally Similar Graph Pooling.In contrast to previous work, our method learns compositional hierarchies while preserving theglobal structure of the graph. The proposed method shows competitive results not only in graphbenchmarks datasets, but in real-world problem such as image retrieval with visual scene graphs.We also show that our proposed method learns meaningful hierarchical structures.8Under review as a conference paper at ICLR 2021<|im_end|> <|im_start|>assistant ### Review Title This paper tackles an interesting problem on GNNs. However, the current presentation and evaluation of the paper is not good enough ### Review Text Graph Neural Networks (GNNs) is an increasing popular topic of research in machine learning on irregular graph-structured data. This paper tackles the problem of graph pooling and presents Spectrally Similar Graph Pooling (SSGPool) algorithm for learning hierarchical representations of the graphs. The main idea of the paper is to learn a coarsening matrix (surjective mapping of the nodes) to coarsen nodes while keeping the structural and feature information. To keep the structural information, it maximizes the similarity between the Fiedler vectors of the original and coarsened graphs while using standard GNNs for keeping feature information. The idea of the new pooling algorithm is interesting. However, the current presentation of the paper has several shortcomings. I would suggest the following comments to further improve the quality of the paper. • The contributions of the paper are not clearly mentioned in the current draft. I would suggest clearly mentioning the contributions and differentiate them with the existing methods. • The proposed pooling algorithm uses Laplacian matrix to obtain adjacency matrix for the coarsened graphs- Eq. 5 (instead of the Adjacency matrix) which is quite interesting and has several benefits as compared to the existing approaches. 
I think it would be interesting to see a comparison of results in terms of classification accuracy (Table 1) between using the adjacency matrix and the newly proposed Laplacian matrix to obtain the new coarsened adjacency matrix during pooling.
• My major concern is the limited evaluation of the proposed method. Only four bioinformatics datasets (mutag, enzymes, proteins, and nci1) are chosen for the comparison. I would encourage the authors to run experiments on the DD, collab, imdb-binary, imdb-multi, reddit-binary and reddit-multi datasets and provide a comparison. Such a comparison would help to better highlight the performance of the proposed method on different kinds of datasets. Also, the current results are marginal improvements on only two datasets, which is not enough for a convincing comparison.
• Wang et al. (2020) have recently proposed a quite relevant graph pooling algorithm. Their results are encouraging, and the source code is publicly available. I would encourage the authors to consider this method for the comparison as well.
• The authors are encouraged to release their code and pre-trained models to foster reproducibility of the results.
• A proofread is suggested to avoid some minor mistakes, e.g., "Fiedler vector vector" (Section 4.4), etc.
References: Wang, Z., & Ji, S. (2020). Second-Order Pooling for Graph Neural Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence.
### Review Rating 4: Ok but not good enough - rejection ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
H1ecDoR5Y7
ICLR.cc/2019/Conference
2019
Local Stability and Performance of Simple Gradient Penalty $\mu$-Wasserstein GAN
["Cheolhyeong Kim", "Seungtae Park", "Hyung Ju Hwang"]
Wasserstein GAN (WGAN) is a model that minimizes the Wasserstein distance between a data distribution and a sample distribution. Recent studies have proposed stabilizing the training process for the WGAN and implementing the Lipschitz constraint. In this study, we prove the local stability of optimizing the simple gradient penalty $\mu$-WGAN (SGP $\mu$-WGAN) under suitable assumptions regarding the equilibrium and penalty measure $\mu$. The measure valued differentiation concept is employed to deal with the derivative of the penalty terms, which is helpful for handling abstract singular measures with lower dimensional support. Based on this analysis, we claim that penalizing the data manifold or sample manifold is the key to regularizing the original WGAN with a gradient penalty. Experimental results obtained with unintuitive penalty measures that satisfy our assumptions are also provided to support our theoretical results.
["WGAN", "gradient penalty", "stability", "measure valued differentiation"]
ABSTRACT

Wasserstein GAN (WGAN) is a model that minimizes the Wasserstein distance between a data distribution and a sample distribution. Recent studies have proposed stabilizing the training process for the WGAN and implementing the Lipschitz constraint. In this study, we prove the local stability of optimizing the simple gradient penalty $\mu$-WGAN (SGP $\mu$-WGAN) under suitable assumptions regarding the equilibrium and penalty measure $\mu$. The measure valued differentiation concept is employed to deal with the derivative of the penalty terms, which is helpful for handling abstract singular measures with lower dimensional support. Based on this analysis, we claim that penalizing the data manifold or sample manifold is the key to regularizing the original WGAN with a gradient penalty. Experimental results obtained with unintuitive penalty measures that satisfy our assumptions are also provided to support our theoretical results.

1 INTRODUCTION

Deep generative models reached a turning point after generative adversarial networks (GANs) were proposed by Goodfellow et al. (2014). GANs are capable of modeling data with complex structures. For example, DCGAN can sample realistic images using a convolutional neural network (CNN) structure (Radford et al., 2015). GANs have been implemented in many applications in the field of computer vision with good results, such as super-resolution, image translation, and text-to-image generation (Ledig et al., 2017; Isola et al., 2017; Zhang et al., 2017; Reed et al., 2016).

However, despite these successes, GANs are affected by training instability and mode collapse problems. GANs often fail to converge, which can result in unrealistic fake samples. Furthermore, even if GANs successfully synthesize realistic data, the fake samples exhibit little variability.

A common solution to this instability problem is injecting instance noise and finding different divergences. The injection of instance noise into real and fake samples during the training procedure was proposed by Sønderby et al. (2017), where its positive impact on data distributions with low dimensional support was shown to be a regularizing factor based on the Wasserstein distance, as demonstrated analytically by Arjovsky & Bottou (2017). In f-GAN, the f-divergence between the target and generator distributions was suggested, which generalizes the divergence between two distributions (Nowozin et al., 2016). In addition, a gradient penalty term related to the Sobolev IPM (Integral Probability Metric) between the data distribution and the sample distribution was suggested by Mroueh et al. (2018).

The Wasserstein GAN (WGAN) is known to resolve the problems of generic GANs by selecting the Wasserstein distance as the divergence (Arjovsky et al., 2017). However, WGAN often fails even on simple examples, because the Lipschitz constraint on the discriminator is rarely achieved by the optimization process with weight clipping. Thus, mimicking the Lipschitz constraint on the discriminator by using a gradient penalty was proposed by Gulrajani et al. (2017).

Noise injection and regularizing with a gradient penalty appear to be equivalent. The addition of instance noise in f-GAN can be approximated by adding a zero centered gradient penalty (Roth et al., 2017). Thus, regularizing GAN with a simple gradient penalty term was suggested by Mescheder et al. (2018), who provided a proof of its stability.
Based on a theoretical analysis of the dynamic system, Nagarajan & Kolter (2017) proved the local exponential stability of the gradient-based optimization dynamics in GANs by treating the simultaneous gradient descent algorithm with a dynamic system approach. These previous studies were useful because they showed that the local behavior of GANs can be explained using dynamic system tools and the eigenvalues of the related Jacobian.

In this study, we aim to prove the convergence property of the simple gradient penalty $\mu$-Wasserstein GAN (SGP $\mu$-WGAN) dynamic system under general gradient penalty measures $\mu$. To the best of our knowledge, our study is the first theoretical approach to GAN stability analysis that deals with an abstract singular penalty measure. In addition, measure valued differentiation (Heidergott & Vázquez-Abad, 2008) is applied to take the derivative of an integral with respect to a parametric measure, which is helpful for handling an abstract measure and its integral in our proof.

The main contributions of this study are as follows.
• We prove the regularizing effect and local stability of the dynamic system for a general penalty measure under suitable assumptions. The assumptions are written as both a tractable strong version and an intractable weak version. To prove the main theorem, we also introduce the measure valued differentiation concept to handle the parametric measure.
• Based on the proof of stability, we explain the reason for the success of previous penalty measures. We claim that the support of a penalty measure is strongly related to the stability, whereas the weight on the limiting penalty measure might affect the speed of convergence.
• We experimentally examine the general convergence results by applying two test penalty measures to several examples. The proposed test measures are unintuitive, but they still satisfy the assumptions, and similar convergence results were obtained in the experiments.

2 PRELIMINARIES

First, we introduce our notation and basic measure-theoretic concepts. Second, we define our SGP $\mu$-WGAN optimization problem and treat this problem as a continuous dynamic system. The preliminary measure-theoretic concepts are required to justify that the dynamic system changes in a sufficiently smooth manner as the parameters change, so that it is possible to use the linearization theorem. They are also important for dealing with the parametric measure and its derivative. The problem setting with a simple gradient penalty term is also discussed. The squared gradient size and the simple gradient penalty term are used to build a differentiable dynamic system and to apply soft regularization as a relaxed constraint, respectively. The continuous dynamic system approach, the so-called ODE method, is used to analyze the GAN optimization problem with the simultaneous gradient descent algorithm, as described by Nagarajan & Kolter (2017).

2.1 NOTATIONS AND PRELIMINARIES REGARDING MEASURE THEORY

$D(x;\psi): \mathcal{X} \to \mathbb{R}$ is a discriminator function with parameter $\psi$, and $G(z;\theta): \mathcal{Z} \to \mathcal{X}$ is a generator function with parameter $\theta$. $p_d$ is the distribution of real data and $p_g = p_\theta$ is the distribution of the generated samples in $\mathcal{X}$, which is induced from the generator function $G(z;\theta)$ and a known initial distribution $p_{latent}(z)$ in the latent space $\mathcal{Z}$. $\|\cdot\|$ denotes the $L^2$ Euclidean norm if no special subscript is present.
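As a concrete reading of this notation, the following is a minimal sketch (ours, purely illustrative and not the paper's architecture) of a discriminator $D: \mathcal{X} \to \mathbb{R}$, a generator $G: \mathcal{Z} \to \mathcal{X}$, and the induced sample distribution $p_\theta$ obtained by pushing a latent distribution through $G$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear discriminator D(x; psi) = psi * x : X -> R
def D(x, psi):
    return psi * x

# Illustrative generator G(z; theta) = theta * z : Z -> X
def G(z, theta):
    return theta * z

# p_theta is the pushforward of a known latent distribution p_latent
# through G: sample z ~ p_latent, then return G(z; theta).
def sample_p_theta(theta, n):
    z = rng.standard_normal(n)  # p_latent = N(0, 1) is an assumption
    return G(z, theta)
```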
The concept of weak convergence for finite measures is used to ensure the continuity of the integral terms over the measures in the dynamic system, which must be checked before applying the theorems related to stability. Throughout this study, we assume that the measures on the sample space are all finite and bounded.

Definition 1. For a set of finite measures $\{\mu_i\}_{i \in I}$ on $(\mathbb{R}^n, d)$ with the Euclidean distance $d$, $\{\mu_i\}_{i \in I}$ is referred to as bounded if there exists some $M > 0$ such that $\mu_i(\mathbb{R}^n) \le M$ for all $i \in I$.

For instance, $M$ can be set as 1 if the $\{\mu_i\}$ are probability measures on $\mathbb{R}^n$. Assuming that the penalty measures are bounded, the Portmanteau theorem offers an equivalent definition of weak convergence for finite measures. This definition is important for ensuring that the integrals over $p_\theta$ and $\mu$ in the dynamic system change continuously.

Definition 2. (Portmanteau Theorem) For a bounded sequence of finite measures $\{\mu_n\}_{n \in \mathbb{N}}$ on the Euclidean space $\mathbb{R}^n$ with the $\sigma$-field of Borel subsets $\mathcal{B}(\mathbb{R}^n)$, $\mu_n$ converges weakly to $\mu$ if and only if for every continuous bounded function $\phi$ on $\mathbb{R}^n$, its integrals with respect to $\mu_n$ converge to $\int \phi \, d\mu$, i.e.,
$$\mu_n \Rightarrow \mu \iff \int \phi \, d\mu_n \to \int \phi \, d\mu$$

The most challenging problem in our analysis with a general penalty measure is taking the derivative of an integral where the measure depends on the variable with respect to which we want to differentiate. If the penalty measure is either absolutely continuous or discrete, then it is easy to deal with the integral. However, in the case of a singular penalty measure, dealing with the integral term is not an easy task. Therefore, we introduce the concept of a weak derivative of a probability measure in the following (Heidergott & Vázquez-Abad, 2008). The weak derivative of a measure is useful for handling a parametric measure that is not absolutely continuous and has low dimensional support.

Definition 3. (Weak Derivatives of a Probability Measure) Consider the Euclidean space and its $\sigma$-field of Borel subsets $(\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d))$. The probability measure $P_\theta$ is called weakly differentiable at $\theta$ if a signed finite measure $P'_\theta$ exists such that
$$\frac{d}{d\theta} \int \phi(x) \, dP_\theta = \lim_{\Delta \to 0} \frac{1}{\Delta} \left\{ \int \phi(x) \, dP_{\theta+\Delta} - \int \phi(x) \, dP_\theta \right\} = \int \phi(x) \, dP'_\theta$$
is satisfied for every continuous bounded function $\phi$ on $\mathbb{R}^n$. For a multidimensional parameter $\theta$, this can be defined in a similar manner.

We can show that the positive part and negative part of $P'_\theta$ have the same mass by putting $\phi(x) = 1$ and using the Hahn–Jordan decomposition of $P'_\theta$. Therefore, the following triple $(c_\theta, P^+_\theta, P^-_\theta)$ is called a weak derivative of $P_\theta$, where $P^\pm_\theta$ are probability measures and $P'_\theta$ is rewritten as:
$$P'_\theta = c_\theta P^+_\theta - c_\theta P^-_\theta$$
Therefore,
$$\frac{d}{d\theta} \int \phi(x) \, dP_\theta = \int \phi(x) \, dP'_\theta = c_\theta \left( \int \phi(x) \, dP^+_\theta - \int \phi(x) \, dP^-_\theta \right)$$
holds for every continuous bounded function $\phi$ on $\mathbb{R}^n$. The representation $(c_\theta, P^+_\theta, P^-_\theta)$ of $P'_\theta$ is known to be non-unique, because $\left(c_\theta + C, \frac{c_\theta P^+_\theta + C q}{c_\theta + C}, \frac{c_\theta P^-_\theta + C q}{c_\theta + C}\right)$ is another representation of $P'_\theta$.

For a general finite measure $Q_\theta$, a normalizing coefficient $M(\theta) < \infty$ can be introduced. The product rule for differentiation can also be applied in a manner similar to calculus:
$$\frac{d}{d\theta} \int \phi(x, \theta) \, dP_\theta = \int \nabla_\theta \phi(x, \theta) \, dP_\theta + \int \phi(x, \theta) \, dP'_\theta$$
Therefore, for the general finite measure $Q_\theta = M(\theta) P_\theta$, its derivative $Q'_\theta$ can be represented as below.
$$Q'_\theta = M'(\theta) P_\theta + M(\theta) P'_\theta = M'(\theta) P_\theta + c_\theta M(\theta) P^+_\theta - c_\theta M(\theta) P^-_\theta$$
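As a worked example of Definition 3 (a standard illustration from the measure valued differentiation literature, not taken from this paper), consider the uniform family $P_\theta = U(0, \theta)$ with $\theta > 0$. For any continuous bounded $\phi$,
$$\frac{d}{d\theta} \int \phi(x) \, dP_\theta = \frac{d}{d\theta}\left(\frac{1}{\theta}\int_0^\theta \phi(x) \, dx\right) = \frac{1}{\theta}\phi(\theta) - \frac{1}{\theta^2}\int_0^\theta \phi(x) \, dx = \frac{1}{\theta}\left(\int \phi \, d\delta_\theta - \int \phi \, dP_\theta\right),$$
so one valid weak-derivative triple is $(c_\theta, P^+_\theta, P^-_\theta) = \left(\frac{1}{\theta}, \delta_\theta, U(0,\theta)\right)$. Note that the positive part $\delta_\theta$ is singular with respect to the negative part, which is exactly the kind of low dimensional structure the weak formulation is designed to handle.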
2.2 PROBLEM SETTING AS A DYNAMIC SYSTEM

Previous work of Mescheder et al. (2018) showed that the dynamic system of WGAN-GP is not necessarily stable at equilibrium by demonstrating that the sequence of parameters is not a Cauchy sequence. This is mainly due to the term $\|x\|$ in the dynamic system, which has a derivative $\frac{x}{\|x\|}$ that is not defined at $x = 0$. WGAN-GP has a penalty term $E_{\mu_{GP}}[(\|\nabla_x D(x;\psi)\| - 1)^2]$ that can lead to a discontinuity in its dynamic system.

These problems can be avoided by using the squared norm of the gradient, $\|\nabla_x D\|^2$, which is a differentiable function. In contrast to the WGAN-GP, recent methods based on a gradient penalty, such as the simple gradient penalty employed by Mescheder et al. (2018) and the Sobolev GAN, use the average of the squared values over the penalty area, whereas the WGAN-GP penalizes deviations of the discriminator's gradient norm $\|\nabla_x D\|$ from 1 in a pointwise manner.

This advantage of the squared gradient term (see Footnote 1), $E_\mu[\|\nabla_x D\|^2]$, makes the dynamic system differentiable, and we define the WGAN problem with the squared norm of the gradient as a simple gradient penalty. This simple gradient penalty can be treated as soft regularization based on the size of the discriminator's gradient, especially in the case where $\mu$ is a probability measure (Roth et al., 2017). It is convenient to determine whether the system is stable by observing the spectrum of the Jacobian matrix. In the following, $(D(x;\psi), p_d, p_\theta, \rho, \mu)$ is defined as an SGP $\mu$-WGAN optimization problem (SGP-form) with a simple gradient penalty term on the penalty measure $\mu$.

Footnote 1: In this study, we prefer to use the expectation notation for finite measures, which can be understood as follows. Suppose that $\mu_{\psi,\theta} = M(\psi,\theta) \bar{\mu}_{\psi,\theta}$, where $\bar{\mu}_{\psi,\theta}$ is normalized to a probability measure. Then $E_{\mu_{\psi,\theta}}[\|\nabla_x D\|^2] = E_{\bar{\mu}_{\psi,\theta}}[M(\psi,\theta)\|\nabla_x D\|^2] = \int \|\nabla_x D\|^2 M(\psi,\theta) \, d\bar{\mu}_{\psi,\theta}(x) = \int \|\nabla_x D\|^2 \, d\mu_{\psi,\theta}(x)$.

Definition 4. The WGAN optimization problem with a simple gradient penalty term $\|\nabla_x D\|^2$, penalty measure $\mu$, and penalty weight hyperparameter $\rho > 0$ is given as follows, where the penalty term is only introduced to update the discriminator.
$$\max_\psi : E_{p_d}[D(x;\psi)] - E_{p_\theta}[D(x;\psi)] - \frac{\rho}{2} E_\mu[\|\nabla_x D(x;\psi)\|^2]$$
$$\min_\theta : E_{p_d}[D(x;\psi)] - E_{p_\theta}[D(x;\psi)]$$

According to Nagarajan & Kolter (2017) and many other optimization studies, the simultaneous gradient descent algorithm for GAN updating can be viewed as an autonomous dynamic system of the discriminator parameters and generator parameters, which we denote as $\psi$ and $\theta$. As a result, the related dynamic system is given as follows.
$$\dot\psi = E_{p_d}[\nabla_\psi D] - E_{p_\theta}[\nabla_\psi D] - \frac{\rho}{2} \nabla_\psi E_\mu[\nabla_x^T D \, \nabla_x D]$$
$$\dot\theta = \nabla_\theta E_{p_\theta}[D]$$

3 TOY EXAMPLES

We investigate two examples considered in previous studies by Mescheder et al. (2018) and Nagarajan & Kolter (2017). We then generalize the results to the finite measure case. The first example is the univariate Dirac GAN, which was introduced by Mescheder et al. (2018).

Definition 5. (Dirac GAN) The Dirac GAN comprises a linear discriminator $D(x;\psi) = \psi x$, data distribution $p_d = \delta_0$, and sample distribution $p_\theta = \delta_\theta$.

The Dirac GAN with a gradient penalty under an arbitrary probability measure is known to be globally convergent (Mescheder et al., 2018). We argue that this result can be generalized to the finite penalty measure case.

Lemma 1. Consider the Dirac GAN problem in SGP form, $(D(x;\psi) = \psi x, \delta_0, \delta_\theta, \rho, \mu_{\psi,\theta})$. Suppose that some small $\delta > 0$ exists such that the finite penalty measure $\mu_{\psi,\theta}$ with mass $M(\psi,\theta) = \int 1 \, d\mu_{\psi,\theta} \ge 0$ satisfies either $M(\psi,\theta) > 0$ for $(\psi,\theta) \in B_\delta((0,0))$, or $M(0,0) = 0$ and $\psi \nabla_\psi M(\psi,\theta) \ge 0$ for $(\psi,\theta) \in B_\delta((0,0))$. Then, the SGP $\mu$-WGAN optimization dynamics with $(D(x;\psi) = \psi x, \delta_0, \delta_\theta, \rho, \mu_{\psi,\theta})$ are locally stable at the origin, and the basin of attraction $B = B_R((0,0))$ is an open ball whose radius $R$ is given as follows.
$$R = \max\{\delta \ge 0 \mid 2M(\psi,\theta) + \psi \nabla_\psi M(\psi,\theta) \ge 0 \text{ for all } (\psi,\theta) \text{ such that } \psi^2 + \theta^2 \le \delta^2\}$$
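To illustrate Lemma 1 numerically, here is a small sketch (our own illustration, not from the paper) that integrates the Dirac GAN SGP dynamics. With a penalty measure of constant mass $M$, the dynamics above reduce to the linear system $\dot\psi = -\theta - \rho M \psi$, $\dot\theta = \psi$: for $M > 0$ the trajectory spirals into the origin, while $M = 0$ recovers the non-convergent cycling of the unregularized WGAN.

```python
import numpy as np

def dirac_gan_sgp(rho=1.0, M=1.0, lr=0.01, steps=20000, init=(1.0, 1.0)):
    """Euler integration of the Dirac GAN SGP dynamics with a penalty
    measure of constant mass M: psi' = -theta - rho*M*psi, theta' = psi."""
    psi, theta = init
    for _ in range(steps):
        d_psi = -theta - rho * M * psi
        d_theta = psi
        psi, theta = psi + lr * d_psi, theta + lr * d_theta
    return psi, theta

print(dirac_gan_sgp(M=1.0))  # regularized: converges toward the origin
print(dirac_gan_sgp(M=0.0))  # unregularized WGAN: cycles and fails to converge
```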
Motivated by this example, we can extend this idea to the other toy example given by Nagarajan & Kolter (2017), where the WGAN fails to converge to the equilibrium points $(\psi^*, \theta^*) = (0, \pm 1)$.

Lemma 2. Consider the toy example $(D(x;\psi) = \psi x^2, U(-1,1), U(-|\theta|,|\theta|), \rho)$, where $U(0,0) = \delta_0$ and the ideal equilibrium points are given by $(\psi^*, \theta^*) = (0, \pm 1)$. For a finite measure $\mu = \mu_\theta$ on $\mathbb{R}$ which is independent of $\psi$, suppose that $\mu_\theta \to \mu^*$ as $\theta \to \theta^*$ with $\mu^* \ne C\delta_0$ for any $C \ge 0$. Then the dynamic system is locally stable near the desired equilibrium $(0,1)$, where the spectrum of the Jacobian at $(0,1)$ is given by
$$\lambda = -2\rho E_{\mu^*}[x^2] \pm \sqrt{4\rho^2 E_{\mu^*}[x^2]^2 - \frac{4}{9}}.$$
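As a quick sanity check of Lemma 2 (our own, not part of the paper), the linearized dynamics of this toy example at $(0,1)$ form the $2 \times 2$ Jacobian $\begin{pmatrix} -4\rho m & -2/3 \\ 2/3 & 0 \end{pmatrix}$ with $m = E_{\mu^*}[x^2]$, whose eigenvalues can be compared against the closed-form spectrum; the values of $\rho$ and $m$ below are illustrative.

```python
import numpy as np

rho, m = 1.0, 0.5  # m stands in for E_{mu*}[x^2]; values are illustrative

# Jacobian of the (psi, theta) dynamics at the equilibrium (0, 1)
J = np.array([[-4.0 * rho * m, -2.0 / 3.0],
              [ 2.0 / 3.0,      0.0      ]])

s = np.sqrt(complex(4 * rho**2 * m**2 - 4.0 / 9.0))
closed_form = np.array([-2 * rho * m + s, -2 * rho * m - s])

print(np.sort_complex(np.linalg.eigvals(J).astype(complex)))
print(np.sort_complex(closed_form))  # same spectrum as stated in Lemma 2
```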
4 MAIN CONVERGENCE THEOREM

We propose the convergence property of the WGAN with a simple gradient penalty on an arbitrary penalty measure $\mu$ for the realizable case: a $\theta^*$ with $p_{\theta^*} = p_d$ exists. In Subsection 4.1, we provide the necessary assumptions, which comprise our main convergence theorem. In Subsection 4.2, we give the main convergence theorem with a sketch of the proof. A more rigorous analysis is given in the Appendix.

4.1 ASSUMPTIONS

The first assumption concerns the equilibrium condition for GANs, where we state the ideal conditions for the discriminator parameter and the generator parameter. As the parameters converge to the ideal equilibrium, the sample distribution $p_\theta$ converges to the real data distribution $p_d$, and the discriminator cannot distinguish the generated samples from the real data.

Assumption 1. $p_\theta \to p_d$ as $\theta \to \theta^*$, and $D(x;\psi^*) = 0$ on $supp(p_d)$ and a small open neighborhood of it, i.e., $x \in \cup_{x_0 \in supp(p_d)} B_{\epsilon_{x_0}}(x_0)$ implies $D(x;\psi^*) = 0$. For simplicity, we denote $\cup_{x_0 \in supp(p_d)} B_{\epsilon_{x_0}}(x_0)$ as $B_\epsilon(supp(p_d))$.

The second assumption ensures that higher order terms cannot affect the stability of the SGP $\mu$-WGAN. In the Appendix, we consider a case where the WGAN fails to converge when Assumption 2 is not satisfied. Compared with the previous study by Nagarajan & Kolter (2017), the conditions on the discriminator parameter are slightly modified.

Assumption 2. $g(\theta) = \|E_{p_d}[\nabla_\psi D(x;\psi^*)] - E_{p_\theta}[\nabla_\psi D(x;\psi^*)]\|^2$ and $h(\psi) = E_{\mu_{\psi,\theta^*}}[\|\nabla_x D(x;\psi)\|^2]$ are locally constant along the nullspaces of their Hessian matrices.

The third assumption allows us to extend our results to the case of discrete probability distributions, as described by Mescheder et al. (2018).

Assumption 3. $\exists \epsilon_g > 0$ such that $D(x;\psi^*) = 0$ on $\cup_{\|\theta - \theta^*\| < \epsilon_g} supp(p_\theta)$.

The fourth assumption indicates that there are no other "bad" equilibrium points near $(\psi^*, \theta^*)$, which justifies the projection along the axis perpendicular to the null space.

Assumption 4. A bad equilibrium does not exist near the desired equilibrium point. Thus, $(\psi^*, \theta^*)$ is an isolated equilibrium, or there exist $\epsilon_d, \epsilon_g > 0$ such that all equilibrium points in $B_{\epsilon_d}(\psi^*) \times B_{\epsilon_g}(\theta^*)$ satisfy the other assumptions.

The last assumption concerns the necessary conditions on the penalty measure. Calculating the gradient penalty based on samples from the data manifold, the generator manifold, or an interpolation of both was introduced in recent studies (Gulrajani et al., 2017; Roth et al., 2017; Mescheder et al., 2018). First, we propose strong conditions for the penalty measure.

Assumption 5. The finite penalty measure $\mu = \mu_\theta$ satisfies the following:
(a) $\mu_\theta \to \mu^* = \mu_{\theta^*}$, and $\mu$ is independent of the discriminator parameter $\psi$.
(b) $supp(p_d) \subset supp(\mu^*)$.
(c) $\exists \epsilon > 0$ such that $supp(\mu_\theta) \subset B_\epsilon(supp(p_d))$ for $\|\theta - \theta^*\| < \epsilon$.

The assumption given above means that the support of the penalty measure should approach the data manifold smoothly as $\theta \to \theta^*$. However, the penalty measure from WGAN-GP with a simple gradient penalty still reaches equilibrium without satisfying Assumption 5c. Therefore, we suggest Assumption 6, which is a weak version of Assumption 5. Assumption 6a (see Footnote 2) is technically required to take the derivative of the integral $E_{\mu_{\psi,\theta}}[\|\nabla_x D(x;\psi)\|^2]$ with respect to $\psi$.

Footnote 2: This condition is technically required to handle the derivative of the measure in a convenient manner using the weak formulation. Even if the measure is not weakly differentiable, it may still be possible to differentiate the integral. For instance, $\delta_\theta$ is continuous in $\theta$ but does not have a weak derivative. However, it is still possible to differentiate $E_{\delta_\theta}[\omega(x)] = \omega(\theta)$ if the function $\omega$ is differentiable at $\theta$.

Assumption 6. (Weak version of Assumption 5) The finite penalty measure $\mu = \mu_{\psi,\theta}$ satisfies the following:
(a) $\mu_{\psi,\theta} \to \mu_{\psi^*,\theta^*} = \mu^*$, where $supp(\mu_{\psi,\theta})$ only depends on $\theta$. Near the equilibrium, $\mu_{\psi,\theta}$ can be weakly differentiated twice with respect to $\psi$. In addition, its mass $M(\psi,\theta) = \int 1 \, d\mu_{\psi,\theta}$ is a twice-differentiable function of $\psi$ and bounded near the equilibrium.
(b) $E_{\mu^*}[\nabla_\psi \nabla_x D \, \nabla_\psi \nabla_x D^T]$ is positive definite, or $supp(p_d) \subset supp(\mu^*)$.
(c) $\exists \epsilon > 0$ such that $supp(\mu_\theta) \subset V$ for $\|\theta - \theta^*\| < \epsilon$, where $V = \{x \mid \nabla_x D(x;\psi^*) = 0\}$.

The assumptions above imply the following situation: the penalty measure's support approaches the data manifold, and its weight changes smoothly with respect to $\psi$ and $\theta$. At the equilibrium, the penalty measure's support contains the data manifold. Also, the ideal discriminator remains flat on the penalty area.

In summary, a gradient penalty regularization term with any penalty measure whose support approaches $B_\epsilon(supp(p_d))$ in a smooth manner works well, and this main result can explain the regularization effect of previously proposed penalty measures such as $\mu_{GP}$, $p_d$, $p_\theta$, and their mixtures.

4.2 MAIN CONVERGENCE THEOREM

Under the assumptions given above, we prove that the related dynamic system is locally stable near the equilibrium. The tools used for analyzing stability are mainly based on those described by Nagarajan & Kolter (2017). Our main contributions comprise proposing the necessary conditions for the penalty measure and proving local stability for all penalty measures that satisfy Assumption 6.

Theorem 1. Suppose that our SGP $\mu$-WGAN optimization problem $(D, p_d, p_\theta, \rho, \mu)$ with equilibrium point $(\psi^*, \theta^*)$ satisfies the assumptions given above. Then, the related dynamic system is locally stable at the equilibrium.

A detailed proof of the main convergence theorem is given in the Appendix. A sketch of the proof proceeds in three steps. First, the undesired terms in the Jacobian matrix of the system at the equilibrium are cancelled out. Next, the Jacobian matrix at the equilibrium is given by
$$\begin{pmatrix} -\rho Q & -R \\ R^T & 0 \end{pmatrix},$$
where $Q = E_{\mu^*}[\nabla_\psi \nabla_x D \, \nabla_\psi \nabla_x D^T]$ and $R = \nabla_\theta E_{p_\theta}[\nabla_\psi D]|_{(\psi^*, \theta^*)}$. The system is locally stable when both $Q$ and $R^T R$ are positive definite. We complete the proof by dealing with zero eigenvalues, showing that $N(Q^T) \subset N(R^T)$ and that stability of the projected system implies stability of the original system.

Our analysis mainly focuses on the WGAN, which is the simplest case of the general GAN minimax optimization
$$\max_\psi : E_{p_d}[f(D(x;\psi))] + E_{p_\theta}[f(-D(x;\psi))] - \frac{\rho}{2} E_\mu[\|\nabla_x D(x;\psi)\|^2]$$
$$\min_\theta : E_{p_d}[f(D(x;\psi))] + E_{p_\theta}[f(-D(x;\psi))]$$
with $f(x) = x$. A similar approach remains valid for general GANs with a concave function $f$ satisfying $f''(x) < 0$ and $f'(0) \ne 0$.
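As an informal numerical companion to this proof sketch (our own check, not from the paper), one can sample a positive definite $Q$ and a generic full-column-rank $R$ and confirm that the block Jacobian above has no eigenvalues with positive real part; the dimensions below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def block_jacobian(Q, R, rho=1.0):
    """Assemble [[-rho*Q, -R], [R^T, 0]] as in the proof sketch."""
    n, m = R.shape
    top = np.hstack([-rho * Q, -R])
    bottom = np.hstack([R.T, np.zeros((m, m))])
    return np.vstack([top, bottom])

n, m = 5, 3
A = rng.standard_normal((n, n))
Q = A @ A.T + 1e-3 * np.eye(n)   # positive definite Q
R = rng.standard_normal((n, m))  # generically full column rank, so R^T R is PD

J = block_jacobian(Q, R, rho=1.0)
print(np.max(np.linalg.eigvals(J).real))  # expected: strictly negative
```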
5 EXPERIMENTAL RESULTS

We claim that every penalty measure that satisfies the assumptions can regularize the WGAN and generate results similar to those of the recently proposed gradient penalty methods. Several penalty measures were tested on two-dimensional problems (mixture of 8 Gaussians, mixture of 25 Gaussians, and swissroll), MNIST, and CIFAR-10 datasets using a simple gradient penalty term. In the comparisons with WGAN, the recently proposed penalty measures and our test penalty measures used the same network settings and hyperparameters. The penalty measures and their detailed sampling methods are listed in Table 1, where $x_d \sim p_d$, $x_g \sim p_\theta$, and $\alpha \sim U(0,1)$. $A$ indicates a fixed anchor point in $\mathcal{X}$.

Table 1: List of benchmark WGANs (WGAN and WGAN-GP with a non-zero centered gradient penalty) and 5 penalty measures with a simple gradient penalty term. In this table, WGAN-GP represents the previous model proposed by Gulrajani et al. (2017), which penalizes the WGAN with non-zero centered gradient penalty terms, whereas μ_GP represents the simple method. In our experiment, no additional weights are applied to the 5 penalty measures, and they are all probability distributions.

Penalty    Penalty term               Penalty measure, sampling method
WGAN       None (weight clipping)     None
WGAN-GP    E[(‖∇_x D‖ - 1)²]          x̂ = αx_d + (1-α)x_g
p_g        E[‖∇_x D‖²]                x̂ = x_g
p_d        E[‖∇_x D‖²]                x̂ = x_d
μ_GP       E[‖∇_x D‖²]                x̂ = αx_d + (1-α)x_g
μ_mid      E[‖∇_x D‖²]                x̂ = 0.5x_d + 0.5x_g
μ_g,anc    E[‖∇_x D‖²]                x̂ = αA + (1-α)x_g

Setting the previously proposed WGAN with weight clipping (Arjovsky et al., 2017) and WGAN-GP (Gulrajani et al., 2017) as the baseline models, the SGP $\mu$-WGAN was examined with various penalty measures comprising three recently proposed measures and two artificially generated measures. $p_\theta$ and $p_d$ were suggested by Mescheder et al. (2018), and $\mu_{GP}$ was introduced from the WGAN-GP. We analyzed the artificial penalty measures $\mu_{mid}$ and $\mu_{g,anc}$ as the test penalty measures.

The experiments were conducted based on the implementation of Gulrajani et al. (2017). The hyperparameters, generator/discriminator structures, and related TensorFlow implementations can be found at https://github.com/igul222/improved_wgan_training (Gulrajani et al., 2017). Only the loss function was modified slightly, from a non-zero centered gradient penalty to a simple penalty. For the CIFAR-10 image generation tasks, the inception score (Salimans et al., 2016) and FID (Heusel et al., 2017) were used as benchmark scores to evaluate the generated images.

5.1 2D EXAMPLES AND MNIST

We checked the convergence of $p_\theta$ for the 2D examples (8 Gaussians, swissroll data, and 25 Gaussians) and MNIST digit generation for the SGP $\mu$-WGANs with five penalty measures. MNIST and the 25 Gaussians were trained over 200K iterations, the 8 Gaussians were trained for 30K iterations, and the swissroll data were trained for 100K iterations. The anchor $A$ for $\mu_{g,anc}$ was set as $(2,1)$ for the 2D examples and 784 gray pixels for MNIST. We only present the results obtained for the MNIST dataset with the penalty measures $\mu_{mid}$ and $\mu_{g,anc}$ in Figure 1. The others are presented in the Appendix.

Figure 1: MNIST example. Images generated with $\mu_{mid}$ (left) and $\mu_{g,anc}$ (right).

5.2 CIFAR-10

DCGAN and ResNet architectures were tested on the CIFAR-10 dataset. The generators were trained for 200K iterations. The anchor $A$ for $\mu_{g,anc}$ during CIFAR-10 generation was set as fixed random pixels.
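To make the sampling rules in Table 1 concrete, here is a minimal PyTorch-style sketch (ours, not the referenced implementation); it assumes flattened inputs of shape (batch, features), and all names are illustrative.

```python
import torch

def sample_penalty_points(x_d, x_g, measure="mid", anchor=None):
    """Draw the penalty points x_hat according to the rules in Table 1."""
    alpha = torch.rand(x_d.size(0), 1)
    if measure == "pg":
        return x_g
    if measure == "pd":
        return x_d
    if measure == "GP":
        return alpha * x_d + (1 - alpha) * x_g
    if measure == "mid":
        return 0.5 * x_d + 0.5 * x_g
    if measure == "g_anc":
        return alpha * anchor + (1 - alpha) * x_g
    raise ValueError(measure)

def simple_gradient_penalty(D, x_hat):
    """Monte Carlo estimate of E_mu[||grad_x D(x)||^2] (zero-centered, squared)."""
    x_hat = x_hat.detach().requires_grad_(True)
    grad, = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)
    return grad.pow(2).flatten(1).sum(dim=1).mean()
```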
The WGAN, WGAN-GP, and the five penalty measures were evaluated using the inception score and FID, as shown in Table 2; both are useful tools for scoring the quality of generated images. The images generated with $\mu_{mid}$ and $\mu_{g,anc}$ under the ResNet architecture are shown in Figure 2. The others are presented in the Appendix.

Table 2: Benchmark score results on the CIFAR-10 dataset under the DCGAN and ResNet architectures. A higher inception score and a lower FID indicate better quality of the generated images.

                    DCGAN                    ResNet
Penalty             Inception      FID       Inception      FID
WGAN (Footnote 3)   5.64 ± 0.09    48.7      -              -
WGAN-GP             6.48 ± 0.10    35.0      7.82 ± 0.09    18.1
p_g                 6.46 ± 0.09    38.0      7.63 ± 0.10    20.9
p_d                 6.33 ± 0.07    38.9      7.63 ± 0.09    20.3
μ_GP                6.40 ± 0.08    35.4      7.60 ± 0.09    18.3
μ_mid               6.60 ± 0.07    33.9      7.86 ± 0.07    16.4
μ_g,anc             6.45 ± 0.07    33.7      7.36 ± 0.09    22.4

Footnote 3: WGAN failed to generate images for the ResNet architecture.

Figure 2: CIFAR-10 example. Images generated with $\mu_{mid}$ (left) and $\mu_{g,anc}$ (right) under the ResNet architecture.

6 CONCLUSION

In this study, we proved the local stability of simple gradient penalty $\mu$-WGAN optimization for a general class of finite measures $\mu$. This proof provides insight into the success of regularization with previously proposed penalty measures. We explored previously proposed analyses based on various gradient penalty methods. Furthermore, our theoretical approach was supported by experiments using unintuitive penalty measures. In future research, our work can be extended to the alternating gradient descent algorithm and its related optimal hyperparameters. Stability at non-realizable equilibrium points is another important topic in the stability of GANs. The optimal penalty measure for achieving the best convergence speed can also be investigated using spectral theory, which provides a mathematical analysis of GAN stability with precise information on convergence.
BylhohL6hQ
assumptions need better justification
4: Ok but not good enough - rejection
This paper shows that an ideal equilibrium point of an SGP-WGAN is stable. It makes several assumptions that, while it is clear why they are needed in the proof, are unjustified in practice. The authors should elaborate on these assumptions and comment on why they are reasonable.

Assumptions 1 and 3 essentially say that there is a tube (both in sample space and in parameter space) around the true data generating distribution in which the discriminator cannot distinguish. This seems a strong restriction, to the effect that the discriminator is weak. For example, Assumption 1 says that, given a sample slightly off the data manifold, the discriminator still cannot distinguish it at all. A more reasonable assumption is that the ability of the discriminator decays gracefully as samples approach the data manifold.

Assumption 2 is also unjustified. Its main effect seems to be to eliminate a few terms in the projected Jacobian in the proof, but its relevance and whether it is reasonable in practice are entirely unmentioned.

Finally, it is unclear why this notion of "measure valued differentiation" is needed. First, differentiation in measure spaces is no different from differentiation in other infinite dimensional function spaces: the usual notions of Gateaux and Frechet differentiability apply. Second, the derivatives in question are not true "measure-derivatives" in the sense that the argument of the function being differentiated is not a measure; it is a finite dimensional parameter. In the end, this seems essentially a derivative of a multi-variate function.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Local Stability and Performance of Simple Gradient Penalty $\mu$-Wasserstein GAN ### Paper Abstract Wasserstein GAN(WGAN) is a model that minimizes the Wasserstein distance between a data distribution and sample distribution. Recent studies have proposed stabilizing the training process for the WGAN and implementing the Lipschitz constraint. In this study, we prove the local stability of optimizing the simple gradient penalty $\mu$-WGAN(SGP $\mu$-WGAN) under suitable assumptions regarding the equilibrium and penalty measure $\mu$. The measure valued differentiation concept is employed to deal with the derivative of the penalty terms, which is helpful for handling abstract singular measures with lower dimensional support. Based on this analysis, we claim that penalizing the data manifold or sample manifold is the key to regularizing the original WGAN with a gradient penalty. Experimental results obtained with unintuitive penalty measures that satisfy our assumptions are also provided to support our theoretical results. ### Paper Keywords ["WGAN", "gradient penalty", "stability", "measure valued differentiation"] ### Paper Content ABSTRACTWasserstein GAN(WGAN) is a model that minimizes the Wasserstein distancebetween a data distribution and sample distribution. Recent studies have proposedstabilizing the training process for the WGAN and implementing the Lipschitzconstraint. In this study, we prove the local stability of optimizing the simple gra-dient penalty -WGAN(SGP -WGAN) under suitable assumptions regarding theequilibrium and penalty measure . The measure valued differentiation conceptis employed to deal with the derivative of the penalty terms, which is helpful forhandling abstract singular measures with lower dimensional support. Based onthis analysis, we claim that penalizing the data manifold or sample manifold isthe key to regularizing the original WGAN with a gradient penalty. Experimentalresults obtained with unintuitive penalty measures that satisfy our assumptions arealso provided to support our theoretical results.1 I NTRODUCTIONDeep generative models reached a turning point after generative adversarial networks (GANs) wereproposed by Goodfellow et al. (2014). GANs are capable of modeling data with complex structures.For example, DCGAN can sample realistic images using a convolutional neural network (CNN)structure(Radford et al., 2015). GANs have been implemented in many applications in the field ofcomputer vision with good results, such as super-resolution, image translation, and text-to-imagegeneration(Ledig et al., 2017; Isola et al., 2017; Zhang et al., 2017; Reed et al., 2016).However, despite these successes, GANs are affected by training instability and mode collapse prob-lems. GANs often fail to converge, which can result in unrealistic fake samples. Furthermore, evenif GANs successfully synthesize realistic data, the fake samples exhibit little variability.A common solution to this instability problem is injecting an instance noise and finding differentdivergences. The injection of instance noise into real and fake samples during the training procedurewas proposed by Sønderby et al. (2017), where its positive impact on the low dimensional supportfor the data distribution was shown to be a regularizing factor based on the Wasserstein distance,as demonstrated analytically by Arjovsky & Bottou (2017). 
In f-GAN,f-divergence between thetarget and generator distributions was suggested which generalizes the divergence between two dis-tributions(Nowozin et al., 2016). In addition, a gradient penalty term which is related with SobolevIPM(Integral Probability Metric) between data distribution and sample distribution was suggestedby Mroueh et al. (2018).The Wasserstein GAN (WGAN) is known to resolve the problems of generic GANs by selectingthe Wasserstein distance as the divergence(Arjovsky et al., 2017). However, WGAN often failswith simple examples because the Lipschitz constraint on discriminator is rarely achieved duringthe optimization process and weight clipping. Thus, mimicking the Lipschitz constraint on thediscriminator by using a gradient penalty was proposed by Gulrajani et al. (2017).Noise injection and regularizing with a gradient penalty appear to be equivalent. The addition ofinstance noise in f-GAN can be approximated to adding a zero centered gradient penalty(Roth et al.,2017). Thus, regularizing GAN with a simple gradient penalty term was suggested by Meschederet al. (2018) who provided a proof of its stability.1Under review as a conference paper at ICLR 2019Based on a theoretical analysis of the dynamic system, Nagarajan & Kolter (2017) proved the localexponential stability of the gradient-based optimization dynamics in GANs by treating the simul-taneous gradient descent algorithm with a dynamic system approach. These previous studies wereuseful because they showed that the local behavior of GANs can be explained using dynamic systemtools and the related Jacobian’s eigenvalues.In this study, we aim to prove the convergence property of the simple gradient penalty -WassersteinGAN(SGP-WGAN) dynamic system under general gradient penalty measures . To the best ofour knowledge, our study is the first theoretical approach to GAN stability analysis which dealswith abstract singular penalty measure. In addition, measure valued differentiation(Heidergott &V ́azquez-Abad, 2008) is applied to take the derivative on the integral with a parametric measure,which is helpful for handling an abstract measure and its integral in our proof.The main contributions of this study are as follows.We prove the regularized effect and local stability of the dynamic system for a gen-eral penalty measure under suitable assumptions. The assumptions are written as both atractable strong version and intractable weak version. To prove the main theorem, we alsointroduce the measure valued differentiation concept to handle the parametric measure.Based on the proof of the stability, we explain the reason for the success of previous penaltymeasures. We claim that the support of a penalty measure will be strongly related to thestability, where the weight on the limiting penalty measure might affect the speed of con-vergence.We experimentally examined the general convergence results by applying two test penaltymeasures to several examples. The proposed test measures are unintuitive but they stillsatisfy the assumptions and similar convergence results were obtained in the experiment.2 P RELIMINARIESFirst, we introduce our notations and basic measure-theoretic concepts. Second, we define our SGP-WGAN optimization problem and treat this problem as a continuous dynamic system. Preliminarymeasure theoretic concepts are required to justify that the dynamic system changes in a sufficientlysmooth manner as the parameter changes, so it is possible to use linearization theorem. 
They arealso important for dealing with the parametric measure and its derivative. The problem setting witha simple gradient term is also discussed. The squared gradient size and simple gradient penalty termare used to build a differentiable dynamic system and to apply soft regularization as a resolving con-straint, respectively. The continuous dynamic system approach, which is a so-called ODE method,is used to analyze the GAN optimization problem with the simultaneous gradient descent algorithm,as described by Nagarajan & Kolter (2017).2.1 N OTATIONS AND PRELIMINARIES REGARDING MEASURE THEORYD(x; ) :X ! Ris a discriminator function with its parameter andG(z;) :Z ! X isa generator function with its parameter .pdis the distribution of real data and pg=pis thedistribution of the generated samples in X, which is induced from the generator function G(z;)and a known initial distribution platent (z)in the latent spaceZ.kkdenotes theL2Euclidean normif no special subscript is present.The concept of weak convergence for finite measures is used to ensure the continuity of the integralterm over the measure in the dynamic system, which must be checked before applying the theoremsrelated to stability. Throughout this study, we assume that the measures in the sample space are allfinite and bounded.Definition 1. For a set of finite measures figi2Iin(Rn;d)with euclidean distance d,figi2Iisreferred to as bounded if there exists some M > 0such that for all i2I,i(Rn)MFor instance, Mcan be set as 1 iffigare probability measures on Rn. Assuming that the penaltymeasures are bounded, Portmanteau theorem offers the equivalent definition of the weak conver-2Under review as a conference paper at ICLR 2019gence for finite measures. This definition is important for ensuring that the integrals over pandin the dynamic system change continuously.Definition 2. (Portmanteau Theorem) For a bounded sequence of finite measures fngn2Non theEuclidean space Rnwith a-field of Borel subsets B(Rn),nconverges weakly to if and only iffor every continuous bounded function onRn, its integrals with respect to nconverge toRd,i.e.,n!()Zdn!ZdThe most challenging problem in our analysis with the general penalty measure is taking the deriva-tive of the integral, where the measure depends on the variable that we want to differentiate. If ourpenalty measure is either absolutely continuous or discrete, then it is easy to deal with the integral.However, in the case of singular penalty measure, dealing with the integral term is not an easy task.Therefore, we introduce the concept of a weak derivative of a probability measure in the follow-ing(Heidergott & V ́azquez-Abad, 2008). The weak derivative of a measure is useful for handling aparametric measure that is not absolutely continuous with low dimensional support.Definition 3. (Weak Derivatives of a Probability Measure) Consider the Euclidean space and its-field of Borel subsets (Rd;B(Rd)). The probability measure Pis called weakly differentiable atif a signed finite measure P0exists whereddZ(x)dP= lim!01fZ(x)dP+Z(x)dPg=Z(x)dP0is satisfied for every continuous bounded function onRn. For the multidimensional parameter ,this can be defined similar manner.We can show that the positive part and negative part of P0have the same mass by putting (x) = 1and the Hahn–Jordan decomposition on P0. 
Therefore, the following triple (c;P+;P)is called aweak derivative of P, wherePare probability measures and P0is rewritten as:P0=cP+cPTherefore,ddZ(x)dP=Z(x)dP0=c(Z(x)dP+Z(x)dP)holds for every continuous bounded function onRn. It is known that the representation of(c;P+;P)forP0is not unique because (c+C;P++q;P+q)is also another repre-sentation of P0.For the general finite measure Q, a normalizing coefficient M()<1can be introduced. Theproduct rule for differentiating can also be applied in a similar manner to calculus.ddZ(x;)dP=Zr(x;)dP+Z(x;)dP0Therefore, for the general finite measure Q=M()P, its derivative Q0can be represented asbelow.Q0=M0()P+M()P0=M0()P+cM()P+cM()P2.2 P ROBLEM SETTING AS A DYNAMIC SYSTEMPrevious work of Mescheder et al. (2018) showed that the dynamic system of WGAN-GP is notnecessarily stable at equilibrium by demonstrating that the sequence of parameters is not Cauchysequence. This is mainly due to the term kxkin the dynamic system which has a derivativexkxkthatis not defined at x= 0. WGAN-GP has a penalty term EGP[(krxD(x; )k1)2]that can lead toa discontinuity in its dynamic system.These problems can be avoided by using the squared value of the gradient’s norm krxDk2, which isa differentiable function. In contrast to the WGAN-GP, recent methods based on a gradient penaltysuch as the simple gradient penalty employed by Mescheder et al. (2018) and the Sobolev GAN used3Under review as a conference paper at ICLR 2019the average of the squared values for the penalty area, whereas the WGAN-GP penalizes the size ofthe discriminator’s gradient krxDkaway from 1 in a pointwise manner.This advantage of squared gradient term1,E[krxDk2], makes the dynamic system differentiableand we define the WGAN problem with the square of the gradient’s norm as a simple gradientpenalty. This simple gradient penalty can be treated as soft regularization based on the size of thediscriminator’s gradient, especially in case where is the probability measure (Roth et al., 2017). Itis convenient to determine whether the system is stable by observing the spectrum of the Jacobianmatrix. In the following, (D(x; );pd;p;)is defined as an SGP -WGAN optimization problem(SGP-form) with a simple gradient penalty term on the penalty measure .Definition 4. The WGAN optimization problem with a simple gradient penalty term krxDk2,penalty measure , and penalty weight hyperparameter >0is given as follows, where the penaltyterm is only introduced to update the discriminator.max :Epd[D(x; )]Ep[D(x; )]2E[krxD(x; )k2]min:Epd[D(x; )]Ep[D(x; )]According to Nagarajan & Kolter (2017) and many other optimization problem studies, the simul-taneous gradient descent algorithm for GAN updating can be viewed as an autonomous dynamicsystem of discriminator parameters and generator parameters, which we denote as and. As aresult, the related dynamic system is given as follows._ =Epd[r D]Ep[r D]2r E[rTxDrxD]_=rEp[D]3 T OYEXAMPLESWe investigate two examples considered in previous studies by Mescheder et al. (2018) and Nagara-jan & Kolter (2017). We then generalize the results to a finite measure case. The first example is theunivariate Dirac GAN, which was introduced by Mescheder et al. (2018).Definition 5. (Dirac GAN) The Dirac GAN comprises a linear discriminator D(x; ) = x, datadistribution pd=0, and sample distribution p=.The Dirac GAN with a gradient penalty with an arbitrary probability measure is known to be globallyconvergent(Mescheder et al., 2018). 
We argue that this result can be generalized to a finite penaltymeasure case.Lemma 1. Consider the Dirac GAN problem with SGP form (D(x; ) = x; 0;; ;). Sup-pose that some small >0exists such that its finite penalty measure ;with massM( ;) = R1d ;0satisfies eitherM( ;)>0for( ;)2B((0;0))orM(0;0) = 0 and r M( ;)0for( ;)2B((0;0)).Then, the SGP -WGAN optimization dynamics with (D(x; ) = x; 0;; ;)are locally stableat the origin and the basin of attraction B=BR((0;0))is open ball with radius R. Its radius isgiven as follows.R= maxf0j2M( ;) + r M( ;)0for all ( ;)such that 2+22gMotivated by this example, we can extend this idea to the other toy example given by Nagarajan &Kolter (2017), where WGAN fails to converge to the equilibrium points ( ;) = (0;1).1In this study, we prefer to use the expectation notation on the finite measure, which can be understoodas follows. Suppose that ;=M( ;) ;where ;is normalized to the probability measure. Then,E ;[krxDk2] =E ;[M( ;)krxDk2] =RkrxDk2M( ;)d ;(x) =RkrxDk2d ;(x)4Under review as a conference paper at ICLR 2019Lemma 2. Consider the toy example (D(x; ) = x2;U(1;1);U(jj;jj);)whereU(0;0) =0and the ideal equilibrium points are given by ( ;) = (0;1). For a finite measure=onRwhich is independent of , suppose that !with6=C0forC0. Thedynamic system is locally stable near the desired equilibrium (0;1), where the spectrum of theJacobian at (0;1)is given by=2E[x2]q42E[x2]249.4 M AINCONVERGENCE THEOREMWe propose the convergence property of WGAN with a simple gradient penalty on an arbitrarypenalty measure for a realizable case: =withpd=pexists. In subsection 4.1, we providethe necessary assumptions, which comprise our main convergence theorem. In subsection 4.2, wegive the main convergence theorem with a sketch of the proof. A more rigorous analysis is given inthe Appendix.4.1 A SSUMPTIONSThe first assumption is made regarding the equilibrium condition for GANs, where we state the idealconditions for the discriminator parameter and generator parameter. As the parameters converge tothe ideal equilibrium, the sample distribution (p)converges to the real data distribution (pd)and thediscriminator cannot distinguish the generated sample and the real data.Assumption 1. p!pdas!andD(x; ) = 0 onsupp(pd)and its small openneighborhood, i.e., x2[x02supp(pd)Bx0(x0)impliesD(x; ) = 0 . For simplicity, we denote[x02supp(pd)Bx0(x0)asB(supp(pd)).The second assumption ensures that the higher order terms cannot affect the stability of the SGP-WGAN. In the Appendix, we consider the case where the WGAN fails to converge when As-sumption 2 is not satisfied. Compared with the previous study by Nagarajan & Kolter (2017), theconditions for the discriminator parameter are slightly modified.Assumption 2.g() =kEpd[r D(x; )]Ep[r D(x; )]k2;h( ) =E ;[krxD(x; )k2]are locally constant along the nullspace of the Hessian matrix.The third assumption allows us to extend our results to discrete probability distribution cases, asdescribed by Mescheder et al. (2018).Assumption 3.9g>0such thatD(x; ) = 0 on[jj<gsupp(p).The fourth assumption indicates that there are no other “bad” equilibrium points near ( ;),which justifies the projection along the axis perpendicular to the null space.Assumption 4. A bad equilibrium does not exist near the desired equilibrium point. Thus, ( ;)is an isolated equilibrium or there exist d;g>0such that all equilibrium points in Bd( )Bg()satisfy the other assumptions.The last assumption is related to the necessary conditions for the penalty measure. 
A calculationof the gradient penalty based on samples from the data manifold and generator manifold or theinterpolation of both was introduced in recent studies (Gulrajani et al., 2017; Roth et al., 2017;Mescheder et al., 2018). First, we propose strong conditions for the penalty measure.Assumption 5. The finite penalty measure =satisfies the followings:a!=andis independent of the discriminator parameter .bsupp(pd)supp()c9>0such thatsupp()B(supp(pd))forjj<.The assumption given above means that the support of the penalty measure should approach thedata manifolds smoothly as !. However, the penalty measure from WGAN-GP with a simple5Under review as a conference paper at ICLR 2019gradient penalty still reaches equilibrium without satisfying Assumption 5c. Therefore, we suggestAssumption 6, which is a weak version of Assumption 5. Assumption 6a2is technically required totake the derivative of the integral E ;[krxD(x; )k2]with respect to .Assumption 6. (Weak version of Assumption 5) The finite penalty measure = ;satisfies thefollowing.a ;! ;=, wheresupp( ;)only depends on . Near the equilibrium, ;can be weakly differentiated twice with respect to . In addition, its mass M( ;) = R1d ;is a twice-differentiable function of and bounded near the equilibrium.bE[r xDrT xD]is positive definite or supp(pd)supp().c9>0such thatsupp()Vforjj<, whereV=fxjrxD(x; ) = 0g.The assumption above implies the following situations; The penalty measure’s support approachesto data manifold and its weight changes smoothly with respect to and. At the equilibrium,penalty measure’s support contains data manifold. Also, ideal discriminator will remain flat on thepenalty area.In summary, the gradient penalty regularization term with any penalty measure where the supportapproachesB(supp(pd))in a smooth manner works well and this main result can explain the regu-larization effect of previously proposed penalty measures such as GP,pd,p, and their mixtures.4.2 M AINCONVERGENCE THEOREMAccording to the modified assumptions given above, we prove that the related dynamic system islocally stable near the equilibrium. The tools used for analyzing stability are mainly based on thosedescribed by Nagarajan & Kolter (2017). Our main contributions comprise proposing the necessaryconditions for the penalty measure and proving the local stability for all penalty measures that satisfyAssumption 6.Theorem 1. Suppose that our SGP -WGAN optimization problem (D;pd;p;)with equilibriumpoint ( ;)satisfies the assumptions given above. Then, the related dynamic system is locallystable at the equilibrium.A detailed proof of the main convergence theorem is given in the Appendix. A sketch of the proof isgiven in three steps. First, the undesired terms in the Jacobian matrix of the system at the equilibriumare cancelled out. Next, the Jacobian matrix at equilibrium is given byQRRT0, whereQ=E[r xDrT xD]andR=rEp[r D]j=. The system is locally stable when both QandRTRare positive definite. We can complete the proof by dealing with zero eigenvalues by showingthatN(QT)N(RT)and the projected system’s stability implies the original system’s stability.Our analysis mainly focuses on WGAN, which is the simplest case of general GAN minimax opti-mizationmax :Epd[f(D(x; ))] +Ep[f(D(x; ))]2E[krxD(x; )k2]min:Epd[f(D(x; ))] +Ep[f(D(x; ))]withf(x) =x. 
Similar approach is still valid for general GANs with concave function fwithf00(x)<0andf0(0)6= 0.5 E XPERIMENTAL RESULTSWe claim that every penalty measure that satisfies the assumptions can regularize the WGAN andgenerate similar results to the recently proposed gradient penalty methods. Several penalty measures2This condition is technically required to handle the derivative of the measure in a convenient manner usingthe weak formulation. Even if the measure is not differentiable, it may possible to differentiate the integral. Forinstance, is continuous but it does not have its weak derivative. However, it is still possible to differentiateE [!(x)] =!( )if the function !is differentiable at .6Under review as a conference paper at ICLR 2019were tested based on two-dimensional problems (mixture of 8 Gaussians, mixture of 25 Gaussians,and swissroll), MNIST and CIFAR-10 datasets using a simple gradient penalty term. In the com-parisons with WGAN, the recently proposed penalty measures and our test penalty measures usedthe same network settings and hyperparameters. The penalty measures and its detailed samplingmethods are listed in Table 1, where xdpd;xgp, andU(0;1).Aindicates fixed anchorpoint inX.Table 1: List of benchmark WGANs (WGAN and WGAN-GP with non-zero centered gradientpenalty) and 5 penalty measures with a simple gradient penalty term. In this table, WGAN-GPrepresents the previous model proposed by (Gulrajani et al., 2017), which penalizes the WGANwith non-zero centered gradient penalty terms, whereas GPrepresents the simple method. In ourexperiment, no additional weights are applied on 5 penalty measures and they are all probabilitydistributions.Penalty Penalty term Penalty measure, sampling methodWGAN None(Weight Clipping) NoneWGAN-GP E[(krxDk1)2] ^ x=xd+ (1)xgpg E[krxDk2] ^ x=xgpd E[krxDk2] ^ x=xdGP E[krxDk2] ^ x=xd+ (1)xgmid E[krxDk2] ^ x= 0:5xd+ 0:5xgg;anc E[krxDk2] ^ x=A+ (1)xgBy setting the previously proposed WGAN with weight-clipping(Arjovsky et al., 2017) and WGAN-GP(Gulrajani et al., 2017) as the baseline models, SGP -WGAN was examined with various penaltymeasures comprising three recently proposed measures and two artificially generated measures. pandpdwere suggested by Mescheder et al. (2018) and GPwas introduced from the WGAN-GP.We analyzed the artificial penalty measures midandg;anc as the test penalty measures.The experiments were conducted based on the implementation of the Gulrajani et al. (2017). The hy-perparameters, generator/discriminator structures, and related TensorFlow implementations can befound at https://github.com/igul222/improved_wgan_training (Gulrajani et al.,2017). Only the loss function was modified slightly from a non-zero centered gradient penalty to asimple penalty. For the CIFAR-10 image generation tasks, the inception score(Salimans et al., 2016)and FID(Heusel et al., 2017) were used as benchmark scores to evaluate the generated images.5.1 2D E XAMPLES AND MNISTWe checked the convergence of pfor the 2D examples (8 Gaussians, swissroll data, and 25 Gaus-sians) and MNIST digit generation for the SGP-WGANs with five penalty measures. MNIST and25 Gaussians were trained over 200K iterations, the 8 Gaussians were trained for 30K iterations, andthe Swiss Roll data were trained for 100K iterations. The anchor Aforg;anc was set as (2;1)for the 2D examples and 784 gray pixels for MNIST. We only present the results obtained for theMNIST dataset with the penalty measures comprising midandg;anc in Figure 1. 
The others arepresented in the Appendix.Figure 1: MNIST example. Images generated with mid(left) andg;anc (right).7Under review as a conference paper at ICLR 20195.2 CIFAR-10DCGAN and ResNet architectures were tested on the CIFAR-10 dataset. The generators weretrained for 200K iterations. The anchor Aforg;anc during CIFAR-10 generation was set as fixedrandom pixels. The WGAN, WGAN-GP, and five penalty measures were evaluated based on theinception score and FID, as shown in Table 2, which are useful tools for scoring the quality of gen-erated images. The images generated from midandg;anc with ResNet are shown in Figure 2. Theothers are presented in the Appendix.Table 2: Benchmark score results obtained based on the CIFAR-10 dataset under DCGAN andResNet architectures. The higher inception score and lower FID indicate the good quality of thegenerated images.PenaltyDCGAN ResNetInception FID Inception FIDWGAN35:640:09 48.7 - -WGAN-GP 6:480:10 35.0 7:820:09 18.1pg 6:460:09 38.0 7:630:10 20.9pd 6:330:07 38.9 7:630:09 20.3GP 6:400:08 35.4 7:600:09 18.3mid 6:600:07 33.9 7:860:07 16.4g;anc 6:450:07 33.7 7:360:09 22.4Figure 2: CIFAR-10 example. Images generated with mid(left) andg;anc (right) under the ResNetarchitecture.6 C ONCLUSIONIn this study, we proved the local stability of simple gradient penalty -WGAN optimization for ageneral class of finite measure . This proof provides insight into the success of regularization withpreviously proposed penalty measures. We explored previously proposed analyses based on vari-ous gradient penalty methods. Furthermore, our theoretical approach was supported by experimentsusing unintuitive penalty measures. In future research, our works can be extended to alternativegradient descent algorithm and its related optimal hyperparameters. Stability at non-realizable equi-librium points is one of the important topics on stability of GANs. Optimal penalty measure forachieving the best convergence speed can be also investigated using a spectral theory, which pro-vides the mathematical analysis on stability of GAN with a precise information on the convergencetheory.3WGAN failed to generate images for the ResNet architecture8Under review as a conference paper at ICLR 2019<|im_end|> <|im_start|>assistant ### Review Title assumptions need better justification ### Review Text This paper shows that an ideal equilibrium point of a SGP-WGAN is stable. It makes several assumptions that, while clear why they are needed in the proof, is unjustified in practice. The authors should elaborate on these assumptions and comment on why they are reasonable. Assumptions 1 and 3 essentially say that there is a tube (both in sample space and in parameter space) around the true data generating distribution in which the discriminator cannot distinguish. This seems a strong restriction to the effect of the discriminator is weak. For example, Assumption 1 says if given a sample slightly off the data manifold, it still cannot distinguish at all. A more reasonable assumption is the ability of the discriminator decays gracefully as samples approach the data manifold. Assumption 2 is also unjustified. Its main effect seems to be to eliminate a few terms in the projected Jacobian in the proof, but its relevance and whether it is reasonable in practice is entirely unmentioned. Finally, it is unclear why this notion of ``measure valued differentiation'' is needed. 
First, differentiation in measure spaces is no different from differentiation in other infinite-dimensional function spaces: the usual notions of Gateaux and Frechet differentiability apply. Second, the derivatives in question are not true ``measure-derivatives'' in the sense that the argument of the function being differentiated is not a measure; it is a finite-dimensional parameter. In the end, this seems to be essentially a derivative of a multivariate function. ### Review Rating 4: Ok but not good enough - rejection ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
SklcyJBtvB
ICLR.cc/2020/Conference
2020
Off-policy Bandits with Deficient Support
["Noveen Sachdeva", "Yi Su", "Thorsten Joachims"]
Off-policy training of contextual-bandit policies is attractive in online systems (e.g. search, recommendation, ad placement), since it enables the reuse of large amounts of log data from the production system. State-of-the-art methods for off-policy learning, however, are based on inverse propensity score (IPS) weighting, which requires that the logging policy chooses all actions with non-zero probability for any context (i.e., full support). In real-world systems, this condition is often violated, and we show that existing off-policy learning methods based on IPS weighting can fail catastrophically. We therefore develop new off-policy contextual-bandit methods that can controllably and robustly learn even when the logging policy has deficient support. To this effect, we explore three approaches that provide various guarantees for safe learning despite the inherent limitations of support deficient data: restricting the action space, reward extrapolation, and restricting the policy space. We analyze the statistical and computational properties of these three approaches, and empirically evaluate their effectiveness in a series of experiments. We find that controlling the policy space is both computationally efficient and that it robustly leads to accurate policies.
["Recommender System", "Search Engine", "Counterfactual Learning"]
ABSTRACT

Off-policy training of contextual-bandit policies is attractive in online systems (e.g. search, recommendation, ad placement), since it enables the reuse of large amounts of log data. State-of-the-art methods for off-policy learning, however, are based on inverse propensity score (IPS) weighting, which requires that the logging policy chooses all actions with non-zero probability for any context (i.e., full support). In real-world systems, this condition is often violated, and we show that existing off-policy learning methods based on IPS weighting can fail catastrophically. We therefore develop new off-policy contextual-bandit methods that can controllably and robustly learn even when the logging policy has deficient support. To this effect, we explore three approaches that provide various guarantees for safe learning despite the inherent limitations of support deficient data: restricting the action space, reward extrapolation, and restricting the policy space. We analyze the statistical and computational properties of these three approaches, and empirically evaluate their effectiveness in a series of experiments. We find that controlling the policy space is both computationally efficient and that it robustly leads to accurate policies.

1 INTRODUCTION

Many interactive systems (e.g., voice assistants, recommender systems, ad placement) can be modeled as contextual bandit problems (Langford & Zhang, 2008). In particular, each user request provides a context (e.g., user profile, query) for which the system selects an action (e.g., recommended product, presented ad) and receives a reward (e.g., purchase, click). Such contextual-bandit data is logged in large quantities as a by-product of normal system operation (Li et al., 2011; 2015; Joachims et al., 2017), making it an attractive and low-cost source of training data. With terabytes of such log data readily available in many online systems, a range of algorithms have been proposed for batch learning from such logged contextual-bandit feedback (Strehl et al., 2011; Dudík et al., 2011; Swaminathan & Joachims, 2015a; Thomas & Brunskill, 2016; Farajtabar et al., 2018; Su et al., 2019; London & Sandler, 2019). However, as we will argue below, these algorithms require an assumption about the log data that makes them unsuitable for many real-world applications.

This assumption is typically referred to as the positivity or support assumption, and it is required by the Empirical Risk Minimization (ERM) objective that these algorithms optimize. Specifically, unlike in online learning for contextual bandits (Williams, 1992; Agarwal et al., 2014), batch learning from bandit feedback (BLBF) operates in the off-policy setting. During off-policy learning, the algorithm has to address the counterfactual question of how much reward each policy in the policy space would have received, if it had been used instead of the logging policy. To this effect, virtually all state-of-the-art off-policy learning methods for contextual-bandit problems rely on counterfactual estimators (Bottou et al., 2013; Dudík et al., 2011; Swaminathan & Joachims, 2015a; Thomas & Brunskill, 2016; Farajtabar et al., 2018; Su et al., 2019) that employ inverse propensity score (IPS) weighting to get an unbiased ERM objective.
Unlike regression-based direct-modeling (DM) approaches that are often hampered by bias from model misspecification, IPS allows a controllable bias-variance trade-off through clipping and other variance-regularization techniques (Strehl et al., 2011; Swaminathan & Joachims, 2015a; London & Sandler, 2019).

Unfortunately, IPS and its variance-control mechanisms break down when the logging policy does not have full support – meaning that some actions have zero probability of being selected under the logging policy. In this case IPS can be highly biased. Full support is an unreasonable assumption in many real-world systems, especially when the action space is large and many actions have poor rewards. For example, in a recommender system with a large catalog (e.g. movies, music), it may be that less than 10% of the actions have support under the logging policy. We will show that existing learning algorithms can fail catastrophically on such support deficient data.

In this paper, we develop new off-policy contextual-bandit algorithms that are specifically designed to deal with support deficient log data. Since support deficiency translates into blind spots where we do not have any knowledge about the rewards, accounting for these blind spots as part of learning is crucial for robust learning. We approach this problem from three perspectives. First, we explore restricting the action space to those actions that have support under the logging policy. Second, we explore imputation methods that extrapolate estimated rewards to those blind spots. And, third, we restrict the policy space to only those policies that have limited exposure to the blind spots. To make the latter approach computationally tractable, we define a new measure of Support Divergence between policies, show how it can be estimated efficiently without closed-form knowledge of the logging policy, and how it can be used as a constraint on the policy space. We analyze the statistical and computational properties of all three approaches and perform an extensive empirical evaluation. We find that restricting the policy space is particularly effective, since it is computationally efficient, empirically effective at learning good policies, and convenient to use in practice.

2 RELATED WORK

Most prior works on BLBF can be classified into two different approaches. The first – called Direct Model (DM) – is based on a reduction to supervised learning, where a regression estimate is trained to predict rewards (Beygelzimer & Langford, 2009). To derive a policy, the action with the highest predicted reward is chosen. A drawback of this simple approach is the bias that results from misspecification of the regression model. Since regression models are often substantially misspecified for real-world data, the DM approach often does not work well empirically.

The second approach is based on policy learning via ERM with a counterfactual risk estimator. Inverse propensity score (IPS) weighting is one of the most popular estimators to be used as empirical risk. However, policy learning algorithms based on IPS and related estimators (Strehl et al., 2011; Swaminathan & Joachims, 2015a;b; Thomas & Brunskill, 2016; London & Sandler, 2019) require the assumption that the logging policy has full support for every policy in the policy space. One exception is the work of Liu et al. (2019). They relax the assumption to the existence of an optimal policy such that the logging policy covers the support of this optimal policy.
However, this is an untestable assumption that does not provide guarantees for real-world applications.

Our work proposes three approaches to addressing off-policy learning with support deficiency. First, our conservative extrapolation method is related to the method proposed by Liu et al. (2019). They focus on the correction of the state distribution by defining an augmented MDP, and pessimistic imputation is used to get an estimate for policy-gradient learning. Second, our method of restricting the policy space uses a surrogate for the support divergence of two policies that was previously used as a control variate in the SNIPS estimator (Swaminathan & Joachims, 2015b). It also appeared in the Lagrangian formulation of the BanditNet objective (Joachims et al., 2018) and in the gradient update of the REINFORCE algorithm (Williams, 1992). This connection gives the interesting new insight that the baselines used in policy-gradient algorithms not only help to reduce variance in gradients (Greensmith et al., 2004), but that they also connect to the problem of support deficiency in the off-policy setting.

3 OFF-POLICY LEARNING WITH DEFICIENT SUPPORT

We start by formally defining the problem of learning a contextual-bandit policy in the BLBF setting. Input to the policy are contexts $x \in X$ drawn i.i.d. from a fixed but unknown distribution $P(X)$. Given context $x$, the system executes a possibly stochastic policy $\pi(Y|x)$ that selects an action $y \in Y$. For this context and action pair, the system observes a reward $r \in [r_{min}, r_{max}]$ from $P(r|x,y)$. Given a space of policies $\Pi$, the reward of any policy $\pi \in \Pi$ is defined as

$R(\pi) = E_x E_{y \sim \pi(y|x)} E_{r \sim P(r|x,y)}[r].$   (1)

In the BLBF setting, the learning algorithm is given a dataset

$D := \{x_i, y_i, r_i, \pi_0(y_i|x_i)\}_{i=1}^{n}$

of past system interactions which consists of context-action-reward-propensity tuples. The propensity $\pi_0(y_i|x_i)$ is the probability of selecting action $y_i$ for context $x_i$ under the policy $\pi_0$ that was used to log the data. We call $\pi_0$ the logging policy, and we will discuss desired conditions on the stochasticity of $\pi_0$ in the following. The goal of off-policy learning is to exploit the information in the logged data $D$ to find a policy $\hat{\pi} \in \Pi$ that has high reward $R(\hat{\pi})$.

Analogous to the ERM principle in supervised learning, off-policy learning algorithms typically optimize a counterfactual estimate $\hat{R}(\pi)$ of $R(\pi)$ as the training objective (Li et al., 2011; 2015; Bottou et al., 2013; Swaminathan & Joachims, 2015a).

$\hat{\pi} = \arg\max_{\pi \in \Pi} [\hat{R}(\pi)]$   (2)

For conciseness, we ignore additional regularization terms in the objective (Swaminathan & Joachims, 2015a), since they are irrelevant to the main point of this paper. As counterfactual estimator $\hat{R}(\pi)$, most algorithms rely on some form of IPS weighting (Strehl et al., 2011; Dudík et al., 2011; Swaminathan & Joachims, 2015a;b; Wang et al., 2017; Su et al., 2019) to correct the distribution mismatch between the logging policy $\pi_0$ and each target policy $\pi \in \Pi$.

$\hat{R}_{IPS}(\pi) = \frac{1}{n} \sum_{i=1}^{n} \frac{\pi(y_i|x_i)}{\pi_0(y_i|x_i)} r_i$   (3)

A crucial condition for the effectiveness of the IPS estimator (and similar estimators) is that the logging policy $\pi_0$ assigns non-zero probability to all actions that have non-zero probability under the target policy we aim to evaluate. This condition is known as positivity or full support, and it is defined as follows.

Definition 1 (Full support). The logging policy $\pi_0$ is said to have full support for $\pi$ when $\pi_0(y|x) > 0$ for all actions $y \in Y$ and contexts $x \in X$ for which $\pi(y|x) > 0$.

It is known that the IPS estimator is unbiased, $E_D[\hat{R}_{IPS}(\pi)] = R(\pi)$, if the logging policy $\pi_0$ has full support for $\pi$ (Li et al., 2011).
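To make the estimator concrete, here is a minimal sketch of the IPS estimate in Equation (3). It is illustrative only; the representation of the log as (x, y, r, p0) tuples and the policy as a callable are our own conventions, not the paper's.

```python
import numpy as np

def ips_estimate(pi, data):
    """Vanilla IPS estimate of R(pi), Equation (3).

    data: logged (x, y, r, p0) tuples, where p0 = pi_0(y|x) is the logged
    propensity; pi(y, x) returns the target policy's probability of y in x.
    """
    return float(np.mean([pi(y, x) / p0 * r for (x, y, r, p0) in data]))
```

Note how actions with $\pi_0(y|x) = 0$ never appear in the log, so whatever reward the target policy would collect on them simply drops out of the sum — this is exactly the bias characterized in Proposition 1 below.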
To ensure unbiased ERM, algorithms that use the IPS estimator require that the logging policy $\pi_0$ has full support for all policies $\pi \in \Pi$ in the policy space. For sufficiently rich policy spaces, like deep networks $f_w(x,y)$ with softmax outputs of the form

$\pi_w(y|x) = \frac{\exp(f_w(x,y))}{\sum_{y' \in Y} \exp(f_w(x,y'))},$   (4)

this means that the logging policy $\pi_0$ needs to assign non-zero probability to every action $y$ in every context $x$. This is a strong condition that is not feasible in many real-world systems, especially if the action space is large and many actions have poor reward.

If the support requirement is violated, ERM learning can fail catastrophically. We will show below that the underlying reason is bias, not excessive variance that could be remedied through clipping or variance regularization (Strehl et al., 2011; Swaminathan & Joachims, 2015a). To quantify how support deficient a logging policy is, we denote the set of unsupported actions for context $x$ under $\pi_0$ as

$U(x, \pi_0) := \{y \in Y \,|\, \pi_0(y|x) = 0\}.$

The bias of the IPS estimator is then characterized by the expected reward on the unsupported actions.

Proposition 1. Given contexts $x \sim P(X)$ and logging policy $\pi_0(Y|x)$, the bias of $\hat{R}_{IPS}$ for target policy $\pi(Y|x)$ is equal to the expected reward on the unsupported action sets, i.e., $bias(\pi|\pi_0) = E_x[\sum_{y \in U(x,\pi_0)} \pi(y|x)\, \delta(x,y)]$, where $\delta(x,y) = E_{r \sim P(r|x,y)}[r]$ denotes the expected reward.

The proof is in Appendix A.1. From Proposition 1, it is clear that support deficient log data can drastically mislead ERM learning. To quantify the effect of support deficiency on ERM, we define the support divergence between a logging policy $\pi_0$ and a target policy $\pi$ as follows.

Definition 2 (Support Divergence). For contexts $x \sim P(X)$ and any corresponding pair of target policy $\pi$ and logging policy $\pi_0$, the Support Divergence is defined as

$D_X(\pi|\pi_0) := E_{x \sim P(X)}\Big[\sum_{y \in U(x,\pi_0)} \pi(y|x)\Big].$   (5)

With this definition in hand, we can quantify the effect of support deficiency on ERM learning for a policy space $\Pi$ under logging policy $\pi_0$.

Theorem 1. For any given hypothesis space $\Pi$ with logging policy $\pi_0 \in \Pi$, there exists a reward distribution $P_r$ with support in $[r_{min}, r_{max}]$ such that in the limit of infinite training data, ERM using IPS over the logged data $D \sim P(X)\,\pi_0(\cdot|X)\,P_r$ can select a policy $\hat{\pi} \in \arg\max_{\pi \in \Pi} E_D[\hat{R}_{IPS}(\pi)]$ that is at least $(r_{max} - r_{min}) \max_{\pi \in \Pi} D_X(\pi|\pi_0)$ suboptimal.

The proof is in Appendix A.2. To illustrate the theorem, consider a problem with rewards $r \in [-1, 0]$. Furthermore, consider a policy space $\Pi$ that contains a good policy $\pi_g$ with $R(\pi_g) = -0.1$ and a bad policy $\pi_b$ with $R(\pi_b) = -0.7$. If policy $\pi_b$ has support divergence $D_X(\pi_b|\pi_0) = 0.6$ or larger, then ERM may return the bad $\pi_b$ instead of $\pi_g$ even with infinite amounts of training data. Note that it is sufficient to merely have one policy in $\Pi$ that has large support deficiency to achieve this suboptimality. It is therefore crucial to control the support divergence $D_X(\pi|\pi_0)$ uniformly over all $\pi \in \Pi$, or to account for the suboptimality it can induce. To this effect, we explore three approaches in the following.

3.1 SAFE LEARNING BY RESTRICTING THE ACTION SPACE

The first and arguably most direct approach to reducing $D_X(\pi|\pi_0)$ is to disallow any action that has zero support under the logging policy. For the remaining action set, the logging policy has full support by definition. This restriction of the action set can be achieved by transforming each policy $\pi \in \Pi$ into a new policy $\bar{\pi}$ that sets the probability of the unsupported actions to zero.

$\pi(y|x) \;\rightarrow\; \bar{\pi}(y|x) := \frac{\pi(y|x)\; 1\{y \notin U(x,\pi_0)\}}{1 - \sum_{y' \in U(x,\pi_0)} \pi(y'|x)}$   (6)

This results in a new policy space $\bar{\Pi}$. All $\bar{\pi} \in \bar{\Pi}$ have support divergence of zero, $D_X(\bar{\pi}|\pi_0) = 0$, and ERM via IPS is guaranteed to be unbiased.
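A sketch of the transformation in Equation (6), in the same illustrative Python conventions as above (policies as callables, with `actions` enumerating the action set Y):

```python
def restrict_policy(pi, pi0, actions):
    """Equation (6): zero out unsupported actions and renormalize.

    Note that executing pi_bar requires evaluating the logging policy
    pi0 for every context -- the computational drawback discussed below.
    Assumes pi places non-zero mass on at least one supported action.
    """
    def pi_bar(y, x):
        if pi0(y, x) == 0:                           # y in U(x, pi0)
            return 0.0
        mass = sum(pi(a, x) for a in actions if pi0(a, x) > 0)
        return pi(y, x) / mass                       # renormalize over support
    return pi_bar
```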
While this transformation of the policy space from $\Pi$ to $\bar{\Pi}$ is conceptually straightforward, it has two potential drawbacks. First, restricting the action space without any exceptions may overly constrain the policies in $\bar{\Pi}$. In particular, if the optimal action $y^*$ for a specific context $x$ does not have support under the logging policy, no $\bar{\pi} \in \bar{\Pi}$ can ever choose $y^*$ even if there are many observations of similar $y$'s on similar contexts $x'$. The second drawback is computational. For every context $x$ during training and testing, the system needs to evaluate the logging policy $\pi_0(y|x)$ to compute the transformation from $\pi$ to $\bar{\pi}$. This can be prohibitively expensive especially at test time, where – after multiple rounds of off-policy learning with data from previously learned policies – we would need to evaluate the whole sequence of previous logging policies to execute the learned policy.

3.2 SAFE LEARNING THROUGH REWARD EXTRAPOLATION

As illustrated above, support deficiency is a problem of blind spots where we lack information about the rewards of some actions in some contexts. Instead of disallowing the unsupported actions like in the previous section, an alternative is to extrapolate the observed rewards to fill in the blind spots. To this effect, we propose the following augmented IPS estimator that imputes an extrapolated reward $\hat{\delta}(x,y)$ for each unsupported action $y \in U(x,\pi_0)$.

$\hat{R}_{IPS}(\pi) = \frac{1}{n} \sum_{i=1}^{n} \Big[\frac{\pi(y_i|x_i)}{\pi_0(y_i|x_i)}\, r_i + \sum_{y \in U(x_i,\pi_0)} \pi(y|x_i)\, \hat{\delta}(x_i,y)\Big]$   (7)

In the following proposition, we characterize the bias of the augmented IPS estimator for any given reward extrapolation $\hat{\delta}(x,y)$. We denote the mean of the reward $r$ for context $x$ and action $y$ with $\delta(x,y) = E_{r \sim P(r|x,y)}[r]$. Furthermore, $\Delta(x,y) := \hat{\delta}(x,y) - \delta(x,y)$ denotes the error of the reward extrapolation for each $x$ and $y$.

Proposition 2. Given contexts $x_1, x_2, \ldots, x_n$ drawn i.i.d. from the unknown distribution $P(X)$, for action $y_i$ drawn independently from logging policy $\pi_0$ with probability $\pi_0(Y|x_i)$, the bias of the empirical risk defined in Equation (7) is $E_x[\sum_{y \in U(x,\pi_0)} \pi(y|x)\, \Delta(x,y)]$.

Algorithm 1: Data Augmentation
  input: original logged dataset $D$, replay count $k$, reward estimate $\hat{\delta}(x,y)$; output: $D'$
  initialization: $D' = \emptyset$
  for $j = 1, \ldots, k$ do
    for $i = 1, \ldots, n$ do
      Define $U_{x_i}$ to be the uniform distribution over $U(x_i, \pi_0)$;
      Draw $y \sim U_{x_i}$;
      $D' = D' \cup \{(x_i,\, y,\, \hat{\delta}(x_i,y),\, \frac{1}{|U(x_i,\pi_0)|})\}$;
    end
  end

In this way we can learn in the original action and policy space, but mitigate the effect of the support deficiency by explicitly incorporating the extrapolated reward $\hat{\delta}(x,y)$. We explore two choices for $\hat{\delta}(x,y)$ in the following, which provide different types of guarantees.

Conservative Extrapolation. To minimize the user impact of randomization in the logging policy, it is generally desirable to put zero probability on actions that are very likely to have low (or even catastrophic) reward. This means that precisely those bad actions are likely to not be supported in the logging policy. A key danger of blind spots regarding those actions is that naive IPS training will inadvertently learn a policy that selects those actions. This can be avoided by being maximally conservative about unsupported actions and imputing the lowest possible reward, $\forall x,\, y \in U(x,\pi_0): \hat{\delta}(x,y) = r_{min}$. Intuitively, by imposing the worst possible reward for the unsupported actions, the learning algorithm will aim to avoid these low-reward areas. However, unlike for the policies resulting from the restricted action space, the learned policy is not strictly prohibited from choosing unsupported actions – it is merely made aware of the maximum loss that the action may incur. Note that for problems where $r_{min} = 0$, the naive IPS estimator is identical to conservative extrapolation since the second term in Equation (7) is zero.
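A sketch of Equation (7) under conservative extrapolation, continuing the same illustrative conventions; the `unsupported(x)` helper returning $U(x,\pi_0)$ is hypothetical. Swapping the constant `r_min` for a regression estimate $\hat{\delta}(x,y)$ gives the regression-extrapolation variant discussed next.

```python
def augmented_ips(pi, data, unsupported, r_min):
    """Augmented IPS, Equation (7), with delta_hat(x, y) = r_min imputed
    on every unsupported action (conservative extrapolation)."""
    total = 0.0
    for (x, y, r, p0) in data:
        total += pi(y, x) / p0 * r                              # IPS term
        total += sum(pi(a, x) for a in unsupported(x)) * r_min  # imputed term
    return total / len(data)
```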
Regression Extrapolation. Instead of extrapolating with the worst-case reward, we may have additional prior knowledge in the form of a model-based estimate that reduces the bias. In particular, we explore using a regression estimate $\hat{\delta} = \arg\min_{\hat{\delta}} \frac{1}{n}\sum_{i=1}^{n} (\hat{\delta}(x_i,y_i) - r_i)^2$ that extrapolates from the observed data $D$. Typically, $\hat{\delta}$ comes from a parameterized class of regression functions. Other regression objectives could also be used, such as weighted linear regression that itself uses importance sampling as weights (Farajtabar et al., 2018). But, fundamentally, all regression approaches assume that the regression model is not misspecified and that it can thus extrapolate well. Note that the IPS part of Equation (7) can be changed to any estimator (with the action set restricted to $U(x,\pi_0)^c$ for all $x$), and it turns out that doubly robust (Dudík et al., 2011) and CAB (Su et al., 2019) are special extensions of regression extrapolation that substitute the IPS part with their corresponding estimator.

Efficient Approximation. Evaluating the augmented IPS estimator from Equation (7) can be computationally expensive if the number of unsupported actions $U(x,\pi_0)$ is large. To overcome this problem, we propose to use sampling to estimate the expected reward on the unsupported actions, which can be thought of as augmenting the dataset $D$ with additional observations where the logging policy has zero support. In particular, we propose the data-augmentation procedure detailed in Algorithm 1. With the additional bandit data $D' = \{x'_j, y'_j, \hat{\delta}(x'_j,y'_j), p'_j\}_{j=1}^{m}$ from Algorithm 1, the new objective is

$\arg\min_{\pi \in \Pi} \Big\{\frac{1}{n}\sum_{i=1}^{n} \frac{\pi(y_i|x_i)}{\pi_0(y_i|x_i)}\, r_i + \frac{1}{m}\sum_{j=1}^{m} \frac{\pi(y'_j|x'_j)}{p'_j}\, \hat{\delta}(x'_j,y'_j)\Big\}.$   (8)

In Appendix A.5, we show that the empirical risk in Equation (8) has the same expectation (over randomness in $D$ and $D'$) as $\hat{R}_{IPS}(D)$ and can thus serve as an approximation for Equation (7).
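A sketch of Algorithm 1 and the sampling objective in Equation (8), again in the illustrative conventions used above; `delta_hat` and `unsupported` are assumed helpers.

```python
import random
import numpy as np

def augment_dataset(data, unsupported, delta_hat, k=1, seed=0):
    """Algorithm 1: for each logged context, draw an unsupported action
    uniformly and log it with imputed reward and propensity 1/|U(x, pi_0)|."""
    rng = random.Random(seed)
    d_prime = []
    for _ in range(k):                           # replay count
        for (x, _, _, _) in data:
            u = unsupported(x)
            if not u:                            # guard: U(x, pi_0) is empty
                continue
            y = rng.choice(u)                    # y ~ Uniform(U(x, pi_0))
            d_prime.append((x, y, delta_hat(x, y), 1.0 / len(u)))
    return d_prime

def augmented_objective(pi, data, d_prime):
    """Sampling approximation of Equation (8)."""
    ips = np.mean([pi(y, x) / p0 * r for (x, y, r, p0) in data])
    imp = np.mean([pi(y, x) / p * dh for (x, y, dh, p) in d_prime]) if d_prime else 0.0
    return float(ips + imp)
```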
3.3 SAFE LEARNING BY RESTRICTING THE POLICY SPACE

As motivated by Theorem 1, the risk of learning from support deficient data scales with the maximum support divergence $D_X(\pi|\pi_0)$ among the policies in the policy space $\Pi$. Therefore, our third approach restricts the policy space $\Pi$ to the subset $\bar{\Pi}$ that contains the policies $\pi \in \Pi$ with an acceptably low support divergence $D_X(\pi|\pi_0)$.

$\bar{\Pi} = \{\pi \,|\, \pi \in \Pi \wedge D_X(\pi|\pi_0) \le \epsilon\}$   (9)

The parameter $\epsilon$ has an intuitive meaning. It specifies the maximum probability mass that a learned policy can place on unsupported actions. By limiting this to $\epsilon$, we limit the maximum bias of the ERM procedure according to Proposition 2 while not explicitly torquing the rewards like in conservative reward imputation.

A key challenge, however, is implementing this restriction of the hypothesis space, such that the ERM learner $\hat{\pi} = \arg\max_{\pi \in \bar{\Pi}} [\hat{R}_{IPS}(\pi)]$ only considers the subset $\bar{\Pi} \subseteq \Pi$. In particular, we do not have access to the context distribution $P(X)$ for calculating $D_X(\pi|\pi_0)$, nor would it be possible to enumerate all $\pi \in \Pi$ to check the condition $D_X(\pi|\pi_0) \le \epsilon$, which itself requires a possibly infeasible iteration over all actions. The following theorem (with proof in Appendix A.3) gives us an efficient way of estimating and controlling $D_X(\pi|\pi_0)$ without explicit knowledge of $P(X)$ or access to the logging policy $\pi_0$ beyond the logged propensities.

Theorem 2. For contexts $x_i$ drawn i.i.d. from $P(X)$ and actions $y_i$ drawn from logging policy $\pi_0$, we define $S_D(\pi|\pi_0) = \frac{1}{n}\sum_{i=1}^{n} \frac{\pi(y_i|x_i)}{\pi_0(y_i|x_i)}$. For any policy $\pi$ it holds that

$E_{x \sim P(X)} E_{y \sim \pi_0(\cdot|x)}[S_D(\pi|\pi_0)] + D_X(\pi|\pi_0) = 1$   (10)

Using this theorem, the following proposition (proof in Appendix A.4, empirically verified in Appendix B) gives us an efficient way of implementing the constraint $D_X(\pi|\pi_0) \le \epsilon$ via $1 - S_D(\pi|\pi_0)$.

Proposition 3. For any given $\epsilon \in (0,1)$ and $0 < \gamma < \epsilon/2$, let $p_{min}$ denote the minimum propensity on the supported set, $p_{min} = \min_{x,\, y \in U(x,\pi_0)^c} \pi_0(y|x)$. Then, with probability larger than $1 - 2\exp(-2n\gamma^2 p_{min}^2)$, the constraint $1 - \epsilon + \gamma \le S_D(\pi|\pi_0) \le 1$ will ensure $0 \le D_X(\pi|\pi_0) \le \epsilon$.

We can thus use $1 - S_D(\pi|\pi_0)$ as a surrogate for $D_X(\pi|\pi_0)$ in the training objective:

$\arg\min_{w} \frac{1}{n}\sum_{i=1}^{n} \frac{\pi_w(y_i|x_i)}{\pi_0(y_i|x_i)}\, r_i \quad \text{subject to} \quad 1 - \epsilon + \gamma \;\le\; \frac{1}{n}\sum_{i=1}^{n} \frac{\pi_w(y_i|x_i)}{\pi_0(y_i|x_i)} \;\le\; 1$   (11)

Using Lagrange multipliers, an equivalent dual form of Equation (11) is:

$\max_{u_1, u_2 \ge 0}\; \min_{w}\; \frac{1}{n}\sum_{i=1}^{n} \frac{\pi_w(y_i|x_i)}{\pi_0(y_i|x_i)} (r_i + u_1 - u_2) \;-\; u_1 \;+\; u_2 (1 - \epsilon + \gamma)$   (12)

For each fixed $(u_1, u_2)$ pair, the inner minimization objective is ERM with IPS under a shift of the reward. Instead of maximizing over $(u_1, u_2)$ in the outer objective, we treat $(u_1 - u_2)$ as a hyperparameter that we select on a validation set. We explore various estimators for this model-selection problem in Section 4.

Note that, among the methods we proposed for dealing with support deficiency, this approach is the most efficient to implement, and it does not require access to the logging policy during training or testing. Furthermore, the form of the inner objective coincides with that of BanditNet (Joachims et al., 2018), which is known to work well for deep network training by controlling propensity overfitting (Swaminathan & Joachims, 2015a).
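To illustrate, here is a sketch of the surrogate $S_D$ from Theorem 2 and the reward-shifted inner objective of Equation (12), with $k = u_1 - u_2$ treated as a hyperparameter as described above (same illustrative conventions as before):

```python
import numpy as np

def sd_surrogate(pi, data):
    """S_D(pi | pi_0) from Theorem 2; 1 - S_D estimates D_X(pi | pi_0)."""
    return float(np.mean([pi(y, x) / p0 for (x, y, _, p0) in data]))

def shifted_ips_risk(pi, data, k):
    """Inner objective of Equation (12): IPS risk under the reward shift
    r -> r + k, where k = u1 - u2 acts as a BanditNet-style baseline."""
    return float(np.mean([pi(y, x) / p0 * (r + k) for (x, y, r, p0) in data]))
```

Only the logged propensities p0 appear in either quantity — the logging policy itself is never queried — which is the practical advantage stressed above.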
4 EMPIRICAL EVALUATION

We empirically evaluate the effectiveness and robustness of the three proposed approaches: restricting the action space, conservative and regression extrapolation, as well as restricting the policy space. The semi-synthetic experiments are based on two real-world datasets: one is the popular image classification dataset CIFAR10 (Krizhevsky et al.) and the other is the credit-card fraud dataset of Dal Pozzolo et al. (2015). We use the naive IPS estimator and the regression-based Direct Method (DM) as baselines.

The experiments are set up as follows. We first create a train-validation-test split for both datasets. The training set is used to generate bandit datasets for learning, the validation set is used to generate bandit datasets for model selection, and the full-information test set serves as ground truth for evaluating the learned policies. To simulate bandit feedback for the CIFAR10 dataset, our experiment setup follows the traditional supervised → bandit conversion for multi-class classification datasets (Beygelzimer & Langford, 2009). To not only have bandit data with binary multi-class rewards, we choose a different methodology for the credit-card dataset by designating some features as corresponding to actions and rewards. More details are given in Appendix B.

To get logging policies for generating bandit feedback, we start by training a softmax policy as in Equation (4) on a subset of the full-information data. We then introduce a temperature parameter $\tau$ into the learned policy via $\tau f_w(x,y)$ to be able to control its stochasticity and support deficiency. In particular, we enforce zero support for some actions by clipping the propensities to 0 if they are below a threshold of 0.01. The larger $\tau$, the higher the support deficiency. Note that setting the threshold at 0.01 allows us to control support while the variance of IPS stays bounded. This allows us to study support deficiency without having to worry about variance control. (A sketch of this protocol follows the results below.)

For both logging and target policies, we train softmax policies where $f_w(x,y)$ is a neural network. We use the ResNet20 architecture (He et al., 2016) for CIFAR10, and a fully-connected 2-layer network for the credit-card dataset.

[Figure 1: Learning results with varying support deficiency in the logging policy. Top row: accuracy (CIFAR) and true reward (Credit Card, Credit Card – Translated); bottom row: support divergence of the learned policy; each plotted against the percentage of unsupported actions. Methods: DM Hardmax, Naive IPS, Conservative Extrapolation, Action Restriction, Regression Extrapolation, Policy Restriction, DR.]

How do the methods perform at different levels of support deficiency? Results are shown in Figure 1. First, as expected, learning using naive IPS degrades on both datasets as we make the logging policy more peaked and the number of unsupported actions increases. Note that naive IPS coincides with Conservative Extrapolation, since both datasets are scaled to have a minimum reward of zero. In the rightmost column, however, we translated the rewards to $[-1, 0]$. This has a strong detrimental effect on naive IPS, as it is now overly optimistic about unsupported actions. Second, the approach of dealing with support deficiency by restricting the action space also performs poorly. The second row of plots sheds some light on this, as it shows the support divergence $D_X(\pi|\pi_0)$ of the learned policy. It is zero for Action Restriction as expected, which means that bias is not the problem. Instead, as the number of unsupported actions increases, the best actions are more likely to be pruned and unavailable in the restricted policy space $\bar{\Pi}$. Third, Regression Extrapolation performs better than Conservative Extrapolation on both datasets. In both cases, the DM model is quite good, which also benefits Regression Extrapolation. However, on the credit-card dataset the regression seems better at ranking than at predicting the true reward, which explains why DM performs better than Regression Extrapolation. Fourth, the method that performs well most consistently is Policy Restriction. Unlike all the other IPS-based methods, it performs well even under the translated rewards in the third column of Figure 1. This is because the objective of Policy Restriction coincides with that of BanditNet (Joachims et al., 2018), which is known to remedy propensity overfitting due to the lack of equivariance of the IPS estimator (Swaminathan & Joachims, 2015b).

How does the learning performance change with more training data? Results are shown in Figure 2. As the number of bandit examples increases, Policy Restriction, Regression Extrapolation and DM dominate over most of the range, especially when the percentage of unsupported actions is large. Among the other methods, Action Restriction can take the least advantage of more data. This is plausible, since its maximum performance is limited by the available actions. For similar reasons, Conservative Extrapolation (and equivalently IPS) also flattens out, since it also tightly restricts the action space by imputing the minimum reward.
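Returning to the experimental protocol described at the start of this section, here is a sketch of the supervised → bandit conversion with a tempered softmax logging policy whose small propensities are clipped to zero. The multiplicative placement of the temperature and all function names are our reading of the setup, not code from the paper.

```python
import numpy as np

def make_logging_policy(f_w, tau=1.0, thresh=0.01):
    """Tempered softmax over action scores f_w(x), with propensities
    below `thresh` clipped to 0 to create deficient support."""
    def pi0(x):
        s = tau * f_w(x)                 # larger tau -> more peaked policy
        p = np.exp(s - s.max())
        p /= p.sum()
        p[p < thresh] = 0.0              # enforce zero support
        return p / p.sum()               # renormalize the remaining mass
    return pi0

def log_bandit_feedback(pi0, xs, labels, seed=0):
    """Supervised -> bandit conversion: y ~ pi0(.|x), reward 1{y == label}."""
    rng = np.random.default_rng(seed)
    data = []
    for x, label in zip(xs, labels):
        p = pi0(x)
        y = int(rng.choice(len(p), p=p))
        data.append((x, y, float(y == label), p[y]))
    return data
```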
[Figure 2: Learning results with varying amounts of bandit data on CIFAR10 and the credit-card dataset. Accuracy (CIFAR, at 43% and 81% unsupported actions) and true reward (credit card, at 46% and 80% unsupported actions) are plotted against the number of bandit examples (0.01–0.46 million). Methods: DM Hardmax, Naive IPS, Conservative Extrapolation, Action Restriction, Regression Extrapolation, Policy Restriction.]

% Unsupp. | Oracle | Regr. Extrap. | DM | Cons. Extrap. | SNIPS
45 | 0.878 | 0.878 | 0.878 | 0.878 | 0.876
60 | 0.871 | 0.871 | 0.871 | 0.871 | 0.871
70 | 0.858 | 0.858 | 0.856 | 0.858 | 0.858
77 | 0.856 | 0.854 | 0.854 | 0.856 | 0.856
80 | 0.855 | 0.855 | 0.855 | 0.838 | 0.849

[Right panel: estimated reward as a function of $k = u_1 - u_2$ for Oracle, Cons. Extrapolation, SNIPS, Reg. Extrapolation, and DM.]

Figure 3: Model selection performance on CIFAR10.

How effective are the estimators for model selection? Most learning algorithms have hyperparameters, and we now evaluate how the estimators perform for this secondary learning problem. We specifically focus on the parameter $k := u_1 - u_2$ in Policy Restriction, since it controls how much the learned policies can step outside the region of support. The table on the left of Figure 3 shows the reward of the learned policy when performing model selection with the respective estimator. Oracle is the estimator that has access to the full-information validation set, and can thus be considered as a skyline. We also included the SNIPS estimator (Swaminathan & Joachims, 2015b), which imputes the average reward on the supported actions for the unsupported actions (Gilotte et al., 2018). All estimators perform quite well for model selection on CIFAR, and the results are analogous for the credit-card data (see Appendix B.2). However, the plot to the right of Figure 3 reveals that SNIPS does not accurately reflect the shape of the Oracle curve. Both Regression Extrapolation and DM, however, are found to be sufficiently accurate for reliable model selection.

5 DISCUSSION AND CONCLUSIONS

We identified and analyzed how off-policy learning based on IPS weighting can suffer severely degraded learning performance when the logging policy is support deficient. To remedy this problem, we explored approaches that limit the impact of missing support through three different means: restricting the action space, reward extrapolation and restricting the policy space. We find that the most natural approach of restricting the action space is neither computationally efficient, nor does it learn accurate policies. Reward extrapolation through regression and restricting the policy space, however, both perform well and robustly even at high levels of support deficiency. Among those two methods, reward extrapolation has the potential drawback that we need to compute (and/or sample from) the complement of the logging policy, which can be computationally challenging. Furthermore, having to store all old logging policies is inconvenient in practice. This makes the approach of restricting the policy space particularly attractive, since it is computationally efficient and it does not require access to the logging policy beyond the logged propensity values.
H1glUK7pFr
Official Blind Review #2
3: Weak Reject
This paper considers a new off-policy contextual-bandit method that can learn even when the logging policy has deficient support. Three approaches are explored, namely restricting the action space, reward extrapolation, and restricting the policy space. This paper is well written and it considers the important problem of deficient support. However, the proposed method was only compared to a few old benchmarks. How does the proposed method compare to more recent state-of-the-art off-policy bandit approaches (Liu et al. (2019), Xie et al. (2019), Tang et al. (2019)) in the experiments? The work by Liu et al. (2019) also considered the setting of deficient support. Yao Liu, Adith Swaminathan, Alekh Agarwal, and Emma Brunskill. Off-policy policy gradient with state distribution correction. arXiv:1904.08473, 2019. Xie, Liu, Liu, Wang, and Peng. Off-Policy Evaluation and Learning from Logged Bandit Feedback: Error Reduction via Surrogate Policy. ICLR 2019. Tang, Feng, Li, Zhou, and Liu. Doubly Robust Bias Reduction in Infinite Horizon Off-Policy Estimation. arXiv:1910.07186, 2019.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Off-policy Bandits with Deficient Support ### Paper Abstract Off-policy training of contextual-bandit policies is attractive in online systems (e.g. search, recommendation, ad placement), since it enables the reuse of large amounts of log data from the production system. State-of-the-art methods for off-policy learning, however, are based on inverse propensity score (IPS) weighting, which requires that the logging policy chooses all actions with non-zero probability for any context (i.e., full support). In real-world systems, this condition is often violated, and we show that existing off-policy learning methods based on IPS weighting can fail catastrophically. We therefore develop new off-policy contextual-bandit methods that can controllably and robustly learn even when the logging policy has deficient support. To this effect, we explore three approaches that provide various guarantees for safe learning despite the inherent limitations of support deficient data: restricting the action space, reward extrapolation, and restricting the policy space. We analyze the statistical and computational properties of these three approaches, and empirically evaluate their effectiveness in a series of experiments. We find that controlling the policy space is both computationally efficient and that it robustly leads to accurate policies. ### Paper Keywords ["Recommender System", "Search Engine", "Counterfactual Learning"] ### Paper Content ABSTRACTOff-policy training of contextual-bandit policies is attractive in online systems(e.g. search, recommendation, ad placement), since it enables the reuse of largeamounts of log data. State-of-the-art methods for off-policy learning, however, arebased on inverse propensity score (IPS) weighting, which requires that the loggingpolicy chooses all actions with non-zero probability for any context (i.e., full sup-port). In real-world systems, this condition is often violated, and we show thatexisting off-policy learning methods based on IPS weighting can fail catastroph-ically. We therefore develop new off-policy contextual-bandit methods that cancontrollably and robustly learn even when the logging policy has deficient sup-port. To this effect, we explore three approaches that provide various guaranteesfor safe learning despite the inherent limitations of support deficient data: restrict-ing the action space, reward extrapolation, and restricting the policy space. Weanalyze the statistical and computational properties of these three approaches, andempirically evaluate their effectiveness in a series of experiments. We find thatcontrolling the policy space is both computationally efficient and that it robustlyleads to accurate policies.1 I NTRODUCTIONMany interactive systems (e.g., voice assistants, recommender systems, ad placement) can be mod-eled as contextual bandit problems (Langford & Zhang, 2008). In particular, each user requestprovides a context (e.g., user profile, query) for which the system selects an action (e.g., recom-mended product, presented ad) and receives a reward (e.g., purchase, click). Such contextual-banditdata is logged in large quantities as a by-product of normal system operation (Li et al., 2011; 2015;Joachims et al., 2017), making it an attractive and low-cost source of training data. 
With terabytes ofsuch log data readily available in many online systems, a range of algorithms have been proposed forbatch learning from such logged contextual-bandit feedback (Strehl et al., 2011; Dud ́ık et al., 2011;Swaminathan & Joachims, 2015a; Thomas & Brunskill, 2016; Farajtabar et al., 2018; Su et al., 2019;London & Sandler, 2019). However, as we will argue below, these algorithms require an assumptionabout the log data that makes them unsuitable for many real-world applications.This assumption is typically referred to as the positivity or support assumption, and it is requiredby the Empirical Risk Minimization (ERM) objective that these algorithms optimize. Specifically,unlike in online learning for contextual bandits (Williams, 1992; Agarwal et al., 2014), batch learn-ing from bandit feedback (BLBF) operates in the off-policy setting. During off-policy learning, thealgorithm has to address the counterfactual question of how much reward each policy in the policyspace would have received, if it had been used instead of the logging policy. To this effect, virtuallyall state-of-the-art off-policy learning methods for contextual-bandit problems rely on counterfac-tual estimators (Bottou et al., 2013; Dud ́ık et al., 2011; Swaminathan & Joachims, 2015a; Thomas& Brunskill, 2016; Farajtabar et al., 2018; Su et al., 2019) that employ inverse propensity score(IPS) weighting to get an unbiased ERM objective. Unlike regression-based direct-modeling (DM)approaches that are often hampered by bias from model-misspecification, IPS allows a controllablebias-variance trade-off through clipping and other variance-regularization techniques (Strehl et al.,2011; Swaminathan & Joachims, 2015a; London & Sandler, 2019).Unfortunately, IPS and its variance-control mechanisms break down when the logging policy doesnot have full support – meaning that some actions have zero probability of being selected under thelogging policy. In this case IPS can be highly biased. Full support is an unreasonable assumptionin many real-world systems, especially when the action space is large and many actions have poor1Under review as a conference paper at ICLR 2020rewards. For example, in a recommender system with a large catalog (e.g. movies, music), it may bethat less than 10% of the actions have support under the logging policy. We will show that existinglearning algorithms can fail catastrophically on such support deficient data.In this paper, we develop new off-policy contextual-bandit algorithms that are specifically designedto deal with support deficient log data. Since support deficiency translates into blind spots where wedo not have any knowledge about the rewards, accounting for these blind spots as part of learningis crucial for robust learning. We approach this problem from three perspectives. First, we explorerestricting the action space to those actions that have support under the logging policy. Second,we explore imputation methods that extrapolate estimated rewards to those blind spots. And, third,we restrict the policy space to only those policies that have limited exposure to the blind spots. Tomake the latter approach computationally tractable, we define a new measure of Support Divergencebetween policies, show how it can be estimated efficiently without closed-form knowledge of thelogging policy, and how it can be used as a constraint on the policy space. 
We analyze the statisticaland computational properties of all three approaches and perform an extensive empirical evaluation.We find that restricting the policy space is particularly effective, since it is computationally efficient,empirically effective at learning good policies, and convenient to use in practice.2 R ELATED WORKMost prior works on BLBF can be classified into two different approaches. The first – called DirectModel (DM) – is based on a reduction to supervised learning, where a regression estimate is trainedto predict rewards (Beygelzimer & Langford, 2009). To derive a policy, the action with the highestpredicted reward is chosen. A drawback of this simple approach is the bias that results from mis-specification of the regression model. Since regression models are often substantially misspecifiedfor real-world data, the DM approach often does not work well empirically.The second approach is based on policy learning via ERM with a counterfactual risk estimator. In-verse propensity score (IPS) weighting is one of the most popular estimators to be used as empiricalrisk. However, policy learning algorithms based on IPS and related estimators (Strehl et al., 2011;Swaminathan & Joachims, 2015a;b; Thomas & Brunskill, 2016; London & Sandler, 2019) requirethe assumption that the logging policy has full support for every policy in the policy space. Oneexception is the work of Liu et al. (2019). They relax the assumption to the existence of an optimalpolicy such that the logging policy covers the support of this optimal policy. However, this is anuntestable assumption that does not provide guarantees for real-world applications.Our work proposes three approaches to addressing off-policy learning with support deficiency. First,our conservative extrapolation method is related to the method proposed by Liu et al. (2019). Theyfocus on the correction of the state distribution by defining an augmented MDP, and pessimisticimputation is used to get an estimate for policy-gradient learning. Second, our method of restrictingthe policy space uses a surrogate for the support divergence of two policies that was previously usedas control variate in the SNIPS estimator (Swaminathan & Joachims, 2015b). It also appeared inthe Lagrangian formulation of the BanditNet objective (Joachims et al., 2018) and in the gradientupdate in REINFORCE algorithm (Williams, 1992). This connection gives interesting new insightthat the baselines used in policy-gradient algorithms not only help to reduce variance in gradients(Greensmith et al., 2004), but that they also connect to the problem of support deficiency in theoff-policy setting.3 O FF-POLICY LEARNING WITH DEFICIENT SUPPORTWe start by formally defining the problem of learning a contextual-bandit policy in the BLBF setting.Input to the policy are contexts x2X drawn i.i.d from a fixed but unknown distribution P(X).Given context x, the system executes a possibly stochastic policy (Yjx)that selects an actiony2 Y . For this context and action pair, the system observes a reward r2[rmin;rmax]fromP(rjx;y). Given a space of policies , the reward of any policy 2is defined asR() =ExEy(yjx)ErP(rjx;y)[r]: (1)In the BLBF setting, the learning algorithm is given a datasetD:=fxi;yi;ri;0(yijxi)gni=12Under review as a conference paper at ICLR 2020of past system interactions which consists of context-action-reward-propensity tuples. The propen-sity0(yijxi)is the probability of selecting action yifor contextxiunder the policy 0that wasused to log the data. 
We call 0the logging policy, and we will discuss desired conditions on thestochasticity of 0in the following. The goal of off-policy learning is to exploit the information inthe logged dataDto find a policy ^2that has high reward R(^).Analogous to the ERM principle in supervised learning, off-policy learning algorithms typicallyoptimize a counterfactual estimate ^R()ofR()as the training objective (Li et al., 2011; 2015;Bottou et al., 2013; Swaminathan & Joachims, 2015a).^= arg max2[^R()] (2)For conciseness, we ignore additional regularization terms in the objective (Swaminathan &Joachims, 2015a), since they are irrelevant to the main point of this paper. As counterfactual es-timator ^R(), most algorithms rely on some form of IPS weighting (Strehl et al., 2011; Dud ́ık et al.,2011; Swaminathan & Joachims, 2015a;b; Wang et al., 2017; Su et al., 2019) to correct the distribu-tion mismatch between the logging policy 0and each target policy 2.^RIPS() =1nnXi=1(yijxi)0(yijxi)ri: (3)A crucial condition for the effectiveness of the IPS estimator (and similar estimators) is that thelogging policy 0assigns non-zero probability to all actions that have non-zero probability underthe target policy we aim to evaluate. This condition is known as positivity or full support, and it isdefined as follows.Definition 1 (Full support) .The logging policy 0is said to have full support for when0(yjx)>0for all actions y2Y and contexts x2X for which(yjx)>0.It is known that the IPS estimator is unbiased, ED[^RIPS()] =R(), if the logging policy 0hasfull support for (Li et al., 2011). To ensure unbiased ERM, algorithms that use the IPS estimatorrequire that the logging policy 0has full support for all policies 2in the policy space. Forsufficiently rich policy spaces, like deep-networks fw(x;y)with softmax outputs of the formw(yjx) =exp(fw(x;y))Py02Yexp(fw(x;y0)); (4)this means that the logging policy 0needs to assign non-zero probability to every action yin everycontextx. This is a strong condition that is not feasible in many real-world systems, especially ifthe action space is large and many actions have poor reward.If the support requirement is violated, ERM learning can fail catastrophically. We will show belowthat the underlying reason is bias, not excessive variance that could be remedied through clippingor variance regularization (Strehl et al., 2011; Swaminathan & Joachims, 2015a). To quantify howsupport deficient a logging policy is, we denote the set of unsupported actions for context xunder0asU(x;0) :=fy2Yj0(yjx) = 0g:The bias of the IPS estimator is then characterized by the expected reward on the unsupportedactions.Proposition 1. Given contexts xP(X)and logging policy 0(Yjx), the bias of ^RIPSfor targetpolicy(Yjx)is equal to the expected reward on the unsupported action sets, i.e., bias(j0) =Ex[Py2U(x;0)(yjx)(x;y)].The proof is in Appendix A.1. From Proposition 1, it is clear that support deficient log data candrastically mislead ERM learning. To quantify the effect of support deficiency on ERM, we definethe support divergence between a logging policy 0and a target policy as follows.Definition 2 (Support Divergence) .For contexts xP(X)and any corresponding pair of targetpolicyand logging policy 0, the Support Divergence is defined asDX(j0) := ExP(X)24Xy2U(x;0)(yjx)35: (5)3Under review as a conference paper at ICLR 2020With this definition in hand, we can quantify the effect of support deficiency on ERM learning for apolicy space under logging policy 0.Theorem 1. 
For any given hypothesis space with logging policy 02, there exists areward distribution Prwith support in [rmin;rmax]such that in the limit of infinite trainingdata, ERM using IPS over the logged data D P(X)0(jX)Prcan select a policy^2arg max2ED[^RIPS()]that is at least (rmaxrmin) max2DX(j0)suboptimal.The proof is in Appendix A.2. To illustrate the theorem, consider a problem with rewards r2[1;0]. Furthermore, consider a policy space that contains a good policy gwithR(g) =0:1and a bad policy bwithR(b) =0:7. If policybhas support divergence DX(bj0) = 0:6orlarger, then ERM may return the bad binstead ofgeven with infinite amounts of training data.Note that it is sufficient to merely have one policy in that has large support deficiency to achievethis suboptimality. It is therefore crucial to control the support divergence DX(j0)uniformlyover all2, or to account for the suboptimality it can induce. To this effect, we explore threeapproaches in the following.3.1 S AFE LEARNING BY RESTRICTING THE ACTION SPACEThe first and arguably most direct approach to reducing DX(j0)is to disallow any action thathas zero support under the logging policy. For the remaining action set, the logging policy has fullsupport by definition. This restriction of the action set can be achieved by transforming each policy2into a new policy that sets the probability of the unsupported actions to zero.(yjx)!(yjx) :=(yjx) 1Ify=2U(x;0)g1Py02U(x;0)(y0jx)(6)This results in a new policy space . All 2have support divergence of zero DX(j0) = 0 andERM via IPS is guaranteed to be unbiased.While this transformation of the policy space from tois conceptually straightforward, it has twopotential drawbacks. First, restricting the action space without any exceptions may overly constrainthe policies in . In particular, if the optimal action yfor a specific context xdoes not havesupport under the logging policy, no 2can ever choose yeven if there are many observationsof similary’s on similar context x0. The second drawback is computational. For every context xduring training and testing, the system needs to evaluate the logging policy 0(yjx)to compute thetransformation from to. This can be prohibitively expensive especially at test time, where – aftermultiple rounds of off-policy learning with data from previously learned policies – we would needto evaluate the whole sequence of previous logging policies to execute the learned policy.3.2 S AFE LEARNING THROUGH REWARD EXTRAPOLATIONAs illustrated above, support deficiency is a problem of blind spots where we lack information aboutthe rewards of some actions in some contexts. Instead of disallowing the unsupported actions like inthe previous section, an alternative is to extrapolate the observed rewards to fill in the blind spots. Tothis effect, we propose the following augmented IPS estimator that imputes an extrapolated reward^(x;y)for each unsupported action y2U(x;0).^RIPS() =1nnXi=124(yijxi)0(yijxi)ri+Xy2U(xi;0)(yjxi)^(xi;y)35 (7)In the following proposition, we characterize the bias of the augmented IPS estimator for any givenreward extrapolation ^(x;y). We denote the mean of the reward rfor contextxand actionywith(x;y) =ErP(rjx;y)[r]. Furthermore, (x;y) :=^(x;y)(x;y)denotes the error of the rewardextrapolation for each xandy.Proposition 2. 
Given contexts x1;x2;:::;xndrawn i.i.d from the unknown distribution P(X), foractionyidrawn independently from logging policy 0with probability 0(Yjxi), the bias of theempirical risk defined in Equation (7) is Ex[Py2U0x(yjx)(x;y)].4Under review as a conference paper at ICLR 2020Algorithm 1: Data Augmentationinput: original logged dataset D, replaycount k,reward estimate ^(x;y); output:D0;initialization:D0=;;forj= 1;:::;k dofori= 1;:::;n doDefineUxito be the uniform distributionoverU(xi;0);DrawyUxi;D0=D0Sfxi;y;^(xi;y);1jU(xi;0)jg;endendIn this way we can learn in the original actionand policy space, but mitigate the effect of thesupport deficiency by explicitly incorporatingthe extrapolated reward ^(x;y). We exploretwo choices for ^(x;y)in the following, whichprovide different types of guarantees.Conservative Extrapolation. To minimize theuser impact of randomization in the loggingpolicy, it is generally desirable to put zero prob-ability on actions the are very likely to havelow (or even catastrophic reward). This meansthat precisely those bad actions are likely tonot be supported in the logging policy. A keydanger of blind spots regarding those actions isthat naive IPS training will inadvertently learna policy that selects those actions. This can be avoided by being maximally conservative aboutunsupported actions and imputing the lowest possible reward 8x;y2U(x;0) :^(x;y) =rmin.Intuitively, by imposing the worst possible reward for the unsupported actions, the learning algo-rithm will aim to avoid these low-reward areas. However, unlike for the policies resulting fromthe restricted action space, the learned policy is not strictly prohibited from choosing unsupportedactions – it is merely made aware of the maximum loss that the action may incur. Note that forproblems where rmin= 0, the naive IPS estimator is identical to conservative extrapolation sincethe second term in Equation (7) is zero.Regression Extrapolation. Instead of extrapolating with the worst-case reward, we may have addi-tional prior knowledge in the form of a model-based estimate that reduces the bias. In particular, weexplore using a regression estimate ^= arg min ^1nPni=1(^(xi;yi)ri)2that extrapolates fromthe observed dataD. Typically, ^comes from a parameterized class of regression functions. Otherregression objectives could also be used, such as weighted linear regression that itself uses impor-tance sampling as weights (Farajtabar et al., 2018). But, fundamentally, all regression approachesassume that the regression model is not misspecified and that it can thus extrapolate well. Note thatthe IPS part of Equation (7) can be changed to any estimators (with action set restricted on U(x;0)cfor allx), and it turns out that doubly robust (Dud ́ık et al., 2011) and CAB (Su et al., 2019) are specialextensions of regression extrapolation that substitute the IPS part with their corresponding estimator.Efficient Approximation. Evaluating the augmented IPS estimator from Equation (7) can be com-putationally expensive if the number of unsupported actions U(x;0)is large. To overcome thisproblem, we propose to use sampling to estimate the expected reward on the unsupported action,which can be thought of as augmenting the dataset Dwith additional observations where the log-ging policy has zero support. In particular, we propose the data-augmentation procedure detailed inAlgorithm 1. 
With the additional bandit data D0=fx0j;y0j;^(x0j;y0j);p0jgmj=1from Algorithm 1, thenew objective isarg min28<:1nnXi=1(yijxi)0(yijxi)ri+1mmXj=1(y0jjx0j)p0j^(x0j;y0j)9=;: (8)In Appendix A.5, we show that the empirical risk in Equation (8) has the same expectation (overrandomness inDandD0) as^RIPS(D)and can thus serve as an approximation for Equation (7).3.3 S AFE LEARNING BY RESTRICTING THE POLICY SPACEAs motivated by Theorem 1, the risk of learning from support deficient data scales with the maxi-mum support divergence DX(j0)among the policies in the policy space . Therefore, our thirdapproach restricts the policy space to the subset that contains the policies 2with anacceptably low support divergence DX(j0).=fj2^DX(j0)g (9)The parameter has an intuitive meaning. It specifies the maximum probability mass that a learnedpolicy can place on unsupported actions. By limiting this to , we limit the maximum bias of5Under review as a conference paper at ICLR 2020the ERM procedure according to Proposition 2 while not explicitly torquing the rewards like inconservative reward imputation.A key challenge, however, is implementing this restriction of the hypothesis space, such that theERM learner ^= arg max2[^RIPS()]only considers the subset . In particular, we donot have access to the context distribution P(X)for calculatingDX(j0), nor would it be possibleto enumerate all 2to check the condition DX(j0), which itself requires a possiblyinfeasible iteration over all actions. The following theorem (with proof in Appendix A.3) gives usan efficient way of estimating and controlling DX(j0)without explicit knowledge of P(X)oraccess to the logging policy 0beyond the logged propensities.Theorem 2. For contexts xidrawn i.i.d from P(X), actionyidrawn from logging policy 0, wedefineSD(j0) =1nPni=1(yijxi)0(yijxi). For any policy it holds thatExP(X)Ey0(jx)[SD(j0)] +DX(j0) = 1 (10)Using this theorem, the following proposition (proof in Appendix A.4, empirically verified in Ap-pendix B) gives us an efficient way of implementing the constraint DX(j0)via1SD(j0).Proposition 3. For any given 2(0;1),0< < = 2, letpmin denote the minimum propen-sity under supported set pmin =maxx;y2U(x;0)c0(yjx), then with probability larger than12 exp(2n2p2min), the constraint 1+SD(j0)1will ensure 0DX(j0).We can thus use 1SD(j0)as a surrogate forDX(j0)in the training objective:arg minw21nnXi=1w(yijxi)0(yijxi)ri. subject to 1+1nnXi=1w(yijxi)0(yijxi)1 (11)Using Lagrange multipliers, an equivalent dual form of Equation (11) is:maxu1;u20minw21nnXi=1w(yijxi)0(yijxi)(ri+u1u2)u1(1) +u2(1+) (12)For each fixed (u1;u2)pair, the inner minimization objective is ERM with IPS under a shift ofthe reward. Instead of maximizing over (u1;u2)in the outer objective, we treat (u1u2)as ahyperparameter that we select on a validation set. We explore various estimators for this model-selection problem in Section 4.Note that, among the methods we proposed for dealing with support deficiency, this approach is themost efficient to implement, and it does not require access to the logging policy during training ortesting. Furthermore, the form of the inner objective coincides with that of BanditNet (Joachimset al., 2018), which is known to work well for deep network training by controlling propensityoverfitting (Swaminathan & Joachims, 2015a).4 E MPIRICAL EVALUATIONWe empirically evaluate the effectiveness and robustness of the three proposed approaches: restrict-ing the action space, conservative and regression extrapolation, as well as restricting the policyspace. 
4 EMPIRICAL EVALUATION

We empirically evaluate the effectiveness and robustness of the three proposed approaches: restricting the action space, conservative and regression extrapolation, as well as restricting the policy space. The semi-synthetic experiments are based on two real-world datasets: one is the popular image classification dataset CIFAR10 (Krizhevsky et al.) and the other is the credit-card fraud dataset of Dal Pozzolo et al. (2015). We use the naive IPS estimator and the regression-based Direct Method (DM) as baselines.

The experiments are set up as follows. We first create a train-validation-test split for both datasets. The training set is used to generate bandit datasets for learning, the validation set is used to generate bandit datasets for model selection, and the full-information test set serves as ground truth for evaluating the learned policies. To simulate bandit feedback for the CIFAR10 dataset, our experiment setup follows the traditional supervised-to-bandit conversion for multi-class classification datasets (Beygelzimer & Langford, 2009). So as not to only have bandit data with binary multi-class rewards, we choose a different methodology for the credit-card dataset by designating some features as corresponding to actions and rewards. More details are given in Appendix B.

To get logging policies for generating bandit feedback, we start by training a softmax policy as in Equation (4) on a subset of the full-information data. We then introduce a temperature parameter $\tau$ into the learned policy via $\tau f_w(x, y)$ to be able to control its stochasticity and support deficiency. In particular, we enforce zero support for some actions by clipping the propensities to 0 if they are below a threshold of 0.01 (sketched below, after Figure 1). The larger $\tau$, the higher the support deficiency. Note that setting the threshold at 0.01 allows us to control support while the variance of IPS stays bounded. This allows us to study support deficiency without having to worry about variance control.

For both logging and target policies, we train softmax policies where $f_w(x, y)$ is a neural network. We use the ResNet20 architecture (He et al., 2016) for CIFAR10, and a fully-connected 2-layer network for the credit-card dataset.

[Figure 1 contains six panels: the top row plots Accuracy (CIFAR), True Reward (Credit Card) and True Reward (Credit Card - Translated), and the bottom row plots Support Divergence for each dataset, all against the percentage of unsupported actions, for DM Hardmax, Naive IPS, Conservative Extrapolation, Action Restriction, Regression Extrapolation, Policy Restriction and DR.]

Figure 1: Learning results with varying support deficiency in the logging policy.
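As a minimal illustration of the logging-policy construction described above, the following sketch sharpens a softmax with a temperature and clips small propensities; multiplying the scores by $\tau$ and renormalizing after clipping are our assumptions about the exact parameterization:

```python
import numpy as np

def support_deficient_propensities(scores, tau, threshold=0.01):
    """Sketch: turn action scores f_w(x, .) at one context into a
    support-deficient logging policy by sharpening with a temperature tau
    and zeroing propensities below the threshold.

    scores: f_w(x, y) for all actions y at a single context x, shape (A,)
    """
    logits = tau * scores                   # larger tau -> more peaked policy
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    probs[probs < threshold] = 0.0          # these actions form U(x, pi_0)
    return probs / probs.sum()
```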
How do the methods perform at different levels of support deficiency? Results are shown in Figure 1. First, as expected, learning using naive IPS degrades on both datasets as we make the logging policy more peaked and the number of unsupported actions increases. Note that naive IPS coincides with Conservative Extrapolation, since both datasets are scaled to have a minimum reward of zero. In the rightmost column, however, we translated the rewards to $[-1, 0]$. This has a strong detrimental effect on naive IPS, as it is now overly optimistic about unsupported actions. Second, the approach of dealing with support deficiency by restricting the action space also performs poorly. The second row of plots sheds some light on this, as it shows the support divergence $D_X(\pi | \pi_0)$ of the learned policy. It is zero for Action Restriction as expected, which means that bias is not the problem. Instead, as the number of unsupported actions increases, the best actions are more likely to be pruned and unavailable in the restricted policy space $\bar{\Pi}$. Third, Regression Extrapolation performs better than Conservative Extrapolation on both datasets. In both cases, the DM model is quite good, which also benefits Regression Extrapolation. However, on the credit-card dataset the regression seems better at ranking than at predicting the true reward, which explains why DM performs better than Regression Extrapolation. Fourth, the method that performs well most consistently is Policy Restriction. Unlike all the other IPS-based methods, it performs well even under the translated rewards in the third column of Figure 1. This is because the objective of Policy Restriction coincides with that of BanditNet (Joachims et al., 2018), which is known to remedy propensity overfitting due to the lack of equivariance of the IPS estimator (Swaminathan & Joachims, 2015b).

How does the learning performance change with more training data? Results are shown in Figure 2. As the number of bandit examples increases, Policy Restriction, Regression Extrapolation and DM dominate over most of the range, especially when the percentage of unsupported actions is large. Among the other methods, Action Restriction can take the least advantage of more data. This is plausible, since its maximum performance is limited by the available actions. For similar reasons, Conservative Extrapolation (and equivalently IPS) also flattens out, since it also tightly restricts the action space by imputing the minimum reward.

[Figure 2 contains four panels plotting Accuracy (CIFAR, at 43% and 81% unsupported actions) and True Reward (Credit Card, at 46% and 80% unsupported actions) against the number of bandit data points (millions) for each method.]

Figure 2: Learning results with varying amounts of bandit data on CIFAR10 and the credit-card dataset.

% Unsupp. | Oracle | Regr. Extrap. | DM | Cons. Extrap. | SNIPS
45 | 0.878 | 0.878 | 0.878 | 0.878 | 0.876
60 | 0.871 | 0.871 | 0.871 | 0.871 | 0.871
70 | 0.858 | 0.858 | 0.856 | 0.858 | 0.858
77 | 0.856 | 0.854 | 0.854 | 0.856 | 0.856
80 | 0.855 | 0.855 | 0.855 | 0.838 | 0.849

[The right plot of Figure 3 shows Estimated Reward against $k = u_1 - u_2$ for Oracle, Cons. Extrapolation, SNIPS, Reg. Extrapolation and DM.]

Figure 3: Model selection performance on CIFAR10.

How effective are the estimators for model selection? Most learning algorithms have hyperparameters, and we now evaluate how the estimators perform for this secondary learning problem. We specifically focus on the parameter $k := u_1 - u_2$ in Policy Restriction, since it controls how much the learned policies can step outside the region of support. The table on the left of Figure 3 shows the reward of the learned policy when performing model selection with the respective estimator. Oracle is the estimator that has access to the full-information validation set, and can thus be considered as a skyline. We also included the SNIPS estimator (Swaminathan & Joachims, 2015b), which imputes the average reward on the supported actions for the unsupported actions (Gilotte et al., 2018). All estimators perform quite well for model selection on CIFAR, and the results are analogous for the credit-card data (see Appendix B.2). However, the plot to the right of Figure 3 reveals that SNIPS does not accurately reflect the shape of the Oracle curve.
Both Regression Extrapolation and DM, however, are found to be sufficiently accurate for reliable model selection.

5 DISCUSSION AND CONCLUSIONS

We identified and analyzed how off-policy learning based on IPS weighting can suffer severely degraded learning performance when the logging policy is support deficient. To remedy this problem, we explored approaches that limit the impact of missing support through three different means: restricting the action space, reward extrapolation and restricting the policy space. We find that the most natural approach of restricting the action space is neither computationally efficient, nor does it learn accurate policies. Reward extrapolation through regression and restricting the policy space, however, both perform well and robustly even at high levels of support deficiency. Among those two methods, reward extrapolation has the potential drawback that we need to compute (and/or sample from) the complement of the logging policy, which can be computationally challenging. Furthermore, having to store all old logging policies is inconvenient in practice. This makes the approach of restricting the policy space particularly attractive, since it is computationally efficient and it does not require access to the logging policy beyond the logged propensity values.

<|im_end|> <|im_start|>assistant ### Review Title Official Blind Review #2 ### Review Text This paper considers a new off-policy contextual-bandit method that can learn even when the logging policy has deficient support. Three approaches are explored, namely restricting the action space, reward extrapolation, and restricting the policy space. This paper is well written and it considers an important problem of deficient support. However, the proposed method was only compared to a few old benchmarks. How does the proposed method compare to more recent state-of-the-art off-policy bandit approaches (Liu et al. (2019), Xie et al. (2019), Tang et al. (2019)) in the experiments? The work by Liu et al. (2019) also considered the setting of deficient support. Yao Liu, Adith Swaminathan, Alekh Agarwal, and Emma Brunskill. Off-policy policy gradient with state distribution correction. arXiv:1904.08473, 2019. Jie, Liu, Liu, Wang, and Peng. Off-Policy Evaluation and Learning from Logged Bandit Feedback: Error Reduction via Surrogate Policy. ICLR 2019. Tang, Feng, Li, Zhou, and Liu. Doubly Robust Bias Reduction in Infinite Horizon Off-Policy Estimation. arXiv:1910.07186, 2019. ### Review Rating 3: Weak Reject ### Review Confidence <|im_end|> <|im_end|>
e-ZdxsIwweR
ICLR.cc/2021/Conference
2021
Robust Constrained Reinforcement Learning for Continuous Control with Model Misspecification
["Daniel J Mankowitz", "Dan Andrei Calian", "Rae Jeong", "Cosmin Paduraru", "Nicolas Heess", "Sumanth Dathathri", "Martin Riedmiller", "Timothy Mann"]
Many real-world physical control systems are required to satisfy constraints upon deployment. Furthermore, real-world systems are often subject to effects such as non-stationarity, wear-and-tear, uncalibrated sensors and so on. Such effects effectively perturb the system dynamics and can cause a policy trained successfully in one domain to perform poorly when deployed to a perturbed version of the same domain. This can affect a policy's ability to maximize future rewards as well as the extent to which it satisfies constraints. We refer to this as constrained model misspecification. We present an algorithm with theoretical guarantees that mitigates this form of misspecification, and showcase its performance in multiple Mujoco tasks from the Real World Reinforcement Learning (RWRL) suite.
["reinforcement learning", "constraints", "robustness"]
ABSTRACT

Many real-world physical control systems are required to satisfy constraints upon deployment. Furthermore, real-world systems are often subject to effects such as non-stationarity, wear-and-tear, uncalibrated sensors and so on. Such effects effectively perturb the system dynamics and can cause a policy trained successfully in one domain to perform poorly when deployed to a perturbed version of the same domain. This can affect a policy's ability to maximize future rewards as well as the extent to which it satisfies constraints. We refer to this as constrained model misspecification. We present an algorithm that mitigates this form of misspecification, and showcase its performance in multiple simulated Mujoco tasks from the Real World Reinforcement Learning (RWRL) suite.

1 INTRODUCTION

Reinforcement Learning (RL) has had a number of recent successes in various application domains which include computer games (Silver et al., 2017; Mnih et al., 2015; Tessler et al., 2017) and robotics (Abdolmaleki et al., 2018a). As RL and deep learning continue to scale, an increasing number of real-world applications may become viable candidates to take advantage of this technology. However, the application of RL to real-world systems is often associated with a number of challenges (Dulac-Arnold et al., 2019; Dulac-Arnold et al., 2020). We will focus on the following two:

Challenge 1 - Constraint satisfaction: One such challenge is that many real-world systems have constraints that need to be satisfied upon deployment (i.e., hard constraints); or at least the number of constraint violations as defined by the system needs to be reduced as much as possible (i.e., soft constraints). This is prevalent in applications ranging from physical control systems such as autonomous driving and robotics to user-facing applications such as recommender systems.

Challenge 2 - Model Misspecification (MM): Many of these systems suffer from another challenge: model misspecification. We refer to the situation in which an agent is trained in one environment but deployed in a different, perturbed version of the environment as an instance of model misspecification. This may occur in many different applications and is well-motivated in the literature (Mankowitz et al., 2018; 2019; Derman et al., 2018; 2019; Iyengar, 2005; Tamar et al., 2014).

There has been much work on constrained optimization in the literature (Altman, 1999; Tessler et al., 2018; Efroni et al., 2020; Achiam et al., 2017; Bohez et al., 2019). However, to our knowledge, the effect of model misspecification on an agent's ability to satisfy constraints at test time has not yet been investigated. (* indicates equal contribution.)

Constrained Model Misspecification (CMM): We consider the scenario in which an agent is required to satisfy constraints at test time but is deployed in an environment that is different from its training environment (i.e., a perturbed version of the training environment). Deployment in a perturbed version of the environment may affect the return achieved by the agent as well as its ability to satisfy the constraints. We refer to this scenario as constrained model misspecification.

This problem is prevalent in many real-world applications where constraints need to be satisfied but the environment is subject to state perturbation effects such as wear-and-tear, partial observability etc., the exact nature of which may be unknown at training time.
Since such perturbations can significantly impact the agent's ability to satisfy the required constraints, it is insufficient to simply ensure that constraints are satisfied in the unperturbed version of the environment. Instead, the presence of unknown environment variations needs to be factored into the training process. One area where such considerations are of particular practical relevance is sim2real transfer, where the unknown sim2real gap can make it hard to ensure that constraints will be satisfied on the real system (Andrychowicz et al., 2018; Peng et al., 2018; Wulfmeier et al., 2017; Rastogi et al., 2018; Christiano et al., 2016). Of course, one could address this issue by limiting the capabilities of the system being controlled in order to ensure that constraints are never violated, for instance by limiting the amount of current in an electric motor. Our hope is that our methods can outperform these more blunt techniques, while still ensuring constraint satisfaction in the deployment domain.

Main Contributions: In this paper, we aim to bridge the two worlds of model misspecification and constraint satisfaction. We present an RL objective that enables us to optimize a policy that aims to be robust to CMM. Our contributions are as follows: (1) Introducing the Robust Return Robust Constraint (R3C) and Robust Constraint (RC) RL objectives that aim to mitigate CMM as defined above. This includes the definition of a Robust Constrained Markov Decision Process (RC-MDP). (2) Derive corresponding R3C and RC value functions and Bellman operators. Provide an argument showing that these Bellman operators converge to fixed points. These are implemented in the policy evaluation step of actor-critic R3C algorithms. (3) Implement five different R3C and RC algorithmic variants on top of D4PG and DMPO (two state-of-the-art continuous control RL algorithms). (4) Empirically demonstrate the superior performance of our algorithms, compared to various baselines, with respect to mitigating CMM. This is shown consistently across 6 different Mujoco tasks from the Real-World RL (RWRL) suite (https://github.com/google-research/realworldrl_suite).

2 BACKGROUND

2.1 MARKOV DECISION PROCESSES

A Robust Markov Decision Process (R-MDP) is defined as a tuple $\langle S, A, R, \gamma, \mathcal{P} \rangle$ where $S$ is a finite set of states, $A$ is a finite set of actions, $R : S \times A \to \mathbb{R}$ is a bounded reward function and $\gamma \in [0,1)$ is the discount factor; $\mathcal{P}(s,a) \subseteq \mathcal{M}(S)$ is an uncertainty set where $\mathcal{M}(S)$ is the set of probability measures over next states $s' \in S$. This is interpreted as an agent selecting a state and action pair, and the next state $s'$ is determined by a conditional measure $p(s'|s,a) \in \mathcal{P}(s,a)$ (Iyengar, 2005). We want the agent to learn a policy $\pi : S \to A$, which is a mapping from states to actions that is robust with respect to this uncertainty set. For the purpose of this paper, we consider deterministic policies, but this can easily be extended to stochastic policies too. The robust value function $V^\pi : S \to \mathbb{R}$ for a policy $\pi$ is defined as $V^\pi(s) = \inf_{p \in \mathcal{P}(s,\pi(s))} V^{\pi,p}(s)$ where $V^{\pi,p}(s) = r(s,\pi(s)) + \gamma \sum_{s'} p(s'|s,\pi(s)) V^{\pi,p}(s')$. A rectangularity assumption on the uncertainty set (Iyengar, 2005) ensures that "nature" can choose a worst-case transition function independently for every state $s$ and action $a$. This means that during a trajectory, at each timestep, nature can choose any transition model from the uncertainty set to reduce the performance of the agent.
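As a concrete illustration of the preceding definitions, a minimal tabular sketch of one robust backup; representing the uncertainty set as a finite list of transition matrices and the NumPy array layout are our assumptions:

```python
import numpy as np

def robust_backup(V, rewards, transition_models, policy, gamma):
    """One application of the robust Bellman operator for a fixed
    deterministic policy: a worst-case (inf) backup over a finite
    uncertainty set, chosen per state under rectangularity.

    V:                 current value estimates, shape (S,)
    rewards:           r(s, a), shape (S, A)
    transition_models: list of arrays p(s' | s, a), each of shape (S, A, S)
    policy:            deterministic policy as action indices, shape (S,)
    gamma:             discount factor in [0, 1)
    """
    new_V = np.empty_like(V)
    for s in range(V.shape[0]):
        a = policy[s]
        worst = min(p[s, a] @ V for p in transition_models)  # inf over P(s, a)
        new_V[s] = rewards[s, a] + gamma * worst
    return new_V
```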
A robust policy optimizes for the robust (worst-case) expected return objective: $J_R(\pi) = \inf_{p \in \mathcal{P}} \mathbb{E}_{p,\pi}[\sum_{t=0}^{\infty} \gamma^t r_t]$. The robust value function can be expanded as $V^\pi(s) = r(s,\pi(s)) + \gamma \inf_{p \in \mathcal{P}(s,\pi(s))} \mathbb{E}_p[V^\pi(s') \mid s, \pi(s)]$. As in (Tamar et al., 2014), we can define an operator $\inf_{\mathcal{P}(s,a)} v : \mathbb{R}^{|S|} \to \mathbb{R}$ as $\inf_{\mathcal{P}(s,a)} v = \inf\{p^\top v \mid p \in \mathcal{P}(s,a)\}$. We can also define an operator for some policy $\pi$ as $\inf_\pi : \mathbb{R}^{|S|} \to \mathbb{R}^{|S|}$ where $\{\inf_\pi v\}(s) = \inf_{\mathcal{P}(s,\pi(s))} v$. Then, we have defined the Robust Bellman operator as follows: $T^\pi_R V^\pi = r^\pi + \gamma \inf_\pi V^\pi$. Both the robust Bellman operator $T^\pi_R : \mathbb{R}^{|S|} \to \mathbb{R}^{|S|}$ for a fixed policy and the optimal robust Bellman operator $T^*_R v(s) = \max_\pi T^\pi_R v(s)$ have previously been shown to be contractions (Iyengar, 2005).

A Constrained Markov Decision Process (CMDP) is an extension to an MDP and consists of the tuple $\langle S, A, P, R, C, \gamma \rangle$ where $S, A, R$ and $\gamma$ are defined as in the MDP above and $C : S \times A \to \mathbb{R}^K$ is a mapping from a state $s$ and action $a$ to a $K$-dimensional vector representing immediate costs relating to $K$ constraints. We use $K = 1$ from here on and therefore $C : S \times A \to \mathbb{R}$. We refer to the cost for a specific state-action tuple $\langle s, a \rangle$ at time $t$ as $c_t(s,a)$. The solution to a CMDP is a policy $\pi : S \to A$ that learns to maximize return and satisfy the constraints. The agent aims to learn a policy that maximizes the expected return objective $J^\pi_R = \mathbb{E}[\sum_{t=0}^{\infty} \gamma^t r_t]$ subject to $J^\pi_C = \mathbb{E}[\sum_{t=0}^{\infty} \gamma^t c_t] \le \beta$ where $\beta$ is a pre-defined constraint threshold. A number of approaches (Tessler et al., 2018; Bohez et al., 2019) optimize the Lagrange relaxation of this objective, $\min_{\lambda \ge 0} \max_\theta J^\pi_R - \lambda (J^\pi_C - \beta)$, by optimizing the Lagrange multiplier $\lambda$ and the policy parameters $\theta$ using alternating optimization. We also define the constraint value function $V^{\pi,p}_C : S \to \mathbb{R}$ for a policy $\pi$ as in (Tessler et al., 2018) where $V^{\pi,p}_C(s) = c(s,\pi(s)) + \gamma \sum_{s'} p(s'|s,\pi(s)) V^{\pi,p}_C(s')$.

2.2 CONTINUOUS CONTROL RL ALGORITHMS

We address the CMM problem by modifying two well-known continuous control algorithms by having them optimize the RC and R3C objectives.

The first algorithm is Maximum A-Posteriori Policy Optimization (MPO). This is a continuous control RL algorithm that performs policy iteration using an RL form of expectation maximization (Abdolmaleki et al., 2018a;b). We use the distributional-critic version in Abdolmaleki et al. (2020), which we refer to as DMPO.

The second algorithm is Distributed Distributional Deterministic Policy Gradient (D4PG), which is a state-of-the-art actor-critic continuous control RL algorithm with a deterministic policy (Barth-Maron et al., 2018). It is an incremental improvement to DDPG (Lillicrap et al., 2015) with a distributional critic that is learned similarly to distributional MPO.

3 ROBUST CONSTRAINED (RC) OPTIMIZATION OBJECTIVE

We begin by defining a Robust Constrained MDP (RC-MDP). This combines an R-MDP and C-MDP to yield the tuple $\langle S, A, R, C, \gamma, \mathcal{P} \rangle$ where all of the variables in the tuple are defined in Section 2. We next define two optimization objectives that optimize the RC-MDP. The first variant attempts to learn a policy that is robust with respect to the return as well as constraint satisfaction - the Robust Return Robust Constrained (R3C) objective. The second variant is only robust with respect to constraint satisfaction - the Robust Constrained (RC) objective.

Prior to defining these objectives, we add some important definitions.

Definition 1.
The robust constrained value function $V^\pi_C : S \to \mathbb{R}$ for a policy $\pi$ is defined as $V^\pi_C(s) = \sup_{p \in \mathcal{P}(s,\pi(s))} V^{\pi,p}_C(s) = \sup_{p \in \mathcal{P}(s,\pi(s))} \mathbb{E}_{\pi,p}[\sum_{t=0}^{\infty} \gamma^t c_t]$.

This value function represents the worst-case sum of constraint penalties over the course of an episode with respect to the uncertainty set $\mathcal{P}(s,a)$. We can also define an operator $\sup_{\mathcal{P}(s,a)} v : \mathbb{R}^{|S|} \to \mathbb{R}$ as $\sup_{\mathcal{P}(s,a)} v = \sup\{p^\top v \mid p \in \mathcal{P}(s,a)\}$. In addition, we can define an operator on vectors for some policy $\pi$ as $\sup_\pi : \mathbb{R}^{|S|} \to \mathbb{R}^{|S|}$ where $\{\sup_\pi v\}(s) = \sup_{\mathcal{P}(s,\pi(s))} v$. Then, we can define the Supremum Bellman operator $T^\pi_{\sup} : \mathbb{R}^{|S|} \to \mathbb{R}^{|S|}$ as follows: $T^\pi_{\sup} V^\pi = r^\pi + \gamma \sup_\pi V^\pi$. Note that this operator is a contraction since we get the same result if we replace $T^\pi_{\inf}$ with $T^\pi_{\sup}$ and replace $V$ with $-V$. An alternative derivation of the sup operator contraction has also been derived in the Appendix, Section A.3 for completeness.

3.0.1 ROBUST RETURN ROBUST CONSTRAINT (R3C) OBJECTIVE

The R3C objective is defined as:

$\max_{\pi \in \Pi} \inf_{p \in \mathcal{P}} \mathbb{E}_{p,\pi}\Big[\sum_t \gamma^t r(s_t, a_t)\Big] \quad \text{s.t.} \quad \sup_{p' \in \mathcal{P}} \mathbb{E}_{p',\pi}\Big[\sum_t \gamma^t c(s_t, a_t)\Big] \le \beta$  (1)

Note a couple of interesting properties about this objective: (1) it focuses on being robust with respect to the return for a pre-defined set of perturbations; (2) the objective also attempts to be robust with respect to the worst-case constraint value for the perturbation set. The Lagrange relaxation form of Equation 1 is used to define an R3C value function.

Definition 2 (R3C Value Function). For a fixed $\lambda$, and using the above-mentioned rectangularity assumption (Iyengar, 2005), the R3C value function for a policy $\pi$ is defined as the concatenation of two value functions $\mathbb{V}^\pi = f(\langle V^\pi, V^\pi_C \rangle) = V^\pi - \lambda V^\pi_C$. This implies that we keep two separate estimates of $V^\pi$ and $V^\pi_C$ and combine them together to yield $\mathbb{V}^\pi$. The constraint threshold term $\beta$ offsets the value function, and has no effect on any policy improvement step (the $\beta$ term is only used in the Lagrange update in Lemma 1). As a result, the dependency on $\beta$ is dropped.

The next step is to define the R3C Bellman operator. This is presented in Definition 3.

Definition 3 (R3C Bellman operator). The R3C Bellman operator is defined as two separate Bellman operators $T^\pi_{R3C} = \langle T^\pi_{\inf}, T^\pi_{\sup} \rangle$ where $T^\pi_{\inf}$ is the robust Bellman operator (Iyengar, 2005) and $T^\pi_{\sup} : \mathbb{R}^{|S|} \to \mathbb{R}^{|S|}$ is defined as the sup Bellman operator. Based on this definition, applying the R3C Bellman operator to $\mathbb{V}^\pi$ involves applying each of the Bellman operators to their respective value functions. That is, $T^\pi_{R3C}\mathbb{V} = T^\pi_{\inf}V - \lambda T^\pi_{\sup}V_C$.

It has been previously shown that $T^\pi_{\inf}$ is a contraction with respect to the max norm (Tamar et al., 2014) and therefore converges to a fixed point. We also provided an argument whereby $T^\pi_{\sup}$ is a contraction operator in the previous section as well as in the Appendix, A.3. These Bellman operators individually ensure that the robust value function $V(s)$ and the constraint value function $V_C(s)$ converge to fixed points. Therefore, $T^\pi_{R3C}\mathbb{V}$ also converges to a fixed point by construction.

As a result of the above argument, we know that we can apply the R3C Bellman operator in value iteration or policy iteration algorithms in the policy evaluation step. This is achieved in practice by simultaneously learning both the robust value function $V^\pi(s)$ and the constraint value function $V^\pi_C(s)$ and combining these estimates to yield $\mathbb{V}^\pi(s)$.

It is useful to note that this structure allows for a flexible framework which can define an objective using different combinations of sup and inf terms, yielding combined Bellman operators that are contraction mappings. It is also possible to take the mean with respect to the uncertainty set, yielding a soft-robust update (Derman et al., 2018; Mankowitz et al., 2019).
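To make the combined backup concrete, a minimal tabular sketch of one application of the R3C operator; the finite uncertainty set, the array layout and the $\lambda$-weighted combination noted in the final comment are our assumptions on top of Definition 3:

```python
import numpy as np

def r3c_backup(V, V_c, rewards, costs, transition_models, policy, gamma):
    """One application of the R3C Bellman operator of Definition 3: an inf
    backup for the return value and a sup backup for the constraint value,
    each over the same finite uncertainty set of transition models."""
    new_V, new_V_c = np.empty_like(V), np.empty_like(V_c)
    for s in range(V.shape[0]):
        a = policy[s]
        next_returns = [p[s, a] @ V for p in transition_models]
        next_costs = [p[s, a] @ V_c for p in transition_models]
        new_V[s] = rewards[s, a] + gamma * min(next_returns)  # T^pi_inf
        new_V_c[s] = costs[s, a] + gamma * max(next_costs)    # T^pi_sup
    return new_V, new_V_c  # combined as new_V - lmbda * new_V_c for a fixed lambda
```

Replacing the min/max over `transition_models` with a mean gives the soft-robust variant mentioned above.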
We do not derive all of the possible combinations of objectives in this paper, but note that the framework provides the flexibility to incorporate each of these objectives. We next define the RC objective.

3.0.2 ROBUST CONSTRAINED (RC) OBJECTIVE

The RC objective focuses on being robust with respect to constraint satisfaction and is defined as:

$\max_{\pi \in \Pi} \mathbb{E}_{\pi,p}\Big[\sum_t \gamma^t r(s_t, a_t)\Big] \quad \text{s.t.} \quad \sup_{p' \in \mathcal{P}} \mathbb{E}_{p',\pi}\Big[\sum_t \gamma^t c(s_t, a_t)\Big] < \beta$  (2)

This objective differs from R3C in that it only focuses on being robust with respect to constraint satisfaction. This is especially useful in domains where perturbations are expected to have a significantly larger effect on constraint satisfaction performance compared to return performance. The corresponding value function is defined as in Definition 2, except by replacing the robust value function in the concatenation with the expected value function $V^{\pi,p}$. The Bellman operator is also similar to Definition 3, where the expected return Bellman operator $T^\pi$ replaces $T^\pi_{\inf}$.

3.1 LAGRANGE UPDATE

For both objectives, we need to learn a policy that maximizes the return while satisfying the constraint. This involves performing alternating optimization on the Lagrange relaxation of the objective. The optimization procedure alternates between updating the actor/critic parameters and the Lagrange multiplier. For both objectives we have the same gradient update for the Lagrange multiplier:

Lemma 1 (Lagrange derivative). The gradient of the Lagrange multiplier $\lambda$ is $\frac{\partial f}{\partial \lambda} = -\Big(\sup_{p \in \mathcal{P}} \mathbb{E}_{p,\pi}\big[\sum_t \gamma^t c(s_t, a_t)\big] - \beta\Big)$, where $f$ is the R3C or RC objective loss.

This is an intuitive update in that the Lagrange multiplier is updated using the worst-case constraint violation estimate. If the worst-case estimate is larger than $\beta$, then the Lagrange multiplier is increased to add more weight to constraint satisfaction and vice versa (a one-line sketch of this update is given at the end of Section 4).

4 ROBUST CONSTRAINED POLICY EVALUATION

We now describe how the R3C Bellman operator can be used to perform policy evaluation. This policy evaluation step can be incorporated into any actor-critic algorithm. Instead of optimizing the regular distributional loss (e.g. the C51 loss in Bellemare et al. (2017)), as regular D4PG and DMPO do, we optimize the worst-case distributional loss, which is the distance $d\Big(r_t + \gamma \mathbb{V}^{\pi_k}_{\hat{\theta}}(s_{t+1}),\ \mathbb{V}^{\pi_k}_{\theta}(s_t)\Big)$, where $\mathbb{V}^{\pi_k}_{\theta}(s_t) = \inf_{p \in \mathcal{P}(s_t,\pi(s_t))} V^{\pi_k}_{\theta}\big(s_{t+1} \sim p(\cdot|s_t,\pi(s_t))\big) - \lambda \sup_{p' \in \mathcal{P}(s_t,\pi(s_t))} V^{\pi_k}_{C,\theta}\big(s_{t+1} \sim p'(\cdot|s_t,\pi(s_t))\big)$; $\mathcal{P}(s_t,\pi(s_t))$ is an uncertainty set for the current state $s_t$ and action $a_t$; $\pi_k$ is the current network's policy, and $\hat{\theta}$ denotes the target network parameters. The Bellman operators derived in the previous sections are repeatedly applied in this policy evaluation step depending on the optimization objective (e.g., R3C or RC). This would be utilized in the critic updates of D4PG and DMPO. Note that the action value function definition, $Q^{\pi_k}_{\theta}(s_t, a_t)$, trivially follows.
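A one-line sketch of the corresponding multiplier step (see the intuitive-update remark under Lemma 1); the learning rate and the projection onto $\lambda \ge 0$ are our assumptions:

```python
def lagrange_step(lmbda, worst_case_constraint_value, beta, lr=1e-3):
    """Gradient step on the Lagrange multiplier per Lemma 1: lambda grows
    when the worst-case constraint estimate exceeds the threshold beta and
    shrinks otherwise, projected back onto lambda >= 0."""
    return max(0.0, lmbda + lr * (worst_case_constraint_value - beta))
```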
5 EXPERIMENTS

We perform all experiments using domains from the Real-World Reinforcement Learning (RWRL) suite (https://github.com/google-research/realworldrl_suite), namely cartpole: {balance, swingup}, walker: {stand, walk, run}, and quadruped: {walk, run}. We define a task in our experiments as a 6-tuple $T = \langle$domain, domain variant, constraint, safety coeff, threshold, perturbation$\rangle$ whose elements refer to the domain name, the variant for that domain (i.e. RWRL task), the constraint being considered, the safety coefficient value, the constraint threshold and the type of robustness perturbation being applied to the dynamics, respectively. An example task would therefore be: $T = \langle$cartpole, swingup, balance velocity, 0.3, 0.115, pole length$\rangle$. In total, we have 6 different tasks on which we test our benchmark agents. The full list of tasks can be found in the Appendix, Table 7. The available constraints per domain can be found in the Appendix B.1.

The baselines used in our paper can be seen in Table 1. C-ALG refers to the reward constrained, non-robust algorithms of the variants that we have adapted based on (Tessler et al., 2018; Anonymous, 2020); RC-ALG refers to the robust constraint algorithms corresponding to the Bellman operator $T^\pi_{RC}$; R3C-ALG refers to the robust return robust constrained algorithms corresponding to the Bellman operator $T^\pi_{R3C}$; SR3C-ALG refers to the soft robust (with respect to return) robust constraint algorithms; and R-ALG refers to the robust return algorithms based on Mankowitz et al. (2019).

Table 1: The baseline algorithms used in this work.

Baseline | Algorithm Variants | Baseline Description
C-ALG | C-D4PG, C-DMPO | Constraint aware, non-robust.
RC-ALG | RC-D4PG, RC-DMPO | Robust constraint.
R3C-ALG | R3C-D4PG, R3C-DMPO | Robust return robust constraint.
R-ALG | R-D4PG, R-DMPO | Robust return.
SR3C-ALG | SR3C-D4PG | Soft robust return, robust constraint.

5.1 EXPERIMENTAL SETUP

For each task, the action and observation dimensions are shown in the Appendix, Table 6. The length of an episode is 1000 steps and the upper bound on reward is 1000 (Tassa et al., 2018). All the network architectures are the same per algorithm and approximately the same across algorithms in terms of the layers and the number of parameters. A full list of all the network architecture details can be found in the Appendix, Table 4. All runs are averaged across 5 seeds.

Metrics: We use three metrics to track overall performance, namely: the return $R$, the constraint overshoot and the penalized return $R_{penalized}$. The return is the sum of rewards the agent receives over the course of an episode. The constraint overshoot $\max(0, J^\pi_C - \beta)$ is defined as the clipped difference between the average costs over the course of an episode $J^\pi_C$ and the constraint threshold $\beta$. The penalized return is defined as $R_{penalized} = R - \bar{\lambda} \max(0, J^\pi_C - \beta)$ where $\bar{\lambda} = 1000$ is an evaluation weight that equally trades off return with constraint overshoot.

Constraint Experiment Setup: The safety coefficient is a flag in the RWRL suite (Dulac-Arnold et al., 2020) that determines how easy/difficult it is in the environment to violate constraints. The safety coefficient values range from 0.0 (easy to violate constraints) to 1.0 (hard to violate constraints). As such we selected for each task (1) a safety coefficient of 0.3; (2) a particular constraint supported by the RWRL suite; and (3) a corresponding constraint threshold $\beta$, which ensures that the agent can find feasible solutions (i.e., satisfy constraints) and solve the task.

Robustness Experimental Setup: The robust/soft-robust agents (R3C and RC variants) are trained using a pre-defined uncertainty set consisting of 3 task perturbations (this is based on the results from Mankowitz et al. (2019)). Each perturbation is a different instantiation of the Mujoco environment. The agent is then evaluated on a set of 9 hold-out task perturbations (10 for quadruped). For example, if the task is $T = \langle$cartpole, swingup, balance velocity, 0.3, 0.115, pole length$\rangle$, then the agent will have three pre-defined pole length perturbations for training, and evaluate on nine unseen pole lengths, while trying to satisfy the balance velocity constraint.

Training Procedure: All agents are always acting on the unperturbed environment.
This corresponds to the default environment in the dmcontrol suite (Tassa et al., 2018) and is referred to in the experiments as the nominal environment. When the agent acts, it generates next state realizations for the nominal environment as well as each of the perturbed environments in the training uncertainty set to generate the tuple $\langle s, a, r, [s', s'_1, s'_2 \cdots s'_N] \rangle$ where $N$ is the number of environments in the training uncertainty set and $s'_i$ is the next state realization corresponding to the $i$th perturbed training environment. Since the robustness update is incorporated into the policy evaluation stage of each algorithm, the critic loss, which corresponds to the TD error, is modified in each case as follows: when computing the target, the learner samples a tuple $\langle s, a, r, [s', s'_1, s'_2 \cdots s'_N] \rangle$ from the experience replay. The target action value function for each next state transition $[s', s'_1, s'_2 \cdots s'_N]$ is then computed by taking the inf (robust), average (soft-robust) or the nominal value (non-robust). In each case separate action-value functions are trained for the return $Q(s,a)$ and the constraint $Q_C(s,a)$. These value function estimates then individually return the mean, inf, or sup value, depending on the technique, and are combined to yield the target used to compute $\mathbb{Q}(s,a)$.

The chosen values of the uncertainty set and evaluation set for each domain can be found in the Appendix, Table 8. Note that it is common practice to manually select the pre-defined uncertainty set and the unseen test environments. Practitioners often have significant domain knowledge and can utilize this when choosing the uncertainty set (Derman et al., 2019; 2018; Di Castro et al., 2012; Mankowitz et al., 2018; Tamar et al., 2014).

5.2 MAIN RESULTS

In the first sub-section we analyze the sensitivity of a fixed constrained policy (trained using C-D4PG) operating in perturbed versions of a given environment. This will help test the hypothesis that perturbing the environment does indeed have an effect on constraint satisfaction as well as on return. In the next sub-section we analyze the performance of the R3C and RC variants with respect to the baseline algorithms.

Table 2: Performance metrics averaged over all holdout sets for all tasks.

Base | Algorithm | $R$ | $R_{penalized}$ | $\max(0, J^\pi_C - \beta)$
D4PG | C-D4PG | 673.21 ± 93.04 | 491.450 | 0.18 ± 0.053
D4PG | R-D4PG | 707.79 ± 65.00 | 542.022 | 0.17 ± 0.046
D4PG | R3C-D4PG | 734.45 ± 77.93 | 635.246 | 0.10 ± 0.049
D4PG | RC-D4PG | 684.30 ± 83.69 | 578.598 | 0.11 ± 0.050
D4PG | SR3C-D4PG | 723.11 ± 84.41 | 601.016 | 0.12 ± 0.038
DMPO | C-MPO | 598.75 ± 72.67 | 411.376 | 0.19 ± 0.049
DMPO | R-MPO | 686.13 ± 86.53 | 499.581 | 0.19 ± 0.036
DMPO | R3C-MPO | 752.47 ± 57.10 | 652.969 | 0.10 ± 0.040
DMPO | RC-MPO | 673.98 ± 80.91 | 555.809 | 0.12 ± 0.036

5.2.1 FIXED POLICY SENSITIVITY

In order to validate the hypothesis that perturbing the environment affects constraint satisfaction and return, we trained a C-D4PG agent to satisfy constraints across 10 different tasks. In each case, C-D4PG learns to solve the task and satisfy the constraints in expectation. We then perturbed each of the tasks with a supported perturbation and evaluated whether the constraint overshoot increases and the return decreases for the C-D4PG agent. Some example graphs are shown in Figure 1 for the cartpole (left), quadruped (middle) and walker (right) domains. The upper row of graphs contains the return performance (blue curve) and the penalized return performance (orange curve) as a function of increased perturbations (x-axis). The vertical red dotted line indicates the nominal model on which the C-D4PG agent was trained.
The lower row of graphs contains the constraint overshoot (green curve) as a function of varying perturbations. As seen in the figures, as perturbations increase across each dimension, both the return and penalized return degrade (top row) while the constraint overshoot (bottom row) increases. This provides useful evidence for our hypothesis that constraint satisfaction does indeed suffer as a result of perturbing the environment dynamics. This was consistent among many more settings. The full performance plots can be found in the Appendix, Figures 3, 4 and 5 for cartpole, quadruped and walker respectively.

Figure 1: The effect on constraint satisfaction and return as perturbations are added to cartpole, quadruped and walker for a fixed C-D4PG policy.

5.2.2 ROBUST CONSTRAINED RESULTS

We now compare C-ALG, RC-ALG, R3C-ALG, R-ALG and SR3C-ALG (we only ran the SR3C-D4PG variant to gain intuition as to soft-robust performance) across 6 tasks. The average performance across holdout sets and tasks is shown in Table 2. As seen in the table, the R3C-ALG variant outperforms all of the baselines in terms of return and constraint overshoot and therefore obtains the highest penalized return performance. Interestingly, the soft-robust variant yields competitive performance.

We further analyze the results for three tasks using ALG=D4PG (left column) and ALG=DMPO (right column) in Figure 2. The three tasks are $T_{cartpole,slider\ damping} = \langle$cartpole, swingup, balance velocity, 0.3, 0.115, slider damping$\rangle$ (top row), $T_{cartpole,pole\ mass} = \langle$cartpole, swingup, balance velocity, 0.3, 0.115, pole mass$\rangle$ (middle row) and $T_{walker} = \langle$walker, walk, joint velocity, 0.3, 0.1, torso length$\rangle$ (bottom row). Graphs of the additional tasks can be found in the Appendix, Figures 6 and 7. Each graph contains, on the y-axis, the return $R$ (marked by the transparent colors) and the penalized return $R_{penalized}$ (marked by the dark colors superimposed on top of $R$). The x-axis consists of three holdout set environments in increasing order of difficulty from Holdout 0 to Holdout 8. Holdout N corresponds to perturbation element N for the corresponding task in the Appendix, Table 8. As can be seen for $T_{cartpole,slider\ damping}$ and $T_{cartpole,pole\ mass}$ (Figure 2, top and middle rows respectively), R3C-D4PG outperforms the baselines, especially as the perturbations get larger. This can be seen by observing that as the perturbations increase, the penalized return for these techniques is significantly higher than that of the baselines. This implies that the amount of constraint violations is significantly lower for these algorithms, resulting in robust constraint satisfaction.
$T_{walker}$ (bottom row) shows similarly improved performance over the baseline algorithms.

[Figure 2 shows, per panel, the return $R$ and penalized return $R_{penalized}$ of each baseline on Holdout 0, Holdout 4 and Holdout 8, with panel rows titled "Cartpole, Perturbation: Slider Damping", "Cartpole, Perturbation: Pole Mass" and "Walker, Perturbation: Thigh Length", for the D4PG variants (left) and DMPO variants (right).]

Figure 2: The holdout set performance of the baseline algorithms on D4PG variants (left) and DMPO variants (right) for Cartpole with pole mass perturbations (top row) and walker with thigh length perturbations (bottom row).

6 CONCLUSION

This paper simultaneously addresses constraint satisfaction and robustness to state perturbations, two important challenges of real-world reinforcement learning. We present two RL objectives, R3C and RC, that yield robustness to constraints under the presence of state perturbations. We define R3C and RC Bellman operators to ensure that value-based RL algorithms will converge to a fixed point when optimizing these objectives. We then show that when incorporating this into the policy evaluation step of two well-known state-of-the-art continuous control RL algorithms, the agent outperforms the baselines on 6 Mujoco tasks. In related work, Everett et al. (2020) consider the problem of being verifiably robust to an adversary that can perturb the state $s' \in S$ to degrade performance as measured by a Q-function. Dathathri et al. (2020) consider the problem of learning agents (in deterministic environments with known dynamics) that satisfy constraints under perturbations to states $s' \in S$. In contrast, Equation 1 considers the general problem of learning agents that optimize for the return while satisfying constraints for a given RC-MDP.
4P9LkndnRQ_
Encouraging results, but the theory section needs more work
4: Ok but not good enough - rejection
The standard Reinforcement Learning framework is limited in many ways, and numerous variants have been introduced to deal with aspects such as partial observability, temporal abstraction, safety, domain transfer, etc. Yet, these issues are often studied separately and it is often unclear how to combine them together. This is the ambitious challenge taken on by this paper, which attempts to bridge the two separate settings of Robust MDPs, which aim at considering ambiguity in the dynamics, and Constrained MDPs, which aim at enforcing the satisfaction of a constraint on an expected cost signal.

The authors propose the formulation of two objectives that merge the two aspects and include both a worst-case evaluation over the ambiguity set and a constraint violation penalty term. The ways of dealing with both issues are fairly standard (Lagrangian relaxation of the constraints with alternating optimization, and worst-case evaluation over a finite set of simulated transitions in practice), but their combination seems novel and relevant. These objectives come with the corresponding Bellman Expectation operators, which allow to evaluate the current policy (critic) and provide a feedback (gradient) for the actor to ensure robust constraint satisfaction. The applicability of the proposed approaches is demonstrated on a benchmark of Mujoco tasks, where they are shown to compare favorably to several baselines.

My main concerns lie with the definitions and results of Section 2.3, which I think generally lack rigour and clarity, which sheds doubts on the validity of the claimed results.

1. The authors start by defining the R3V value function $\mathbb{V}$, as a bootstrap of two other values $V$ and $V_c$, that haven't been defined. I was initially confused because they are denoted as if they do not depend on the policy $\pi$, so I first thought these referred to optimal value functions (which would need to be appropriately defined, especially $V_c$ since the costs are constrained rather than optimized), but they seem to be in fact the expected returns for the policy $\pi$ (i.e. the value functions of a policy $\pi$ as opposed to the optimal value functions).

2. Likewise, do the values $V$ and $V_c$ in definition 1 depend on the dynamics $p$? It seems so, but it should be written explicitly.

3. The derivation of A.2 seems a bit sloppy, since the last term in line 4 is identified as $\mathbb{V}$ while it does not strictly correspond to definition 1.

4. The next state $s'$ is a random variable that depends on the dynamics $p$, and thus subject to the robust inf/sup over $P$ in the objective (1), but in the derivation A.2 and the resulting R3C Bellman operator of definition 2, it is considered as a deterministic variable in which the R3V value can be evaluated freely (without any expectation over $p$, nor an inf of $p$ over $P$).

5. In Theorem 1, the R3V values $\mathbb{U}$ and $\mathbb{V}$ are described as functions $S \to \mathbb{R}^d$, but they were defined as functions $S \to \mathbb{R}$ in definition 1. Also, $d$ is not defined.

6. According to definition 2, the R3V Bellman operator applied to a real function $\mathbb{V}$ simply consists in multiplying $\mathbb{V}$ by the discount $\gamma$ and adding the penalized reward $r - \lambda c$. But then, this is exactly the same as the RC Bellman operator of definition 4. The difference between the two frameworks lies in how the policy value $\mathbb{V}$ is defined (regarding the presence or absence of $\inf_{p\in P}$ before $V^{p,\pi}$), but these differences are not involved when we consider arbitrary functions $\mathbb{V}: S \to \mathbb{R}$ on which to apply the Bellman operators. I feel like the authors intended the definitions 1 and 3 to be seen somehow as *operators* rather than *functions*, which could allow to retain the sup/inf in the definition of $\mathcal{T}^\pi_{R3C}$ and $\mathcal{T}^\pi_{RC}$, but it is a mere speculation and certainly not what is written in the paper.

In conclusion, this paper comprises a clear motivation, promising insights and encouraging results. But in the present state of vagueness of the theoretical framework, I cannot recommend acceptance. Of course, it may only be a misunderstanding on my part merely related to presentation/clarity issues and not deeper flaws in the reasoning, in which case I am ready to update my rating upon clarification by the authors.

**Minor remarks**:
* Since the definitions and results of Section 2.3 are claimed as novelties, I think they should not be listed as part of the Background section
* After theorem 1, a paragraph states that the $\mathcal{T}_{R3C}$ operator can be used in a Value Iteration algorithm, which is not the case since it is a Bellman expectation operator, not a Bellman optimality operator. If I am not mistaken, it can only be applied in a Policy Iteration scheme.
* The first paragraph of Appendix A3 is self-referencing
* Typo in (20): use the norm $\|\cdot\|$ instead of the absolute value $|\cdot|$
2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Robust Constrained Reinforcement Learning for Continuous Control with Model Misspecification ### Paper Abstract Many real-world physical control systems are required to satisfy constraints upon deployment. Furthermore, real-world systems are often subject to effects such as non-stationarity, wear-and-tear, uncalibrated sensors and so on. Such effects effectively perturb the system dynamics and can cause a policy trained successfully in one domain to perform poorly when deployed to a perturbed version of the same domain. This can affect a policy's ability to maximize future rewards as well as the extent to which it satisfies constraints. We refer to this as constrained model misspecification. We present an algorithm with theoretical guarantees that mitigates this form of misspecification, and showcase its performance in multiple Mujoco tasks from the Real World Reinforcement Learning (RWRL) suite. ### Paper Keywords ["reinforcement learning", "constraints", "robustness"] ### Paper Content ABSTRACTMany real-world physical control systems are required to satisfy constraints upondeployment. Furthermore, real-world systems are often subject to effects suchas non-stationarity, wear-and-tear, uncalibrated sensors and so on. Such effectseffectively perturb the system dynamics and can cause a policy trained successfullyin one domain to perform poorly when deployed to a perturbed version of thesame domain. This can affect a policy’s ability to maximize future rewards aswell as the extent to which it satisfies constraints. We refer to this as constrainedmodel misspecification. We present an algorithm that mitigates this form ofmisspecification, and showcase its performance in multiple simulated Mujoco tasksfrom the Real World Reinforcement Learning (RWRL) suite.1I NTRODUCTIONReinforcement Learning (RL) has had a number of recent successes in various application domainswhich include computer games (Silver et al., 2017; Mnih et al., 2015; Tessler et al., 2017) androbotics (Abdolmaleki et al., 2018a). As RL and deep learning continue to scale, an increasingnumber of real-world applications may become viable candidates to take advantage of this technology.However, the application of RL to real-world systems is often associated with a number of challenges(Dulac-Arnold et al., 2019; Dulac-Arnold et al., 2020). We will focus on the following two:Challenge 1 - Constraint satisfaction : One such challenge is that many real-world systems haveconstraints that need to be satisfied upon deployment (i.e., hard constraints); or at least the numberof constraint violations as defined by the system need to be reduced as much as possible (i.e.,soft-constraints). This is prevalent in applications ranging from physical control systems such asautonomous driving and robotics to user facing applications such as recommender systems.Challenge 2 - Model Misspecification (MM) : Many of these systems suffer from another challenge:model misspecification. 
We refer to the situation in which an agent is trained in one environment butdeployed in a different, perturbed version of the environment as an instance of model misspecification .This may occur in many different applications and is well-motivated in the literature (Mankowitzet al., 2018; 2019; Derman et al., 2018; 2019; Iyengar, 2005; Tamar et al., 2014).There has been much work on constrained optimization in the literature (Altman, 1999; Tessler et al.,2018; Efroni et al., 2020; Achiam et al., 2017; Bohez et al., 2019). However, to our knowledge, theeffect of model misspecification on an agent’s ability to satisfy constraints at test time has not yetbeen investigated.⇤indicates equal contribution.1Preprint. Under review.Constrained Model Misspecification (CMM) : We consider the scenario in which an agent isrequired to satisfy constraints at test time but is deployed in an environment that is different fromits training environment (i.e., a perturbed version of the training environment). Deployment in aperturbed version of the environment may affect the return achieved by the agent as well as its abilityto satisfy the constraints. We refer to this scenario as constrained model misspecification .This problem is prevalent in many real-world applications where constraints need to be satisfied butthe environment is subject to state perturbations effects such as wear-and-tear, partial observabilityetc., the exact nature of which may be unknown at training time. Since such perturbations cansignificantly impact the agent’s ability to satisfy the required constraints it is insufficient to simplyensure that constraints are satisfied in the unperturbed version of the environment. Instead, thepresence of unknown environment variations needs to be factored into the training process. Onearea where such considerations are of particular practical relevance is sim2real transfer where theunknown sim2real gap can make it hard to ensure that constraints will be satisfied on the real system(Andrychowicz et al., 2018; Peng et al., 2018; Wulfmeier et al., 2017; Rastogi et al., 2018; Christianoet al., 2016). Of course, one could address this issue by limiting the capabilities of the system beingcontrolled in order to ensure that constraints are never violated, for instance by limiting the amount ofcurrent in an electric motor. Our hope is that our methods can outperform these more blunt techniques,while still ensuring constraint satisfaction in the deployment domain.Main Contributions : In this paper, we aim to bridge the two worlds of model misspecification andconstraint satisfaction. We present an RL objective that enables us to optimize a policy that aimsto be robust to CMM. Our contributions are as follows: (1)Introducing the Robust Return RobustConstraint (R3C) and Robust Constraint (RC) RL objectives that aim to mitigate CMM as definedabove. This includes the definition of a Robust Constrained Markov Decision Process (RC-MDP).(2)Derive corresponding R3C and RC value functions and Bellman operators. Provide an argumentshowing that these Bellman operators converge to fixed points. These are implemented in the policyevaluation step of actor-critic R3C algorithms. (3)Implement five different R3C and RC algorithmicvariants on top of D4PG and DMPO, (two state-of-the-art continuous control RL algorithms). (4)Empirically demonstrate the superior performance of our algorithms, compared to various baselines,with respect to mitigating CMM. 
This is shown consistently across 6different Mujoco tasks from theReal-World RL (RWRL) suite1.2B ACKGROUND2.1 M ARKOV DECISION PROCESSESARobust Markov Decision Process (R-MDP) is defined as a tuple hS, A, R, ,Piwhere Sis afinite set of states, Ais a finite set of actions, R:S⇥A!Ris a bounded reward function and2[0,1)is the discount factor; P(s, a)✓M(S)is an uncertainty set where M(S)is the setof probability measures over next states s02S. This is interpreted as an agent selecting a stateand action pair, and the next state s0is determined by a conditional measure p(s0|s, a)2P(s, a)(Iyengar, 2005). We want the agent to learn a policy ⇡:S!A, which is a mapping from statesto actions that is robust with respect to this uncertainty set. For the purpose of this paper, weconsider deterministic policies, but this can easily be extended to stochastic policies too. The robustvalue function V⇡:S!Rfor a policy ⇡is defined as V⇡(s)=i n f p2P(s,⇡(s))V⇡,p(s)whereV⇡,p(s)=r(s,⇡(s)) + p(s0|s,⇡(s))V⇡,p(s0). A rectangularity assumption on the uncertainty set(Iyengar, 2005) ensures that “nature” can choose a worst-case transition function independently forevery state sand action a. This means that during a trajectory, at each timestep, nature can chooseany transition model from the uncertainty set to reduce the performance of the agent. A robust policyoptimizes for the robust (worst-case) expected return objective: JR(⇡)=i n f p2PEp,⇡[P1t=0trt].The robust value function can be expanded as V⇡(s)=r(s,⇡(s)) + infp2P(s,⇡(s))Ep[V⇡(s0)|s,⇡(s)].As in (Tamar et al., 2014), we can define an operator infP(s,a)v:R|S|!RasinfP(s,a)v=i n f {p>v|p2P(s, a)}. We can also define an operator for some policy ⇡asinf⇡:R|S|!R|S|where {inf⇡v}(s)=infP(s,⇡(s))v. Then, we have defined the Robust Bellman1https://github.com/google-research/realworldrl_suite2Preprint. Under review.operator as follows T⇡RV⇡=r⇡+inf⇡V⇡. Both the robust Bellman operator T⇡R:R|S|!R|S|for a fixed policy and the optimal robust Bellman operator T⇤Rv(s) = max ⇡T⇡Rv(s)have previouslybeen shown to be contractions (Iyengar, 2005).AConstrained Markov Decision Process (CMDP) is an extension to an MDP and consists of thetuple hS, A, P, R, C, iwhere S, A, R andare defined as in the MDP above and C:S⇥A!RKisa mapping from a state sand action ato aKdimensional vector representing immediate costs relatingtoKconstraint. We use K=1 from here on in and therefore C:S⇥A!R. We refer to the cost for aspecific state action tuple hs, aiat time tasct(s, a). The solution to a CMDP is a policy ⇡:S!Athat learns to maximize return and satisfy the constraints. The agent aims to learn a policy thatmaximizes the expected return objective J⇡R=E[P1t=0trt]subject to J⇡C=E[P1t=0tct]where is a pre-defined constraint threshold. A number of approaches (Tessler et al., 2018; Bohezet al., 2019) optimize the Lagrange relaxation of this objective min 0max ✓J⇡R(J⇡C)byoptimizing the Lagrange multiplier and the policy parameters ✓using alternating optimization. Wealso define the constraint value function V⇡,pC:S!Rfor a policy ⇡as in (Tessler et al., 2018)where V⇡,pC(s)=c(s,⇡(s)) + p(s0|s,⇡(s))V⇡,pC(s0).2.2 C ONTINUOUS CONTROL RL A LGORITHMSWe address the CMM problem by modifying two well-known continuous control algorithms byhaving them optimize the RC and R3C objectives.The first algorithm is Maximum A-Posteriori Policy Optimization (MPO) . This is a continuouscontrol RL algorithm that performs policy iteration using an RL form of expectation maximization(Abdolmaleki et al., 2018a;b). 
We use the distributional-critic version in Abdolmaleki et al. (2020),which we refer to as DMPO.The second algorithm is Distributed Distributional Deterministic Policy Gradient (D4PG), whichis a state-of-the-art actor-critic continuous control RL algorithm with a deterministic policy (Barth-Maron et al., 2018). It is an incremental improvement to DDPG (Lillicrap et al., 2015) with adistributional critic that is learned similarly to distributional MPO.3R OBUST CONSTRAINED (RC) O PTIMIZATION OBJECTIVEWe begin by defining a Robust Constrained MDP (RC-MDP). This combines an R-MDP and C-MDPto yield the tuple hS, A, R, C, ,Piwhere all of the variables in the tuple are defined in Section 2.We next define two optimization objectives that optimize the RC-MDP. The first variant attempts tolearn a policy that is robust with respect to the return as well as constraint satisfaction - Robust ReturnRobust Constrained (R3C) objective. The second variant is only robust with respect to constraintsatisfaction - Robust Constrained (RC) objective.Prior to defining these objectives, we add some important definitions.Definition 1. The robust constrained value function V⇡C:S!Rfor a policy ⇡is defined asV⇡C(s)=s u pp2P(s,⇡(s))V⇡,pC(s)=s u pp2P(s,⇡(s))E⇡,pP1t=0tct.This value function represents the worst-case sum of constraint penalties over the course of an episodewith respect to the uncertainty set P(s, a). We can also define an operator supP(s,a)v:R|S|!RassupP(s,a)v=s u p {p>v|p2P(s, a)}. In addition, we can define an operator on vectors for some policy⇡assup⇡:R|S|!R|S|where {sup⇡v}(s)= supP(s,⇡(s))v. Then, we can defined the SupremumBellman operator T⇡sup:R|S|!R|S|as follows T⇡supV⇡=r⇡+sup⇡V⇡. Note that this operatoris a contraction since we get the same result if we replace T⇡infwith T⇡supand replace Vwith V. Analternative derivation of the sup operator contraction has also been derived in the Appendix, SectionA.3 for completeness.3.0.1 R OBUST RETURN ROBUST CONSTRAINT (R3C) O BJECTIVEThe R3C objective is defined as:3Preprint. Under review.max⇡2⇧infp2PEp,⇡Xttr(st,at)s.t.supp02PEp0,⇡Xttc(st,at) (1)Note, a couple of interesting properties about this objective: (1) it focuses on being robust withrespect to the return for a pre-defined set of perturbations; (2) the objective also attempts to be robustwith respect to the worst case constraint value for the perturbation set. The Lagrange relaxation formof equation 1 is used to define an R3C value function.Definition 2 (R3C Value Function) .For a fixed , and using the above-mentioned rectangularityassumption (Iyengar, 2005), the R3C value function for a policy ⇡is defined as the concatenationof two value functions V⇡=f(hV⇡,V⇡Ci)=V⇡V⇡C. This implies that we keep two separateestimates of V⇡andV⇡Cand combine them together to yield V⇡. The constraint threshold termoffsets the value function, and has no effect on any policy improvement step2. As a result, thedependency on is dropped.The next step is to define the R3C Bellman operator. This is presented in Definition 3.Definition 3 (R3C Bellman operator) .The R3C Bellman operator is defined as two separate Bellmanoperators T⇡R3C=hT⇡inf,T⇡supiwhere T⇡infis the robust Bellman operator (Iyengar, 2005) andT⇡sup:R|S|!R|S|is defined as the supBellman operator. Based on this definition, applying theR3C Bellman operator to V⇡involves applying each of the Bellman operators to their respectivevalue functions. 
That is, $T^\pi_{R3C} \mathbb{V} = T^\pi_{\inf} V - \lambda T^\pi_{\sup} V_C$.

It has been previously shown that $T^\pi_{\inf}$ is a contraction with respect to the max norm (Tamar et al., 2014) and therefore converges to a fixed point. We also provided an argument whereby $T^\pi_{\sup}$ is a contraction operator in the previous section as well as in Appendix A.3. These Bellman operators individually ensure that the robust value function $V(s)$ and the constraint value function $V_C(s)$ converge to fixed points. Therefore, $T^\pi_{R3C} \mathbb{V}$ also converges to a fixed point by construction.

As a result of the above argument, we know that we can apply the R3C Bellman operator in value iteration or policy iteration algorithms in the policy evaluation step. This is achieved in practice by simultaneously learning both the robust value function $V^\pi(s)$ and the constraint value function $V^\pi_C(s)$ and combining these estimates to yield $\mathbb{V}^\pi(s)$.

It is useful to note that this structure allows for a flexible framework which can define an objective using different combinations of $\sup$ and $\inf$ terms, yielding combined Bellman operators that are contraction mappings. It is also possible to take the mean with respect to the uncertainty set, yielding a soft-robust update (Derman et al., 2018; Mankowitz et al., 2019). We do not derive all of the possible combinations of objectives in this paper, but note that the framework provides the flexibility to incorporate each of these objectives. We next define the RC objective.

3.0.2 ROBUST CONSTRAINED (RC) OBJECTIVE

The RC objective focuses on being robust with respect to constraint satisfaction and is defined as:

$$\max_{\pi \in \Pi} E_{\pi,p}\Big[\sum_t \gamma^t r(s_t, a_t)\Big] \quad \text{s.t.} \quad \sup_{p' \in \mathcal{P}} E_{p',\pi}\Big[\sum_t \gamma^t c(s_t, a_t)\Big] < \beta \qquad (2)$$

This objective differs from R3C in that it only focuses on being robust with respect to constraint satisfaction. This is especially useful in domains where perturbations are expected to have a significantly larger effect on constraint satisfaction performance compared to return performance. The corresponding value function is defined as in Definition 2, except by replacing the robust value function in the concatenation with the expected value function $V^{\pi,p}$. The Bellman operator is also similar to Definition 3, where the expected return Bellman operator $T^\pi$ replaces $T^\pi_{\inf}$.

² The $\beta$ term is only used in the Lagrange update in Lemma 1.

3.1 LAGRANGE UPDATE

For both objectives, we need to learn a policy that maximizes the return while satisfying the constraint. This involves performing alternating optimization on the Lagrange relaxation of the objective. The optimization procedure alternates between updating the actor/critic parameters and the Lagrange multiplier. For both objectives we have the same gradient update for the Lagrange multiplier:

Lemma 1 (Lagrange derivative). The gradient of the Lagrange multiplier is $\frac{\partial f}{\partial \lambda} = -\Big( \sup_{p \in \mathcal{P}} E_{p,\pi}\big[\sum_t \gamma^t c(s_t, a_t)\big] - \beta \Big)$, where $f$ is the R3C or RC objective loss.

This is an intuitive update in that the Lagrange multiplier is updated using the worst-case constraint violation estimate. If the worst-case estimate is larger than $\beta$, then the Lagrange multiplier is increased to add more weight to constraint satisfaction, and vice versa.
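A minimal sketch of this multiplier step follows; the learning rate and the projection onto $\lambda \ge 0$ are standard choices of ours, and the worst-case cost estimate is assumed to come from whatever critic estimates $\sup_p E_{p,\pi}[\sum_t \gamma^t c_t]$.

```python
def lagrange_step(lam, worst_case_cost, beta, lr=1e-3):
    """Gradient step on the multiplier from Lemma 1.

    The multiplier increases when the worst-case constraint estimate
    exceeds the threshold beta, and decreases (down to 0) otherwise.
    """
    grad = worst_case_cost - beta   # equals -(df / d lambda)
    lam = lam + lr * grad           # ascend: more weight on violated constraints
    return max(0.0, lam)            # project onto lambda >= 0
```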
4 ROBUST CONSTRAINED POLICY EVALUATION

We now describe how the R3C Bellman operator can be used to perform policy evaluation. This policy evaluation step can be incorporated into any actor-critic algorithm. Instead of optimizing the regular distributional loss (e.g. the C51 loss in Bellemare et al. (2017)), as regular D4PG and DMPO do, we optimize the worst-case distributional loss, which is the distance $d\big(r_t + \gamma \mathbb{V}^{\pi_k}_{\hat\theta}(s_{t+1}),\ \mathbb{V}^{\pi_k}_{\theta}(s_t)\big)$, where $\mathbb{V}^{\pi_k}_{\theta}(s_t) = \inf_{p \in \mathcal{P}(s_t,\pi(s_t))} V^{\pi_k}_{\theta}\big(s_{t+1} \sim p(\cdot|s_t,\pi(s_t))\big) - \lambda \sup_{p' \in \mathcal{P}(s_t,\pi(s_t))} V^{\pi_k}_{C,\theta}\big(s_{t+1} \sim p'(\cdot|s_t,\pi(s_t))\big)$; $\mathcal{P}(s_t, \pi(s_t))$ is an uncertainty set for the current state $s_t$ and action $a_t$; $\pi_k$ is the current network's policy, and $\hat\theta$ denotes the target network parameters. The Bellman operators derived in the previous sections are repeatedly applied in this policy evaluation step depending on the optimization objective (e.g., R3C or RC). This would be utilized in the critic updates of D4PG and DMPO. Note that the action value function definition, $Q^{\pi_k}_{\theta}(s_t, a_t)$, trivially follows.

5 EXPERIMENTS

We perform all experiments using domains from the Real-World Reinforcement Learning (RWRL) suite³, namely cartpole: {balance, swingup}, walker: {stand, walk, run}, and quadruped: {walk, run}. We define a task in our experiments as a 6-tuple $T = \langle$domain, domain variant, constraint, safety coeff, threshold, perturbation$\rangle$ whose elements refer to the domain name, the variant for that domain (i.e. RWRL task), the constraint being considered, the safety coefficient value, the constraint threshold and the type of robustness perturbation being applied to the dynamics, respectively. An example task would therefore be: $T = \langle$cartpole, swingup, balance velocity, 0.3, 0.115, pole length$\rangle$. In total, we have 6 different tasks on which we test our benchmark agents. The full list of tasks can be found in the Appendix, Table 7. The available constraints per domain can be found in the Appendix B.1.

³ https://github.com/google-research/realworldrl_suite

The baselines used in our paper can be seen in Table 1. C-ALG refers to the reward constrained, non-robust algorithms of the variants that we have adapted based on (Tessler et al., 2018; Anonymous, 2020); RC-ALG refers to the robust constraint algorithms corresponding to the Bellman operator $T^\pi_{RC}$; R3C-ALG refers to the robust return robust constrained algorithms corresponding to the Bellman operator $T^\pi_{R3C}$; SR3C-ALG refers to the soft robust (with respect to return) robust constraint algorithms; and R-ALG refers to the robust return algorithms based on Mankowitz et al. (2019).

| Baseline | Algorithm Variants | Baseline Description |
|---|---|---|
| C-ALG | C-D4PG, C-DMPO | Constraint aware, non-robust. |
| RC-ALG | RC-D4PG, RC-DMPO | Robust constraint. |
| R3C-ALG | R3C-D4PG, R3C-DMPO | Robust return robust constraint. |
| R-ALG | R-D4PG, R-DMPO | Robust return. |
| SR3C-ALG | SR3C-D4PG | Soft robust return, robust constraint. |

Table 1: The baseline algorithms used in this work.

5.1 EXPERIMENTAL SETUP

For each task, the action and observation dimensions are shown in the Appendix, Table 6. The length of an episode is 1000 steps and the upper bound on reward is 1000 (Tassa et al., 2018). All the network architectures are the same per algorithm and approximately the same across algorithms in terms of the layers and the number of parameters. A full list of all the network architecture details can be found in the Appendix, Table 4. All runs are averaged across 5 seeds.

Metrics: We use three metrics to track overall performance, namely: return $R$, overshoot $\Delta_{\beta,C}$, and penalized return $R_{\text{penalized}}$. The return is the sum of rewards the agent receives over the course of an episode. The constraint overshoot $\Delta_{\beta,C} = \max(0, J^\pi_C - \beta)$ is defined as the clipped difference between the average costs over the course of an episode $J^\pi_C$ and the constraint threshold $\beta$. The penalized return is defined as $R_{\text{penalized}} = R - \bar\lambda \Delta_{\beta,C}$, where $\bar\lambda = 1000$ is an evaluation weight that equally trades off return with constraint overshoot $\Delta_{\beta,C}$.
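These metrics are straightforward to compute from episode statistics; a minimal sketch (with our own function names) follows.

```python
def constraint_overshoot(J_C, beta):
    """Clipped constraint violation: max(0, J_C - beta)."""
    return max(0.0, J_C - beta)

def penalized_return(R, J_C, beta, lam_bar=1000.0):
    """R_penalized = R - lam_bar * overshoot, with lam_bar = 1000."""
    return R - lam_bar * constraint_overshoot(J_C, beta)
```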
Constraint Experiment Setup: The safety coefficient is a flag in the RWRL suite (Dulac-Arnold et al., 2020) that determines how easy/difficult it is in the environment to violate constraints. The safety coefficient values range from 0.0 (easy to violate constraints) to 1.0 (hard to violate constraints). As such we selected for each task (1) a safety coefficient of 0.3; (2) a particular constraint supported by the RWRL suite; and (3) a corresponding constraint threshold $\beta$, which ensures that the agent can find feasible solutions (i.e., satisfy constraints) and solve the task.

Robustness Experimental Setup: The robust/soft-robust agents (R3C and RC variants) are trained using a pre-defined uncertainty set consisting of 3 task perturbations (this is based on the results from Mankowitz et al. (2019)). Each perturbation is a different instantiation of the Mujoco environment. The agent is then evaluated on a set of 9 hold-out task perturbations (10 for quadruped). For example, if the task is $T = \langle$cartpole, swingup, balance velocity, 0.3, 0.115, pole length$\rangle$, then the agent will have three pre-defined pole length perturbations for training, and evaluate on nine unseen pole lengths, while trying to satisfy the balance velocity constraint.

Training Procedure: All agents always act on the unperturbed environment. This corresponds to the default environment in the dm_control suite (Tassa et al., 2018) and is referred to in the experiments as the nominal environment. When the agent acts, it generates next state realizations for the nominal environment as well as each of the perturbed environments in the training uncertainty set to generate the tuple $\langle s, a, r, [s', s'_1, s'_2 \cdots s'_N] \rangle$ where $N$ is the number of environments in the training uncertainty set and $s'_i$ is the next state realization corresponding to the $i$-th perturbed training environment. Since the robustness update is incorporated into the policy evaluation stage of each algorithm, the critic loss, which corresponds to the TD error in each case, is modified as follows: when computing the target, the learner samples a tuple $\langle s, a, r, [s', s'_1, s'_2 \cdots s'_N] \rangle$ from the experience replay. The target action value function for each next state transition $[s', s'_1, s'_2 \cdots s'_N]$ is then computed by taking the inf (robust), the average (soft-robust) or the nominal value (non-robust). In each case separate action-value functions are trained for the return $Q(s,a)$ and the constraint $Q_C(s,a)$. These value function estimates then individually return the mean, inf, or sup value, depending on the technique, and are combined to yield the target used to compute $Q(s,a)$.

The chosen values of the uncertainty set and evaluation set for each domain can be found in the Appendix, Table 8. Note that it is common practice to manually select the pre-defined uncertainty set and the unseen test environments. Practitioners often have significant domain knowledge and can utilize this when choosing the uncertainty set (Derman et al., 2019; 2018; Di Castro et al., 2012; Mankowitz et al., 2018; Tamar et al., 2014).
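The target computation in the training procedure above can be sketched as follows, assuming the return and constraint critics have already been evaluated at each next-state realization in the tuple. This is a scalar (non-distributional) illustration with our own function and argument names, not the paper's implementation.

```python
import numpy as np

def critic_target(r, q_next, qc_next, lam, gamma=0.99, mode="robust"):
    """Bootstrap target built from next-state values across the uncertainty set.

    r:       scalar reward for the transition
    q_next:  (N+1,) return values V(s') for the nominal environment
             followed by the N perturbed training environments
    qc_next: (N+1,) constraint values V_C(s') at the same states
    """
    if mode == "robust":          # R3C-style: inf for return, sup for constraint
        v, vc = np.min(q_next), np.max(qc_next)
    elif mode == "soft_robust":   # average over the uncertainty set
        v, vc = np.mean(q_next), np.mean(qc_next)
    else:                         # non-robust: nominal environment only
        v, vc = q_next[0], qc_next[0]
    # Combine the two estimates as in Definition 2: V - lambda * V_C.
    return r + gamma * (v - lam * vc)
```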
5.2 MAIN RESULTS

In the first sub-section we analyze the sensitivity of a fixed constrained policy (trained using C-D4PG) operating in perturbed versions of a given environment. This will help test the hypothesis that perturbing the environment does indeed have an effect on constraint satisfaction as well as on return. In the next sub-section we analyze the performance of the R3C and RC variants with respect to the baseline algorithms.

| Base | Algorithm | R | R_penalized | max(0, J^π_C − β) |
|---|---|---|---|---|
| D4PG | C-D4PG | 673.21 ± 93.04 | 491.450 | 0.18 ± 0.053 |
| | R-D4PG | 707.79 ± 65.00 | 542.022 | 0.17 ± 0.046 |
| | R3C-D4PG | 734.45 ± 77.93 | 635.246 | 0.10 ± 0.049 |
| | RC-D4PG | 684.30 ± 83.69 | 578.598 | 0.11 ± 0.050 |
| | SR3C-D4PG | 723.11 ± 84.41 | 601.016 | 0.12 ± 0.038 |
| DMPO | C-MPO | 598.75 ± 72.67 | 411.376 | 0.19 ± 0.049 |
| | R-MPO | 686.13 ± 86.53 | 499.581 | 0.19 ± 0.036 |
| | R3C-MPO | 752.47 ± 57.10 | 652.969 | 0.10 ± 0.040 |
| | RC-MPO | 673.98 ± 80.91 | 555.809 | 0.12 ± 0.036 |

Table 2: Performance metrics averaged over all holdout sets for all tasks.

5.2.1 FIXED POLICY SENSITIVITY

In order to validate the hypothesis that perturbing the environment affects constraint satisfaction and return, we trained a C-D4PG agent to satisfy constraints across 10 different tasks. In each case, C-D4PG learns to solve the task and satisfy the constraints in expectation. We then perturbed each of the tasks with a supported perturbation and evaluated whether the constraint overshoot increases and the return decreases for the C-D4PG agent. Some example graphs are shown in Figure 1 for the cartpole (left), quadruped (middle) and walker (right) domains. The upper row of graphs contains the return performance (blue curve) and the penalized return performance (orange curve) as a function of increased perturbations (x-axis). The vertical red dotted line indicates the nominal model on which the C-D4PG agent was trained. The lower row of graphs contains the constraint overshoot (green curve) as a function of varying perturbations. As seen in the figures, as perturbations increase across each dimension, both the return and penalized return degrade (top row) while the constraint overshoot (bottom row) increases. This provides useful evidence for our hypothesis that constraint satisfaction does indeed suffer as a result of perturbing the environment dynamics. This was consistent among many more settings. The full performance plots can be found in the Appendix, Figures 3, 4 and 5 for cartpole, quadruped and walker respectively.

Figure 1: The effect on constraint satisfaction and return as perturbations are added to cartpole, quadruped and walker for a fixed C-D4PG policy.

5.2.2 ROBUST CONSTRAINED RESULTS

We now compare C-ALG, RC-ALG, R3C-ALG, R-ALG and SR3C-ALG⁴ across 6 tasks. The average performance across holdout sets and tasks is shown in Table 2. As seen in the table, the R3C-ALG variant outperforms all of the baselines in terms of return and constraint overshoot and therefore obtains the highest penalized return performance. Interestingly, the soft-robust variant yields competitive performance.

⁴ We only ran the SR3C-D4PG variant to gain intuition as to soft-robust performance.

We further analyze the results for three tasks using ALG=D4PG (left column) and ALG=DMPO (right column) in Figure 2. The three tasks are $T_{\text{cartpole,slider damping}} = \langle$cartpole, swingup, balance velocity, 0.3, 0.115, slider damping$\rangle$ (top row), $T_{\text{cartpole,pole mass}} = \langle$cartpole, swingup, balance velocity, 0.3, 0.115, pole mass$\rangle$ (middle row) and $T_{\text{walker}} = \langle$walker, walk, joint velocity, 0.3, 0.1, torso length$\rangle$ (bottom row). Graphs of the additional tasks can be found in the Appendix, Figures 6 and 7. Each graph contains, on the y-axis, the return $R$ (marked by the transparent colors) and the penalized return $R_{\text{penalized}}$ (marked by the dark colors superimposed on top of $R$). The x-axis consists of three holdout set environments in increasing order of difficulty from Holdout 0 to Holdout 8. Holdout N corresponds to perturbation element N for the corresponding task in the Appendix, Table 8.
As can be seen for $T_{\text{cartpole,slider damping}}$ and $T_{\text{cartpole,pole mass}}$ (Figure 2, top and middle rows respectively), R3C-D4PG outperforms the baselines, especially as the perturbations get larger. This can be seen by observing that as the perturbations increase, the penalized return for these techniques is significantly higher than that of the baselines. This implies that the amount of constraint violations is significantly lower for these algorithms, resulting in robust constraint satisfaction. $T_{\text{walker}}$ (bottom row) shows similarly improved performance over the baseline algorithms.

Figure 2: The holdout set performance of the baseline algorithms on D4PG variants (left) and DMPO variants (right) for Cartpole with pole mass perturbations (top row) and walker with thigh length perturbations (bottom row).

6 CONCLUSION

This paper simultaneously addresses constraint satisfaction and robustness to state perturbations, two important challenges of real-world reinforcement learning. We present two RL objectives, R3C and RC, that yield robustness to constraints under the presence of state perturbations. We define R3C and RC Bellman operators to ensure that value-based RL algorithms will converge to a fixed point when optimizing these objectives. We then show that when incorporating this into the policy evaluation step of two well-known state-of-the-art continuous control RL algorithms, the agent outperforms the baselines on 6 Mujoco tasks. In related work, Everett et al. (2020) considers the problem of being verifiably robust to an adversary that can perturb the state $s' \in S$ to degrade performance as measured by a Q-function. Dathathri et al. (2020) consider the problem of learning agents (in deterministic environments with known dynamics) that satisfy constraints under perturbations to states $s' \in S$. In contrast, equation 1 considers the general problem of learning agents that optimize for the return while satisfying constraints for a given RC-MDP.
<|im_end|> <|im_start|>assistant ### Review Title Encouraging results, but the theory section needs more work ### Review Text The standard Reinforcement Learning framework is limited in many ways, and numerous variants have been introduced to deal with aspects such as partial observability, temporal abstraction, safety, domain transfer, etc. Yet, these issues are often studied separately and it is often unclear how to combine them together. This is the ambitious challenge taken by this paper, which attempts to bridge the two separate settings of Robust MDPs, which aim at considering ambiguity in the dynamics, and Constrained MDPs, which aim at enforcing the satisfaction of a constraint on an expected cost signal. The authors propose the formulation of two objectives that merge the two aspects and include both a worst-case evaluation over the ambiguity set and a constraint violation penalty term. The ways of dealing with both issues are fairly standard (Lagrangian relaxation of the constraints with alternating optimization, and worst-case evaluation over a finite set of simulated transitions in practice), but their combination seems novel and relevant. These objectives come with the corresponding Bellman Expectation operators, which allow one to evaluate the current policy (critic) and provide feedback (gradient) for the actor to ensure robust constraint satisfaction. The applicability of the proposed approaches is demonstrated on a benchmark of Mujoco tasks, where they are shown to compare favorably to several baselines. My main concerns lie with the definitions and results of Section 2.3, which I think generally lack rigour and clarity, which sheds doubts on the validity of the claimed results. 1. The authors start by defining the R3V value function $\mathbb{V}$ as a bootstrap of two other values $V$ and $V_c$ that haven't been defined. I was initially confused because they are denoted as if they do not depend on the policy $\pi$, so I first thought these referred to optimal value functions (which would need to be appropriately defined, especially $V_c$ since the costs are constrained rather than optimized), but they seem to be in fact the expected returns for the policy $\pi$ (i.e. the value functions of a policy $\pi$ as opposed to the optimal value functions). 2. Likewise, do the values $V$ and $V_c$ in definition 1 depend on the dynamics $p$? It seems so, but it should be written explicitly. 3. The derivation of A.2 seems a bit sloppy, since the last term in line 4 is identified as $\mathbb{V}$ while it does not strictly correspond to definition 1. 4. The next state $s'$ is a random variable that depends on the dynamics $p$, and thus subject to the robust inf/sup over $P$ in the objective (1), but in the derivation A.2 and the resulting R3C Bellman operator of definition 2, it is considered as a deterministic variable in which the R3V value can be evaluated freely (without any expectation over $p$, nor inf of $p$ over $P$). 5. In Theorem 1, the R3V values $\mathbb{U}$ and $\mathbb{V}$ are described as functions $S \to \mathbb{R}^d$, but they were defined as functions $S \to \mathbb{R}$ in definition 1. Also, $d$ is not defined. 6. According to definition 2, the R3V Bellman operator applied to a real function $\mathbb{V}$ simply consists in multiplying $\mathbb{V}$ by the discount gamma and adding the penalized reward $r - \lambda c$. But then, this is exactly the same as the RC Bellman operator of definition 4.
The difference between the two frameworks lies in how the policy value $\mathbb{V}$ is defined (regarding the presence or absence of $\inf_{p\in P}$ before $V^{p,\pi}$), but these differences are not involved when we consider arbitrary functions $\mathbb{V}: S \to \mathbb{R}$ on which to apply the Bellman operators. I feel like the authors intended definitions 1 and 3 to be seen somehow as *operators* rather than *functions*, which could allow retaining the sup/inf in the definition of $\mathcal{T}^\pi_{R3C}$ and $\mathcal{T}^\pi_{RC}$, but it is mere speculation and certainly not what is written in the paper. In conclusion, this paper comprises a clear motivation, promising insights and encouraging results. But in the present state of vagueness of the theoretical framework, I cannot recommend acceptance. Of course, it may only be a misunderstanding on my part merely related to presentation/clarity issues and not deeper flaws in the reasoning, in which case I am ready to update my rating upon clarification by the authors. **Minor remarks**: * Since the definitions and results of Section 2.3 are claimed as novelties, I think they should not be listed as part of the Background section. * After Theorem 1, a paragraph states that the $\mathcal{T}_{R3C}$ operator can be used in a Value Iteration algorithm, which is not the case since it is a Bellman expectation operator, not a Bellman optimality operator. If I am not mistaken, it can only be applied in a Policy Iteration scheme. * The first paragraph of Appendix A.3 is self-referencing. * Typo in (20): use the norm $\|\cdot\|$ instead of the absolute value $|\cdot|$. ### Review Rating 4: Ok but not good enough - rejection ### Review Confidence 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper<|im_end|> <|im_end|>
bFb3V8ALx4W
MIDL.io/2023/Short_Paper_Track
2023
Visualizing chest X-ray dataset biases using GANs
["Hao Liang", "Kangqi Ni", "Guha Balakrishnan"]
Recent work demonstrates that images from various chest X-ray datasets contain visual features that are strongly correlated with protected demographic attributes like race and gender. This finding raises issues of fairness, since some of these factors may be used by downstream algorithms for clinical predictions. In this work, we propose a framework, using generative adversarial networks (GANs), to visualize what features are most different between X-rays belonging to two demographic subgroups.
["Chest X-rays", "fairness", "bias", "explainability", "generative adversarial networks (GANs)"]
Medical Imaging with Deep Learning

Visualizing chest X-ray dataset biases using GANs

Hao Liang hl106@rice.edu
Kangqi Ni kn22@rice.edu
Guha Balakrishnan guha@rice.edu
Department of Electrical and Computer Engineering, Rice University, USA

Abstract

Recent work demonstrates that images from various chest X-ray datasets contain visual features that are strongly correlated with protected demographic attributes like race and gender. This finding raises issues of fairness, since some of these factors may be used by downstream algorithms for clinical predictions. In this work, we propose a framework, using generative adversarial networks (GANs), to visualize what features are most different between X-rays belonging to two demographic subgroups.

Keywords: Chest X-rays, fairness, bias, explainability, generative adversarial networks (GANs)

1. Introduction

Recent studies have demonstrated that patient bio-information like age, race, and gender are predictable from chest X-ray (CXR) images alone using deep learning models (Gichoya et al., 2022; Karargyris et al., 2019; Duffy et al., 2022). For example, in the "Reading Race" study, deep classifiers trained to predict race achieve 0.99 AUROC on several CXR datasets (Gichoya et al., 2022). This finding raises the question: "What visual cues discriminate different races?" Answering such a question can help mitigate potentially biased behavior of downstream algorithms that make decisions using this data. In this work, we propose a framework to visually explain the principal differences between different demographic subgroups in a medical imaging dataset. We first train an unconditional generative adversarial network (GAN) (Goodfellow et al., 2020; Liang et al., 2020; Lin et al., 2022) on the given image dataset. Next, we project the images onto the (trained) GAN's latent space and compute a direction in the latent space that differentiates a pair of classes (e.g., "Black" vs. "White" race groups). We traverse the latent space along that direction to produce image sequences that depict the main morphological and appearance changes in moving from one class to another.

There are related works that focus on visualizing subgroup differences associated with clinical attributes. One such study uses autoencoders (Cohen et al., 2021), which often produce blurry samples that do not clearly capture structural information. Others train conditional versions of GANs (Singla et al., 2023; Dravid et al., 2022), an expensive process since the GAN must be trained from scratch for each attribute of interest. In contrast to all these works, we demonstrate that deep generative models may be a useful tool for the medical imaging community to understand the biases within a medical imaging dataset.

2. Method

Our method consists of several components, visualized in Fig. 1 and described below.

Generator training: We train an unconditional StyleGAN2 generator (Karras et al., 2020a) $G(\cdot): \mathbb{R}^d \to \mathbb{R}^{H \times W \times 1}$, following the default training procedure introduced in that paper.

Figure 1: Framework of our proposed method. (a) We train a GAN on an image dataset, and a binary classifier on the images and labels for a demographic prediction task (e.g., White vs. Black race). (b) We project a subset of images onto the trained GAN's latent space. To ensure the projected images are reasonably reconstructed, we only keep projected images whose labels (predicted by the attribute classifier trained in (a)) agree with their original labels.
We also fit an SVM hyperplane to separate the two classes in the latent space. Finally, we visualize the differences between the classes by starting at a latent code corresponding to a random image, and traversing along the normal direction of the SVM hyperplane, to generate a sequence of images showing a transformation.

Here, $d$ is the dimension of the "latent space" of the generator, and $H$ and $W$ are the height and width of the generated CXR. In our experiments, we trained $G(\cdot)$ on CheXpert (Irvin et al., 2019), a large public dataset containing 224,316 CXRs. We only used frontal views, yielding 164,548 CXRs. The training procedure takes roughly 24 hours on two Nvidia A100 GPUs.

Attribute classifier training: We train a separate deep attribute classifier $C(\cdot): \mathbb{R}^{H \times W \times 1} \to \mathbb{R}$ for each per-image binary attribute provided in the dataset. For multi-class labels such as race, we train a separate binary classifier for each pair of races.

Image projection/SVM training: Next, we follow the process introduced in (Karras et al., 2020b) to project a subset of CXR images $\{X_i\}_{i=1}^{N}$ onto $G$'s latent space, yielding latent codes $\{z_i\}_{i=1}^{N}$. We only retain those projected images whose labels (predicted by $C$) are the same as the original labels $\{L_i\}_{i=1}^{N}$, i.e., $C(G(z_i)) = L_i$. We then train a linear SVM to predict $L_i$ from $z_i$.

Image sequence generation: The normal vector $v$ of the trained SVM's hyperplane identifies the direction that best differentiates the two classes. We will use this fact to generate image sequences depicting the principal perceptual changes needed to convert a CXR belonging to one demographic class to another. In particular, we select the latent vector corresponding to a random dataset CXR, and move towards the opposite class in latent space in the direction of $v$. We concatenate images generated by intermediate latent codes along this traversal to produce a sequence.

Figure 2: Sample visualization results. The left column corresponds to the projected initial image and the last three columns show images generated at different traversal distances in the latent space. The red text indicates the output probabilities predicted by the attribute classifier for each class. For example, the top left [0.98, 0.01] indicates the CXR has a 98% probability of being white and a 1% probability of being black. We also use red boxes to highlight the areas that visually vary the most. For White/Black, the shoulder bone and right lung structures change shape, and the lungs become more opaque. For Asian/White, the entire chest shape changes and grows larger. These visualizations also explain why the Reading Race study (Gichoya et al., 2022) did not find race prediction to significantly change when blocking local regions. The proposed method applied to Cardiomegaly enlarges the heart, in agreement with the known effect of that disease.
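The SVM and traversal steps above can be sketched in a few lines. This is an illustrative rendering using scikit-learn's LinearSVC as the linear SVM, which is one reasonable instantiation; it is not necessarily the exact toolchain the authors used.

```python
import numpy as np
from sklearn.svm import LinearSVC

def class_direction(latents, labels):
    """Fit a linear SVM in latent space and return the unit normal of its
    separating hyperplane: the direction that best differentiates the
    two demographic classes."""
    svm = LinearSVC().fit(latents, labels)   # latents: (N, d), labels: (N,)
    v = svm.coef_[0]
    return v / np.linalg.norm(v)

def traversal(G, z0, v, distances=(0.0, 1.0, 2.0, 3.0)):
    """Generate an image sequence by moving a starting latent code z0
    along the normal direction v; G maps latent codes to images."""
    return [G(z0 + d * v) for d in distances]
```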
Future work includes analyzing generated sequences to thoroughly investigatedemographic differences, and comparing results across different generative models.3Liang Ni BalakrishnanReferencesJoseph Paul Cohen, Rupert Brooks, Sovann En, Evan Zucker, Anuj Pareek, Matthew PLungren, and Akshay Chaudhari. Gifsplanation via latent shift: a simple autoencoderapproach to counterfactual generation for chest x-rays. In Medical Imaging with DeepLearning , pages 74–104. PMLR, 2021.Amil Dravid, Florian Schiffers, Boqing Gong, and Aggelos K Katsaggelos. medxgan: Visualexplanations for medical classifiers through a generative latent space. In Proceedings of theIEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 2936–2945,2022.Grant Duffy, Shoa L Clarke, Matthew Christensen, Bryan He, Neal Yuan, Susan Cheng, andDavid Ouyang. Confounders mediate ai prediction of demographics in medical imaging.npj Digital Medicine , 5(1):188, 2022.Judy Wawira Gichoya, Imon Banerjee, Ananth Reddy Bhimireddy, John L Burns, Leo An-thony Celi, Li-Ching Chen, Ramon Correa, Natalie Dullerud, Marzyeh Ghassemi, Shih-Cheng Huang, et al. Ai recognition of patient race in medical imaging: a modelling study.The Lancet Digital Health , 4(6):e406–e414, 2022.Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, SherjilOzair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Commu-nications of the ACM , 63(11):139–144, 2020.Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, Silviana Ciurea-Ilcus, Chris Chute,Henrik Marklund, Behzad Haghgoo, Robyn Ball, Katie Shpanskaya, et al. Chexpert:A large chest radiograph dataset with uncertainty labels and expert comparison. InProceedings of the AAAI conference on artificial intelligence , volume 33, pages 590–597,2019.Alexandros Karargyris, Satyananda Kashyap, Joy T Wu, Arjun Sharma, Mehdi Moradi, andTanveer Syeda-Mahmood. Age prediction using a large chest x-ray dataset. In MedicalImaging 2019: Computer-Aided Diagnosis , volume 10950, pages 468–476. SPIE, 2019.Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and TimoAila. Training generative adversarial networks with limited data. Advances in neuralinformation processing systems , 33:12104–12114, 2020a.Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila.Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVFconference on computer vision and pattern recognition , pages 8110–8119, 2020b.Hao Liang, Lulan Yu, Guikang Xu, Bhiksha Raj, and Rita Singh. Controlled autoencodersto generate faces from voices. In Advances in Visual Computing: 15th InternationalSymposium, ISVC 2020, San Diego, CA, USA, October 5–7, 2020, Proceedings, Part I15, pages 476–487. Springer, 2020.4Visual explanationZinan Lin, Hao Liang, Giulia Fanti, and Vyas Sekar. Raregan: Generating samples forrare classes. In Proceedings of the AAAI Conference on Artificial Intelligence , volume 36,pages 7506–7515, 2022.Sumedha Singla, Motahhare Eslami, Brian Pollack, Stephen Wallace, and Kayhan Bat-manghelich. Explaining the black-box smoothly—a counterfactual approach. MedicalImage Analysis , 84:102721, 2023.5
5VcC3frEqaZ
Interesting findings
6: Marginally above acceptance threshold
This work presents a novel approach that highlights the features that differ most between scans belonging to different demographic groups.

Strengths:
- This method can be useful in practice, as it can potentially mitigate biases on downstream tasks that make decisions using data that contains multiple demographic groups.
- The abstract is easy to follow.

Weaknesses:
- The authors show only a few sample results to demonstrate that the proposed method works. I wonder whether a quantitative evaluation could be done to assess how well this method works (which would also allow future research to compare against it).
- There is no comparison to existing work, for example the cited works that focus on visualizing subgroup differences associated with clinical attributes.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
rJg851rYwH
ICLR.cc/2020/Conference
2020
Making the Shoe Fit: Architectures, Initializations, and Tuning for Learning with Privacy
["Nicolas Papernot", "Steve Chien", "Shuang Song", "Abhradeep Thakurta", "Ulfar Erlingsson"]
Because learning sometimes involves sensitive data, standard machine-learning algorithms have been extended to offer strong privacy guarantees for training data. However, in practice, this has been mostly an afterthought, with privacy-preserving models obtained by re-running training with a different optimizer, but using the same model architecture that performed well in a non-privacy-preserving setting. This approach leads to less than ideal privacy/utility tradeoffs, as we show here. Instead, we propose that model architectures and initializations are chosen and hyperparameter tuning is performed, ab initio, explicitly for privacy-preserving training. Using this paradigm, we achieve new state-of-the-art accuracy on MNIST, FashionMNIST, and CIFAR10 without any modification of the fundamental learning procedures or differential-privacy analysis.
["differential privacy", "deep learning"]
ABSTRACT

Because learning sometimes involves sensitive data, standard machine-learning algorithms have been extended to offer strong privacy guarantees for training data. However, in practice, this has been mostly an afterthought, with privacy-preserving models obtained by re-running training with a different optimizer, but using the same model architecture that performed well in a non-privacy-preserving setting. This approach leads to less than ideal privacy/utility tradeoffs, as we show here. Instead, we propose that model architectures and initializations are chosen and hyperparameter tuning is performed, ab initio, explicitly for privacy-preserving training. Using this paradigm, we achieve new state-of-the-art accuracy on MNIST, FashionMNIST, and CIFAR10 without any modification of the fundamental learning procedures or differential-privacy analysis.

1 INTRODUCTION

Machine learning (ML) can be usefully applied to the analysis of sensitive data, e.g., in the domain of healthcare (Kononenko, 2001). However, ML models may unintentionally reveal sensitive aspects of their training data, e.g., due to overfitting (Shokri et al., 2017; Song & Shmatikov, 2019). To counter this, ML techniques that offer strong privacy guarantees have been developed. Notably, the differentially private stochastic gradient descent, or DP-SGD, of Abadi et al. (2016) is an easy-to-use, generally-applicable modification of stochastic gradient descent. In addition to its rigorous privacy guarantees, it has been empirically shown to stop the leaking of secrets (Carlini et al., 2019).

To strictly bound the impact of any training example, DP-SGD makes two changes to every gradient step: first, each example's gradient contribution is limited to a fixed bound (in practice, by clipping all per-example gradients to a maximum $\ell_2$ norm); second, random (Gaussian) noise of the scale of the clipping norm is added to each batch's combined gradient, before it is backpropagated to update model parameters. Together, these changes create a new, artificial noise floor at each step of gradient descent, such that the unique signal of any individual example is below this new noise floor; this allows differential privacy to be guaranteed for all training examples (Dwork & Roth, 2014).

Training using DP-SGD is eminently practical and in addition to privacy offers advantages such as strong generalization and the promise of reusable holdouts (Google, 2019; Dwork et al., 2015). Unfortunately, its advantages have not been without cost: empirically, the test accuracy of differentially private ML is consistently lower than that of non-private learning (e.g., see Papernot et al. (2018)). Such accuracy loss may sometimes be inevitable: for example, the task may involve heavy-tailed distributions and adding noise will definitely hinder visibility of examples in the tails (Feldman, 2019; Bagdasaryan & Shmatikov, 2019). However, this does not explain the accuracy loss of differentially private learning on standard benchmark tasks that are known to be relatively simple: MNIST (Yann et al., 1998), FashionMNIST (Xiao et al., 2017), CIFAR10 (Krizhevsky et al., 2009), etc.

This paper presents several new results for privacy-preserving learning that improve the state-of-the-art in terms of both privacy and accuracy.
Significantly, these new results stem from a single, simple observation: differentially-private learning with DP-SGD is different enough that all aspects of learning—model architecture, parameter initialization, and optimization strategy, as well as hyperparameter tuning—must be reconsidered. To achieve the best privacy/accuracy tradeoffs, we must tune our learning strategies to the specifics of privacy-preserving learning; i.e., we must "learn to learn" with privacy. Conversely, we concretely demonstrate how the architecture, initialization, and optimization strategy that gives the best accuracy for non-private learning can be a poor fit for learning with privacy. Instead, by revisiting our choices, we can reduce the information loss induced by clipping, limit the impact of added noise, and improve the utility of each gradient step when learning with privacy. Our contributions facilitate DP-SGD learning as follows:

- We show how simple architecture changes, such as the use of tanh instead of ReLU activations, can improve a model's private-learning suitability and achievable privacy/accuracy tradeoffs, by eliminating the negative effects of clipping and noising large gradients.
- We explain how high-capacity models can be disadvantageous, as well as the advantages of models with a final, fully-connected layer that can be independently fine tuned, and how both help address the curse of dimensionality and high-dimensional noise.
- We demonstrate the importance of finding good initializations, and show how this can be done with privacy using either transfer learning or weight scaling (Raghu et al., 2019).
- We show that better tradeoffs and increased wall-clock learning speeds can be achieved by tuning hyperparameters and choosing optimizers directly for DP-SGD learning.

By applying the above, we advance the state of the art for MNIST, FashionMNIST, and CIFAR10, significantly improving upon the privacy/accuracy tradeoffs from prior work. On MNIST, we achieve 98.1% test accuracy for a privacy guarantee of $(\varepsilon, \delta) = (2.93, 10^{-5})$, whereas the previous state-of-the-art reported in the TensorFlow Privacy library (Google, 2019) was 96.6%. On CIFAR10, we achieve 72% test accuracy at $(\varepsilon, \delta) = (2.1, 10^{-5})$ in a setup for which, to the best of our knowledge, the previous state-of-the-art was achieved by Abadi et al. (2016) at 67% accuracy.

2 TRAINING-DATA MEMORIZATION, DIFFERENTIAL PRIVACY, AND DP-SGD

Machine-learning models will easily memorize whatever sensitive, personal, or private data was used in their training, and models may in practice disclose this data—as demonstrated by the attacks of Shokri et al. (2017), Song & Shmatikov (2019), and Carlini et al. (2019).

For reasoning about the privacy guarantees of algorithms such as training by stochastic gradient descent, differential privacy has become the established gold standard (Dwork & Roth, 2014). Informally, an algorithm can be differentially private if it will always produce effectively the same output (in a mathematically precise sense), when applied to two input datasets that differ by only one record. Formally, a learning algorithm $A$ that trains models from the set $S$ is $(\varepsilon, \delta)$-differentially-private if the following holds for all training datasets $d$ and $d'$ that differ by exactly one record:

$$\Pr[A(d) \in S] \le e^{\varepsilon} \Pr[A(d') \in S] + \delta \qquad (1)$$

Here, $\varepsilon$ gives the formal privacy guarantee, by placing a strong upper bound on any privacy loss, even in the worst possible case. A lower $\varepsilon$ indicates a stronger privacy guarantee or a tighter upper bound (Erlingsson et al., 2019). The factor $\delta$ allows for some probability that the property may not hold (in practice, this is required to be very small, e.g., in inverse proportion to the dataset size).
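As a small numerical illustration of Equation 1 (ours, not from the paper): the guarantee bounds how much more likely any outcome set $S$ can become when a single training record changes.

```python
import math

def dp_bound(p_neighbor, epsilon, delta):
    """Upper bound on Pr[A(d) in S] implied by Equation 1,
    given Pr[A(d') in S] for a neighboring dataset d'."""
    return math.exp(epsilon) * p_neighbor + delta

# With epsilon = ln 2 and delta = 1e-5: an event of probability 0.3
# under A(d') can have probability at most ~0.60001 under A(d).
print(dp_bound(0.3, math.log(2), 1e-5))
```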
A very attractive property of differential-privacy guarantees is that they hold true for all attackers—whatever they are probing and whatever their prior knowledge—and that they remain true under various forms of composition. In particular, the output of a differentially-private algorithm can be arbitrarily post-processed, without any weakening of the guarantees. Also, if sensitive training data contains multiple examples from the same person (or, more generally, the same sensitive group), $\varepsilon$-differentially-private training on this data will result in a model with a $k\varepsilon$-differential-privacy guarantee for each person, as long as at most $k$ training-data records are present per person.

Abadi et al. (2016) introduced DP-SGD as a method for training deep neural networks with differential-privacy guarantees that was able to achieve better privacy and utility than previous efforts (Chaudhuri et al., 2011; Song et al., 2013; Bassily et al., 2014). DP-SGD bounds the sensitivity of the learning process to each individual training example by computing per-example gradients $\{g_i\}_{i \in 0..n-1}$ with respect to the loss, for the $n$ model parameters $\{\theta_i\}_{i \in 0..n-1}$, and clipping each per-example gradient to a maximum fixed $\ell_2$ norm $C$. Subsequently, to the average of these per-example gradients, DP-SGD adds (Gaussian) noise whose standard deviation is proportional to this sensitivity. In this work, we use the canonical implementation of DP-SGD and its associated analysis that has been made available through the TensorFlow Privacy library (Google, 2019).

Figure 1: Test accuracy as a function of the number of filters $k$ in the convolutional architecture of Table 1, when training with vanilla SGD and DP-SGD. Each point corresponds to multiple training runs on MNIST (left) or FashionMNIST (right). For both datasets, adding filters always improves non-private learning, whereas after an early point they are not beneficial to learning with privacy.

3 MODEL ARCHITECTURES BETTER SUITED TO LEARNING WITH PRIVACY

We show here that learning with differential privacy imposes additional constraints that need to be taken into account when designing neural network architectures. They help us control the sensitivity of learning to training examples before the clipping operation is performed in DP-SGD, thus reducing the potential negative impact of clipping on the estimated gradient direction.

3.1 MODEL CAPACITY

The success of neural networks is in part explained by their ability to scale to complex tasks through an increase in model capacity. ResNets are an illustrative recent example (He et al., 2016). Here, we explain how additional capacity may not be beneficial when learning with privacy. One of the major challenges in training models with differential privacy is the curse of dimensionality (Bassily et al., 2014). The accuracy of privately trained models typically degrades with the increase in the number of dimensions. Unfortunately, strong lower bounds suggest that this dependence on dimensionality is necessary (Bassily et al., 2014).
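This dimensionality effect can be made concrete with a short, illustrative calculation of ours: DP-SGD's noise vector has $n$ i.i.d. Gaussian coordinates with standard deviation proportional to the clipping norm $C$, so its $\ell_2$ norm grows as $\sqrt{n}$ while the clipped signal norm stays bounded by $C$.

```python
import numpy as np

rng = np.random.default_rng(0)
C, sigma = 1.0, 1.0  # clipping norm and noise multiplier (illustrative values)

for n in [10**3, 10**4, 10**5, 10**6]:
    noise = rng.normal(0.0, sigma * C, size=n)
    # The signal norm is capped at C, while the noise norm concentrates
    # near sigma * C * sqrt(n): the per-step signal-to-noise ratio
    # shrinks as the model grows.
    print(n, np.linalg.norm(noise) / C)
```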
Figure 1: Test accuracy as a function of the number of filters k in the convolutional architecture of Table 1, when training with vanilla SGD and DP-SGD. Each point corresponds to multiple training runs on MNIST (left) or FashionMNIST (right). For both datasets, adding filters always improves non-private learning, whereas after an early point they are not beneficial to learning with privacy.

3 MODEL ARCHITECTURES BETTER SUITED TO LEARNING WITH PRIVACY

We show here that learning with differential privacy imposes additional constraints that need to be taken into account when designing neural network architectures. They help us control the sensitivity of learning to training examples before the clipping operation is performed in DP-SGD, thus reducing the potential negative impact of clipping on the estimated gradient direction.

3.1 MODEL CAPACITY

The success of neural networks is in part explained by their ability to scale to complex tasks through an increase in model capacity. ResNets are an illustrative recent example (He et al., 2016). Here, we explain how additional capacity may not be beneficial when learning with privacy. One of the major challenges in training models with differential privacy is the curse of dimensionality (Bassily et al., 2014). The accuracy of privately trained models typically degrades with the increase in the number of dimensions. Unfortunately, strong lower bounds suggest that this dependence on dimensionality is necessary (Bassily et al., 2014).

Table 1: MNIST and FashionMNIST model architecture (33,000 parameters for k = 31).
Layer | Parameters
Convolution | k filters of 8x8, strides 2
Max-Pooling | 2x2
Convolution | k filters of 4x4, strides 2
Max-Pooling | 2x2
Fully connected | 32 units
Softmax | 10 units

Consider the convolutional architecture described in Table 1. With all other architectural details being fixed, we can control the model's capacity by varying the number of filters k in its two convolutional layers. While the relationship between generalization performance and the number of parameters is not always monotonic (Neyshabur et al., 2017), we leave as future work a study of how different measures of capacity can inform the design of model architectures for private learning. We report the model's accuracy when trained with SGD and DP-SGD in Figure 1, both on MNIST (left) and FashionMNIST (right). The test accuracy of models trained without privacy monotonically increases with the number of filters in their convolutional layers. Instead, we observe an inflection point at about 15 filters, at which models trained with privacy achieve their highest test accuracy. Afterwards, the model's generalization suffers as more filters are added.

There are two competing explanations of this behavior, both compatible with the lower bound stated in Bassily et al. (2014). First, recall that DP-SGD performs a clipping operation on each per-example gradient before the average gradient is used to update model parameters; i.e., each gradient is subject to the following transformation:

$g_i \leftarrow g_i \cdot \min\!\left(1, \frac{C}{\sqrt{\sum_{i=0}^{n-1} g_i^2}}\right)$   (2)

where gᵢ is the gradient corresponding to model parameter θᵢ. For a fixed clipping norm C (corresponding to a certain, fixed privacy guarantee), the quantity C / √(Σᵢ₌₀ⁿ⁻¹ gᵢ²) by which individual parameters are multiplied decreases as the number n of parameters in a model increases. That is, the more parameters we have, the more likely DP-SGD is to clip the gradient (or signal) at each parameter. This can explain the presence of an inflection point in Figure 1, after which learning with privacy becomes increasingly difficult as capacity is increased. Second, as the number of parameters (i.e., gᵢ's) increases, the norm of the noise vector that DP-SGD must add to the gradient average to ensure privacy also increases. This noise norm increases as √(#parameters), and introduces another source of accuracy degradation with an increased number of parameters.
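Both effects are easy to reproduce numerically. In the toy sketch below (made-up gradient statistics, not an experiment from the paper), the clipping factor of Equation 2 shrinks as the parameter count n grows, while the norm of the added noise grows as √n:

```python
import numpy as np

C, sigma = 1.0, 1.1
rng = np.random.default_rng(0)
for n in [1_000, 10_000, 100_000]:
    g = rng.normal(0.0, 0.1, size=n)          # toy per-example gradient
    clip_factor = min(1.0, C / np.linalg.norm(g))
    noise = rng.normal(0.0, sigma * C, size=n)
    print(f"n={n:>6}: clip factor {clip_factor:.3f}, "
          f"noise norm {np.linalg.norm(noise):.0f} "
          f"(sigma*C*sqrt(n) = {sigma * C * np.sqrt(n):.0f})")
```

With per-coordinate gradient scale held fixed, ‖g‖ grows as √n, so the clipping factor falls as 1/√n while the noise norm concentrates around σC√n, which is exactly the tension described above.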
Our observations may seem to contradict some of the findings in Abadi et al. (2016). However, their limited experimental setup could offer few general lessons. First, they reduced data dimensionality using PCA to have inputs of only 60 dimensions; second, they explored only model architectures using a single-layer perceptron with between 200 and 2,000 units. Instead, our experiments involve a realistic setting where the full input is passed to a convolutional neural network with a total of 3 hidden layers and over 26,000 parameters.

3.2 ACTIVATION FUNCTIONS

When training a model with differential privacy, gradients computed during SGD are clipped (recall Equation 2) to control the sensitivity of learning to training examples. If these gradients take large values, some of the signal will be discarded as gradients are being clipped. One way to reduce the magnitude of gradients (or at least control it) is to prevent the model's activations from exploding. However, a common choice of activation function in modern deep neural networks is the ReLU and, unlike other activation functions, ReLUs are unbounded.

Here, we thus test the hypothesis that replacing ReLUs with a bounded activation function prevents activations from exploding and thus keeps the magnitude of gradients at a more reasonable value. This in turn implies that the clipping operation applied by DP-SGD will discard less signal from gradient updates—eventually resulting in higher performance at test time.

On MNIST and FashionMNIST, we train two models based on the architecture of Table 1: the first model uses ReLU whereas the second model uses tanh [1] as the activation for its hidden layers, with other architectural elements kept identical. In our experiments, we later fine-tuned those architectural aspects (i.e., model capacity, choice of optimizer, etc.) separately for each activation function, to avoid favoring any one choice. In all cases, tanh was an improvement, as summarized in our conclusions (Section 6).

Figure 2: Test accuracy as a function of the privacy loss (DP-ε; lower values are better) when training a pair of models with DP-SGD. The only difference between the two models is the activation function for their hidden layer: ReLU or tanh. All other elements of the architecture (number, type, and dimension of layers) and the training algorithm (optimizer, learning rate, number of microbatches, clipping norm, and noise multiplier) are identical. Results are averaged over 10 runs for each curve.

Figure 2 visualizes the privacy-utility Pareto curve (Avent et al., 2019) of the two models trained with DP-SGD. Rather than plotting the test accuracy as a function of the number of steps, we plot it as a function of the privacy loss ε (but the privacy loss is a monotonically increasing function of the number of steps). On MNIST, the test accuracy of the tanh model is 98.0% compared to 96.6% for the ReLU model with an identical privacy loss of ε = 2.93. For comparison, baseline tanh and ReLU models trained without privacy both achieve a test accuracy of 99.0%. Similarly, on FashionMNIST, the tanh model trained with DP-SGD achieves 85.5% test accuracy compared to 81.9% with ReLUs. The baselines on FashionMNIST are 89.3% for tanh and 89.4% with ReLUs.

[1] We obtained results similar to the tanh with a sigmoid and a learning rate increased by a factor of 2 to 8. This is explained by the fact that the tanh is a rescaled sigmoid: tanh(x) = 2σ(2x) − 1.
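The boundedness argument, and the footnote's identity, can be checked directly with a toy computation (random weights and inputs, unrelated to the paper's trained models): each tanh output lies in (−1, 1), so a layer of 256 units has activation norm at most 16, whereas ReLU activations grow with the input scale:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.5, size=(256, 256))   # toy layer weights
x = rng.normal(0.0, 4.0, size=256)          # a large-magnitude input

pre = W @ x
print(f"ReLU activation norm: {np.linalg.norm(np.maximum(pre, 0)):.1f}")
print(f"tanh activation norm: {np.linalg.norm(np.tanh(pre)):.1f}  (<= 16)")

# The footnote's identity: tanh is a rescaled, shifted sigmoid.
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
assert np.allclose(np.tanh(pre), 2.0 * sigmoid(2.0 * pre) - 1.0)
```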
To explain why a simple change of activation functions has a large impact on the model's accuracy, we conjecture that the bounded nature of the tanh prevents activations from exploding during training. We thus monitored the ℓ2 norm of the first layer's activations for our MNIST model while it is being trained in three scenarios: (a) without privacy using vanilla SGD and ReLU activations, (b) with ReLU activations and DP-SGD, and (c) with tanh activations and DP-SGD. The evolution of activation norms on test data is visualized in Figure 3.

[Figure 3: ℓ2 norm of the first activation vector as a function of the number of training steps, for ReLU (SGD), ReLU (DP-SGD), and tanh (DP-SGD).]
Figure 3: ℓ2 norm of the first conv activations.

As conjectured, the activations of our ReLU network explode by a factor of 3 when training with privacy, compared to training without privacy. Switching to tanh activations brings the norms of activations back down to levels comparable with the activations of our non-private ReLU network.

4 INITIALIZATIONS FOR LEARNING WITH DIFFERENTIAL PRIVACY

Because each gradient step expends some privacy budget, good initialization of learning is important; here, we consider transfer learning (Pratt et al., 1991) and weight scaling (Raghu et al., 2019).

4.1 INITIALIZING FROM A PRE-TRAINED MODEL USING TRANSFER LEARNING

Transfer learning can improve the initialization used when learning with privacy, and allow better privacy/accuracy tradeoffs to be achieved. [2] For example, to reach reasonable accuracy (> 80%) on CIFAR10, a convolutional neural network may necessarily include many convolutional layers comprising several hundred-thousand parameters. However, since convolutional layers for similar image-processing tasks are known to learn similar representations—at least in early layers—it may be possible to transfer most of these parameters from a public model, either as initializations or as frozen parameters, and subsequently train with DP-SGD. For CIFAR10, the natural choice for such transfer is a CIFAR100 model, and this has been previously explored by Abadi et al. (2016).

Table 2: CIFAR10 convolutional model architecture (in total, 2,395,434 parameters).
Conv2 | 32 filters of 3x3, strides 1
Max-Pooling | 2x2
Conv2 | 64 filters of 3x3, strides 1
Max-Pooling | 2x2
Conv2 | 128 filters of 3x3, strides 1
Fully connected | 1024 units
Softmax | 10 units

Taking the Abadi et al. (2016) transfer-learning results for CIFAR10 as a baseline, we perform new experiments using much of the same setup and the model architecture of Table 2. As it is relatively simple, this model is a good candidate for differentially-private learning (although it reaches only 84.2% accuracy on CIFAR10 when all its parameters are trained non-privately, whereas state-of-the-art models can have over 10% higher accuracy).

We performed new transfer-learning experiments based on training this model on CIFAR100 data in three different ways: trained on a total of 5,000 examples from 10 classes picked at random (Min-rand-10); trained on 25,000 examples from a random half of the CIFAR100 classes, grouped into 10 new, evenly-sized meta classes (Half-rand-50); trained on all examples and all 100 separate classes (Max-100). From each of these trained models, transfer learning was used to initialize a model to be trained on CIFAR10. In the subsequent CIFAR10 training, all but the last layer was frozen, which simplifies the learning task to that of logistic regression (but also reduces utility, with the best non-private accuracy reduced to 75% on CIFAR10).

[2] A different, formal take on how public models and data can facilitate learning with privacy is studied in (Bassily et al., 2018; Feldman et al., 2018).
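Freezing all but a final fully-connected layer is a one-line operation in most frameworks. The PyTorch sketch below is hypothetical: the architecture shown is a stand-in rather than the Table 2 model, and the DP optimizer setup is elided:

```python
import torch.nn as nn

# Stand-in model for 32x32x3 CIFAR10 inputs, ending in a linear head.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3), nn.ReLU(),            # transferred, to be frozen
    nn.Flatten(),
    nn.Linear(32 * 30 * 30, 1024), nn.ReLU(),  # transferred, to be frozen
    nn.Linear(1024, 10),                       # the only layer we train
)

for p in model.parameters():
    p.requires_grad = False        # freeze the transferred weights
for p in model[-1].parameters():
    p.requires_grad = True         # fine-tune just the final layer

trainable = [p for p in model.parameters() if p.requires_grad]
# Training only this linear head with DP-SGD is a convex problem
# (multinomial logistic regression on frozen features).
```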
Table 3 shows CIFAR10 privacy and accuracy resulting from fine-tuning of different transfer-learning models with DP-SGD. As shown in Table 4, the results improve on those of Abadi et al. (2016), even though they performed non-linear fine-tuning of two neural-network layers, and their underlying model was able to achieve higher non-private accuracy (86%).

Type | Epoch 10 | Epoch 50 | Epoch 100 | Epoch 200 | Epoch 400
Min-rand-10 | 44.8% ± 4.6 | 49.6% ± 3.9 | 51.0% ± 3.9 | 52.8% ± 3.3 | 53.7% ± 3.5
(81.0% ± 4.0) | 50% = best | 54.1% = best | 55.7% = best | 56.9% = best | 57.6% = best
Half-rand-50 | 39.4% ± 2.9 | 51.4% ± 0.8 | 54.7% ± 1.5 | 56.8% ± 1.3 | 59.0% ± 0.9
(62.1% ± 1.4) | 44.3% = best | 52.6% = best | 56.6% = best | 58.3% = best | 60.2% = best
Max-100 | 57.0% ± 1.0 | 66.2% ± 0.6 | 68.4% ± 0.6 | 69.7% ± 0.6 | 71.0% ± 0.5
(54.9% ± 0.7) | 59.1% = best | 67.2% = best | 69.5% = best | 70.6% = best | 72.1% = best
Table 3: Accuracy of learning with privacy (average/best of 10 runs) compared to a non-private baseline of 75%. A CIFAR10 model is trained from a CIFAR100-transfer-learning initialization, with all-but-the-last layer frozen during training. The DP-SGD ε upper bounds at δ = 10⁻⁵ are ε₁₀ = 0.32, ε₅₀ = 0.73, ε₁₀₀ = 1.04, ε₂₀₀ = 1.48, ε₄₀₀ = 2.12 for the subscript-indicated epochs. The source model's CIFAR100 accuracy (first column, in parentheses) is uncorrelated to the CIFAR10 accuracy.

Table 4: CIFAR10 privacy and accuracy tradeoffs.
This paper (ε, acc.) | Abadi et al. (ε, acc.)
(0.3, 59%) | –
(1.0, 70%) | –
(2.1, 72%) | (2.0, 67%)
– | (4.0, 70%)
– | (8.0, 73%)

In addition, the results show the benefits of model architectures whose final layer can be fine-tuned using logistic regression training, or other forms of convex optimization. Such training can be made possible by including a final fully-connected layer in a network; in additional experiments (not detailed here), the inclusion of such a layer did not harm the training of the original, source model from which transfer learning was done. Furthermore, the number of parameters in this layer did not seem to matter much: privacy/accuracy tradeoffs remained the same, even when the layer was grown by an order of magnitude, which is consistent with what is known about differentially-private convex optimization (Jain & Thakurta, 2014).

4.2 INITIALIZATION BY WEIGHT SCALING

Figure 4: Colored lines show DP-SGD accuracy for three "seed" random initializations of a CIFAR-10 model. Colored bands show the accuracy range of 30 DP-SGD models using Mean Var initialization based on per-layer parameter statistics in the corresponding seed model. In all models, the privacy ε at each epoch is identical; however, Mean Var initialization substantially improves the privacy/accuracy tradeoff.

Initialization by transfer learning is only applicable when a suitable public model exists whose weights can facilitate learning with privacy on sensitive data. But such a model may not exist, and DP-SGD learning may possibly benefit from other, non-standard means of initialization. We consider the Mean Var weight-scaling approach of Raghu et al. (2019) and initialize DP-SGD learning with Gaussian random parameter distributions whose layer-wise mean and variance are extracted from a seed model trained on the same sensitive input data.
The weight-scaling approach does not directly transfer the parameters of an existing model; instead, just the layer-wise mean and variance are extracted, and those statistics are used to configure the Gaussian random distributions from which a second model with the same architecture is initialized.

In the context of learning with privacy, Mean Var weight scaling can improve model initialization by transfer from one differentially-private model to another. First, DP-SGD can be applied to train a model with standard random initialization. From this model, per-layer mean/variance statistics can be extracted to initialize a new model of the same architecture, subsequently trained with strong privacy guarantees. (This extraction can be done privately, although the privacy risk of summary statistics that drive random initialization should be vanishing. Following Bassily et al. (2018); Papernot et al. (2018), one can use the formal framework of sub-sample and aggregate in conjunction with Propose-Test-Release (PTR) for this selection. The algorithm first splits the training data into disjoint subsets, and trains models independently on each of the splits. Using these trained models, the parameter is chosen via consensus voting with differential privacy. Notice that if the training data set is large, and there is a strong consensus, then the cost towards privacy is very low.) The idea is that the mean and variance pairs can be obtained quickly at a modest privacy budget, but the faster convergence of the Mean Var initialized model both reduces the overall privacy budget needed for training, and mitigates the increased wall-clock time of DP-SGD.

We experiment with a relatively deep CIFAR10 convolutional model (see Appendix A), since Raghu et al. found the benefits of Mean Var initialization most pronounced for large models. We first trained a model using random initialization, and then did weight scaling by transferring that model's statistics to a new model. In this proof-of-concept, both models were trained with the same noise variance (σ = 0.5), but one could reserve a larger portion of the privacy budget for the new model. We should note that we did not directly transfer the weight statistics between corresponding layers in the original and new models. Rather, we used the weight statistics of each of the original model's early layers for two of the layers in the new model. This gives superior performance to a one-to-one transfer; we conjecture that this is because early layers have higher variance.
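A minimal sketch of the Mean Var transfer between two models of identical architecture (our paraphrase of the Raghu et al. (2019) scheme, not the authors' code; statistics are computed per parameter tensor here, which is one reasonable reading of "layer-wise"):

```python
import torch

@torch.no_grad()
def meanvar_init(seed_model, new_model):
    """Initialize new_model by sampling each parameter tensor from a
    Gaussian fitted to the matching tensor of seed_model, instead of
    copying the weights themselves."""
    for p_seed, p_new in zip(seed_model.parameters(), new_model.parameters()):
        p_new.normal_(mean=p_seed.mean().item(), std=p_seed.std().item())
```

Since only two scalars per tensor leave the seed model, this carries far less information about the training data than a direct weight copy would.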
Figure 4 shows the results of this experiment for some early training epochs. Each run that used standard He random initialization (He et al., 2015) gave near-identical results, achieving 37% accuracy at epoch 33. The Mean Var initialization runs showed substantially higher variance, with the best models having 7% better accuracy at epoch 33. These results are intriguing, and reminiscent of the lottery ticket hypothesis (Frankle & Carbin, 2019); they suggest a strategy of training a collection of Mean Var models and keeping those that show early promise.

5 TUNING OPTIMIZERS FOR PRIVATE LEARNING

Architectural choices presented in Section 3 control how sensitive learning is to training examples. This helps us to learn with privacy—because it eliminates the negative effects of clipping and noising large gradients. We now turn our attention to the training algorithm itself. We find that it is important to tailor algorithm and hyperparameter choices to the specificities of private learning: a batch size or learning rate that yields good results without privacy may not perform well with privacy.

5.1 ADAPTIVE OPTIMIZERS PROVIDE MARGINAL GAINS WHEN LEARNING WITH PRIVACY

We first explore the choice of optimizer, and in particular whether adaptive optimizers that leverage the history of iterates help convergence when learning privately. We compare learning curves for DP-SGD and the differentially private counterpart of Adam (Kingma & Ba, 2014), a canonical adaptive optimizer.

Figure 5: Learning curves for DP-SGD and DP-Adam on MNIST, FashionMNIST, and CIFAR10. Early on in training, DP-Adam converges faster to an accuracy that is within 1 point of its final accuracy; however, DP-SGD increases more steadily towards the end of training, thus both achieve comparable results. Given one of the datasets, the privacy budget ε for both models is identical at each epoch.

A qualitative analysis of Figure 5 leads to the same conclusion for all datasets (MNIST, FashionMNIST, and CIFAR10). While DP-Adam may converge faster initially, its convergence rate eventually slows down sufficiently for DP-SGD to achieve comparable (if not higher) accuracy.

To explain the ineffectiveness of adaptive optimizers, we hypothesize that the iterates they accumulate during training are affected negatively by the noise introduced to preserve privacy. Indeed, while there is enough signal from the training data included in any given batch sampled early in training, later in training most training examples have a loss of zero and do not contribute to the gradients being noised. Carrying this noise from one gradient descent step to the next to adapt learning rates therefore inadequately slows down training. To verify this, we track the estimate of the first moment in Adam on MNIST. The mean absolute value of its components converges when learning without privacy (from 0.5 after the first epoch to about 0.8 for epochs 45 through 60). Instead, it increases steadily throughout training with privacy (from 0.5 at the first epoch to above 1.0 after 60 epochs). Thus, choosing an adaptive optimizer (e.g., DP-Adam) is not necessary if one is interested in achieving maximal accuracy: given a fixed privacy budget, fine-tuning the learning rate is more important, as we confirm in Section 5.2. Note that this resonates well with recent results questioning the generalization capabilities of adaptive optimizers (Wilson et al., 2017).

5.2 CHOOSING A (LARGE) BATCH SIZE AND LEARNING RATE

Having observed that few training examples contribute signal after the first epochs, it is natural to ask whether increasing the size of batches could improve the noise-to-signal ratio in DP learning. To ensure a fair comparison, we fix the privacy budget ε and deduce the number of epochs we can train the model for given a desired batch size. For instance, in Table 5, we compare models trained for 7 epochs on batches of 1,024 examples to models trained for 40 epochs on batches of 256 examples. In both cases, the total privacy budget for training these models is ε = 2.7. We run a hyperparameter search to fine-tune the choice of learning rate for both DP-SGD and DP-Adam. We then compare the test accuracy achieved with small and large batch sizes.
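The fixed-ε comparison relies on standard DP-SGD accounting: for a given dataset size, batch size, and noise multiplier, the accountant maps a number of epochs to an ε at fixed δ. The sketch below assumes the compute_dp_sgd_privacy helper exported by TensorFlow Privacy; the exact import path, signature, and return values vary across versions of the library, so treat this as an assumption to verify:

```python
# Assumed API: recent versions of TensorFlow Privacy export a helper that
# maps (dataset size, batch size, noise multiplier, epochs, delta) to an
# epsilon bound via an RDP/moments accountant. Verify the name and
# signature against your installed version before relying on this.
from tensorflow_privacy import compute_dp_sgd_privacy

n, delta = 60_000, 1e-5            # FashionMNIST training-set size
for batch, epochs in [(256, 40), (1024, 7)]:
    # The noise multiplier below is a placeholder: in practice it is chosen
    # per setting so that both rows land at the same epsilon (2.7 in Table 5).
    eps, _ = compute_dp_sgd_privacy(n, batch, noise_multiplier=1.1,
                                    epochs=epochs, delta=delta)
    print(f"batch={batch}, epochs={epochs}: epsilon ~ {eps:.2f}")
```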
Optimizer | Batch | Epochs | Time | Learning Rate (non-private) | Test Acc. (non-private) | Learning Rate (DP) | Test Acc. (DP)
SGD | 256 | 40 | 240s | 1.07×10⁻¹ | 90.3% | 3.32×10⁻¹ | 86.1%
SGD | 1024 | 7 | 42s | 3.68×10⁻¹ | 86.3% | 4.46 | 85.1%
Adam | 256 | 40 | 240s | 1.06×10⁻³ | 90.5% | 1.32×10⁻³ | 86.0%
Adam | 1024 | 7 | 42s | 4.32×10⁻³ | 88.7% | 7.08×10⁻³ | 85.1%
Table 5: Impact of batch size on the trade-off between accuracy and privacy. The privacy budget is fixed to ε = 2.7 for all rows. A hyperparameter search is then conducted to find the best learning rate to train the model with or without differential privacy on FashionMNIST.

Hyperparameters should be tuned for DP-SGD, not SGD. We confirm that DP-Adam does not improve over DP-SGD. Yet, this experiment shows how training for a small number of epochs at a large batch size can do comparably well to a large number of epochs at a small batch size: the wall-clock time gain is important (about 5×) and the cost in performance is moderate—half a percentage point. This suggests that earlier theoretical analysis (Talwar et al., 2014) also holds in the non-convex setting. Furthermore, note how learning rates vary across the non-DP and DP settings.

6 CONCLUSIONS

Rather than first training a non-private model and later attempting to make it private, we bypass non-private training altogether and directly incorporate the specificities of privacy-preserving learning in the selection of architectures, initializations, and tuning strategies. This improves substantially upon the state-of-the-art privacy/accuracy trade-offs on three benchmarks, as summarized below. Up to now, we evaluated each component (e.g., change of activation function, optimizer, etc.) individually to demonstrate its influence on private learning. Instead, the summary table here compares each approach after all hyperparameters explored in the paper have been jointly fine-tuned. In particular, note how even in its own individually-best setting, tanh continues to consistently outperform ReLU, with for example 98.1% test accuracy (instead of 96.6% for ReLU) on MNIST.

Dataset | Technique | Acc. | ε | δ | Assumptions
MNIST | SGD w/ tanh (not private) | 99.0% | ∞ | 0 | -
MNIST | DP-SGD w/ ReLU | 96.6% | 2.93 | 10⁻⁵ | -
MNIST | DP-SGD w/ tanh (ours) | 98.1% | 2.93 | 10⁻⁵ | -
FashionMNIST | SGD w/ ReLU (not private) | 89.4% | ∞ | 0 | -
FashionMNIST | DP-SGD w/ ReLU | 81.9% | 2.7 | 10⁻⁵ | -
FashionMNIST | DP-SGD w/ tanh (ours) | 86.1% | 2.7 | 10⁻⁵ | -
CIFAR10 | Transfer + SGD (not private) | 75% | ∞ | 0 | -
CIFAR10 | Transfer + DP-SGD (Abadi et al.) | 67% | 2 | 10⁻⁵ | Public Data
CIFAR10 | Transfer + DP-SGD (ours) | 72% | 2.1 | 10⁻⁵ | Public Data
B1gcmtH15B
Official Blind Review #2
3: Weak Reject
Overall, this work empirically evaluates different techniques used in private learning and suggests useful methods to stabilize or improve performance.

Detailed comments:

Strengths: Despite the progress of privacy-preserving learning in theory, there are few works providing learning details for better training. In particular, considering the instability of perturbation-based private algorithms (e.g., most DP ones), the work could be valuable in a practical sense.

Weaknesses:
- As far as empirical research goes, the compared techniques are too few. What if we use less popular techniques, for example, the RMSprop optimization method?
- The model capacity of neural networks, especially deep networks, has a non-trivial relation to the number of filters or the number of parameters. It is important to quantify this relation. A good reference might be [A]. Briefly, the generalization performance may not be monotonic in the number of parameters.
- The baselines are not enough. Of course, Abadi et al.'s work is outstanding in handling private learning for deep networks, but it has been further developed by later researchers, for example [B] and [C]. Does the conclusion still hold for these algorithms?

[A] Neyshabur, B., Bhojanapalli, S., McAllester, D., & Srebro, N. (2017). Exploring Generalization in Deep Learning. In Advances in Neural Information Processing Systems 30 (pp. 5947–5956).
[B] Yu, L., Liu, L., Pu, C., Gursoy, M. E., & Truex, S. (2019). Differentially Private Model Publishing for Deep Learning. Proceedings of the 40th IEEE Symposium on Security and Privacy.
[C] Phan, N., Vu, M. N., Liu, Y., Jin, R., Dou, D., Wu, X., & Thai, M. T. (2019). Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness. Proceedings of the Twenty-Eighth International Joint Conference on Artificial
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Making the Shoe Fit: Architectures, Initializations, and Tuning for Learning with Privacy ### Paper Abstract Because learning sometimes involves sensitive data, standard machine-learning algorithms have been extended to offer strong privacy guarantees for training data. However, in practice, this has been mostly an afterthought, with privacy-preserving models obtained by re-running training with a different optimizer, but using the same model architecture that performed well in a non-privacy-preserving setting. This approach leads to less than ideal privacy/utility tradeoffs, as we show here. Instead, we propose that model architectures and initializations are chosen and hyperparameter tuning is performed, ab initio, explicitly for privacy-preserving training. Using this paradigm, we achieve new state-of-the-art accuracy on MNIST, FashionMNIST, and CIFAR10 without any modification of the fundamental learning procedures or differential-privacy analysis. ### Paper Keywords ["differential privacy", "deep learning"] ### Paper Content ABSTRACTBecause learning sometimes involves sensitive data, standard machine-learningalgorithms have been extended to offer strong privacy guarantees for trainingdata. However, in practice, this has been mostly an afterthought, with privacy-preserving models obtained by re-running training with a different optimizer, butusing the same model architecture that performed well in a non-privacy-preservingsetting. This approach leads to less than ideal privacy/utility tradeoffs, as weshow here. Instead, we propose that model architectures and initializations arechosen and hyperparameter tuning is performed, ab initio , explicitly for privacy-preserving training. Using this paradigm, we achieve new state-of-the-art accu-racy on MNIST, FashionMNIST, and CIFAR10 without any modification of thefundamental learning procedures or differential-privacy analysis.1 I NTRODUCTIONMachine learning (ML) can be usefully applied to the analysis of sensitive data, e.g., in the domain ofhealthcare (Kononenko, 2001). However, ML models may unintentionally reveal sensitive aspectsof their training data, e.g., due to overfitting (Shokri et al., 2017; Song & Shmatikov, 2019). Tocounter this, ML techniques that offer strong privacy guarantees have been developed. Notably,the differentially private stochastic gradient descent, or DP-SGD, of Abadi et al. (2016) is an easy-to-use, generally-applicable modification of stochastic gradient descent. In addition to its rigorousprivacy guarantees, it has been empirically shown to stop the leaking of secrets (Carlini et al., 2019).To strictly bound the impact of any training example, DP-SGD makes two changes to every gradientstep: first, each example’s gradient contribution is limited to a fixed bound (in practice, by clippingall per-example gradients to a maximum `2norm); second, random (Gaussian) noise of the scaleof the clipping norm is added to each batch’s combined gradient, before it is backpropagated toupdate model parameters. 
Together, these changes create a new, artificial noise floor at each step ofgradient descent, such that the unique signal of any individual example is below this new noise floor;this allows differential privacy to be guaranteed for all training examples (Dwork & Roth, 2014).Training using DP-SGD is eminently practical and in addition to privacy offers advantages such asstrong generalization and the promise of reusable holdouts (Google, 2019; Dwork et al., 2015). Un-fortunately, its advantages have not been without cost: empirically, the test accuracy of differentiallyprivate ML is consistently lower than that of non-private learning (e.g., see Papernot et al. (2018)).Such accuracy loss may sometimes be inevitable: for example, the task may involve heavy-tailed dis-tributions and adding noise will definitely hinder visibility of examples in the tails (Feldman, 2019;Bagdasaryan & Shmatikov, 2019). However, this does not explain the accuracy loss of differentiallyprivate learning on standard benchmark tasks that are known to be relatively simple: MNIST (Yannet al., 1998), FashionMNIST (Xiao et al., 2017), CIFAR10 (Krizhevsky et al., 2009), etc.This paper presents several new results for privacy-preserving learning that improve the state-of-the-art in terms of both privacy and accuracy. Significantly, these new results stem from a single,simple observation: differentially-private learning with DP-SGD is different enough that all aspectsof learning—model architecture, parameter initialization, and optimization strategy, as well as hy-perparameter tuning—must be reconsidered. To achieve the best privacy/accuracy tradeoffs, wemust tune our learning strategies to the specifics of privacy-preserving learning; i.e., we must “learnto learn” with privacy. Conversely, we concretely demonstrate how the architecture, initialization,1Under review as a conference paper at ICLR 2020and optimization strategy that gives the best accuracy for non-private learning can be a poor fit forlearning with privacy. Instead, by revisiting our choices, we can reduce the information loss inducedby clipping, limit the impact of added noise, and improve the utility of each gradient step whenlearning with privacy. Our contributions facilitate DP-SGD learning as follows:We show how simple architecture changes, such as the use of tanh instead of ReLU acti-vations, can improve a model’s private-learning suitability and achievable privacy/accuracytradeoffs, by eliminating the negative effects of clipping and noising large gradients.We explain how high-capacity models can be disadvantageous, as well as the advantagesof models with a final, fully-connected layer that can be independently fine tuned, and howboth help address the curse of dimensionality and high-dimensional noise.We demonstrate the importance of finding good initializations, and show how this can bedone with privacy using either transfer learning or weight scaling (Raghu et al., 2019).We show that better tradeoffs and increased wall-clock learning speeds can be achieved bytuning hyperparameters and choosing optimizers directly for DP-SGD learning.By applying the above, we advance the state of the art for MNIST, FashionMNIST, and CIFAR10,significantly improving upon the privacy/accuracy tradoffs from prior work. On MNIST, we achieve98.1% test accuracy for a privacy guarantee of (";) = (2:93;105), whereas the previous state-of-the-art reported in the TensorFlow Privacy library (Google, 2019) was 96.6%. 
On CIFAR10, weachieve 72% test accuracy at (";) = (2:1;105)in a setup for which to the best of our knowledgethe previous state-of-the-art was achieved by Abadi et al. (2016) at 67% accuracy.2 T RAINING -DATA MEMORIZATION , DIFFERENTIAL PRIVACY ,AND DP-SGDMachine-learning models will easily memorize whatever sensitive, personal, or private data that wasused in their training, and models may in practice disclose this data—as demonstrated by the attacksof Shokri et al. (2017), Song & Shmatikov (2019), and Carlini et al. (2019).For reasoning about the privacy guarantees of algorithms such as training by stochastic gradientdescent, differential privacy has become the established gold standard (Dwork & Roth, 2014). In-formally, an algorithm can be differentially private if it will always produce effectively the sameoutput (in a mathematically precise sense), when applied to two input datasets that differ by onlyone record. Formally, a learning algorithm Athat trains models from the set Sis(";)-differentially-private, if the following holds for all training datasets dandd0that differ by exactly one record:Pr[A(d)2S]e"Pr[A(d0)2S] + (1)Here,"gives the formal privacy guarantee, by placing a strong upper bound on any privacy loss,even in the worst possible case. A lower "indicates a stronger privacy guarantee or a tighter upperbound (Erlingsson et al., 2019). The factor allows for some probability that the property may nothold (in practice, this is required to be very small, e.g., in inverse proportion to the dataset size).A very attractive property of differential-privacy guarantees is that they hold true for all attackers—whatever they are probing and whatever their prior knowledge—and that they remain true undervarious forms of composition. In particular, the output of a differentially-private algorithm can bearbitrarily post processed, without any weakening of the guarantees. Also, if sensitive training datacontains multiple examples from the same person (or, more generally, the same sensitive group),"-differentially-private training on this data will result in model with a k"-differential-privacy guar-antee for each person, as long as at most ktraining-data records are present per person.Abadi et al. (2016) introduced DP-SGD as a method for training deep neural networks withdifferential-privacy guarantees that was able to achieve better privacy and utility than previous ef-forts (Chaudhuri et al., 2011; Song et al., 2013; Bassily et al., 2014). DP-SGD bounds the sensitiv-ity of the learning process to each individual training example by computing per-example gradientsfgigi20::n1with respect to the loss, for the nmodel parameters figi20::n1, and clipping eachper-example gradient to a maximum fixed `2normC. Subsequently, to the average of these per-example gradients, DP-SGD adds (Gaussian) noise that whose standard deviation is proportionalto this sensitivity. In this work, we use the canonical implementation of DP-SGD and its associatedanalysis that has been made available through the TensorFlow Privacy library (Google, 2019).2Under review as a conference paper at ICLR 20200 50 100 150 200Number of Filters k0.920.940.960.98Test AccuracyDPSGD (MNIST)SGD (MNIST)0 50 100 150 200Number of Filters k0.760.780.800.820.840.860.880.90Test AccuracyDPSGD (FashionMNIST)SGD (FashionMNIST)Figure 1: Test accuracy as a function of the number of filters kin the convolutional architecture ofTable 1; when training with vanilla SGD and DPSGD. 
Each point corresponds to multiple trainingruns on MNIST (left) or FashionMNIST (right). For both datasets, adding filters always improvesnon-private learning, whereas after an early point they are not beneficial to learning with privacy.3 M ODEL ARCHITECTURES BETTER SUITED TO LEARNING WITH PRIVACYWe show here that learning with differential privacy imposes additional constraints that need to betaken into account when designing neural network architectures. They help us control the sensi-tivity of learning to training examples before the clipping operation is performed in DP-SGD, thusreducing the potential negative impact of clipping on the estimated gradient direction.3.1 M ODEL CAPACITYThe success of neural networks is in part explained by their ability to scale to complex tasks throughan increase in model capacity. ResNets are an illustrative recent examples (He et al., 2016). Here, weexplain how additional capacity may notbe beneficial when learning with privacy. One of the majorchallenges in training models with differential privacy is the curse of dimensionality (Bassily et al.,2014). The accuracy of privately trained models typically degrades with the increase in the numberof dimensions. Unfortunately, strong lower bounds suggest that this dependence on dimensionalityisnecessary (Bassily et al., 2014).Table 1: MNIST and FashionMNIST modelarchitecture (33,000 parameters for k= 31) .Layer ParametersConvolution kfilters of 8x8, strides 2Max-Pooling 2x2Convolution kfilters of 4x4, strides 2Max-Pooling 2x2Fully connected 32 unitsSoftmax 10 unitsConsider the convolutional architecture describedto the right. With all other architectural details be-ing fixed, we can control the model’s capacity byvarying the number of filters kin its two convolu-tional layers. While the relationship between gen-eralization performance and the number of param-eters is not always monotonic (Neyshabur et al.,2017), we leave as future work a study of how dif-ferent measures of capacity can inform the designof model architectures for private learning. We re-port the model’s accuracy when trained with SGDand DP-SGD in Figure 1, both on MNIST (left) and FashionMNIST (right). The test accuracy ofmodels trained without privacy monotonically increases with the number of filters in their convo-lutional layers. Instead, we observe an inflection point at about 15 filters for which models trainedwith privacy achieve their highest test accuracy. Afterwards, the model’s generalization suffers asmore filters are added.There are two competing explanations of this behavior, both compatible with the lower bound statedin Bassily et al. (2014). First, recall that DP-SGD performs a clipping operation on each per-examplegradient before the average gradients is used to update model parameters; i.e., each gradient issubject to the following transformationgi gimin0@1;CqPn1i=0g2i1A (2)wheregiis the gradient corresponding to model parameter i. For a fixed clipping norm C(corre-sponding to a certain, fixed privacy guarantee), the quantityCpPn1i=0g2iby which individual param-eters are multiplied decreases as the number nof parameters in a model increases. That is, the more3Under review as a conference paper at ICLR 20201.0 1.5 2.0 2.5 3.0DP- (lower values are better)707580859095Test AccuracyReLUtanhMNIST1.0 1.5 2.0 2.5 3.0DP- (lower values are better)606570758085Test AccuracyReLUtanh FashionMNISTFigure 2: Test accuracy as a function of the privacy loss when training a pair of models with DP-SGD. 
The only difference between the two models is the activation function for their hidden layer:ReLU or tanh. All other elements of the architecture (number, type, and dimension of layers) andthe training algorithm (optimizer, learning rate, number of microbatches, clipping norm, and noisemultiplier) are identical. Results are averaged over 10 runs for each curve.parameters we have, the more likely DP-SGD is to clip the gradient (or signal) at each parameter.This can explain the presence of an inflection point in Figure 1, after which learning with privacybecomes increasingly difficult as capacity is increased. Second, as the number of parameters (i.e.,gi’s) increases, the norm of the noise vector that DP-SGD must add to the gradient average to ensureprivacy also increases. This noise norm increases asp#parameters, and introduces another sourceof accuracy degradation with an increased number of parameters.Our observations may seem to contradict some of the findings in Abadi et al. (2016). However, theirlimited experimental setup could offer few general lessons. First, they reduced data dimensionalityusing PCA to have inputs of only 60dimensions; second, they explored only a model architecturesusing a single layer perceptron with between 200and2;000units. Instead, our experiments involvea realistic setting where the full input is passed to a convolutional neural network with a total of 3hidden layers and over 26,000 parameters.3.2 A CTIVATION FUNCTIONSWhen training a model with differential privacy, gradients computed during SGD are clipped (recallEquation 2) to control the sensitivity of learning to training examples. If these gradients take largevalues, some of the signal will be discarded as gradients are being clipped. One way to reduce themagnitude (or at least control it), is to prevent the model’s activations from exploding. However,a common choice of activation function in modern deep neural networks is the ReLU and, unlikeother activations functions, ReLUs are unbounded.Here, we thus test the hypothesis that replacing ReLUs with a bounded activation function preventsactivations from exploding and thus keeps the magnitude of gradients to a more reasonable value.This in turn implies that the clipping operation applied by DP-SGD will discard less signal fromgradient updates—eventually resulting in higher performance at test time.On MNIST and FashionMNIST, we train two models based off the architecture of Table 1: the firstmodel uses ReLU whereas the second model uses tanh1as the activation for its hidden layers, withother architectural elements kept identical. In our experiments, we later fine-tuned those architec-tural aspects (i.e., model capacity, choice of optimizer, etc.) separately for each activation function,to avoid favoring any one choice. In all cases, tanh was an improvement, as summarized in ourconclusions (Section 6).Figure 2 visualizes the privacy-utility Pareto curve (Avent et al., 2019) of the two models trainedwith DP-SGD. Rather than plotting the test accuracy as a function of the number of steps, we plotit as a function of the privacy loss "(but the privacy loss is a monotonically increasing functionof the number of steps). On MNIST, the test accuracy of the tanh model is 98:0%compared to96:6%for the ReLU model with an identical privacy loss of "= 2:93. For comparison, baselinetanh and ReLU models trained without privacy both achieve a test accuracy of 99:0%. 
Similarly,on FashionMNIST, the tanh model trained with DP-SGD achieves 85:5%test accuracy compared to81:9%with ReLUs. The baselines on FashionMNIST are 89:3%for tanh and 89:4%with ReLUs.1We obtained results similar to the tanh with a sigmoid and a learning rate increased by a factor of 2 to 8.This is explained by the fact that the tanh is a rescaled sigmoid :tanh(x) = 2(x)1.4Under review as a conference paper at ICLR 2020To explain why a simple change of activation functions has a large impact on the model’s accu-racy, we conjecture that the bounded nature of the tanh prevents activations from exploding duringNumber of training stepsL2 norm of the first activation vector0250500750100012502500500075001000012500ReLU (SGD)ReLU (DP-SGD)tanh (DP-SGD)Figure 3:`2norm of the first conv activations.training. We thus monitored the `2norm of thefirst layer’s activations for our MNIST modelwhile it is being trained in three scenarios: (a)without privacy using vanilla SGD and ReLUactivations, (b) with ReLU activations and DP-SGD, and (c) with tanh activations and DP-SGD. The evolution of activation norms on testdata is visualized in Figure 3. As conjectured,the activations of our ReLU network explodeby a factor of 3when training with privacywhen compared to without privacy. Switchingto tanh activations brings down the norms ofactivations back to levels comparable with theactivations of our non-private ReLU network.4 I NITIALIZATIONS FOR LEARNING WITH DIFFERENTIAL PRIVACYBecause each gradient step expends some privacy budget, good initialization of learning is impor-tant; here, we consider transfer learning (Pratt et al., 1991) and weight scaling (Raghu et al., 2019).4.1 I NITIALIZING FROM A PRE-TRAINED MODEL USING TRANSFER LEARNINGTransfer learning can improve the initialization used when learning with privacy, and allow betterprivacy/accuracy tradoffs to be achieved.2For example, to reach reasonable accuracy ( >80%)on CIFAR10, a convolutional neural network may necessarily include many convolutional layerscomprising several hundred-thousand parameters. However, since convolutional layers for similarimage-processing tasks are known to learn similar representations—at least in early layers—it maybe possible to transfer most of these parameters from a public model, either as initializations or asfrozen parameters, and subsequently train with DP-SGD. For CIFAR10, the natural choice for suchtransfer is a CIFAR100 model, and this has been previously explored by Abadi et al. (2016).Table 2: CIFAR10 convolutional model architec-ture (in total, 2,395,434 parameters).Conv2 32 filters of 3x3, strides 1Max-Pooling 2x2Conv2 64 filters of 3x3, strides 1Max-Pooling 2x2Conv2 128 filters of 3x3, strides 1Fully connected 1024 unitsSoftmax 10 unitsTaking the Abadi et al. 
(2016) transfer learn-ing results for CIFAR10 as a baseline, we per-form new experiments using much of the samesetup and the model architecture of Table 2.As it is relatively simple, this model is a goodcandidate for differentially-private learning (al-though it reaches only 84:2%accuracy on CI-FAR10 when all its parameters are trained non-privately, whereas state-of-the-art models canhave over 10% higher accuracy).We performed new transfer-learning experiments based on training this model on CIFAR100 data inthree different ways: trained on a total of 5000 examples from 10 classes picked at random ( Min-rand-10 ); trained on 25,000 examples from a random half of the CIFAR100 classes, grouped into10 new, evenly-sized meta classes ( Half-rand-50 ); trained on all examples and all 100 separateclasses ( Max-100 ). From each of these trained models, transfer learning was used to initialize amodel to be trained on CIFAR10. In the subsequent CIFAR10 training, all but the last layer wasfrozen, which simplifies the learning task to that of logistic regression (but also reduces utility, withthe best non-private accuracy reduced to 75% on CIFAR10).Table 3 shows CIFAR10 privacy and accuracy resulting from fine-tuning of different transfer-learning models with DP-SGD. As shown in Table 4, the results improve on those of Abadiet al. (2016), even though they performed non-linear fine-tuning of two neural-network lay-ers, and their underlying model was able to achieve higher non-private accuracy (86%).2A different, formal take on how public models and data can facilitate learning with privacy is studied in(Bassily et al., 2018; Feldman et al., 2018).5Under review as a conference paper at ICLR 2020Type Epoch 10 Epoch 50 Epoch 100 Epoch 200 Epoch 400Min-rand-10 44.8%4.6 49.6%3.9 51.0%3.9 52.8%3.3 53.7%3.5(81.0%4.0) 50% = best 54.1% = best 55.7% = best 56.9% = best 57.6% = bestHalf-rand-50 39.4%2.9 51.4%0.8 54.7%1.5 56.8%1.3 59.0%0.9(62.1%1.4) 44.3% = best 52.6% = best 56.6% = best 58.3% = best 60.2% = bestMax-100 57.0%1.0 66.2%0.6 68.4%0.6 69.7%0.6 71.0%0.5(54.9%0.7) 59.1% = best 67.2% = best 69.5% = best 70.6% = best 72.1% = bestTable 3: Accuracy of learning with privacy (average/best of 10 runs) compared to a non-privatebaseline of 75%. A CIFAR10 model is trained from a CIFAR100-transfer-learning initialization,with all-but-the-last layer frozen during training. The DP-SGD "upper bounds at = 105are"10= 0:32,"50= 0:73,"100= 1:04,"200= 1:48,"400= 2:12for the subscript-indicated epochs.The source model CIFAR100 accuracy (first column), is uncorellated to the CIFAR10 accuracy.Table 4: CIFAR10 privacyand accuracy tradeoffs.This paper Abadi et al.(";acc:) (";acc:)(0:3;59%) –(1:0;70%) –(2:1;72%) (2:0;67%)– (4:0;70%)– (8:0;73%)In addition, the results show the benefits of model architectureswhose final layer can be fine-tuned using logistic regression training,or other forms of convex optimization. Such training can be madepossible by including a final fully-connected layer into a network;in additional experiments (not detailed here), the inclusion of sucha layer did not harm the training of the original, source model fromwhich transfer learning was done. 
Furthermore, the number of pa-rameters in this layer did not seem to matter much: privacy/accuracytradeoffs remained the same, even when the layer was grown by anorder of magnitude, which is consistent with what is known aboutdifferentially-private convex optimization (Jain & Thakurta, 2014).4.2 I NITIALIZATION BY WEIGHT SCALING0 5 10 15 20 25 30 35Epoch0.00.10.20.30.40.5Test AccuracyDP seed 1DP seed 2DP seed 3Mean Var 1Mean Var 2Mean Var 3Figure 4: Colored lines show DP-SGDaccuracy for three “seed” random initiali-zations of a CIFAR-10 model. Coloredbands show accuracy range of 30 DP-SGDmodels using Mean Var initialization basedon per-layer parameter statistics in the cor-responding seed model. In all models, theprivacy"at each epoch is identical; how-ever, Mean Var initialization substantiallyimproves the privacy/accuracy tradeoff.Initialization by transfer learning is only applicablewhen a suitable public model exists whose weightscan facilitate learning with privacy on sensitive data.But, such model may not exist, and DP-SGD learningmay possibly benefit from other, non-standard meansof initialization. We consider the Mean Var weight-scaling approach of Raghu et al. (2019) and initializeDP-SGD learning with Gaussian random parameterdistributions whose layer-wise mean and variance areextracted from a seed model trained on the same sensi-tive input data. The weight-scaling approach does notdirectly transfer the parameters of an existing model;instead, just the layer-wise mean and variance are ex-tracted, and those statistics are used to configure theGaussian random distributions from which a secondmodel with the same architecture is initialized.In the context of learning with privacy, Mean Varweight scaling can improve model initialization bytransfer from one differentially-private model to an-other. First, DP-SGD can be applied to train a modelwith standard random initialization. From this model,per-layer mean/variance statistics can be extracted toinitialize a new model of the same architecture, sub-sequently trained with strong privacy guarantees. (This extraction can be done privately, althoughthe privacy risk of summary statistics that drive random initialization should be vanishing. Follow-ing Bassily et al. (2018); Papernot et al. (2018), one can use the formal framework of sub-sampleand aggregate in conjunction with Propose-Test-Release (PTR) for this selection. The algorithm firstsplits the training data into disjoint subsets, and trains models independently on each of the splits.Using these trained models, the parameter is chosen via consensus voting with differential privacy.Notice that if the training data set is large, and there is a strong consensus, then the cost towards6Under review as a conference paper at ICLR 20200 10 20 30 40Epochs80859095Test AccuracysgdadamMNIST0 10 20 30 40Epochs72747678808284Test AccuracysgdadamFashionMNIST0 100 200 300 400Epochs3040506070Test AccuracysgdadamCIFAR10Figure 5: Learning curves for DP-SGD and DP-Adam. Early on in training, DP-Adam convergesfaster to an accuracy that is within 1 point of its final accuracy, however DP-SGD increases moresteadily towards the end of training, thus both achieve comparable results. Given one of the datasets,the privacy budget "for both models is identical at each epoch.privacy is very low.) 
The idea is that the mean and variance pairs can be obtained quickly at a mod-est privacy budget, but the faster convergence of the Mean Var initialized model both reduces theoverall privacy budget needed for training, and mitigates the increased wall-clock time of DP-SGD.We experiment with a relatively deep CIFAR10 convolutional model (see Appendix A), since Raghuet al. found the benefits of Mean Var initialization most pronounced for large models. We firsttrained a model using random initialization, and then did weight scaling by transferring that model’sstatistics to a new model. In this proof-of-concept, both models were trained with the same noisevariance (= 0:5), but one could reserve a larger portion of the privacy budget for the new model.We should note that we did not directly transfer the weight statistics between corresponding layers inthe original and new models. Rather, we used the weight statistics of each of original model’s earlylayers of the original model for two of the layers in the new model. This gives superior performanceto a one-to-one transfer; we conjecture that this is because early layers have higher variance.Figure 4 shows the results of this experiment for some early training epochs. Each run that used stan-dard He random initialization (He et al., 2015) gave near identical results, achieving 37% accuracyat epoch 33. The Mean Var initialization runs showed substantially higher variance, with the bestmodels having 7% better accuracy at epoch 33. These results are intriguing, and reminiscent of thelottery ticket hypothesis (Frankle & Carbin, 2019); they suggest a strategy of training a collection ofMean Var models and keeping those that show early promise.5 T UNING OPTIMIZERS FOR PRIVATE LEARNINGArchitectural choices presented in Section 3 control how sensitive learning is to training examples.This helps us to learn with privacy—because it eliminates the negative effects of clipping and noisinglarge gradients. We now turn our attention to the training algorithm itself. We find that it is importantto tailor algorithm and hyperparameter choices to the specificities of private learning: a batch sizeor learning rate that yields good results without privacy may not perform well with privacy.5.1 A DAPTIVE OPTIMIZERS PROVIDE MARGINAL GAINS WHEN LEARNING WITH PRIVACYWe first explore the choice of optimizer, and in particular whether adaptive optimizers that leveragethe history of iterates help convergence when learning privately. We compare learning curves for DP-SGD and the differentially private counterpart of Adam (Kingma & Ba, 2014), a canonical adaptiveoptimizer. A qualitative analysis of Figure 5 leads to the same conclusion for all datasets (MNIST,FashionMNIST, and CIFAR10). While DP-Adam may converge faster initially, its convergence rateeventually slows down sufficiently for DP-SGD to achieve comparable (if not higher) accuracy.To explain the ineffectiveness of adaptive optimizers, we hypothesize that the iterates they accumu-late during training are affected negatively by noise introduced to preserve privacy. Indeed, whilethere is enough signal from the training data included in any given batch sampled early in training,later in training most training examples have a loss of zero and do not contribute to the gradientsbeing noised. Carrying this noise from one gradient descent step to the next to adapt learning ratestherefore inadequately slows down training. To verify this, we track the estimate of the first momentin Adam on MNIST. 
The mean absolute value of its components converges when learning withoutprivacy (from 0.5 after the first epoch to about 0.8 for epochs 45 through 60). Instead, it increasessteadily throughout training with privacy (from 0.5 at the first epoch to above 1. after 60 epochs).Thus, choosing an adaptive optimizer (e.g., DP-Adam) is not necessary if one is interested in achiev-ing maximal accuracy: given a fixed privacy budget, fine-tuning the learning rate is more important7Under review as a conference paper at ICLR 2020as we confirm in Section 5.2. Note that this resonates well with recent results questioning the gen-eralization capabilities of adaptive optimizers (Wilson et al., 2017).5.2 C HOOSING A (LARGE ) BATCH SIZE AND LEARNING RATEHaving observed that few training examples contribute signal after the first epochs, it is natural toask whether increasing the size of batches could improve the noise-to-signal ratio in DP learning.To ensure a fair comparison, we fix the privacy budget "and deduce the number of epochs wecan train the model for given a desired batch size. For instance, in Table 5, we compare modelstrained for 7epochs on batches of 1;024examples to models trained for 40epochs on batches of256examples. In both cases, the total privacy budget for training these models is "= 2:7. We run ahyperparameter search to fine-tune the choice of learning rate for both DP-SGD and DP-Adam. Wethen compare the test accuracy achieved with small and large batch sizes.Non-private Differentially-privateOptimizer Batch Epochs Time Learning Rate Test Acc. Learning Rate Test Acc.SGD256 40 240s 1:0710190:3% 3:3210186:1%1024 7 42s 3:6810186:3% 4:46 85 :1%Adam256 40 240s 1:0610390:5% 1:3210386:0%1024 7 42s 4:3210388:7% 7:0810385:1%Table 5: Impact of batch size on trade-off between accuracy and privacy. The privacy budget is fixedto"= 2:7for all rows. A hyperparameter search is then conducted to find the best learning rate totrain the model with or without differential privacy on FashionMNIST.Hyperparameters should be tuned for DP-SGD, not SGD. We confirm that DP-Adam doesnot improve over DP-SGD. Yet, this experiment shows how training for a small number of epochsat a large batch size can do comparably well to a large number of epochs at a small batch size:the wall-clock time gain is important (about 5) and the cost in performance is moderate—half apercentage point. This suggests that earlier theoretical analysis (Talwar et al., 2014) also holds in thenon-convex setting. Furthermore, note how learning rates vary across the non-DP and DP settings.6 C ONCLUSIONSRather than first train a non-private model and later attempt to make it private, we bypass non-private training altogether and directly incorporate specificities of privacy-preserving learning in theselection of architectures, initializations, and tuning strategies. This improves substantially uponthe state-of-the-art privacy/accuracy trade-offs on three benchmarks, as summarized below. Up tonow, we evaluated each component (e.g., change of activation function, optimizer, etc.) individuallyto demonstrate its influence on private learning. Instead, here this summary table compares eachapproach after all hyperparameters explored in the paper have been jointly fined tuned. In particu-lar, note how even in their own individually-best setting, tanh continues to consistently outperformReLU with for example 98.1% test accuracy (instead of 96.6% for ReLU) on MNIST.Dataset Technique Acc. 
6 CONCLUSIONS

Rather than first train a non-private model and later attempt to make it private, we bypass non-private training altogether and directly incorporate the specificities of privacy-preserving learning into the selection of architectures, initializations, and tuning strategies. This improves substantially upon the state-of-the-art privacy/accuracy trade-offs on three benchmarks, as summarized below. Up to now, we evaluated each component (e.g., change of activation function, optimizer, etc.) individually to demonstrate its influence on private learning. Instead, this summary table compares each approach after all hyperparameters explored in the paper have been jointly fine-tuned. In particular, note how even in its own individually-best setting, tanh continues to consistently outperform ReLU, with for example 98.1% test accuracy (instead of 96.6% for ReLU) on MNIST.

Dataset        Technique                          Acc.    ε       δ       Assumptions
MNIST          SGD w/ tanh (not private)          99.0%   ∞       0       -
               DP-SGD w/ ReLU                     96.6%   2.93    10⁻⁵    -
               DP-SGD w/ tanh (ours)              98.1%   2.93    10⁻⁵    -
FashionMNIST   SGD w/ ReLU (not private)          89.4%   ∞       0       -
               DP-SGD w/ ReLU                     81.9%   2.7     10⁻⁵    -
               DP-SGD w/ tanh (ours)              86.1%   2.7     10⁻⁵    -
CIFAR10        Transfer + SGD (not private)       75%     ∞       0       -
               Transfer + DP-SGD (Abadi et al.)   67%     2       10⁻⁵    Public Data
               Transfer + DP-SGD (ours)           72%     2.1     10⁻⁵    Public Data

<|im_end|> <|im_start|>assistant ### Review Title Official Blind Review #2 ### Review Text Overall, this work empirically evaluates different techniques used in private learning and suggests useful methods to stabilize or improve performance.

Detailed comments:

Strengths: Despite the progress of privacy-preserving learning in theory, there are few works providing learning details for better training. Especially considering the instability in perturbation-based private algorithms, e.g., most DP ones, the work could be valuable in the sense of practice.

Weaknesses: As far as empirical research goes, the compared techniques are too few. What if we use those less popular techniques, for example, the RMSprop optimization method? The model capacity of neural networks, especially deep networks, has some non-trivial relation to the number of filters or the number of parameters. It is important to quantify such a relation. A good reference might be [A]. Briefly, the generalization performance may not be monotonic against the number of parameters. The baselines are not enough. Of course, Abadi et al.'s work is outstanding in handling the private learning of deep networks. It has been further developed by subsequent researchers, for example, [B] and [C]. Does the conclusion still hold for these algorithms?

[A] Neyshabur, B., Bhojanapalli, S., Mcallester, D., & Srebro, N. (2017). Exploring Generalization in Deep Learning. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 30 (pp. 5947–5956).
[B] Yu, L., Liu, L., Pu, C., Gursoy, M. E., & Truex, S. (2019). Differentially Private Model Publishing for Deep Learning. Proceedings of the 40th IEEE Symposium on Security and Privacy.
[C] Phan, N., Vu, M. N., Liu, Y., Jin, R., Dou, D., Wu, X., & Thai, M. T. (2019). Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness. Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence. ### Review Rating 3: Weak Reject ### Review Confidence
B1gF56VYPH
ICLR.cc/2020/Conference
2020
Deep 3D Pan via local adaptive "t-shaped" convolutions with global and local adaptive dilations
["Juan Luis Gonzalez Bello", "Munchurl Kim"]
Recent advances in deep learning have shown promising results in many low-level vision tasks. However, solving the single-image-based view synthesis is still an open problem. In particular, the generation of new images at parallel camera views given a single input image is of great interest, as it enables 3D visualization of the 2D input scenery. We propose a novel network architecture to perform stereoscopic view synthesis at arbitrary camera positions along the X-axis, or “Deep 3D Pan”, with “t-shaped” adaptive kernels equipped with globally and locally adaptive dilations. Our proposed network architecture, the monster-net, is devised with a novel t-shaped adaptive kernel with globally and locally adaptive dilation, which can efficiently incorporate global camera shift into and handle local 3D geometries of the target image’s pixels for the synthesis of naturally looking 3D panned views when a 2-D input image is given. Extensive experiments were performed on the KITTI, CityScapes, and our VICLAB_STEREO indoors dataset to prove the efficacy of our method. Our monster-net significantly outperforms the state-of-the-art method (SOTA) by a large margin in all metrics of RMSE, PSNR, and SSIM. Our proposed monster-net is capable of reconstructing more reliable image structures in synthesized images with coherent geometry. Moreover, the disparity information that can be extracted from the “t-shaped” kernel is much more reliable than that of the SOTA for the unsupervised monocular depth estimation task, confirming the effectiveness of our method.
["Deep learning", "Stereoscopic view synthesis", "Monocular depth", "Deep 3D Pan"]
ABSTRACT

Recent advances in deep learning have shown promising results in many low-level vision tasks. However, solving the single-image-based view synthesis is still an open problem. In particular, the generation of new images at parallel camera views given a single input image is of great interest, as it enables 3D visualization of the 2D input scenery. We propose a novel network architecture to perform stereoscopic view synthesis at arbitrary camera positions along the X-axis, or "Deep 3D Pan", with "t-shaped" adaptive kernels equipped with globally and locally adaptive dilations. Our proposed network architecture, the monster-net, is devised with a novel t-shaped adaptive kernel with globally and locally adaptive dilation, which can efficiently incorporate global camera shift into, and handle local 3D geometries of, the target image's pixels for the synthesis of naturally looking 3D panned views when a 2D input image is given. Extensive experiments were performed on the KITTI, CityScapes, and our VICLAB_STEREO indoors dataset to prove the efficacy of our method. Our monster-net significantly outperforms the state-of-the-art method (SOTA) by a large margin in all metrics of RMSE, PSNR, and SSIM. Our proposed monster-net is capable of reconstructing more reliable image structures in synthesized images with coherent geometry. Moreover, the disparity information that can be extracted from the "t-shaped" kernel is much more reliable than that of the SOTA for the unsupervised monocular depth estimation task, confirming the effectiveness of our method.

1 INTRODUCTION

Recent advances in deep learning have pushed forward the state-of-the-art performance for novel view synthesis problems. Novel view synthesis is the task of generating a new view seen from a different camera position, given a single or multiple input images, and finds many applications in robotics, navigation, virtual and augmented reality (VR/AR), cinematography, etc. In particular, the challenging task of generating stereo images given a single input view is of great interest as it enables 3D visualization of the 2D input scene. In addition, the falling price and the increasing availability of the equipment required for VR/AR have fueled the demand for stereoscopic content.

The previous works, such as Deep3D (Xie et al., 2016), have addressed the right-view generation problem in a fully supervised fashion, where the input is the left-view and the output is the synthetic right-view at a fixed camera shift. In contrast, our proposed Deep 3D Pan pipeline enables the generation of new views at arbitrary camera positions along the horizontal X-axis of an input image with far better quality by incorporating adaptive "t-shaped" convolutions with globally and locally adaptive dilations. Our proposed "t-shaped" kernel with adaptive dilations takes into account the camera shift amount and the local 3D geometries of the target pixels. Panning at arbitrary camera positions allows our proposed model to adjust the baseline (distance between cameras) for different levels of 3D sensation. Additionally, arbitrary panning unlocks the possibility to adjust for different inter-pupillary distances of various persons. Figure 1 shows some generated left and right view images for a given single image input by our proposed Deep 3D Pan pipeline, which we call "monster-net" (monocular to stereo network; https://www.VICLAB.kaist.ac.kr).

Figure 1: Generated left and right images by our proposed Deep 3D Pan for an input center image.
In this paper, we define "panning" in the context of 3D modeling, implying that the camera movement is parallel to the center view camera plane.

In the following sections, we review the works related to stereoscopic view synthesis and discuss their differences with our proposed method, followed by the formulation of our Deep 3D Pan pipeline. Finally, we present outstanding results on various challenging stereo datasets, showing superior performance against the previous state-of-the-art methods.

2 RELATED WORK

Novel view synthesis is a well-studied problem in deep learning-based computer vision, and has already surpassed the classical techniques for both cases of the multiple-image (Woodford et al., 2007; Liu et al., 2009; Chaurasia et al., 2013) and single-image input (Horry et al., 1997; Hoiem et al., 2005). The latter, single-image based novel view synthesis, is known to be a much more complex problem compared to multiple-image based ones. Previous deep learning-based approaches usually tend to utilize one of two techniques to generate a novel view: (i) optical flow guided image warping, and (ii) a "flavor" of kernel estimation, also known as adaptive convolutions.

The first technique, optical flow guided image warping, has been adopted by several authors to train convolutional neural networks (CNNs) for optical flow or disparity estimation from single or stereo images in an unsupervised fashion. However, their final goal was not to synthesize novel views. These works include those of (Godard et al., 2017; Zhou et al., 2016; Gonzalez & Kim, 2019b; Tosi et al., 2019; Liu et al., 2019; Wang et al., 2019b; Ranjan et al., 2019; Lai et al., 2019). Not all previous existing works have used flow-guided warping for unsupervised training or to regularize supervised methods for optical flow estimation. The work of Im et al. (2019) implemented plane sweep at the feature level to generate a cost volume for multi-view stereo depth estimation. Such a plane sweep can be seen as a type of 1D convolution, similar to the 1D kernel utilized in the second approach of kernel estimation for new view synthesis.

On the other hand, the second approach, kernel estimation or adaptive convolutions, has proved to be a superior image synthesis technique and has been incorporated in several different ways. For example: (1) Flynn et al. (2016), in their early DeepStereo, formulated a CNN capable of synthesizing a middle view by blending multiple plane-swept lateral view inputs weighted by a "selection volume", which can be interpreted as a 1D (or line-shaped) adaptive convolution; (2) in a similar way, Xie et al. (2016) devised Deep3D, a non fully-convolutional network that estimates a series of "probabilistic disparity maps" that are then used to blend multiple shifted versions of the left-view input to generate a synthetic right-view image; (3) the adaptive separable convolutions (SepConv) in the work of Niklaus et al. (2017) approximated adaptive 2D convolutions by two (vertical and horizontal) 1D kernels that are applied sequentially to the input current and previous frames (t0 and t−1) for the video interpolation problem;

Figure 2: Synthesis techniques based on adaptive convolutions. The background is the input image. Red dots represent target pixel locations in output images. Green (along with red) dots represent sampling positions where the corresponding pixels are used to generate one target pixel.
(4) In the works of (Zhou et al., 2018; Srinivasan et al., 2019), although with additional considerations, their multiplane image representation approach can be loosely understood as a 1D adaptive convolution, as the final operation involves the reduction of a plane sweep volume; (5) the geometric-aware networks in the work of Liu et al. (2018) indirectly achieved adaptive convolutions by learning a fixed number of affine transformations on an input image, where the resulting affine-transformed images are then blended together to generate one output image; and finally, (6) in the work of Gonzalez & Kim (2019a), the authors developed the Deep 3D Zoom Net, which estimates a selection volume for the "blending of multiple upscaled versions of the input image", which can be treated as a special case of a 1D adaptive convolution. The (Flynn et al., 2016) and (Zhou et al., 2018) approaches require two or more images as inputs, thus greatly reducing the complexity of the synthesis task, as most ambiguities are removed by counting on multiple views. In our work, we focus on the single-image based stereoscopic view synthesis task, which is a far more difficult problem as the network needs to understand the 3D geometry in the scene, and to handle complex occlusions, ambiguities and non-Lambertian surfaces.

Although the aforementioned methods are distinguished from one another, as the different synthesis techniques have their own properties, they can all be interpreted as belonging to a category of adaptive convolutions, which are visualized in Figure 2. As observed in Figure 2-(a), DeepStereo (Flynn et al., 2016) and Deep3D (Xie et al., 2016) share the same shape of kernel, that is, a 1D horizontal-only kernel that samples pixels at a fixed interval, or dilation, along the X-axis for each target output pixel. A 1D horizontal-only constant-dilation kernel suffers from three major drawbacks:

1. Inefficient usage of kernel values. When sampling the positions opposite to the camera movement (which are the pixel locations corresponding to a1-a3 in Figure 2-(a), assuming a rightward camera shift), experiments showed that these kernel values would often be zeros. The same effect repeats when sampling the positions further away from the maximum disparity value of the given scene (which corresponds to the pixel location at a7, assuming that the maximum disparity is 2 and the dilation is 1), as the network is not able to find valid stereo correspondences for these kernel positions;

2. Right-view synthesis is limited to the trained baseline (distance between stereo cameras), as the models over-fit to a specific training dataset with a fixed baseline; and

3. The 1D line kernel has limited occlusion handling capabilities, as the network will try to fill in the gaps with the information contained only along the horizontal direction, limiting the reconstruction performance of the models on the occluded areas.

In contrast, the kernels predicted by the geometric-aware networks (Liu et al., 2018) have deformable structures adaptive to the given input images, as shown in Fig. 2-(b). However, only one deformed kernel shape is predicted and shared to synthesize all target output pixels, leading to limited performance. Another drawback of the geometric-aware networks is their complexity, as they require three sub-networks and a super-pixel segmentation step as pre-processing, hindering the processing of high-resolution images.
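To make the 1D-kernel family just analyzed concrete, here is a minimal NumPy sketch (our own simplification, not any published implementation) of Deep3D/DeepStereo-style synthesis: every output pixel is a per-pixel weighted blend of horizontally shifted copies of the input, i.e. a 1D horizontal adaptive convolution with a fixed dilation.

```python
import numpy as np

def blend_1d(image, weights, dilation=1):
    """image: (H, W, 3); weights: (H, W, K) per-pixel blending weights
    (e.g. a softmax over K candidate disparities). Output pixel (y, x) is
    sum_k weights[y, x, k] * image[y, x - k*dilation] -- a 1D horizontal
    adaptive convolution with constant dilation."""
    K = weights.shape[-1]
    out = np.zeros_like(image, dtype=np.float64)
    for k in range(K):
        # np.roll wraps around at the border; a real implementation would pad.
        shifted = np.roll(image, shift=k * dilation, axis=1)
        out += weights[..., k:k + 1] * shifted
    return out

# Toy usage: uniform weights reduce to an average of shifted images.
img = np.random.rand(4, 8, 3)
w = np.full((4, 8, 3), 1.0 / 3.0)
_ = blend_1d(img, w)
```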
For the Deep 3D Zoom Net (Gonzalez & Kim, 2019a) case (Fig. 2-(c)), the 1D kernel tends to point to the center of the image, as it performs a blending operation of multiple upscaled versions of the input image. The dilation size of this 1D kernel is adaptive according to the desired 3D-zoom factor. Finally, for the video interpolation case, the SepConv (Niklaus et al., 2017) approximates an NxN adaptive kernel via a 1xN and an Nx1 component (see Fig. 2-(d)), which are sequentially applied to the input images to generate the synthetic output. SepConv has, by design, limited receptive fields, as the dilation size is fixed to 1. Besides, the sequential nature of the kernel forces the vertical component to sample pixels from the output of the horizontal convolution, which could already be degraded due to heavy deformations introduced by the horizontal component.

Recent works have also attempted to improve upon stereoscopic view synthesis by improving the loss functions involved in the CNN's training. The work of Zhang et al. (2019) proposed a multi-scale adversarial correlation matching (MS-ACM) loss that learns to penalize structures and ignore noise and textures by maximizing and minimizing the correlation-l1 distance in the discriminator's feature-space between the generated right-view and the target-view in an adversarial training setup. Whereas the objective function is a key factor in training any CNN, we believe that, at its current state, the stereoscopic view synthesis problem can benefit more from a better pipeline that can handle the previously mentioned issues while using the widely accepted l1 and perceptual losses (Johnson et al., 2016) for image reconstruction, rather than from a more complex loss function.

Our proposed dilation-adaptive "t-shaped" convolutions incorporate global (new camera position along the X-axis) and local (3D geometries of specific target pixels) information of the input scene into the synthesis of each output pixel value, by not only learning the specific kernel that will generate each output pixel, but also by learning the proper dilation value for each kernel. The "t" shape of the kernel allows the network to account for occlusions by filling in the gaps (missing information in the output) due to shifted camera positions using not only left-and-right pixels (like DeepStereo and Deep3D), but also up-and-down neighboring pixel information. In addition, the notions of global and local dilations allow our proposed monocular to stereo network, the monster-net, to generate arbitrarily 3D panned versions of the input center view along the X-axis, a useful feature not present in previous works that allows adjusting for eye-to-eye separation and/or the level of 3D sensation.

3 METHOD

In order to effectively synthesize an arbitrary 3D panned image, we propose a global dilation filter as shown in Figure 3. Our proposed cross-shaped global dilation filter Td(p) at a target pixel location p = (x, y) ∈ Ito, where Ito is a generated image, is defined as

$$T_d(p) = \left\langle T_c(x,y),\ \left[\mathbf{T}_u, \mathbf{T}_b, \mathbf{T}_l, \mathbf{T}_r\right]^T \right\rangle \tag{1}$$

where Tc(x, y) is the filter parameter value of Td(p) at the center location p.
The upper, bottom, left and right wing parameters (Tu, Tb, Tl, Tr) of the cross-shaped dilation (d) filter are defined as

$$\begin{aligned}
\mathbf{T}_u &= \left[T_u(x, y-d),\ T_u(x, y-2d),\ \dots,\ T_u(x, y-n_u d)\right]^T\\
\mathbf{T}_b &= \left[T_b(x, y+d),\ T_b(x, y+2d),\ \dots,\ T_b(x, y+n_b d)\right]^T\\
\mathbf{T}_l &= \left[T_l(x-d, y),\ T_l(x-2d, y),\ \dots,\ T_l(x-n_l d, y)\right]^T\\
\mathbf{T}_r &= \left[T_r(x+d, y),\ T_r(x+2d, y),\ \dots,\ T_r(x+n_r d, y)\right]^T
\end{aligned} \tag{2}$$

where nu, nb, nl and nr indicate the numbers of filter parameters in Tu, Tb, Tl, and Tr, respectively. For the cross-shaped dilation filter shown in Figure 3, it is more appropriate to have a longer length of the right (left) filter wing than the other three wings when the camera panning is rightward (leftward), as it allows capturing more useful information for the synthesis of a right (left) panned image. In this case, nr (nl) is set to be greater than nl (nr), nu and nb, such that the global dilation filter shown in Figure 3 can be elaborated as a "t-shaped" kernel, which can then take into account the camera panning direction for synthesis. Figure 4 shows examples of "t-shaped" kernels overlaid on top of an input center image.

Figure 3: Our proposed global dilation (d) filter with a general cross shape.

Figure 4: Our proposed "t-shaped" kernels are overlaid on top of a center input image. The distance between samples (dilation) is adaptive according to the amount and direction of 3D panning to be applied to the input image and the local 3D geometry of the scene.

As shown in Figure 4-(a), the "t-shaped" kernel has a longer left wing of filter parameters for the synthesis of a leftward camera panning, while Figure 4-(b) shows a longer right wing of filter parameters for the synthesis of a rightward camera panning.

Why the "t" shape? Experiments with symmetric kernel shapes (e.g., the "+" shape) were performed first, but it was noted that most of the elements on the left (right), upper and bottom sides against the centered red dot of the kernel tended to have very small values, close to zero, for most target pixels for the rightward (leftward) movement of the camera. Similar to SepConv (Niklaus et al., 2017), experiments with a horizontal kernel applied first followed by a vertical kernel were performed, yielding poor results. It was discovered that the "t"-shaped kernel is more efficient than the "+"-shaped kernel, as it picks up more effective sampling positions with fewer parameters than the standard adaptive convolutions such as those in SepConv. As depicted in Figure 5, the "t-shaped" kernels can embed useful information like disparity and occlusion from a monocular image into the stereo synthesis process.

The longer right (left) wing of the "t-shaped" kernel contains disparity information, as it will try to sample pixels to the right (left) of the target pixel when the camera is assumed to move in the rightward (leftward) direction. Figure 5-(a) depicts a primitive disparity map Dp that was constructed by the weighted sum of the kernel values in the longer kernel wing, as described by

$$D_p(p) = \sum_{i=1}^{n_r} \frac{i}{n_r}\, T_r(x+id,\ y) \tag{3}$$

where Tr(x+id, y) is the i-th value of the longer wing Tr at pixel location p = (x, y) for the rightward 3D panning of an input center image Ic. Note that Dp is normalized to the range [0, 1].
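Equation (3) amounts to a position-weighted reduction of the long-wing kernel maps. A minimal sketch, assuming the n_r right-wing values are stacked as an (n_r, H, W) array (the array layout is our assumption for illustration):

```python
import numpy as np

def primitive_disparity(T_r):
    """Equation (3): D_p = sum_i (i / n_r) * T_r_i, followed by the
    normalization to [0, 1] mentioned in the text.
    T_r: (n_r, H, W) right-wing kernel values."""
    n_r = T_r.shape[0]
    weights = np.arange(1, n_r + 1, dtype=np.float64) / n_r
    d = np.tensordot(weights, T_r, axes=1)  # (H, W)
    d -= d.min()
    return d / (d.max() + 1e-8)

# Toy usage with random kernel maps:
_ = primitive_disparity(np.random.rand(49, 6, 6))
```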
Interestingly, as shown in Figure 5-(a), the generated disparity map looks very natural and appropriate, which implies the effectiveness of our "t-shaped" kernel approach.

The short left (right), upper and bottom wings of the "t-shaped" kernel contain occlusion information, as the network will try to fill in the gaps utilizing surrounding information that is not present in the long part of the "t-shaped" kernel. It is also interesting to see the occlusion map in Figure 5-(b), where a primitive rightward occlusion map Orp was constructed by summing up the "t-shaped" kernel values in the short wing parts according to the following:

$$O^r_p(p) = \sum_{i=1}^{n_l} T_l(x-id,\ y) + \sum_{i=1}^{n_u} T_u(x,\ y-id) + \sum_{i=1}^{n_b} T_b(x,\ y+id) \tag{4}$$

The bright regions or spots in Figure 5-(b) indicate the occlusions due to the camera shift along the horizontal axis of the input center image, which are likely to happen for the case of the camera's rightward panning.

Figure 5: Disparity (Dp) and occlusion (Orp) maps generated from the proposed "t-shaped" kernel.

For both Equations (3) and (4), the primitive disparity and occlusion maps for the leftward panning case can be obtained by swapping the r and l indices.

3.1 GLOBALLY AND LOCALLY ADAPTIVE DILATIONS FOR THE SYNTHESIS OF A NEW VIEW IMAGE AT A SHIFTED CAMERA POSITION

In general, the disparity amounts between stereo images are variable at different pixel locations according to the distance between the stereo cameras and the local scene geometries. Therefore, it is necessary to take into account the variable disparity in synthesizing a new view in a globally and locally adaptive fashion. For this, a "t-shaped" kernel is introduced with a controllable dilation factor by which both the camera shift and local changes in image geometry can be effectively taken into account when synthesizing a new (left or right) view for the input center image. Any kernel with a fixed dilation may cause limited accuracy in synthesizing a novel view, because the disparity amounts vary over the whole image according to the cameras' baseline and the local geometries. So, our "t-shaped" kernel is proposed to make the synthesis of novel views not only globally, but also locally adaptive to the camera shift and its local changes in image geometry, by controlling its dilation size per pixel in the output image. Globally, a short dilation value is more appropriate when slightly shifting the camera, while a high dilation value is desirable when largely shifting the camera position. In a local manner, a small dilation value is appropriate for objects far away from the camera, while objects very close to the camera can be better reconstructed with a larger dilation value.

3.1.1 GLOBAL DILATION

We define the global dilation gd as the pixel distance between two consecutive kernel sampling positions, which is given by the pan amount Pa to be applied to the input center image Ic divided by the total number of filter parameters in the longer "t-shaped" kernel wing (nl or nr). Pa is measured in pixels mapped in the image, corresponding to the camera shift in the left or right direction, and takes on floating-point values. Therefore, the global dilation gd is given by

$$g_d = \begin{cases} P_a / n_r & \text{if } P_a > 0\\ P_a / n_l & \text{if } P_a < 0 \end{cases} \tag{5}$$

where Pa takes on positive (negative) values for the rightward (leftward) panning scenario.
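Equations (4) and (5) translate directly into code. A sketch with the same assumed array layout as above; the wing lengths in the usage line are toy values, not the network's actual configuration:

```python
import numpy as np

def primitive_occlusion(T_l, T_u, T_b):
    """Equation (4): sum of the short-wing kernel values; each wing is an
    (n_wing, H, W) array of predicted kernel parameters."""
    return T_l.sum(axis=0) + T_u.sum(axis=0) + T_b.sum(axis=0)

def global_dilation(P_a, n_r, n_l):
    """Equation (5): global dilation from the pan amount (in pixels) and
    the long-wing length for the chosen panning direction."""
    return P_a / n_r if P_a > 0 else P_a / n_l

occ = primitive_occlusion(np.random.rand(8, 6, 6),
                          np.random.rand(8, 6, 6),
                          np.random.rand(8, 6, 6))
gd = global_dilation(153.0, n_r=49, n_l=8)  # rightward pan, toy wing lengths
```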
The pan amount needed to generate a left-view or a right-view is determined during training according to the closest possible objects to the camera. The "closest possible objects" vary over different training datasets. For our novel view synthesis task, like in (Godard et al., 2017; Gonzalez & Kim, 2019b), we assume the KITTI dataset to have a maximum or "closest possible object" disparity of 153 pixels. During training, Pa is set to 153 and -153 for the rightward and leftward panning, respectively.

3.1.2 LOCAL DILATION

While global dilation allows the "t-shaped" kernel to take into account the global camera shift, a locally adaptive mechanism is needed to synthesize new views of locally variable disparity. Such a mechanism is realized by first generating multiple images with the "t-shaped" kernel at N different dilations and blending them per-pixel in a locally adaptive manner. The blending is a weighted sum of the images filtered by the "t-shaped" kernel at N different dilations, where the blending weights (w1, w2, ..., wN) control the local dilation per pixel and are learned via a convolutional neural network (CNN) along with the parameter values of the "t-shaped" kernel. Let |gd| be the maximum dilation value, which is a fractional number. Figures 4-(c), -(d) and -(e) illustrate three "t-shaped" kernels with a maximum dilation |gd| and two dilation values less than |gd|. To generate an output image Ito panned in the rightward direction (gd > 0) or in the leftward direction (gd < 0), the input center image Ic is first filtered by N "t-shaped" kernels Tdi of different dilations (d1, ..., dN). Then, local adaptive dilations are calculated by linearly combining the resulting N intermediate filtered images according to the corresponding blending weights (w1, w2, ..., wN). Based on the N different global dilations, the output image value Ito(p) at a pixel location p can be calculated as

$$I_{to}(p) = \sum_{i=1}^{N} w_i(p)\, \left[I_c * T_{d_i}\right](p) \tag{6}$$

where [Ic * Tdi](p) is a "t-shaped" convolution at location p between Ic and Tdi of dilation di = (1 + (1-i)/N) gd for i = 1, ..., N, and wi(p) indicates the blending weight for the i-th global dilation.
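A minimal sketch of the blending in Equation (6), assuming the N per-dilation filtered images [Ic * T_{d_i}] have already been computed (the array layout and the softmax remark are our assumptions for illustration):

```python
import numpy as np

def dilation_levels(g_d, N=3):
    """d_i = (1 + (1 - i) / N) * g_d for i = 1..N, as in Equation (6)."""
    return [(1.0 + (1.0 - i) / N) * g_d for i in range(1, N + 1)]

def blend_local_dilations(filtered, w):
    """Equation (6): I_to(p) = sum_i w_i(p) * [I_c * T_{d_i}](p).
    filtered: (N, H, W, 3) per-dilation outputs; w: (N, H, W) weights."""
    return (w[..., None] * filtered).sum(axis=0)

f = np.random.rand(3, 6, 6, 3)
w = np.full((3, 6, 6), 1.0 / 3.0)  # e.g. a softmax over the N levels
out = blend_local_dilations(f, w)  # (6, 6, 3)
```

With N = 3, dilation_levels yields {g_d, (2/3) g_d, (1/3) g_d}, i.e. one kernel at the maximum dilation and two at smaller dilations, matching the description above.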
3.2 NETWORK ARCHITECTURE

We propose an end-to-end trainable CNN, called the "monster-net" (monocular to stereo net). The monster-net is made of two main building blocks: a novel view synthesis network, the "t-net", and a resolution restoration block, the "sr-block". Given an input center image Ic and pan amount Pa, the final output panned image Io is obtained by sequentially applying the aforementioned modules by

$$I_o = \text{monster-net}(I_c, P_a) = \text{sr-block}\left(\text{t-net}(I_c, P_a;\, \theta_t),\ \{I^n_{cs}\};\, \theta_{sr}\right) \tag{7}$$

where θt and θsr parameterize the t-net and the sr-block, respectively. {Incs} is the stack of progressively shifted-downscaled versions of the input center image Ic described in the SR-block section.

Figure 6: Our t-net architecture. The t-net estimates the kernel values and the dilation weights used for the locally adaptive t-convolutions with globally and locally adaptive dilation.

The t-net. The "t-net" estimates both the "t-shaped" global dilation kernel parameters (Td) and the adaptive local dilation weights (w1, w2, ..., wN). The t-net is designed to have large receptive fields to synthesize detailed image structures of a new view image, which corresponds to a shifted camera position. Such large receptive fields are useful in capturing the global image structure and the contextual information needed for a new view image to be synthesized. For this, an auto-encoder with skip connections (not a U-net structure) is adopted, which allows the t-net to have effectively large receptive fields and to efficiently fuse global and local (fine-detail) information on the decoder stage. For better feature extraction, we adopt the residual connections in the encoder side, as proposed by (Gonzalez & Kim, 2019b). The t-net estimates all necessary values to perform the operation described by Equation (6). The t-net, depicted in Figure 6, has two output branches: the first output branch yields 81 channels, where the first 49 are horizontal parameter maps and the following 32 are vertical parameter maps; the second output branch generates the 3-channel blending weight maps for the local adaptive dilation. That is, each channel-wise vector at a pixel location for the first output branch corresponds to the t-kernel parameter values [Tc, Tl^T, Tr^T, Tu^T, Tb^T], and each channel-wise vector for the second output branch corresponds to the blending weights [w1, w2, ..., wN], utilized for local dilations in Equation (6). As our t-net is devised to generate arbitrarily panned novel views, feeding the pan amount as a 1-channel constant feature map (Pa(p) = Pa, ∀p) helps the network take into account the varying pan direction and the amount of occlusion in the 3D panned output. The effect of feeding the pan amount is further discussed in Appendix A-1.
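The two output branches can be unpacked as in this sketch. The channel counts come from the text (81 kernel maps = 49 horizontal + 32 vertical, plus N = 3 blending-weight maps); the softmax normalization of the weights and the exact ordering within the horizontal maps are our assumptions for illustration:

```python
import torch

def split_tnet_outputs(kernel_maps, weight_maps):
    """kernel_maps: (B, 81, H, W) -- 49 horizontal then 32 vertical maps.
    weight_maps: (B, 3, H, W) -- blending weights for N = 3 dilations."""
    horizontal = kernel_maps[:, :49]       # center plus left/right wings
    vertical = kernel_maps[:, 49:]         # upper/bottom wings
    w = torch.softmax(weight_maps, dim=1)  # assumed normalization over N
    return horizontal, vertical, w

h, v, w = split_tnet_outputs(torch.rand(1, 81, 8, 8), torch.rand(1, 3, 8, 8))
```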
Super resolution (SR) block. As generating a full-resolution dilation-adaptive t-kernel would be computationally too expensive, we propose to estimate it at a low resolution (LR) for the synthesis of the half-resolution novel view, and then to apply deep learning based SR techniques to bring the LR novel view to the high (or original) resolution (HR). In comparison, in Deep3D and SepConv, the estimated LR kernel is upscaled with conventional methods to the HR and then applied to the input image(s), which is a costly operation, as it is carried out in the HR dimensions, and can lead to blurred areas, as the kernel is just bilinearly interpolated. In our proposed pipeline, instead of utilizing common single image SR methods like (Dong et al., 2015; Shi et al., 2016; Kim et al., 2016), we propose to apply a stereo-SR method. The stereo-SR technique in (Jeon et al., 2018) takes an LR stereo pair (left and right views) as input and progressively shifts the right-view, producing a stack that is concatenated with the left-view and later processed by a CNN to obtain the super-resolved left-view. This process is made at an arbitrary and fixed stride (e.g. 1 pixel at every step of the stack) and does not take into account the maximum disparity between the input views. For our Deep 3D Pan pipeline, we propose to use the maximum disparity prior that can be obtained from the long wing of the t-kernel to dynamically set the shifting stride. Additionally, instead of interpolating and processing the low-resolution panned view Ito(p) in the HR dimensions, we progressively shift and then downscale the high-resolution center view Ic by a factor of 2. This allows our sr-block to operate in the LR dimensions without performance degradation, as high-frequency information in the horizontal axis is not lost but distributed along the levels of the shifted center view stack, as depicted in Figure 7-(a). Our sr-block, depicted in Figure 7-(b), is a simple, yet effective module that takes as input the LR Ito view and the shifted-downscaled center view stack Incs described by

$$I^n_{cs} = g\!\left(I_c,\ \frac{n\, P_a}{N_s}\, \max(D_p)\right) \tag{8}$$

where g(I, s) is an s-strided horizontal-shift and 2x down-scaling operator applied on image I. The stride s can take any real number, and the resulting image is obtained via bilinear interpolation. Ns is the depth of the stack, and was set to Ns = 32 for all our experiments. The stack is concatenated with the LR Ito and passed through four Conv-ReLU layers followed by a residual connection, as shown in Figure 7-(b). The final step up-scales the resulting features to the target resolution via nearest interpolation followed by a convolutional layer. The last layer reduces the number of channels to three for the final RGB output Io. Nearest upscaling was adopted as it yields no checkerboard artifacts, in contrast with transposed or sub-pixel convolutions (Niklaus et al., 2017).

Figure 7: (a) Shifted-LR versions of the center-view contain different information as they are sampled from different groups of pixels via bilinear interpolation depending on the stride (controlled by the maximum disparity). (b) Our light sr-block. All convs have 3x3 kernels unless otherwise specified.
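A simplified sketch of the operator g(I, s) and the stack of Equation (8): integer shifts and average pooling stand in for the real-valued, bilinear version described above, and the 0-based level indexing is our convention.

```python
import torch
import torch.nn.functional as F

def shift_downscale_stack(I_c, P_a, N_s, max_disp):
    """Equation (8), simplified: I_cs^n = g(I_c, (n * P_a / N_s) * max(D_p)).
    I_c: (B, 3, H, W). Returns a (B, N_s, 3, H/2, W/2) stack."""
    levels = []
    for n in range(N_s):
        s = int(round(n * P_a / N_s * max_disp))
        shifted = torch.roll(I_c, shifts=s, dims=3)          # horizontal shift
        levels.append(F.avg_pool2d(shifted, kernel_size=2))  # 2x downscale
    return torch.stack(levels, dim=1)

stack = shift_downscale_stack(torch.rand(1, 3, 8, 16), P_a=1.0, N_s=4, max_disp=3.0)
```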
Table 1: Stereoscopic view synthesis performance on the 400 KITTI2015 training images (left) and the 500 CityScapes validation images (right). lp: perceptual loss. ↓/↑ indicate that lower/higher is better.

Model               Training dataset   Loss    RMSE↓   PSNR↑   SSIM↑   |  RMSE↓   PSNR↑   SSIM↑
Deep3D              K                  l1      26.13   20.07   0.637   |  30.10   18.82   0.655
Deep3D-B            K                  l1+lp   26.00   20.10   0.633   |  31.34   18.46   0.636
SepConv             K                  l1      27.22   19.73   0.633   |  27.77   19.54   0.660
SepConv-D           K                  l1+lp   26.36   20.02   0.626   |  29.66   18.95   0.647
monster-net         K                  l1+lp   25.61   20.24   0.641   |  20.28   22.34   0.710
monster-net         K+CS               l1      24.11   20.76   0.667   |  12.87   26.36   0.816
monster-net (full)  K+CS               l1+lp   24.44   20.64   0.651   |  13.12   26.20   0.805
monster-net         K+CS+VL            l1+lp   24.62   20.55   0.645   |  -       -       -

Figure 8: Comparison against the state-of-the-art methods for stereoscopic view synthesis.

4 EXPERIMENTS AND RESULTS

To demonstrate the effectiveness of our "t-shaped"-dilation-adaptive kernel, we performed several experiments on the challenging KITTI2012 (Geiger et al., 2012), KITTI2015 (Menze & Geiger, 2015), and CityScapes (Cordts et al., 2016) datasets. As these stereo datasets only consist of outdoor scenes, we also performed experiments on our indoors dataset, called the VICLAB_STEREO dataset. Surprisingly, to our best knowledge, this is the first stereo dataset available that focuses on the indoor scene, which is planned to be publicly available for research. Additionally, our formulation of global and local adaptive dilations allows our monster-net to be trained on multiple stereo datasets at the same time, even if these have different baselines. Instead of over-fitting to a single camera baseline like the previous methods (Xie et al. (2016); Godard et al. (2017); Zhang et al. (2019); Luo et al. (2018)), our monster-net can build knowledge when simultaneously trained on many datasets. To our best knowledge, our Deep 3D Pan pipeline is the first method designed to be trained on multiple-baseline datasets concurrently for the stereoscopic view synthesis problem, where unsupervised monocular depth estimation is even used in particular. For more details about the datasets and multi-dataset training, please see Appendix A-3.

We compare our monster-net against the stereoscopic view synthesis SOTA: Deep3D (Xie et al., 2016) and a version of SepConv (Niklaus et al., 2017) modified for right-view synthesis. Firstly, for a fair comparison, the backbone convolutional auto-encoders for Deep3D and SepConv were set up to be equivalent to our t-net's, that is, a six-stage encoder-decoder with skip connections and residual blocks in the encoder side. Secondly, we compare our monster-net with Deep3D-B, a "Bigger" version of Deep3D, where, instead of 32 elements in the 1D kernel as in its original work, we use 49 elements to match the number of horizontal kernel values in our t-net. Thirdly, we compare against SepConv-D, a dilated version of SepConv, such that the receptive field of the separable convolutions has a size of 153x153. The Deep3D and SepConv models are trained without using the perceptual loss, as in their original works. For a more meaningful comparison, Deep3D-B and SepConv-D are trained with a combination of l1 and perceptual loss lp (Johnson et al., 2016), and demonstrate that a better loss function than l1 does not contribute enough to the stereoscopic view synthesis problem. For more implementation details, refer to Appendix A-4.

Additionally, we compare the quality of the disparity embedded in the long wing of the "t-shaped" kernel with those of the state-of-the-art models for the monocular depth estimation task. For that, we first define a disparity refinement sub-network that uses the primitive disparity obtained from the long wing of the "t-shaped" kernel as prior information. Secondly, we define a special post-processing (spp) step which, instead of relying on a naive element-wise summation as in Godard et al. (2017), takes into account the ambiguities of the first and second forward passes to generate a remarkably sharp and consistent disparity map. For more details on the refinement block and our special post-processing, refer to Appendix A-2.

Table 2: Depth metrics (Eigen et al., 2014) for KITTI2015. Models are trained with video (V), stereo (S), semi-global matching (SMG) or GT depth (Supp). Top models in terms of a1 accuracy are highlighted. Simplified table; see Appendix A.9 for the full version.

Model                            Train     Dataset   abs rel↓   sq rel↓   rms↓    log rms↓   a1↑     a2↑     a3↑
Wang et al. (2019a) (9-view)     V         K         0.112      0.418     2.320   0.153      0.882   0.974   0.992
Tosi et al. (2019) (pp)          SMG, S    K+CS      0.096      0.673     4.351   0.184      0.890   0.961   0.981
ours with refine block (spp)     S         K+CS      0.099      0.950     4.739   0.160      0.900   0.971   0.989
Gur & Wolf (2019)                Supp      K         0.110      0.666     4.186   0.168      0.880   0.966   0.988
Luo et al. (2018)                Supp      K         0.094      0.626     4.252   0.177      0.891   0.965   0.984
Wang et al. (2019a) (1/9-view)   V, S      K         0.088      0.245     1.949   0.127      0.915   0.984   0.996
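For reference, the RMSE and PSNR figures reported in Table 1 relate as in this minimal sketch (8-bit images assumed; the SSIM computation is omitted for brevity):

```python
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)))

def psnr(a, b, peak=255.0):
    e = rmse(a, b)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)

gt = np.random.randint(0, 256, (4, 4, 3))
pred = np.random.randint(0, 256, (4, 4, 3))
print(rmse(gt, pred), psnr(gt, pred))
```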
4.1 RESULTS ON THE KITTI, CITYSCAPES AND THE VICLAB_STEREO DATASETS

Table 1 shows the performance comparison between our method and previous works. It is important to mention that our monster-net performs inference on full-resolution images, while the previous approaches for single-view novel view synthesis perform estimation on reduced-resolution inputs. Our method outperforms the Deep3D baseline by a considerable margin of 0.7dB in PSNR, 2.0 in RMSE, and 0.03 in SSIM. The qualitative results are shown in Figure 8. Our method produces superior-looking images. In Deep3D and SepConv, many objects appear too blurred, such that their boundaries can hardly be recognized in the synthetic images (e.g. the motorcycle, persons, traffic signs, etc.). We challenged the models trained on KITTI (K) to perform inference on the CityScapes validation split (CS), and observed that our method generalizes much better than the Deep3D baseline, with up to 3dB higher PSNR. When training the monster-net with K+CS, we get an additional improvement of 4dB PSNR on the validation CS dataset. Incorporating an indoor dataset into our training pipeline is also possible, making our network applicable to a wide variety of scenarios. We added the VICLAB_STEREO (VL) dataset to the training, that is K+CS+VL, and observed little impact on the K dataset performance, as shown in Table 1. We also tested the performance of our monster-net on the validation split of the VL dataset. We observed that our full monster-net trained on K+CS achieved a mean PSNR of 19.92dB, while achieving a mean PSNR of 21.78dB when trained on K+CS+VL. For a network trained on the outdoors dataset only, it is difficult to generalize to the indoors case, as the latter contains mainly homogeneous areas, whereas the outdoors case mainly contains texture-rich scenes. Visualizations on CS and VL, and ablation studies that prove the efficacy of each of our design choices, can be found in Appendices A-5, A-6 and A-8.

4.2 RESULTS ON DISPARITY ESTIMATION

With the addition of a relatively shallow disparity refinement sub-network, the monster-net remarkably outperforms all the state-of-the-art models for the unsupervised monocular depth estimation task, as shown in Table 2. Our monster-net with disparity refinement even outperforms supervised monocular disparity estimation methods such as (Luo et al., 2018; Gur & Wolf, 2019) and multiple-view unsupervised methods such as (Wang et al., 2019a; Ranjan et al., 2019).

5 CONCLUSION

We presented an adaptive "t-shaped" kernel equipped with globally and locally adaptive dilations for the Deep 3D Pan problem, defined as the task of arbitrarily shifting the camera position along the X-axis for stereoscopic view synthesis. Our proposed monster-net showed superior performance to the SOTA for right-view generation on the KITTI and the CityScapes datasets. Our monster-net also showed very good generalization capabilities, with a 3dB gain in PSNR against the Deep3D baseline. In addition, our method presents no discontinuities, consistent geometries, good contrast, and naturally looking left or right synthetic panned images. Our monster-net can be extended to image registration, monocular video to stereo video, and the generation of novel views at any camera translation by simply allowing a pixel-wise rotation of our "t-shaped" kernel.

ACKNOWLEDGMENTS

This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-00419, Intelligent High Realistic Visual Processing for Smart Broadcasting Media).
B1euNLVnFH
Official Blind Review #1
6: Weak Accept
The paper considers the problem of performing stereoscopic view synthesis (i.e., generating a new view seen from a different camera position) at an arbitrary position along the X-axis from a single input image only. This is an important problem as it enables 3D visualization of a 2D input scene. The paper focuses on the particular problem of generating a stereoscopic view from a single image (i.e., a right and left view from a center image). For this purpose, the paper proposes a t-net architecture, which is an autoencoder or U-net-like architecture that estimates the values for the t-convolutions proposed in the paper. The network (called monster-net) takes a center image and a pan amount as input, and from those synthesizes the image with the respective view. The paper demonstrates that their idea of t-convolutions outperforms recent competing approaches such as Deep3D on available datasets as well as on an in-house collected dataset. The figures provided demonstrate that the views generated by the proposed monster-net visibly look slightly better than those generated by the competing approaches Deep3D and SepConv. In addition, the paper is well written and easy to follow. I therefore recommend acceptance of this paper. I would like to emphasize that while I work in deep learning, I don't work on view synthesis and therefore it is difficult for me to evaluate the novelty of the proposed approach as well as the difficulty of the problem.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Deep 3D Pan via local adaptive "t-shaped" convolutions with global and local adaptive dilations ### Paper Abstract Recent advances in deep learning have shown promising results in many low-level vision tasks. However, solving the single-image-based view synthesis is still an open problem. In particular, the generation of new images at parallel camera views given a single input image is of great interest, as it enables 3D visualization of the 2D input scenery. We propose a novel network architecture to perform stereoscopic view synthesis at arbitrary camera positions along the X-axis, or “Deep 3D Pan”, with “t-shaped” adaptive kernels equipped with globally and locally adaptive dilations. Our proposed network architecture, the monster-net, is devised with a novel t-shaped adaptive kernel with globally and locally adaptive dilation, which can efficiently incorporate global camera shift into and handle local 3D geometries of the target image’s pixels for the synthesis of naturally looking 3D panned views when a 2-D input image is given. Extensive experiments were performed on the KITTI, CityScapes, and our VICLAB_STEREO indoors dataset to prove the efficacy of our method. Our monster-net significantly outperforms the state-of-the-art method (SOTA) by a large margin in all metrics of RMSE, PSNR, and SSIM. Our proposed monster-net is capable of reconstructing more reliable image structures in synthesized images with coherent geometry. Moreover, the disparity information that can be extracted from the “t-shaped” kernel is much more reliable than that of the SOTA for the unsupervised monocular depth estimation task, confirming the effectiveness of our method. ### Paper Keywords ["Deep learning", "Stereoscopic view synthesis", "Monocular depth", "Deep 3D Pan"] ### Paper Content ABSTRACTRecent advances in deep learning have shown promising results in many low-levelvision tasks. However, solving the single-image-based view synthesis is still anopen problem. In particular, the generation of new images at parallel camera viewsgiven a single input image is of great interest, as it enables 3D visualization of the2D input scenery. We propose a novel network architecture to perform stereo-scopic view synthesis at arbitrary camera positions along the X-axis, or “Deep 3DPan”, with “t-shaped” adaptive kernels equipped with globally and locally adap-tive dilations. Our proposed network architecture, the monster-net, is devised witha novel t-shaped adaptive kernel with globally and locally adaptive dilation, whichcan efficiently incorporate global camera shift into and handle local 3D geometriesof the target image’s pixels for the synthesis of naturally looking 3D panned viewswhen a 2-D input image is given. Extensive experiments were performed on theKITTI, CityScapes, and our VICLAB STEREO indoors dataset to prove the effi-cacy of our method. Our monster-net significantly outperforms the state-of-the-artmethod (SOTA) by a large margin in all metrics of RMSE, PSNR, and SSIM. Ourproposed monster-net is capable of reconstructing more reliable image structuresin synthesized images with coherent geometry. 
Moreover, the disparity informa-tion that can be extracted from the “t-shaped” kernel is much more reliable thanthat of the SOTA for the unsupervised monocular depth estimation task, confirm-ing the effectiveness of our method.1 I NTRODUCTIONRecent advances in deep learning have pushed forward the state-of-the-art performance for novelview synthesis problems. Novel view synthesis is the task of generating a new view seen from adifferent camera position, given a single or multiple input images, and finds many applications inrobotics, navigation, virtual and augmented reality (VR/AR), cinematography, etc. In particular,the challenging task of generating stereo images given a single input view is of great interest as itenables 3D visualization of the 2D input scene. In addition, the falling price and the increasingavailability of the equipment required for VR/AR has fueled the demand for stereoscopic contents.The previous works, such as Deep3D (Xie et al., 2016), have addressed the right-view generationproblem in a fully supervised fashion when the input is the left-view to which the output is thesynthetic right-view at a fixed camera shift. In contrast, our proposed Deep 3D Pan pipeline enablesthe generation of new views at arbitrary camera positions along the horizontal X-axis of an inputimage with far better quality by incorporating adaptive “t-shaped” convolutions with globally andlocally adaptive dilations. Our proposed “t-shaped” kernel with adaptive dilations takes into accountthe camera shift amount and the local 3D geometries of the target pixels. Panning at arbitrarycamera positions allows our proposed model to adjust the baseline (distance between cameras) fordifferent levels of 3D sensation. Additionally, arbitrary panning unlocks the possibility to adjust fordifferent inter-pupillary distances of various persons. Figure 1 shows some generated left and righthttps://www.VICLAB.kaist.ac.kr1Published as a conference paper at ICLR 2020Figure 1: Generated left and right images by our proposed Deep 3D Pan for an input center image.view images for a given single image input by our proposed Deep 3D Pan pipeline, which we call“monster-net” ( mon ocular to stereo network). In this paper, we define “panning” in the context of3D modeling, implying that camera movement is in parallel to the center view camera plane.In the following sections, we review the related works to stereoscopic view synthesis and discussthe differences with our proposed method, followed by the formulation of our Deep 3d Pan pipelineand finally, we present outstanding results on various challenging stereo datasets, showing superiorperformance against the previous state-of-the-art methods.2 R ELATED WORKNovel view synthesis is a well-studied problem in deep learning-based computer vision, and hasalready surpassed the classical techniques for both cases of the multiple-image (Woodford et al.,2007; Liu et al., 2009; Chaurasia et al., 2013) and single-image input (Horry et al., 1997; Hoiemet al., 2005). The latter, single-image based novel view synthesis, is known to be a much morecomplex problem compared to multiple-image based ones. 
Previous deep learning-based approachesusually tend to utilize one of the two techniques to generate a novel view: (i) optical flow guidedimage warping, and (ii) a “flavor” of kernel estimation, also known as adaptive convolutions.The first technique, optical flow guided image warping , has been adopted by several authors totrain convolutional neural networks (CNNs) for optical flow or disparity estimation from single orstereo images in an unsupervised fashion. However, their final goal was not to synthesize novelviews. These works include those of (Godard et al., 2017; Zhou et al., 2016; Gonzalez & Kim,2019b; Tosi et al., 2019; Liu et al., 2019; Wang et al., 2019b; Ranjan et al., 2019; Lai et al., 2019).Not all previous existing works have used flow-guided warping for unsupervised training or to reg-ularize supervised methods for optical flow estimation. The work of Im et al. (2019) implementedplane sweep at the feature level to generate a cost volume for multi-view stereo depth estimation.Such plane sweep can be seen as a type of 1D convolution, similar to the 1D kernel utilized in thesecond approach of kernel estimation for new view synthesis.On the other hand, the second approach, kernel estimation or adaptive convolutions , has provedto be a superior image synthesis technique and has been incorporated in several different ways. Forexample: (1) Flynn et al. (2016), in their early DeepStereo, formulated a CNN capable of synthe-sizing a middle view by blending multiple plane-swept lateral view inputs weighted by a “selectionvolume”, which can be interpreted as a 1D (or line-shaped) adaptive convolution; (2) in a similarway, Xie et al. (2016) devised the Deep3D, a non fully-convolutional network that estimates a seriesof “probabilistic disparity maps” that are then used to blend multiple shifted versions of the left-viewinput to generate a synthetic right-view image; (3) The adaptive separable convolutions (SepConv)in the work of Niklaus et al. (2017) approximated adaptive 2D convolutions by two (vertical andhorizontal) 1D kernels that are applied sequentially to the input current and previous frames ( t0andt1) for the video interpolation problem; (4) In the works of (Zhou et al., 2018; Srinivasan et al.,2Published as a conference paper at ICLR 2020Figure 2: Synthesis techniques based on adaptive convolutions. The background is the input image.Red dots represent target pixel locations in output images. Green (along with red) dots representsampling positions where the corresponding pixels are used to generate one target pixel.2019), although with additional considerations, their multiplane image representation approach canbe loosely understood as a 1D adaptive convolution as the final operation involves the reduction ofa plane sweep volume; (5) Geometric-aware networks in the work of Liu et al. (2018) indirectlyachieved adaptive convolutions by learning a fixed number of affine transformations on an input im-age, where the resulting affine-transformed images are then blended together to generate one outputimage; and finally, (6) in the work of Gonzalez & Kim (2019a), the authors developed the Deep3D Zoom Net, which estimates a selection volume for the “blending of multiple upscaled versionsof the input image”, which can be treated as an special case of a 1D adaptive convolution. 
The(Flynn et al., 2016) and (Zhou et al., 2018) approaches require two or more images as inputs, thus,greatly reducing the complexity of the synthesis task as most ambiguities are removed by countingon multiple views. In our work, we focus on the single-image based stereoscopic view synthesistask, which is a far more difficult problem as the network needs to understand the 3D geometry inthe scene, and to handle complex occlusions, ambiguities and non-Lambertian surfaces.Although the aforementioned methods are distinguished one another, as the different synthesis tech-niques have their own properties, they can be all interpreted as belonging to a category of adaptiveconvolutions which are visualized in Figure 2. As observed in Figure 2-(a), DeepStereo (Flynn et al.,2016) and Deep3D (Xie et al., 2016) share the same shape of kernel, that is, a 1D horizontal-onlykernel that samples pixels at a fixed interval, or dilation, along the X-axis for each target outputpixel. A 1D horizontal-only constant-dilation kernel suffers from three major drawbacks:1. Inefficient usage of kernel values. When sampling the positions opposite to the cameramovement (which are the pixel locations corresponding to a1-a3in Figure 2-(a), assuming arightward camera shift), experiments showed that these kernel values would often be zeros.The same effect repeats when sampling the positions further away from the maximumdisparity value of the given scene (which corresponds to the pixel location at a7, assumingthat the maximum disparity is 2 and the dilation is 1) as the network is not able to find validstereo correspondences for these kernel positions;2. Right-view synthesis is limited to the trained baseline (distance between stereo cameras),as the models over-fit to a specific training dataset with a fixed baseline; and3. The 1D line kernel has limited occlusion handling capabilities, as the network will try tofill in the gaps with the information contained only along the horizontal direction, limitingthe reconstruction performance of the models on the occluded areas.In contrast, the kernels predicted by the geometric-aware networks (Liu et al., 2018) have deformablestructures adaptive to the given input images, as shown in Fig. 2-(b). However, only onedeformedkernel shape is predicted and shared to synthesize all target output pixels, leading to limited per-formance. Another drawback of the geometric-aware networks is their complexity, as they requirethree sub-networks and a super-pixel segmentation step as pre-processing, hindering the processingof high-resolution images. For the Deep 3D Zoom Net (Gonzalez & Kim, 2019a) case (Fig. 2-(c)),3Published as a conference paper at ICLR 2020the 1D kernel tends to point to the center of the image, as it performs a blending operation of multipleupscaled versions of the input image. The dilation size of this 1D kernel is adaptive according to thedesired 3D-zoom factor. Finally, for the video interpolation case, the SepConv (Niklaus et al., 2017)approximates an NxN adaptive kernel via a 1xN and an Nx1 component (see Fig. 2-(d)) which aresequentially applied to the input images to generate the synthetic output. SepConv has, by design,limited receptive fields, as the dilation size is fixed to 1. 
Besides, the sequential nature of the kernelforces the vertical component to sample pixels from the output of the horizontal convolution, whichcould be already degraded due to heavy deformations introduced by the horizontal component.Recent works have also attempted to improve upon the stereoscopic view synthesis by improvingthe loss functions involved in the CNN’s training. The work of Zhang et al. (2019) proposed a multi-scale adversarial correlation matching (MS-ACM) loss that learns to penalize structures and ignorenoise and textures by maximizing and minimizing the correlation- l1distance in the discriminator’sfeature-space between the generated right-view and the target-view in an adversarial training setup.Whereas the objective function is a key factor in training any CNN, we believe that, at its currentstate, the stereoscopic view synthesis problem can benefit more from a better pipeline that can handlethe previously mentioned issues and using the widely accepted l1and perceptual losses (Johnsonet al., 2016) for image reconstruction, rather than a more complex loss function.Our proposed dilation adaptive “t-shaped” convolutions incorporate global (new camera positionalong the X-axis) and local (3D geometries of specific target pixels) information of the input sceneinto the synthesis of each output pixel value, by not only learning the specific kernel that will gener-ate each output pixel, but also by learning the proper dilation value for each kernel. The “t” shape ofthe kernel allows the network to account for occlusions by filling-in the gaps (missing information inthe output) due to shifted camera positions using not only left-and-right pixels (like DeepStereo andDeep3D), but also up-and-down neighboring pixel information. In addition, the notions of globaland local dilations allow our proposed mon ocular to stereo network, the monster-net, to generatearbitrarily 3D panned versions of the input center view along the X-axis, a useful feature not presentin previous works that allows adjusting for eye-to-eye separation and/or level of 3D sensation.3 M ETHODIn order to effectively synthesize an arbitrary 3D panned image, we propose a global dilation filter asshown in Figure 3. Our proposed cross-shaped global dilation filter Td(p)at a target pixel locationp= (x;y)2Ito, where Itois a generated image, is defined asTd(p) =hTc(x;y);[Tu;Tb;Tl;Tr]Ti(1)whereTc(x;y)is the filter parameter value of Td(p)at the center location p. The upper, bottom,left and right wing parameters ( Tu;Tb;Tl;Tr) of the cross-shaped dilation ( d) filter are defined asTu= [Tu(x;yd);Tu(x;y2d);:::;T u(x;ynud)]TTb= [Tb(x;y+d);Tb(x;y+ 2d);:::;T b(x;y+nbd)]TTl= [Tl(xd;y);Tl(x2d;y);:::;T l(xnld;y)]TTr= [Tr(x+d;y);Tr(x+ 2d;y);:::;T r(x+nrd;y)]T(2)wherenu,nb,nlandnrindicate the numbers of filter parameters in Tu;Tb;Tl, andTr, respectively.For the cross-shaped dilation filter shown in Figure 3, it is more appropriate to have a longer length ofthe right (left) filter wing than the other three wings when the camera panning is rightward (leftward),as it allows capturing more useful information for the synthesis of a right (left) panned image. Inthis case,nr(nl) is set to be greater than nl(nr),nuandnb, such that the global dilation filtershowed in Figure 3 can be elaborated as a “t-shaped” kernel which can then take into account thecamera panning direction for synthesis. Figure 4 shows examples of “t-shaped” kernels overlaid ontop of an input center image. 
As shown in Figure 4-(a), the "t-shaped" kernel has a longer left wing of filter parameters for the synthesis of a leftward camera panning, while Figure 4-(b) shows a longer right wing of filter parameters for the synthesis of a rightward camera panning.

Figure 3: Our proposed global dilation (d) filter with a general cross shape.

Figure 4: Our proposed "t-shaped" kernels are overlaid on top of a center input image. The distance between samples (dilation) is adaptive according to the amount and direction of 3D panning to be applied to the input image and the local 3D geometry of the scene.

Why a "t" shape? Experiments with symmetric kernel shapes (e.g., a "+" shape) were performed first, but it was noted that most of the elements on the left (right), upper and bottom sides against the centered red dot of the kernel tended to have very small values, close to zero, for most target pixels for the rightward (leftward) movement of the camera. Similar to SepConv (Niklaus et al., 2017), experiments with a horizontal kernel applied first followed by a vertical kernel were also performed, yielding poor results. It was discovered that the "t-shaped" kernel is more efficient than the "+"-shaped kernel, as it picks up more effective sampling positions with fewer parameters than standard adaptive convolutions such as those in SepConv. As depicted in Figure 5, the "t-shaped" kernels can embed useful information, such as disparity and occlusion, from a monocular image into the stereo synthesis process.

The longer right (left) wing of the "t-shaped" kernel contains disparity information, as it will try to sample pixels to the right (left) of the target pixel when the camera is assumed to move in the rightward (leftward) direction. Figure 5-(a) depicts a primitive disparity map D_p that was constructed by the weighted sum of the kernel values in the longer kernel wing, as described by

D_p(p) = \sum_{i=1}^{n_r} (i / n_r) T_r(x + i d, y)    (3)

where T_r(x + i d, y) is the i-th value of the longer wing T_r at pixel location p = (x, y) for the rightward 3D panning of an input center image I_c. Note that D_p is normalized to the range [0, 1]. Interestingly, as shown in Figure 5-(a), the generated disparity map looks very natural and plausible, which implies the effectiveness of our "t-shaped" kernel approach.

The short left (right), upper and bottom wings of the "t-shaped" kernel contain occlusion information, as the network will try to fill in the gaps utilizing surrounding information that is not present in the long part of the "t-shaped" kernel. It is also interesting to see the occlusion map in Figure 5-(b), where a primitive rightward occlusion map O_p^r was constructed by summing up the "t-shaped" kernel values in the short wing parts according to the following:

O_p^r(p) = \sum_{i=1}^{n_l} T_l(x - i d, y) + \sum_{i=1}^{n_u} T_u(x, y - i d) + \sum_{i=1}^{n_b} T_b(x, y + i d)    (4)

The bright regions or spots in Figure 5-(b) indicate the occlusions due to the camera shift along the horizontal axis of the input center image, which are likely to happen in the case of the camera's rightward panning.

Figure 5: Disparity (D_p) and occlusion (O_p^r) maps generated from the proposed "t-shaped" kernel.
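To make Equations (3) and (4) concrete, the following is a minimal sketch (ours, with assumed tensor shapes, not the authors' code) that converts per-pixel kernel weight maps into the primitive disparity and rightward occlusion maps; the min-max normalization of D_p to [0, 1] is our reading of the text.

```python
import numpy as np

def primitive_maps(T_r, T_l, T_u, T_b):
    """Primitive disparity (Eq. 3) and rightward occlusion (Eq. 4) maps.

    Each argument is assumed to hold per-pixel kernel weights with shape
    (n_wing, H, W), i.e. one channel per tap of the corresponding wing.
    """
    n_r = T_r.shape[0]
    taps = np.arange(1, n_r + 1, dtype=np.float32)      # i = 1..n_r
    # Eq. (3): disparity-like weighted sum over the long (right) wing.
    D_p = np.tensordot(taps / n_r, T_r, axes=(0, 0))    # shape (H, W)
    D_p = (D_p - D_p.min()) / (D_p.max() - D_p.min() + 1e-8)  # normalize to [0, 1]
    # Eq. (4): occlusion map as the total mass on the three short wings.
    O_rp = T_l.sum(axis=0) + T_u.sum(axis=0) + T_b.sum(axis=0)
    return D_p, O_rp
```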
For both Equations (3) and (4), the primitive disparity and occlusion maps for the leftward panning case can be obtained by swapping the r and l indices.

3.1 GLOBALLY AND LOCALLY ADAPTIVE DILATIONS FOR THE SYNTHESIS OF A NEW VIEW IMAGE AT A SHIFTED CAMERA POSITION

In general, the disparity amounts between stereo images vary at different pixel locations according to the distance between the stereo cameras and the local scene geometries. Therefore, it is necessary to take this variable disparity into account when synthesizing a new view, in a globally and locally adaptive fashion. For this, a "t-shaped" kernel is introduced with a controllable dilation factor by which both the camera shift and local changes in image geometry can be effectively taken into account when synthesizing a new (left or right) view for the input center image. Any kernel with a fixed dilation may cause limited accuracy in synthesizing a novel view, because the disparity amounts vary over the whole image according to the cameras' baseline and the local geometries. So, our "t-shaped" kernel is proposed to make the synthesis of novel views not only globally but also locally adaptive to the camera shift and the local changes in image geometry, by controlling its dilation size per pixel in the output image. Globally, a short dilation value is more appropriate when slightly shifting the camera, while a large dilation value is desirable when largely shifting the camera position. Locally, a small dilation value is appropriate for objects far away from the camera, while objects very close to the camera can be better reconstructed with a larger dilation value.

3.1.1 GLOBAL DILATION

We define the global dilation g_d as the pixel distance between two consecutive kernel sampling positions, which is given by the pan amount P_a to be applied to the input center image I_c divided by the total number of filter parameters in the longer "t-shaped" kernel wing (n_l or n_r). P_a has a unit of pixels mapped in the image corresponding to the camera shift in the left or right direction, and takes on floating-point values. Therefore, the global dilation g_d is given by

g_d = { P_a / n_r  if P_a > 0,   P_a / n_l  if P_a < 0 }    (5)

where P_a takes on positive (negative) values for the rightward (leftward) panning scenario. The pan amount needed to generate a left-view or a right-view is determined during training according to the closest possible objects to the camera. The "closest possible objects" vary over different training datasets. For our novel view synthesis task, as in (Godard et al., 2017; Gonzalez & Kim, 2019b), we assume the KITTI dataset to have a maximum or "closest possible object" disparity of 153 pixels. During training, P_a is set to 153 and -153 for the rightward and leftward panning, respectively.

3.1.2 LOCAL DILATION

While the global dilation allows the "t-shaped" kernel to take the global camera shift into account, a locally adaptive mechanism is needed to synthesize new views with locally variable disparity. Such a mechanism is realized by first generating multiple images with the "t-shaped" kernel at N different dilations and blending them per pixel in a locally adaptive manner. The blending is a weighted sum of the images filtered by the "t-shaped" kernel with N different dilations, where the blending weights (w_1, w_2, ..., w_N) control the local dilation per pixel and are learned via a convolutional neural network (CNN) along with the parameter values of the "t-shaped" kernel. Let |g_d| be the maximum dilation value, which is a fractional number.
Figures 4-(c), -(d) and -(e) illustrate three "t-shaped" kernels with a maximum dilation |g_d| and two dilation values smaller than |g_d|. To generate an output image I_{t_o} panned in the rightward direction (g_d > 0) or in the leftward direction (g_d < 0), the input center image I_c is first filtered by N "t-shaped" kernels T_{d_i} of different dilations (d_1, ..., d_N). Then, locally adaptive dilations are obtained by linearly combining the resulting N intermediate filtered images according to the corresponding blending weights (w_1, w_2, ..., w_N). Based on the N different global dilations, the output image value I_{t_o}(p) at a pixel location p can be calculated as

I_{t_o}(p) = \sum_{i=1}^{N} w_i(p) [I_c * T_{d_i}](p)    (6)

where [I_c * T_{d_i}](p) is a "t-shaped" convolution at location p between I_c and T_{d_i} of dilation d_i = (1 + (1 - i)/N) g_d for i = 1, ..., N, and w_i(p) indicates the blending weight for the i-th global dilation (a code sketch of Equations (5) and (6) is given at the end of this subsection).

Figure 6: Our t-net architecture. The t-net estimates the kernel values and the dilation weights used for the locally adaptive t-convolutions with global and local adaptive dilation.

3.2 NETWORK ARCHITECTURE

We propose an end-to-end trainable CNN, called the "monster-net" (monocular-to-stereo net). The monster-net is made of two main building blocks: a novel view synthesis network, the "t-net", and a resolution restoration block, the "sr-block". Given an input center image I_c and a pan amount P_a, the final output panned image I_o is obtained by sequentially applying the aforementioned modules as

I_o = monster-net(I_c, P_a) = sr-block(t-net(I_c, P_a; \theta_t), {I^n_{cs}}; \theta_{sr})    (7)

where \theta_t and \theta_{sr} parameterize the t-net and the sr-block, respectively. {I^n_{cs}} is the stack of progressively shifted and downscaled versions of the input center image I_c described in the SR-BLOCK section.

The t-net. The "t-net" estimates both the "t-shaped" global dilation kernel parameters (T_d) and the adaptive local dilation weights (w_1, w_2, ..., w_N). The t-net is designed to have large receptive fields so as to synthesize the detailed image structures of a new view image corresponding to a shifted camera position. Such large receptive fields are useful in capturing the global image structure and the contextual information needed for synthesizing a new view image. For this, an auto-encoder with skip connections (not a U-net structure) is adopted, which allows the t-net to have effectively large receptive fields and to efficiently fuse global and local (fine-detail) information at the decoder stage. For better feature extraction, we adopt residual connections on the encoder side, as proposed by (Gonzalez & Kim, 2019b). The t-net estimates all the values necessary to perform the operation described by Equation (6). The t-net, depicted in Figure 6, has two output branches: the first output branch yields 81 channels, of which the first 49 are horizontal parameter maps and the following 32 are vertical parameter maps; the second output branch generates the 3-channel blending weight maps for the local adaptive dilation. That is, each channel-wise vector at a pixel location in the first output branch corresponds to the t-kernel parameter values [T_c, T_l^T, T_r^T, T_u^T, T_b^T], and each channel-wise vector in the second output branch corresponds to the blending weights [w_1, w_2, ..., w_N] utilized for the local dilations in Equation (6).
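A minimal sketch of Equations (5) and (6) follows. It is our illustration rather than the authors' code; the wing lengths, the N = 3 dilation levels, and softmax-normalized blending weights are assumptions consistent with, but not stated verbatim in, the text.

```python
import numpy as np

def global_dilation(P_a, n_l=16, n_r=48):
    """Eq. (5): signed pan amount divided by the long-wing length (assumed lengths)."""
    return P_a / n_r if P_a > 0 else P_a / n_l

def blend_dilated_views(filtered, weights):
    """Eq. (6): per-pixel blend of N images filtered at different dilations.

    filtered: (N, H, W) stack of t-convolved images, one per dilation
              d_i = (1 + (1 - i) / N) * g_d, i = 1..N.
    weights:  (N, H, W) blending maps, assumed softmax-normalized over axis 0.
    """
    return (weights * filtered).sum(axis=0)        # shape (H, W)

# Illustrative use: N = 3 dilations derived from the global dilation g_d.
g_d = global_dilation(P_a=153.0)
N = 3
dilations = [(1 + (1 - i) / N) * g_d for i in range(1, N + 1)]  # g_d, 2g_d/3, g_d/3
```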
As our t-net is devised to generate arbitrarily panned novel views, feeding the pan amount as a 1-channel constant feature map (P_a(p) = P_a for all p) helps the network take into account the varying pan direction and the amount of occlusion in the 3D panned output. The effect of feeding the pan amount is further discussed in Appendix A-1.

Super-resolution (SR) block. As generating a full-resolution dilation-adaptive t-kernel would be computationally too expensive, we propose to estimate it at a low resolution (LR) for the synthesis of a half-resolution novel view, and then to apply deep-learning-based SR techniques to bring the LR novel view to the high (or original) resolution (HR). In comparison, in Deep3D and SepConv, the estimated LR kernel is upscaled to the HR with conventional methods and then applied to the input image(s), which is a costly operation, as it is carried out in the HR dimensions, and can lead to blurred areas, as the kernel is just bilinearly interpolated.

Figure 7: (a) Shifted-LR versions of the center view contain different information, as they are sampled from different groups of pixels via bilinear interpolation depending on the stride (controlled by the maximum disparity). (b) Our light sr-block. All convs have 3x3 kernels unless otherwise specified.

Table 1: Stereoscopic view synthesis performance on the 400 KITTI2015 training images (left) and the 500 CityScapes validation images (right). lp: perceptual loss. Arrows indicate the direction of better performance.

Model              | training dataset | loss  | RMSE↓ | PSNR↑ | SSIM↑ | RMSE↓ | PSNR↑ | SSIM↑
Deep3D             | K                | l1    | 26.13 | 20.07 | 0.637 | 30.10 | 18.82 | 0.655
Deep3D-B           | K                | l1+lp | 26.00 | 20.10 | 0.633 | 31.34 | 18.46 | 0.636
SepConv            | K                | l1    | 27.22 | 19.73 | 0.633 | 27.77 | 19.54 | 0.660
SepConv-D          | K                | l1+lp | 26.36 | 20.02 | 0.626 | 29.66 | 18.95 | 0.647
monster-net        | K                | l1+lp | 25.61 | 20.24 | 0.641 | 20.28 | 22.34 | 0.710
monster-net        | K+CS             | l1    | 24.11 | 20.76 | 0.667 | 12.87 | 26.36 | 0.816
monster-net (full) | K+CS             | l1+lp | 24.44 | 20.64 | 0.651 | 13.12 | 26.20 | 0.805
monster-net        | K+CS+VL          | l1+lp | 24.62 | 20.55 | 0.645 | -     | -     | -

In our proposed pipeline, instead of utilizing common single-image SR methods such as (Dong et al., 2015; Shi et al., 2016; Kim et al., 2016), we propose to apply a stereo-SR method. The stereo-SR technique in (Jeon et al., 2018) takes an LR stereo pair (left and right views) as input and progressively shifts the right view, producing a stack that is concatenated with the left view and later processed by a CNN to obtain the super-resolved left view. This process is carried out at an arbitrary and fixed stride (e.g., 1 pixel at every step of the stack) and does not take into account the maximum disparity between the input views. For our Deep 3D Pan pipeline, we propose to use the maximum disparity prior, which can be obtained from the long wing of the t-kernel, to dynamically set the shifting stride. Additionally, instead of interpolating and processing the low-resolution panned view I_{t_o}(p) in the HR dimensions, we progressively shift and then downscale the high-resolution center view I_c by a factor of 2. This allows our sr-block to operate in the LR dimensions without performance degradation, as high-frequency information along the horizontal axis is not lost but distributed along the levels of the shifted center view stack, as depicted in Figure 7-(a). Our sr-block, depicted in Figure 7-(b), is a simple yet effective module that takes as input the LR I_{t_o} view and the shifted-downscaled center view stack I^n_{cs} described by

I^n_{cs} = g(I_c, (n P_a / N_s) max(D_p))    (8)

where g(I, s) is an s-strided horizontal-shift and 2x down-scaling operator applied on image I.
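The operator g(I, s) of Equation (8) and the resulting stack can be sketched as follows. This is our minimal NumPy illustration; the bilinear border handling, the even image dimensions, and the 0-based indexing of the stack levels n are assumptions.

```python
import numpy as np

def g_shift_downscale(image, s):
    """Eq. (8)'s operator g(I, s): horizontal shift by a (possibly fractional)
    stride s via bilinear interpolation, followed by 2x average-pool downscaling."""
    H, W = image.shape
    xs = np.arange(W, dtype=np.float32) - s            # sample positions after shift
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 1)
    x1 = np.clip(x0 + 1, 0, W - 1)
    frac = np.clip(xs - x0, 0.0, 1.0)
    shifted = (1.0 - frac) * image[:, x0] + frac * image[:, x1]
    # 2x down-scaling by averaging 2x2 blocks (assumes even H and W).
    return shifted.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

def shifted_stack(I_c, P_a, D_p_max, N_s=32):
    """Stack {I^n_cs} of Eq. (8): level n shifted by (n * P_a / N_s) * max(D_p)."""
    return np.stack([g_shift_downscale(I_c, (n * P_a / N_s) * D_p_max)
                     for n in range(N_s)])
```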
The stride s can take any real value, and the resulting image is obtained via bilinear interpolation. N_s is the depth of the stack (set to N_s = 32 in all our experiments). The stack is concatenated with the LR I_{t_o} and passed through four Conv-ReLU layers followed by a residual connection, as shown in Figure 7-(b). The final step up-scales the resulting features to the target resolution via nearest-neighbor interpolation followed by a convolutional layer. The last layer reduces the number of channels to three for the final RGB output I_o. Nearest-neighbor upscaling was adopted as it yields no checkerboard artifacts, in contrast with transposed or sub-pixel convolutions (Niklaus et al., 2017).

Figure 8: Comparison against the state-of-the-art methods for stereoscopic view synthesis.

4 EXPERIMENTS AND RESULTS

To demonstrate the effectiveness of our "t-shaped" dilation-adaptive kernel, we performed several experiments on the challenging KITTI2012 (Geiger et al., 2012), KITTI2015 (Menze & Geiger, 2015), and CityScapes (Cordts et al., 2016) datasets. As these stereo datasets only contain outdoor scenes, we also performed experiments on our indoor dataset, called the VICLAB STEREO dataset. Surprisingly, to the best of our knowledge, this is the first available stereo dataset that focuses on indoor scenes; it is planned to be made publicly available for research. Additionally, our formulation of global and local adaptive dilations allows our monster-net to be trained on multiple stereo datasets at the same time, even if they have different baselines. Instead of over-fitting to a single camera baseline like the previous methods (Xie et al., 2016; Godard et al., 2017; Zhang et al., 2019; Luo et al., 2018), our monster-net can build knowledge when trained simultaneously on many datasets. To the best of our knowledge, our Deep 3D Pan pipeline is the first method designed to be trained on multiple-baseline datasets concurrently for the stereoscopic view synthesis problem, in which unsupervised monocular depth estimation is even involved. For more details on the datasets and multi-dataset training, please see Appendix A-3.

We compare our monster-net against the stereoscopic view synthesis state of the art: Deep3D (Xie et al., 2016) and a version of SepConv (Niklaus et al., 2017) modified for right-view synthesis. Firstly, for a fair comparison, the backbone convolutional auto-encoders of Deep3D and SepConv were set up to be equivalent to our t-net's, that is, a six-stage encoder-decoder with skip connections and residual blocks on the encoder side. Secondly, we compare our monster-net with Deep3D-B, a "Bigger" version of Deep3D where, instead of the 32 elements in the 1D kernel of the original work, we use 49 elements to match the number of horizontal kernel values in our t-net. Thirdly, we compare against SepConv-D, a dilated version of SepConv such that the receptive field of the separable convolutions has a size of 153x153. The Deep3D and SepConv models are trained without the perceptual loss, as in their original works. For a more meaningful comparison, Deep3D-B and SepConv-D are trained with a combination of the l1 and perceptual losses lp (Johnson et al., 2016), and they demonstrate that a better loss function than l1 alone does not contribute enough to the stereoscopic view synthesis problem.
For more implementation details, refer to Appendix A-4. Additionally, we compare the quality of the disparity embedded in the long wing of the "t-shaped" kernel with that of the state-of-the-art models for the monocular depth estimation task. For that, we first define a disparity refinement sub-network that uses the primitive disparity obtained from the long wing of the "t-shaped" kernel as prior information. Secondly, we define a special post-processing (spp) step which, instead of relying on a naive element-wise summation as in Godard et al. (2017), takes into account the ambiguities of the first and second forward passes to generate a remarkably sharp and consistent disparity map. For more details on the refinement block and our special post-processing, refer to Appendix A-2.

Table 2: Depth metrics (Eigen et al., 2014) for KITTI2015. Models are trained with video (V), stereo (S), semi-global matching (SMG) or GT depth (Supp). Top models in terms of a1 accuracy are highlighted. Simplified table; see Appendix A.9 for the full version.

Model                          | Supp | V | S | dataset | abs rel↓ | sq rel↓ | rms↓  | log rms↓ | a1↑   | a2↑   | a3↑
Wang et al. (2019a) (9-view)   |      | x |   | K       | 0.112    | 0.418   | 2.320 | 0.153    | 0.882 | 0.974 | 0.992
Tosi et al. (2019) (pp)        | SMG  |   | x | K+CS    | 0.096    | 0.673   | 4.351 | 0.184    | 0.890 | 0.961 | 0.981
ours with refine block (spp)   |      |   | x | K+CS    | 0.099    | 0.950   | 4.739 | 0.160    | 0.900 | 0.971 | 0.989
Gur & Wolf (2019)              | x    |   |   | K       | 0.110    | 0.666   | 4.186 | 0.168    | 0.880 | 0.966 | 0.988
Luo et al. (2018)              | x    |   |   | K       | 0.094    | 0.626   | 4.252 | 0.177    | 0.891 | 0.965 | 0.984
Wang et al. (2019a) (1/9-view) |      | x | x | K       | 0.088    | 0.245   | 1.949 | 0.127    | 0.915 | 0.984 | 0.996

4.1 RESULTS ON THE KITTI, CITYSCAPES AND THE VICLAB STEREO DATASETS

Table 1 shows the performance comparison between our method and previous works. It is important to mention that our monster-net performs inference on full-resolution images, while the previous approaches for single-view novel view synthesis perform estimation on reduced-resolution inputs. Our method outperforms the Deep3D baseline by a considerable margin of 0.7dB in PSNR, 2.0 in RMSE, and 0.03 in SSIM. The qualitative results are shown in Figure 8. Our method produces superior-looking images. In Deep3D and SepConv, many objects appear too blurred, such that their boundaries can hardly be recognized in the synthetic images (e.g., the motorcycle, persons, traffic signs, etc.). We challenged the models trained on KITTI (K) to perform inference on the CityScapes validation split (CS), and observed that our method generalizes much better than the Deep3D baseline, with up to 3dB higher PSNR. When training the monster-net on K+CS, we obtain an additional improvement of 4dB PSNR on the CS validation dataset. Incorporating an indoor dataset into our training pipeline is also possible, making our network applicable to a wide variety of scenarios. We added the VICLAB STEREO (VL) dataset to the training, that is, K+CS+VL, and observed little impact on the K dataset performance, as shown in Table 1. We also tested the performance of our monster-net on the validation split of the VL dataset. We observed that our full monster-net trained on K+CS achieved a mean PSNR of 19.92dB, while achieving a mean PSNR of 21.78dB when trained on K+CS+VL. For a network trained only on outdoor datasets, it is difficult to generalize to the indoor case, as the latter contains mainly homogeneous areas, whereas the outdoor case mainly contains texture-rich scenes.
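For reference, Table 2's columns are the standard depth metrics of Eigen et al. (2014); a minimal sketch of how they are commonly computed is given below (our illustration; the masking of invalid pixels and depth-range capping used in the actual evaluation may differ).

```python
import numpy as np

def eigen_depth_metrics(pred, gt):
    """Standard monocular depth metrics of Eigen et al. (2014), as in Table 2.

    pred, gt: positive depth arrays of identical shape (invalid pixels masked out).
    """
    thresh = np.maximum(pred / gt, gt / pred)
    a1 = (thresh < 1.25).mean()
    a2 = (thresh < 1.25 ** 2).mean()
    a3 = (thresh < 1.25 ** 3).mean()
    abs_rel = (np.abs(gt - pred) / gt).mean()
    sq_rel = (((gt - pred) ** 2) / gt).mean()
    rms = np.sqrt(((gt - pred) ** 2).mean())
    log_rms = np.sqrt(((np.log(gt) - np.log(pred)) ** 2).mean())
    return dict(abs_rel=abs_rel, sq_rel=sq_rel, rms=rms,
                log_rms=log_rms, a1=a1, a2=a2, a3=a3)
```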
Visualizations on CS and VL, and ablation studies that prove the efficacy of each of our design choices, can be found in Appendices A-5, A-6 and A-8.

4.2 RESULTS ON DISPARITY ESTIMATION

With the addition of a relatively shallow disparity refinement sub-network, the monster-net remarkably outperforms all the state-of-the-art models for the unsupervised monocular depth estimation task, as shown in Table 2. Our monster-net with disparity refinement even outperforms supervised monocular disparity estimation methods such as (Luo et al., 2018; Gur & Wolf, 2019) and multiple-view unsupervised methods such as (Wang et al., 2019a; Ranjan et al., 2019).

5 CONCLUSION

We presented an adaptive "t-shaped" kernel equipped with globally and locally adaptive dilations for the Deep 3D Pan problem, defined as the task of arbitrarily shifting the camera position along the X-axis for stereoscopic view synthesis. Our proposed monster-net showed superior performance over the SOTA for right-view generation on the KITTI and CityScapes datasets. Our monster-net also showed very good generalization capabilities, with a 3dB gain in PSNR against the Deep3D baseline. In addition, our method presents no discontinuities, consistent geometries, good contrast, and natural-looking left or right synthetic panned images. Our monster-net can be extended to image registration, monocular-to-stereo video conversion, and the generation of novel views at any camera translation by simply allowing a pixel-wise rotation of our "t-shaped" kernel.

ACKNOWLEDGMENTS

This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-00419, Intelligent High Realistic Visual Processing for Smart Broadcasting Media).<|im_end|> <|im_start|>assistant ### Review Title Official Blind Review #1 ### Review Text The paper considers the problem of performing stereoscopic view synthesis (i.e., generating a new view seen from a different camera position) at an arbitrary position along the X-axis from a single input image only. This is an important problem as it enables 3D visualization of a 2D input scene. The paper focuses on the particular problem of generating a stereoscopic view from a single image (i.e., a right and left view from a center image). For this purpose, the paper proposes a t-net architecture, which is an autoencoder- or U-net-like architecture that estimates the values for the t-convolutions proposed in the paper. The network (called monster-net) takes a center image and a pan amount as input, and from those synthesizes the image with the respective view. The paper demonstrates that their idea of t-convolutions outperforms recent competing approaches such as Deep3D on available datasets as well as on an in-house collected dataset. The figures provided demonstrate that the views generated by the proposed monster-net visibly look slightly better than those generated by the competing approaches Deep3D and SepConv. In addition, the paper is well written and easy to follow. I therefore recommend acceptance of this paper. I would like to emphasize that while I work in deep learning, I don't work on view synthesis and therefore it is difficult for me to evaluate the novelty of the proposed approach as well as the difficulty of the problem. ### Review Rating 6: Weak Accept ### Review Confidence <|im_end|> <|im_end|>
B1e8CsRctX
ICLR.cc/2019/Conference
2019
Generative Ensembles for Robust Anomaly Detection
["Hyunsun Choi", "Eric Jang"]
Deep generative models are capable of learning probability distributions over large, high-dimensional datasets such as images, video and natural language. Generative models trained on samples from p(x) ought to assign low likelihoods to out-of-distribution (OoD) samples from q(x), making them suitable for anomaly detection applications. We show that in practice, likelihood models are themselves susceptible to OoD errors, and even assign large likelihoods to images from other natural datasets. To mitigate these issues, we propose Generative Ensembles, a model-independent technique for OoD detection that combines density-based anomaly detection with uncertainty estimation. Our method outperforms ODIN and VIB baselines on image datasets, and achieves comparable performance to a classification model on the Kaggle Credit Fraud dataset.
["Anomaly Detection", "Uncertainty", "Out-of-Distribution", "Generative Models"]
ABSTRACT

Deep generative models are capable of learning probability distributions over large, high-dimensional datasets such as images, video and natural language. Generative models trained on samples from p(x) ought to assign low likelihoods to out-of-distribution (OoD) samples from q(x), making them suitable for anomaly detection applications. We show that in practice, likelihood models are themselves susceptible to OoD errors, and even assign large likelihoods to images from other natural datasets. To mitigate these issues, we propose Generative Ensembles, a model-independent technique for OoD detection that combines density-based anomaly detection with uncertainty estimation. Our method outperforms the Out-of-DIstribution detector for Neural networks (ODIN) and Variational Information Bottleneck (VIB) baselines on image datasets, and achieves comparable performance to a classification model on the Kaggle Credit Fraud dataset.

1 INTRODUCTION

Knowing when a machine learning (ML) model is qualified to make predictions on an input is critical to the safe deployment of ML technology in the real world. When training and test distributions differ, neural networks may provide, with high confidence, arbitrary predictions on inputs that they are unaccustomed to seeing. To mitigate these Out-of-Distribution (OoD) errors, we require methods to determine whether a given input is sampled from a different stochastic generator than the one used to train the model.

OoD detection techniques have broad applications beyond the safe deployment of ML technology. As datasets for ML grow ever larger and trend towards automated data collection, we require scalable methods for identifying outliers and quantifying noise before we can attempt to train models on that data. Identifying anomalies in data is a crucial feature of many data-driven applications, such as credit fraud detection and monitoring patient data in medical settings.

Generative modeling algorithms have improved dramatically in recent years, and are capable of learning probabilistic models over large, high-dimensional datasets such as images, video, and natural language (Vaswani et al., 2017; Wang et al., 2018). A generative model p_θ(x), parameterized by random variable θ and trained to approximate the data distribution p(x), ought to assign low likelihoods to samples from any distribution q(x) that differs from p(x). Density estimation does not presuppose a specific "alternate" distribution at training time, making it an attractive alternative to classification-based anomaly detection methods.

In this work, we apply several classes of generative models to OoD detection problems and demonstrate a significant shortcoming of high-dimensional density estimation models: the anomaly detection model itself may be misspecified. Explicit likelihood models can, in practice, assign high likelihoods to adversarial examples, random noise, and even other natural image datasets. We also illustrate how GAN discriminators presuppose a particular OoD distribution, which makes them particularly fragile at OoD classification. We propose Generative Ensembles, which combine density estimation with uncertainty estimation to detect OoD inputs in a robust manner.
Generative Ensembles are model-independent and are trained independently of the task-specific ML model of interest. Our method outperforms task-specific OoD baselines on the majority of evaluated OoD tasks and demonstrates competitive results with discriminative classification approaches on the Kaggle Credit Fraud dataset.

2 GENERATIVE ENSEMBLES

We consider several classes of generative modeling techniques in our experiments. Autoregressive models and Normalizing Flows (NF) are fully-observed likelihood models that construct a tractable log-likelihood approximation to the data-generating density p(x) (Uria et al., 2016; Dinh et al., 2014; Rezende & Mohamed, 2015). Variational Autoencoders (VAE) are latent variable models that maximize a variational lower bound on the log density (Kingma & Welling, 2013; Rezende et al., 2014). Finally, Generative Adversarial Networks (GAN) are implicit density models that minimize a divergence metric between p(x) and the generative distribution q_θ(x) (Goodfellow et al., 2014). We refer to a GAN's generative distribution as q_θ(x) (in lieu of p_θ(x)) because, from the GAN discriminator's point of view, the outputs of the generator are OoD and depend on θ.

Although log p(x) and its lower bounds are proper scoring methods (Lakshminarayanan et al., 2017), we approximate them in practice with continuous-valued neural network function approximators log p_θ(x). Neural networks have non-smooth predictive distributions, which makes them susceptible to malformed inputs that exploit idiosyncratic computation within the model (Szegedy et al., 2013). Likelihood function approximators are no exception. When judging natural images, we assume an OoD input x ~ q(x) should remain OoD within some L_p-norm ball, and yet a Fast Gradient Sign Method (FGSM) attack (Goodfellow et al., 2015) on the predictive distribution can realize extremely high likelihood predictions (Nguyen et al., 2015). Conversely, a FGSM attack in the reverse direction on an in-distribution sample x ~ p(x) creates a perceptually identical input with low likelihood predictions (Kos et al., 2018). To make matters worse, we show in Figure 1 that likelihood models can be fooled by OoD samples that are not even adversarial by construction, such as SVHN test images under a likelihood model trained on CIFAR-10. Concurrent work by Nalisnick et al. (2018) also shows this phenomenon and presents additional analyses of why generative models systematically assign higher likelihoods to SVHN.

Figure 1: Left: density estimation models are not robust to OoD inputs. A GLOW model (Kingma & Dhariwal, 2018) trained on CIFAR-10 assigns much higher likelihoods to samples from SVHN than to samples from CIFAR-10. Right: We use ensembles of generative models to implement the Watanabe-Akaike Information Criterion (WAIC), which combines density estimation with uncertainty estimation. Histograms correspond to predictions over the test sets of each dataset.

Generative Ensembles detect OoD examples by combining a density evaluation model with predictive uncertainty estimation on the density model via the ensemble variance, as sketched below. Following the results of Lakshminarayanan et al. (2017), we elect to use independently trained ensembles instead of a Bayesian Dropout approximation (Gal & Ghahramani, 2016).
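The ensemble scoring rule (formalized as the WAIC in Equation (1) below) is simple to state in code; the following is our minimal sketch, in which the model interface and the threshold selection are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def waic_score(log_liks):
    """Ensemble OoD score per Eq. (1) below: WAIC(x) = E[log p(x)] - Var[log p(x)].

    log_liks: array of shape (K, B) holding log-likelihoods of B inputs under
    K independently trained generative models. Lower scores = more anomalous.
    """
    return log_liks.mean(axis=0) - log_liks.var(axis=0)

# Hypothetical usage with K = 5 models exposing a .log_prob(x) method:
# log_liks = np.stack([m.log_prob(x_batch) for m in models])
# scores = waic_score(log_liks)
# flagged = scores < threshold   # threshold chosen on held-out in-distribution data
```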
For generative models that admit exact likelihoods (or variational approximations), the ensemble can be used to implement the Watanabe-Akaike Information Criterion (WAIC), which consists of a density estimation score with a Bayesian correction term for model bias (Watanabe, 2010):

WAIC(x) = E_θ[log p_θ(x)] - Var_θ[log p_θ(x)]    (1)

2.1 OOD DETECTION WITH GAN DISCRIMINATORS

We describe how to construct Generative Ensembles based on implicit density models such as GANs, and highlight the importance of OoD detection approaches that do not presuppose a specific OoD distribution. A discriminative model tasked with classifying between p(x) and q_θ(x) is fragile to inputs that lie in neither distribution. Figure 2b illustrates a simple 2D density modeling task where individual GAN discriminators, when trained to convergence, learn a discriminative boundary that does not adequately capture p(x).

However, unlike discriminative anomaly classifiers on static datasets, which model p(x)/q(x), the likelihood ratio p(x)/q_θ(x) implicitly assumed by a GAN discriminator is uniquely randomized by the GAN training dynamics on θ. By training an ensemble of GANs we can estimate the posterior distribution over model decision boundaries p(x)/q_θ(x), or equivalently, the posterior distribution over alternate distributions q_θ(x). In other words, we can use uncertainty estimation on randomly sampled discriminators to de-correlate the OoD classification errors made by a single discriminator (Figure 2c).

(a) Normalizing Flow (b) Independent Discriminators (c) Ensemble Variance (GAN)

Figure 2: In this toy example, we learn generative models for a 2D multivariate normal with identity covariance centered at (5, 5). (a) Explicit density models such as Normalizing Flows concentrate probability mass at the data distribution. (b) Four independently trained GANs learn random discriminative boundaries, each corresponding to a different implied generator distribution. To ensure that the GAN discriminators form a clear discriminative boundary between p(x) and q_θ(x), we train the discriminators for an additional 10k steps to convergence. Each of these boundaries fails to enclose the true data distribution. (c) Predictive uncertainty over an ensemble of discriminators "fences in" the shared, low-variance region corresponding to p(x).

3 RELATED WORK

We can categorize existing OoD detection techniques in Table 1 using two criteria: (1) Does the technique assume a specific anomaly distribution? (2) Is the technique specific to the model, or does it only depend on the inputs to the model?

A common approach to OoD detection (a.k.a. anomaly detection) is to label a dataset of anomalous data and train a binary classifier on that label. Alternatively, a classification task model may be augmented with a "None of the above" class. The classifier then learns a decision boundary (likelihood ratio) between p(x) and q(x). However, the discriminative approach to anomaly detection requires the anomaly distribution to be specified at training time; this is a severe flaw when anomalous data is rare (e.g., medical seizures) or non-stationary (e.g., generated by an adversary).

3.1 UNCERTAINTY ESTIMATION

OoD detection is closely related to the problem of uncertainty estimation, whose goal is to yield calibrated confidence measures for a model's predictive distribution p(y|x).
Well-calibrated uncertainty estimation integrates several forms of uncertainty into p(y|x): model misspecification uncertainty (OoD detection of invalid inputs), aleatoric uncertainty (irreducible input noise for valid inputs), and epistemic uncertainty (unknown model parameters for valid inputs). In this paper, we study OoD detection in isolation; instead of considering whether p(y|x) should be trusted for a given x, we are trying to determine whether x should be fed into p(y|x) at all.

Table 1: Categorization of several OoD detection techniques, based on whether they depend on a specific model/task, and whether they assume a specific anomaly distribution.

                | Model-Dependent | Model-Independent
OoD Dependent   | Auxiliary "Other" class; Adversarial Training | Binary classification (likelihood ratio)
OoD Independent | Hendrycks & Gimpel (2016); Gal & Ghahramani (2016); Liang et al. (2017); Lakshminarayanan et al. (2017); Alemi et al. (2018b) | Density Estimation; Generative Ensembles (ours)

Predictive uncertainty estimation is a model-dependent OoD technique because it depends on task-specific information (such as labels and the task model architecture) in order to yield an integrated estimate of uncertainty. ODIN (Liang et al., 2017), MC Dropout (Gal & Ghahramani, 2016) and DeepEnsemble (Lakshminarayanan et al., 2017) model a calibrated predictive distribution for a classification task. The variational information bottleneck (VIB) (Alemi et al., 2018b) performs divergence estimation in latent space to detect OoD inputs, but is technically a model-dependent technique because the latent code is trained jointly with the downstream classification task.

One limitation of model-dependent OoD techniques is that they may discard information about p(x) when learning the task-specific loss function p(y|x). Consider a contrived binary classification model on images that learns to solve the task perfectly by discarding all information except the contents of the first pixel (no other information is preserved in the features). Subsequently, the model yields confident predictions on any distribution that happens to preserve identical first-pixel statistics. In contrast, density estimation in data space x considers the structure of the entire input manifold, without bias towards a particular downstream task or task-specific compression.

In our work we estimate the predictive uncertainty of the scoring model itself. Unlike predictive uncertainty methods applied to the task model's predictions, Generative Ensembles do not require task-specific labels to train. Furthermore, model-independent OoD detection aids the interpretation of predictive uncertainty by isolating the uncertainty component arising from OoD inputs.

3.2 ADVERSARIAL DEFENSE

Song et al. (2017) make the observation that adversarial examples designed to fool a downstream task have low likelihood under an independent generative model. They propose a "data purification" pipeline where inputs are first modified via gradient ascent on the model likelihood before being passed to the unmodified classifier. Their evaluations are restricted to L_p-norm attacks on in-distribution inputs to the task model, and do not take into account that the generative model itself may be susceptible to OoD errors. In fact, a preprocessing step with gradient ascent on model likelihood has the exact opposite of the desired effect when the input is OoD to begin with.

Our work considers adversarial defense in a broader OoD context.
Although the adversarial attacks literature typically considers small L_p-norm modifications to the input (demonstrating the alarming sensitivity of neural networks), there is in practice no such restriction on the degree to which an input can be perturbed in a test setting. Adversarial defense is nothing more than making ML models robust to OoD inputs; whether they come from an attacker or not is irrelevant. We evaluate our methods on simple OoD transformations (flipping images), common ML datasets, and the adversarial setting where a worst-case input is created from a single model in the ensemble.

He et al. (2017) demonstrate that ensembling adversarial defenses does not completely mitigate the local sensitivity of neural networks. It is certainly plausible that sufficient search over a Generative Ensemble's predictions can find OoD inputs with both low variance and high likelihood. The focus of our work is to measure the extent to which uncertainty estimation improves robustness to model misspecification error, not to present a provably secure system. Having said that, model-independent OoD detection is easy to obfuscate in a practical ML security setting, since the user only has access to the task model. Furthermore, a Generative Ensemble's WAIC estimate can be made more robust by sampling additional models from the posterior over model parameters.

4 EXPERIMENTAL RESULTS

Following the experiments proposed by Liang et al. (2017) and Alemi et al. (2018b), we train OoD models on the MNIST, Fashion MNIST, and CIFAR-10 datasets, and evaluate anomaly detection on test samples from other datasets. In line with the aforementioned works, we measure anomaly detection capability based on the AUROC of the several scoring quantities shown in Table 2. Our proposed quantities include single Wasserstein GAN (WGAN) discriminators (Arjovsky et al., 2017) with fine-tuning (D), the ensemble variance of discriminators (Var(D)), likelihood models (log p(x)), and the WAIC estimated using an ensemble of likelihood models. We follow the protocol suggested by Lakshminarayanan et al. (2017) and use 5 independent models with different parameter initializations, trained on the full training set (no bootstrap). For likelihood estimators based on variational autoencoders (VAE), we also evaluate the rate term D_KL(q(z|x) || p(z)), which corresponds to the information loss between the latent inference distribution and the prior.

For the MNIST and Fashion MNIST datasets, we use a VAE to predict a 16-sample Importance Weighted AutoEncoder (IWAE) bound. We extend the VAE example code from Tensorflow Probability (Dillon et al., 2017) (https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/vae.py) to use a Masked Autoregressive Flow prior (Papamakarios et al., 2017), and train the model for 5k steps. Additional architectural details are found in Appendix B.

Our WGAN model's generator and discriminator share the same architectures as the VAE decoder and encoder, respectively. The discriminator has an additional linear projection layer for its prediction of the Wasserstein metric. To ensure that D represents a meaningful discriminative boundary between the two distributions, we freeze the generator and fine-tune the discriminator for an additional 4k steps on the stationary p(x) and q_θ(x). We also include Gaussian noise adversarially perturbed by FGSM on a single model (Adversarial).

For the CIFAR-10 WGAN experiments, we change the first filter size in the discriminator from 7 to 8.
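For reference, the k-sample IWAE bound used as the per-input score can be sketched as follows; this is our minimal illustration with hypothetical encoder/decoder/prior interfaces standing in for the TensorFlow Probability models, not the authors' code.

```python
import numpy as np

def iwae_bound(x, encoder, decoder, prior, k=16):
    """k-sample IWAE lower bound on log p(x) for one input, used as a score.

    encoder(x) -> distribution q(z|x) with .sample() and .log_prob(z)
    decoder(z) -> distribution p(x|z) with .log_prob(x)
    prior      -> distribution p(z) with .log_prob(z)
    These are assumed interfaces for the sake of illustration.
    """
    q = encoder(x)
    log_ws = []
    for _ in range(k):
        z = q.sample()
        log_ws.append(decoder(z).log_prob(x) + prior.log_prob(z) - q.log_prob(z))
    log_ws = np.array(log_ws)
    m = log_ws.max()                       # log-sum-exp for numerical stability
    return m + np.log(np.exp(log_ws - m).mean())
```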
For log-likelihood estimation, we train a vanilla GLOW model (Kingma & Dhariwal, 2018) for 250k steps, as we require a more powerful generative model to obtain good results.

The baseline methods are model-dependent and learn from the joint distribution of images and labels, while our methods use only images. For the VIB baseline, we use the rate term as the threshold variable. The experiments in Alemi et al. (2018b) make use of (28, 28, 5) "location-aware" features concatenated to the model inputs to assist in distinguishing spatial inhomogeneities in the data. In this work we train vanilla generative models with no special modifications, so for a fair comparison we also train VIB without location-aware features. For the CIFAR-10 experiments, we train VIB for 26 epochs, converging at 75.7% classification accuracy on the test set. All other experimental parameters for VIB are identical to those in Alemi et al. (2018b).

Despite being trained on strictly less data (no labels), our methods (in particular Generative Ensembles) outperform ODIN and VIB on most OoD tasks. The VAE rate term appears to be quite effective, outperforming likelihood and WAIC estimation in data space. It is robust to adversarial inputs on the same model, because the FGSM perturbation primarily minimizes the (larger) distortion component of the approximate likelihood. The performance of the VAE rate versus the VIB rate also suggests that latent codes learned from generative objectives are more useful for OoD detection than latent codes learned via a classification-specific objective.

4.1 FAILURE ANALYSIS

In this section we discuss the experiments in which Generative Ensembles performed poorly, and suggest simple fixes to address these issues.

Table 2: We train models on MNIST, Fashion MNIST, and CIFAR-10 and compare OoD classification ability to baseline methods using the threshold-independent Area Under ROC curve metric (AUROC). D corresponds to single WGAN discriminators with 4k fine-tuning steps on stationary p(x), q_θ(x). Var(D) is the uncertainty estimated by an ensemble of discriminators. Rate is the D_KL term in the VAE objective. log p(x) is a single likelihood model (VAE, GLOW). WAIC is the Watanabe-Akaike Information Criterion as estimated by the Generative Ensemble. ODIN results reproduced from Liang et al. (2017).
Best results for each task are shown in bold.

Train Dataset | OoD Dataset   | ODIN | VIB  | D    | Var(D) | Rate | log p(x) | WAIC
MNIST         | Omniglot      | 100  | 97.1 | 56.1 | 80.3   | 99.1 | 98.2     | 100
MNIST         | notMNIST      | 98.2 | 98.6 | 93.1 | 99.6   | 99.9 | 100      | 100
MNIST         | Fashion MNIST | N/A  | 85.0 | 83.1 | 99.9   | 94.7 | 100      | 100
MNIST         | Uniform       | 100  | 76.6 | 95.6 | 100    | 99.3 | 100      | 100
MNIST         | Gaussian      | 100  | 99.2 | 0.6  | 100    | 100  | 100      | 100
MNIST         | HFlip         | N/A  | 63.7 | 41.5 | 57.7   | 90.0 | 84.9     | 86.1
MNIST         | VFlip         | N/A  | 75.1 | 44.7 | 60.9   | 89.3 | 81.9     | 80.7
MNIST         | Adversarial   | N/A  | N/A  | 30.8 | 100    | 100  | 0        | 100
Fashion MNIST | Omniglot      | N/A  | 94.3 | 19.4 | 83.5   | 97.7 | 56.8     | 79.6
Fashion MNIST | notMNIST      | N/A  | 89.6 | 22.3 | 96.0   | 99.7 | 92.0     | 98.7
Fashion MNIST | MNIST         | N/A  | 94.1 | 70.1 | 74.7   | 97.1 | 42.3     | 76.6
Fashion MNIST | Uniform       | N/A  | 79.6 | 0    | 82.7   | 95.6 | 100      | 100
Fashion MNIST | Gaussian      | N/A  | 89.3 | 0    | 99.8   | 89.2 | 100      | 100
Fashion MNIST | HFlip         | N/A  | 66.7 | 58.0 | 54.1   | 72.4 | 59.4     | 62.3
Fashion MNIST | VFlip         | N/A  | 90.2 | 69.6 | 69.6   | 87.1 | 66.8     | 74.0
Fashion MNIST | Adversarial   | N/A  | N/A  | 0    | 100    | 100  | 0        | 100
CIFAR-10      | CelebA        | N/A  | 73.5 | 56.5 | 74.3   | N/A  | 99.7     | 99.9
CIFAR-10      | SVHN          | N/A  | 52.8 | 68.9 | 61.4   | N/A  | 7.5      | 100
CIFAR-10      | ImageNet32    | 81.6 | 70.1 | 47.1 | 62.9   | N/A  | 93.8     | 95.6
CIFAR-10      | Uniform       | 99.2 | 54.0 | 100  | 100    | N/A  | 100      | 100
CIFAR-10      | Gaussian      | 99.7 | 45.8 | 100  | 100    | N/A  | 100      | 100
CIFAR-10      | HFlip         | N/A  | 50.6 | 52.0 | 50.3   | N/A  | 50.1     | 50.0
CIFAR-10      | VFlip         | N/A  | 51.2 | 60.9 | 52.3   | N/A  | 50.6     | 50.4

In an earlier draft of this work, a VAE trained on Fashion MNIST performed poorly on all OoD datasets when using the log p(x) and WAIC metrics. This was surprising, since the same metrics performed well when the same VAE architecture was trained on MNIST. To explain this phenomenon, we show in Figure 3 the inputs and VAE-decoded outputs from the Fashion MNIST and MNIST test sets. Fashion MNIST images are reconstructed properly, while MNIST images are barely recognizable after decoding.

A VAE's training objective can be interpreted as the sum of a pixel-wise autoencoding loss (distortion) and a "semantic" loss (rate). Even though Fashion MNIST appears to be better reconstructed in a semantic sense, the distortion values between the Fashion MNIST and MNIST test datasets are numerically quite similar, as shown in Figure 3. Distortion terms make up the bulk of the IWAE predictions in our models, thus explaining why log p(x) was not very discriminative when classifying OoD MNIST examples.

Higgins et al. (2016) propose β-VAE, a simple modification to the standard VAE objective: log p(x|z) - β D_KL(q(z|x) || p(z)). β controls the relative balance between the rate and distortion terms during training. Setting β < 1 is a commonly prescribed fix for encouraging VAEs to approach the "autoencoding limit" and avoid posterior collapse (Alemi et al., 2018a). At test time, this results in higher-fidelity autoencoding at the expense of higher rates, which seems to be a more useful signal for identifying outliers than the total pixel distortion (as also suggested by Table 2, column 7). Re-training the ensemble with β = 0.1 encourages a higher distortion penalty during training, and thereby fixes the OoD detection model.

(a) Fashion MNIST (b) MNIST (OoD)

Figure 3: Top: Inputs and decoded outputs from a VAE trained on Fashion MNIST (β = 1) for Fashion MNIST (left) and MNIST (right). Although Fashion MNIST inputs appear to be better reconstructed (suggesting higher likelihoods), they have distortions comparable to MNIST. The bottom row shows that Fashion MNIST and MNIST test samples have comparable rate-distortion scatter plots and IWAE histograms.

4.2 CREDIT CARD ANOMALY DETECTION

We consider the problem of detecting fraudulent credit card transactions from the Kaggle Credit Fraud Challenge (Dal Pozzolo et al., 2015). A conventional approach to fraud detection is to include a small fraction of fraudulent transactions in the training set, and then learn a discriminative classifier.
Instead, we treat fraud detection as an anomaly detection problem where a generative model only sees normal credit card transactions at training time. This is motivated by realistic test scenarios, where an adversary is hardly restricted to generating data identically distributed to the training set.

We compare single likelihood models (16-sample IWAE) and Generative Ensembles (ensemble variance of IWAE) to a binary classifier baseline that has access to a training set of fraudulent transactions in Table 3. The classifier baseline is a fully-connected network with 2 hidden ReLU layers of 512 units, and is trained using a weighted sigmoid cross-entropy loss (positive weight = 580) with Dropout and RMSProp (learning rate 1e-5). The VAE encoder and decoder are fully-connected networks with single hidden layers (32 and 30 units, respectively), trained using Adam (learning rate 1e-3). Unsurprisingly, the classifier baseline performs best, because fraudulent test samples are distributed identically to fraudulent training samples. Even so, the single-model density estimation and the Generative Ensemble achieve reasonable results.

Table 3: Comparison of density-based anomaly detection approaches to a classification baseline on the Kaggle Credit Card Fraud Dataset. The test set consists of 492 fraudulent transactions and 492 normal transactions. Threshold-independent metrics include False Positives at 95% True Positives (FPR@95%TPR), Area Under ROC (AUROC), and Average Precision (AP). Density-based models (Single IWAE, WAIC) are trained only on normal credit card transactions, while the classifier is trained on normal and fraudulent transactions. Arrows denote the direction of better scores.

Method      | FPR@95%TPR↓ | AUROC↑ | AP↑
Classifier  | 4.0         | 99.1   | 99.3
Single IWAE | 15.7        | 94.6   | 92.0
WAIC        | 15.2        | 94.7   | 92.1

5 DISCUSSION AND FUTURE WORK

OoD detection is a critical piece of infrastructure for ML applications where the test data distribution is not known at training time. We present Generative Ensembles, a simple yet powerful technique for model-independent OoD detection that improves density models with uncertainty estimation. An important future direction of research is that of scalability: learning good generative models of semantically rich, high-dimensional inputs (e.g., video) is an active research area in its own right. An open question is whether an ensemble of weak generative models (where each model may not necessarily generate high-quality samples) can still yield density and uncertainty predictions useful enough for OoD detection. Preliminary evidence on CIFAR-10 is promising; although the ensemble average on the test set is 3.5 bits/dim and samples from the prior do not resemble any recognizable objects, the ensemble still performs well at OoD detection. In future work we will explore other methods of de-correlating samples from the posterior over model parameters, as well as combining independent scores (D, Rate, log p(x), WAIC) into a more powerful OoD model.
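As a closing illustration, the threshold-independent metrics of Table 3 can be computed from per-transaction scores as sketched below. This is our own sketch; it assumes scikit-learn is available and that higher scores mean more in-distribution (e.g., IWAE bounds or WAIC values from the ensemble).

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def evaluate_ood_scores(scores_normal, scores_fraud):
    """Threshold-independent evaluation of an anomaly score, as in Table 3."""
    y_true = np.concatenate([np.zeros_like(scores_normal),    # 0 = normal
                             np.ones_like(scores_fraud)])     # 1 = fraud
    y_score = -np.concatenate([scores_normal, scores_fraud])  # flip: high = anomalous
    return {"AUROC": roc_auc_score(y_true, y_score),
            "AP": average_precision_score(y_true, y_score)}
```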
SJxjubiqhm
Needs a lot of work on improving technical rigor and clarity
5: Marginally below acceptance threshold
Note to Area Chair: Another paper submitted to ICLR under the title "Do Deep Generative Models Know What They Don't Know?" shares several similarities with the current submission.

This paper highlights a deficiency of current generative models in detecting out-of-distribution samples based on likelihoods assigned by the model (in cases where the likelihoods are well-defined) or the discriminator distribution for GANs (where likelihoods are typically not defined). To remedy this deficiency, the paper proposes to use ensembles of generative models to obtain a robust WAIC criterion for anomaly detection.

My main concern is with the level of technical rigor of this work. Much of this has to do with the presentation, which reads to me more like a summary blog post rather than a technical paper.
- I couldn't find a formal specification of the anomaly detection setup and how generative models are used for this task anywhere in the paper.
- Section 2 seems to be the major contribution of this work. But it was very hard to understand what exactly is going on. What is the notation for the generative distribution? The Introduction uses p_theta. Page 2, Paragraph 1 uses q_theta(x). Eq. (1) uses p_theta and then the following paragraphs use q_theta.
- In Eq. (1), is theta a random variable?
- How are generative ensembles trained? All the paper says is "independently trained". Is the parameter initialization different? Is the dataset shuffling different? Is the dataset sampled with replacement (as in bootstrapping)?
- "By training an ensemble of GANs we can estimate the posterior distribution over model deciscion boundaries D_theta(x), or equivalently, the posterior distribution over alternate distributions q_theta. In other words, we can use uncertainty estimation on randomly sampled discriminators to de-correlate the OoD classification errors made by a single discriminator" Why is the discriminator parameterized by theta? What is an ensemble of GANs? Multiple generators or multiple discriminators or both? What are "randomly sampled discriminators"? What do the authors mean by "posterior distribution over alternate distributions"?

With regards to the technical assessment, I have the following questions for the authors:
- In Figure 1, how do the histograms look for the training distribution of CIFAR? If the histograms for train and test have an overlap much higher than the overlap between the train of CIFAR and the test set of any other distribution, then ensembling seems unnecessary and anomaly detection can simply be done by setting a maximum and a minimum threshold on the likelihood for a test point. In addition to the histograms, I'd be curious to see results with this baseline mechanism.
- Why should the WAIC criterion weigh the mean and variance equally?
- Did the authors actually try to fix the posterior collapse issue in Figure 3b using beta-VAEs as recommended? Given the simplicity of implementing beta-VAEs, this should be a rather easy experiment to include.

Minor typos:
- ODIN and VIB are not defined in the abstract
- Page 3: "deciscion"
- Page 2, para 2: "log_\theta p(x)"
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Generative Ensembles for Robust Anomaly Detection ### Paper Abstract Deep generative models are capable of learning probability distributions over large, high-dimensional datasets such as images, video and natural language. Generative models trained on samples from p(x) ought to assign low likelihoods to out-of-distribution (OoD) samples from q(x), making them suitable for anomaly detection applications. We show that in practice, likelihood models are themselves susceptible to OoD errors, and even assign large likelihoods to images from other natural datasets. To mitigate these issues, we propose Generative Ensembles, a model-independent technique for OoD detection that combines density-based anomaly detection with uncertainty estimation. Our method outperforms ODIN and VIB baselines on image datasets, and achieves comparable performance to a classification model on the Kaggle Credit Fraud dataset. ### Paper Keywords ["Anomaly Detection", "Uncertainty", "Out-of-Distribution", "Generative Models"] ### Paper Content ABSTRACT

Deep generative models are capable of learning probability distributions over large, high-dimensional datasets such as images, video and natural language. Generative models trained on samples from p(x) ought to assign low likelihoods to out-of-distribution (OoD) samples from q(x), making them suitable for anomaly detection applications. We show that in practice, likelihood models are themselves susceptible to OoD errors, and even assign large likelihoods to images from other natural datasets. To mitigate these issues, we propose Generative Ensembles, a model-independent technique for OoD detection that combines density-based anomaly detection with uncertainty estimation. Our method outperforms the Out-of-DIstribution detector for Neural networks (ODIN) and Variational Information Bottleneck (VIB) baselines on image datasets, and achieves comparable performance to a classification model on the Kaggle Credit Fraud dataset.

1 INTRODUCTION

Knowing when a machine learning (ML) model is qualified to make predictions on an input is critical to the safe deployment of ML technology in the real world. When training and test distributions differ, neural networks may provide, with high confidence, arbitrary predictions on inputs that they are unaccustomed to seeing. To mitigate these Out-of-Distribution (OoD) errors, we require methods to determine whether a given input is sampled from a different stochastic generator than the one used to train the model.

OoD detection techniques have broad applications beyond the safe deployment of ML technology. As datasets for ML grow ever larger and trend towards automated data collection, we require scalable methods for identifying outliers and quantifying noise before we can attempt to train models on that data. Identifying anomalies in data is a crucial feature of many data-driven applications, such as credit fraud detection and monitoring patient data in medical settings.

Generative modeling algorithms have improved dramatically in recent years, and are capable of learning probabilistic models over large, high-dimensional datasets such as images, video, and natural language (Vaswani et al., 2017; Wang et al., 2018). A generative model p_θ(x), parameterized by random variable θ and trained to approximate the data distribution p(x), ought to assign low likelihoods to samples from any distribution q(x) that differs from p(x).
Density estimation does notpresuppose a specific “alternate” distribution at training time, making it an attractive alternative toclassification-based anomaly detection methods.In this work, we apply several classes of generative models to OoD detection problems and demon-strate a significant shortcoming to high-dimensional density estimation models: the anomaly de-tection model itself may be mispecified . Explicit likelihood models can, in practice, realize highlikelihoods to adversarial examples, random noise, and even other natural image datasets. We alsoillustrate how GAN discriminators presuppose a particular OoD distribution, which makes them par-ticularly fragile at OoD classification. We propose Generative Ensembles, which combine densityestimation with uncertainty estimation to detect OoD in a robust manner. Generative Ensemblesare model-independent and are trained independently of the task-specific ML model of interest.Our method outperforms task-specific OoD baselines on the majority of evaluated OoD tasks anddemonstrate competitive results with discriminative classification approaches on the Kaggle CreditFraud dataset.1Under review as a conference paper at ICLR 20192 G ENERATIVE ENSEMBLESWe consider several classes of generative modeling techniques in our experiments. AutoregressiveModels and Normalizing Flows (NF) are fully-observed likelihood models that construct a tractablelog-likelihood approximation to the data-generating density p(x)(Uria et al., 2016; Dinh et al.,2014; Rezende & Mohamed, 2015). Variational Autoencoders (V AE) are latent variable modelsthat maximize a variational lower bound on log density (Kingma & Welling, 2013; Rezende et al.,2014). Finally, Generative Adversarial Networks (GAN) are implicit density models that minimize adivergence metric between p(x)and generative distribution q(x)(Goodfellow et al., 2014). We referto a GAN’s generative distribution as q(x)(in lieu ofp(x)) because from the GAN discriminator’spoint of view, the outputs of the generator are OoD and depend on .Although logp(x)and its lower bounds are proper scoring methods (Lakshminarayanan et al., 2017),we approximate them in practice with continuous-valued neural network function approximatorslogp(x). Neural networks have non-smooth predictive distributions, which makes them susceptibleto malformed inputs that exploit idiosyncratic computation within the model (Szegedy et al., 2013).Likelihood function approximators are no exception. When judging natural images, we assumean OoD input xq(x)should remain OoD within some LP-norm, and yet a Fast Gradient SignMethod (FGSM) attack (Goodfellow et al., 2015) on the predictive distribution can realize extremelyhigh likelihood predictions (Nguyen et al., 2015). Conversely, a FGSM attack in the reverse directionon an in-distribution sample xp(x)creates a perceptually identical input with low likelihoodpredictions (Kos et al., 2018). To make matters worse, we show in Figure 1 that likelihood modelscan be fooled by OoD samples that are not even adversarial by construction, such as SVHN testimages on a likelihood model trained on CIFAR-10. Concurrent work by Nalisnick et al. (2018)also show this phenomena and present additional analyses on why generative models systematicallyassign higher likelihoods to SVHN.Figure 1: Left: density estimation models are not robust to OoD inputs. A GLOW model (Kingma& Dhariwal, 2018) trained on CIFAR-10 assigns much higher likelihoods to samples from SVHNthan samples from CIFAR-10. 
Right: We use ensembles of generative models to implement the Watanabe-Akaike Information Criterion (WAIC), which combines density estimation with uncertainty estimation. Histograms correspond to predictions over test sets from each dataset.
Generative Ensembles detect OoD examples by combining a density evaluation model with predictive uncertainty estimation on the density model via ensemble variance. Following the results of Lakshminarayanan et al. (2017), we elect to use independently trained ensembles instead of a Bayesian Dropout approximation (Gal & Ghahramani, 2016). For generative models that admit exact likelihoods (or variational approximations), the ensemble can be used to implement the Watanabe-Akaike Information Criterion (WAIC), which consists of a density estimation score with a Bayesian correction term for model bias (Watanabe, 2010):

WAIC(x) = E_θ[log p_θ(x)] − Var_θ[log p_θ(x)]   (1)

2.1 OOD DETECTION WITH GAN DISCRIMINATORS
We describe how to construct Generative Ensembles based on implicit density models such as GANs, and highlight the importance of OoD detection approaches that do not presuppose a specific OoD distribution. A discriminative model tasked with classifying between p(x) and q(x) is fragile to inputs that lie in neither distribution. Figure 2b illustrates a simple 2D density modeling task where individual GAN discriminators – when trained to convergence – learn a discriminative boundary that does not adequately capture p(x).
However, unlike discriminative anomaly classifiers on a static dataset, which model p(x)/q(x), the likelihood ratio p(x)/q_θ(x) implicitly assumed by a GAN discriminator is uniquely randomized by GAN training dynamics on θ. By training an ensemble of GANs we can estimate the posterior distribution over model decision boundaries p(x)/q_θ(x), or equivalently, the posterior distribution over alternate distributions q_θ(x). In other words, we can use uncertainty estimation on randomly sampled discriminators to de-correlate the OoD classification errors made by a single discriminator (Figure 2c).
(a) Normalizing Flow (b) Independent Discriminators (c) Ensemble Variance (GAN)
Figure 2: In this toy example, we learn generative models for a 2D multivariate normal with identity covariance centered at (5, 5). (a) Explicit density models such as Normalizing Flows concentrate probability mass at the data distribution. (b) Four independently trained GANs learn random discriminative boundaries, each corresponding to a different implied generator distribution. To ensure that the GAN discriminators form a clear discriminative boundary between p(x) and q(x), we train the discriminators an additional 10k steps to convergence. Each of these boundaries fails to enclose the true data distribution. (c) Predictive uncertainty over an ensemble of discriminators “fences in” the shared, low-variance region corresponding to p(x).
3 RELATED WORK
We can categorize existing OoD detection techniques in Table 1 using two criteria: (1) Does it assume a specific anomaly distribution? (2) Is the technique specific to the model, or does it only depend on the inputs to the model?
A common approach to OoD detection (a.k.a. anomaly detection) is to label a dataset of anomalous data and train a binary classifier on that label. Alternatively, a classification task model may be augmented with a “None of the above” class. The classifier then learns a decision boundary (likelihood ratio) between p(x) and q(x).
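To make the WAIC score of Eq. (1) above concrete, here is a minimal numerical sketch of how it could be computed from an ensemble; the array shapes, the K = 5 ensemble size, and the quantile threshold are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def waic_score(log_probs: np.ndarray) -> np.ndarray:
    """WAIC(x) = E_theta[log p_theta(x)] - Var_theta[log p_theta(x)].

    log_probs has shape (K, N): log-likelihoods from K independently
    trained density models evaluated on N inputs. Lower scores
    suggest the input is more likely out-of-distribution.
    """
    return log_probs.mean(axis=0) - log_probs.var(axis=0)

# Toy usage with a fake ensemble of K = 5 models scoring N = 4 inputs.
rng = np.random.default_rng(0)
fake_log_probs = rng.normal(loc=-3500.0, scale=20.0, size=(5, 4))
scores = waic_score(fake_log_probs)
flagged = scores < np.quantile(scores, 0.05)  # threshold would be tuned on held-out data
print(scores, flagged)
```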
However, the discriminative approach to anomaly detection requires the anomaly distribution to be specified at training time; this is a severe flaw when anomalous data is rare (e.g. medical seizures) or non-stationary (e.g. generated by an adversary).
3.1 UNCERTAINTY ESTIMATION
OoD detection is closely related to the problem of uncertainty estimation, whose goal is to yield calibrated confidence measures for a model’s predictive distribution p(y|x). Well-calibrated uncertainty estimation integrates several forms of uncertainty into p(y|x): model misspecification uncertainty (OoD detection of invalid inputs), aleatoric uncertainty (irreducible input noise for valid inputs), and epistemic uncertainty (unknown model parameters for valid inputs). In this paper, we study OoD detection in isolation; instead of considering whether p(y|x) should be trusted for a given x, we are trying to determine whether x should be fed into p(y|x) at all.
Table 1: Categorization of several OoD detection techniques, based on whether they depend on a specific model/task, and whether they assume a specific anomaly distribution.

| | Model-Dependent | Model-Independent |
|---|---|---|
| OoD Dependent | Auxiliary “Other” class; Adversarial Training | Binary classification (likelihood ratio) |
| OoD Independent | Hendrycks & Gimpel (2016); Gal & Ghahramani (2016); Liang et al. (2017); Lakshminarayanan et al. (2017); Alemi et al. (2018b) | Density Estimation; Generative Ensembles (ours) |

Predictive uncertainty estimation is a model-dependent OoD technique because it depends on task-specific information (such as labels and task model architecture) in order to yield an integrated estimate of uncertainty. ODIN (Liang et al., 2017), MC Dropout (Gal & Ghahramani, 2016) and DeepEnsemble (Lakshminarayanan et al., 2017) model a calibrated predictive distribution for a classification task. Variational information bottleneck (VIB) (Alemi et al., 2018b) performs divergence estimation in latent space to detect OoD, but is technically a model-dependent technique because the latent code is trained jointly with the downstream classification task.
One limitation of model-dependent OoD techniques is that they may discard information about p(x) in learning the task-specific loss function p(y|x). Consider a contrived binary classification model on images that learns to solve the task perfectly by discarding all information except the contents of the first pixel (no other information is preserved in the features). Subsequently, the model yields confident predictions on any distribution that happens to preserve identical first-pixel statistics. In contrast, density estimation in data space x considers the structure of the entire input manifold, without bias towards a particular downstream task or task-specific compression.
In our work we estimate predictive uncertainty of the scoring model itself. Unlike predictive uncertainty methods applied to the task model’s predictions, Generative Ensembles do not require task-specific labels to train. Furthermore, model-independent OoD detection aids interpretation of predictive uncertainty by isolating the uncertainty component arising from OoD inputs.
3.2 ADVERSARIAL DEFENSE
Song et al. (2017) make the observation that adversarial examples designed to fool a downstream task have low likelihood under an independent generative model. They propose a “data purification” pipeline where inputs are first modified via gradient ascent on model likelihood, before passing it to the unmodified classifier.
Their evaluations are restricted to Lp-norm attacks on in-distributioninputs to the task model, and do not take into account that the generative model itself may be sus-ceptible to OoD errors. In fact, a preprocessing step with gradient ascent on model likelihood hasthe exact opposite of the desired effect when the input is OoD to begin with.Our work considers adversarial defense in a broader OoD context. Although adversarial attacksliterature typically considers small Lp-norm modifications to input (demonstrating the alarmingsensitivity of neural networks), there is no such restriction in practice to the degree with which aninput can be perturbed in a test setting. Adversarial defense is nothing more than making ML modelsrobust to OoD inputs; whether they come from an attacker or not is irrelevant. We evaluate our meth-ods on simple OoD transformations (flipping images), common ML datasets, and the adversaraialsetting where a worst-case input is created from a single model in the ensemble.He et al. (2017) demonstrate that ensembling adversarial defenses does not completely mitigatelocal sensitivity of neural networks. It is certainly plausible that sufficient search over a Generative4Under review as a conference paper at ICLR 2019Ensemble’s predictions can find OoD inputs with both low variance and high likelihood. The focusof our work is to measure the extent to which uncertainty estimation improves robustness to modelmispecification error, not to present a provably secure system. Having said that, model-independentOoD detection is easy to obfuscate in a practical ML security setting since the user only has accessto the task model. Furthermore, a Generative Ensemble’s WAIC estimate can be made more robustby sampling additional models from the posterior over model parameters.4 E XPERIMENTAL RESULTSFollowing the experiments proposed by Liang et al. (2017) and Alemi et al. (2018b), we train OoDmodels on MNIST, Fashion MNIST, CIFAR-10 datasets, and evaluate anomaly detection on testsamples from other datasets. In line with the aforementioned works, we measure anomaly detectioncapability based on AUROC over several quantities shown in Table 2. Our proposed quantities in-clude single Wasserstein GAN (WGAN) discriminators (Arjovsky et al., 2017) with fine-tuning ( D),ensemble variance of discriminators ( Var(D)), likelihood models ( logp(x)), and WAIC estimatedusing an ensemble of likelihood models. We follow the protocol as suggested by Lakshminarayananet al. (2017) to use 5 independent models with different parameter initializations, trained on the fulltraining set (no bootstrap). For likelihood estimators based on variational autoencoders (V AE), wealso evaluate the rate term DKL(q(zjx)kp(z)), which corresponds to information loss between thelatent inference distribution and prior.For MNIST and Fashion MNIST datasets, we use a V AE to predict a 16-sample ImportanceWeighted AutoEncoder (IWAE) bound. We extend the V AE example code1from Tensorflow Prob-ability (Dillon et al., 2017) to use a Masked Autoregressive Flow prior (Papamakarios et al., 2017),and train the model for 5k steps. Additional architectural details are found in Appendix B.Our WGAN model’s generator and discriminator share the same architecture with the V AE decoderand encoder, respectively. The discriminator has an additional linear projection layer to its predictionof the Wasserstein metric. 
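Before the fine-tuning details that follow, a rough sketch of the ensemble-variance score Var(D) from Section 2.1 (reported later in Table 2) may help; the stand-in critics below are random linear functions rather than trained WGAN discriminators, so this illustrates only the scoring rule, not the actual models.

```python
import numpy as np

def discriminator_variance(discriminators, x: np.ndarray) -> np.ndarray:
    """Variance of critic outputs across an ensemble of discriminators.

    discriminators: callables mapping a batch of shape (N, d) to critic
    values of shape (N,). High variance indicates x falls outside the
    shared low-variance region around p(x) (cf. Figure 2c).
    """
    outputs = np.stack([d(x) for d in discriminators], axis=0)  # (K, N)
    return outputs.var(axis=0)

# Toy usage: four random linear "critics" over 8-dimensional inputs.
rng = np.random.default_rng(1)
critics = [lambda x, w=rng.normal(size=8): x @ w for _ in range(4)]
x_batch = rng.normal(size=(16, 8))
print(discriminator_variance(critics, x_batch))
```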
To ensure Drepresents a meaningful discriminative boundary betweenthe two distributions, we freeze the generator and fine-tune the discriminator for an additional 4ksteps on stationary p(x)andq(x). We also include Gaussian noise adversarially perturbed by FGSMon a single model (Adversarial).For CIFAR-10 WGAN experiments, we change the first filter size in the discriminator from 7 to8. For log-likelihood estimation, we train a vanilla GLOW model (Kingma & Dhariwal, 2018) for250k steps, as we require a more powerful generative model to obtain good results.The baseline methods are model-dependent and learn from the joint distribution of images and la-bels, while our methods use only images. For the VIB baseline, we use the rate term as the thresholdvariable. The experiments in Alemi et al. (2018b) make use of (28, 28, 5) “location-aware” featuresconcatenated to the model inputs, to assist in distinguishing spatial inhomogeneities in the data. Inthis work we train vanilla generative models with no special modifications, so for fair comparisonwe also train VIB without location-aware features. For CIFAR-10 experiments, we train VIB for26 epochs and converge at 75.7% classification accuracy on the test set. All other experimentalparameters for VIB are identical to those in Alemi et al. (2018b).Despite being trained on strictly less data (no labels), our methods – in particular Generative En-sembles – outperform ODIN and VIB on most OoD tasks. The V AE rate term appears to be quiteeffective, outperforming likelihood and WAIC estimation in data space. It is robust to adversarialinputs on the same model, because the FGSM perturbation primarily minimizes the (larger) distor-tion component of the approximate likelihood. The performance of V AE rate versus VIB rate alsosuggests that latent codes learned from generative objectives are more useful for OoD detection thatlatent codes learned via a classification-specific objective.4.1 F AILURE ANALYSISIn this section we discuss the experiments in which Generative Ensembles performed poorly, andsuggest simple fixes to address these issues.1https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/vae.py5Under review as a conference paper at ICLR 2019Table 2: We train models on MNIST, Fashion MNIST, and CIFAR-10 and compare OoD classifi-cation ability to baseline methods using the threshold-independent Area Under ROC curve metric(AUROC).Dcorresponds to single WGAN discriminators with 4k fine-tuning steps on stationaryp(x),q(x).Var(D)is uncertainty estimated by an ensemble of discriminators. Rate is the DKLtermin the V AE objective. logp(x)is a single likelihood model (V AE, GLOW). WAIC is the Watanabe-Akaike Information Criterion as estimated by the Generative Ensemble. ODIN results reproducedfrom Liang et al. (2017). 
Best results for each task shown in bold.

| Train Dataset | OoD Dataset | ODIN | VIB | D | Var(D) | Rate | log p(x) | WAIC |
|---|---|---|---|---|---|---|---|---|
| MNIST | Omniglot | 100 | 97.1 | 56.1 | 80.3 | 99.1 | 98.2 | 100 |
| MNIST | notMNIST | 98.2 | 98.6 | 93.1 | 99.6 | 99.9 | 100 | 100 |
| MNIST | Fashion MNIST | N/A | 85.0 | 83.1 | 99.9 | 94.7 | 100 | 100 |
| MNIST | Uniform | 100 | 76.6 | 95.6 | 100 | 99.3 | 100 | 100 |
| MNIST | Gaussian | 100 | 99.2 | 0.6 | 100 | 100 | 100 | 100 |
| MNIST | HFlip | N/A | 63.7 | 41.5 | 57.7 | 90.0 | 84.9 | 86.1 |
| MNIST | VFlip | N/A | 75.1 | 44.7 | 60.9 | 89.3 | 81.9 | 80.7 |
| MNIST | Adversarial | N/A | N/A | 30.8 | 100 | 100 | 0 | 100 |
| Fashion MNIST | Omniglot | N/A | 94.3 | 19.4 | 83.5 | 97.7 | 56.8 | 79.6 |
| Fashion MNIST | notMNIST | N/A | 89.6 | 22.3 | 96.0 | 99.7 | 92.0 | 98.7 |
| Fashion MNIST | MNIST | N/A | 94.1 | 70.1 | 74.7 | 97.1 | 42.3 | 76.6 |
| Fashion MNIST | Uniform | N/A | 79.6 | 0 | 82.7 | 95.6 | 100 | 100 |
| Fashion MNIST | Gaussian | N/A | 89.3 | 0 | 99.8 | 89.2 | 100 | 100 |
| Fashion MNIST | HFlip | N/A | 66.7 | 58.0 | 54.1 | 72.4 | 59.4 | 62.3 |
| Fashion MNIST | VFlip | N/A | 90.2 | 69.6 | 69.6 | 87.1 | 66.8 | 74.0 |
| Fashion MNIST | Adversarial | N/A | N/A | 0 | 100 | 100 | 0 | 100 |
| CIFAR-10 | CelebA | N/A | 73.5 | 56.5 | 74.3 | N/A | 99.7 | 99.9 |
| CIFAR-10 | SVHN | N/A | 52.8 | 68.9 | 61.4 | N/A | 7.5 | 100 |
| CIFAR-10 | ImageNet32 | 81.6 | 70.1 | 47.1 | 62.9 | N/A | 93.8 | 95.6 |
| CIFAR-10 | Uniform | 99.2 | 54.0 | 100 | 100 | N/A | 100 | 100 |
| CIFAR-10 | Gaussian | 99.7 | 45.8 | 100 | 100 | N/A | 100 | 100 |
| CIFAR-10 | HFlip | N/A | 50.6 | 52.0 | 50.3 | N/A | 50.1 | 50.0 |
| CIFAR-10 | VFlip | N/A | 51.2 | 60.9 | 52.3 | N/A | 50.6 | 50.4 |

In an earlier draft of this work, a VAE trained on Fashion MNIST performed poorly on all OoD datasets when using log p(x) and WAIC metrics. This was surprising, since the same metrics performed well when the same VAE architecture was trained on MNIST. To explain this phenomenon, we show in Figure 3 inputs and VAE-decoded outputs from Fashion MNIST and MNIST test sets. Fashion MNIST images are reconstructed properly, while MNIST images are barely recognizable after decoding.
A VAE’s training objective can be interpreted as the sum of a pixel-wise autoencoding loss (distortion) and a “semantic” loss (rate). Even though Fashion MNIST appears to be better reconstructed in a semantic sense, the distortion values between the FMNIST and MNIST test datasets are numerically quite similar, as shown in Figure 3. Distortion terms make up the bulk of the IWAE predictions in our models, thus explaining why log p(x) was not very discriminative when classifying OoD MNIST examples.
Higgins et al. (2016) propose β-VAE, a simple modification to the standard VAE objective: −E_q(z|x)[log p(x|z)] + β·D_KL(q(z|x) ‖ p(z)). β controls the relative balance between rate and distortion terms during training. Setting β < 1 is a commonly prescribed fix for encouraging VAEs to approach the “autoencoding limit” and avoid posterior collapse (Alemi et al., 2018a). At test time, this results in higher-fidelity autoencoding at the expense of higher rates, which seems to be a more useful signal for identifying outliers than the total pixel distortion (also suggested by Table 2, column 7). Re-training the ensemble with β = 0.1 encourages a higher distortion penalty during training, and thereby fixes the OoD detection model.
(a) Fashion MNIST (b) MNIST (OoD)
Figure 3: Top: Inputs and decoded outputs from a VAE trained on Fashion MNIST (β = 1) for Fashion MNIST (left) and MNIST (right). Although Fashion MNIST inputs appear to be better reconstructed (suggesting higher likelihoods), they have comparable distortions to MNIST. The bottom row shows that Fashion MNIST and MNIST test samples have comparable rate-distortion scatter plots and IWAE histograms.
4.2 CREDIT CARD ANOMALY DETECTION
We consider the problem of detecting fraudulent credit card transactions from the Kaggle Credit Fraud Challenge (Dal Pozzolo et al., 2015). A conventional approach to fraud detection is to include a small fraction of fraudulent transactions in the training set, and then learn a discriminative classifier.
Instead, we treat fraud detection as an anomaly detection problem where a generative model only sees normal credit card transactions at training time. This is motivated by realistic test scenarios, where an adversary is hardly restricted to generating data identically distributed to the training set.
We compare single likelihood models (16-sample IWAE) and Generative Ensembles (ensemble variance of IWAE) to a binary classifier baseline that has access to a training set of fraudulent transactions in Table 3. The classifier baseline is a fully-connected network with 2 hidden ReLU layers of 512 units, and is trained using a weighted sigmoid cross entropy loss (positive weight = 580) with Dropout and RMSProp (learning rate 1e−5). The VAE encoder and decoder are fully connected networks with single hidden layers (32 and 30 units, respectively) and trained using Adam (learning rate 1e−3).
Unsurprisingly, the classifier baseline performs best because fraudulent test samples are distributed identically to fraudulent training samples. Even so, the single-model density estimation and Generative Ensemble achieve reasonable results.
Table 3: Comparison of density-based anomaly detection approaches to a classification baseline on the Kaggle Credit Card Fraud Dataset. The test set consists of 492 fraudulent transactions and 492 normal transactions. Threshold-independent metrics include False Positives at 95% True Positives (FPR@95%TPR), Area Under ROC (AUROC), and Average Precision (AP). Density-based models (Single IWAE, WAIC) are trained only on normal credit card transactions, while the classifier is trained on normal and fraudulent transactions. Arrows denote the direction of better scores.

| Method | FPR@95%TPR ↓ | AUROC ↑ | AP ↑ |
|---|---|---|---|
| Classifier | 4.0 | 99.1 | 99.3 |
| Single IWAE | 15.7 | 94.6 | 92.0 |
| WAIC | 15.2 | 94.7 | 92.1 |

5 DISCUSSION AND FUTURE WORK
OoD detection is a critical piece of infrastructure for ML applications where the test data distribution is not known at training time. We present Generative Ensembles, a simple yet powerful technique for model-independent OoD detection that improves density models with uncertainty estimation.
An important future direction of research is that of scalability: learning good generative models of semantically rich, high-dimensional inputs (e.g. video) is an active research area in its own right. An open question is whether an ensemble of weak generative models (where each model may not necessarily generate high-quality samples) can still yield density and uncertainty predictions useful enough for OoD detection. Preliminary evidence on CIFAR-10 is promising; although the ensemble average on the test set is 3.5 bits/dim and samples from the prior do not resemble any recognizable objects, the ensemble still performs well at OoD detection. In future work we will explore other methods of de-correlating samples from the posterior over model parameters, as well as combining independent scores (D, Rate, log p(x), WAIC) into a more powerful OoD model.<|im_end|> <|im_start|>assistant ### Review Title Needs a lot of work on improving technical rigor and clarity ### Review Text Note to Area Chair: Another paper submitted to ICLR under the title “Do Deep Generative Models Know What They Don’t Know?” shares several similarities with the current submission.
This paper highlights a deficiency of current generative models in detecting out-of-distribution based samples based on likelihoods assigned by the model (in cases where the likelihoods are well-defined) or the discriminator distribution for GANs (where likelihoods are typically not defined). To remedy this deficiency, the paper proposes to use ensembles of generative models to obtain a robust WAIC criteria for anomaly detection. My main concern is with the level of technical rigor of this work. Much of this has to do with the presentation, which reads to me more like a summary blog post rather than a technical paper. - I couldn’t find a formal specification of the anomaly detection setup and how generative models are used for this task anywhere in the paper. - Section 2 seems to be the major contribution of this work. But it was very hard to understand what exactly is going on. What is the notation for the generative distribution? Introduction uses p_theta. Page 2, Paragraph 1 uses q_theta (x). Eq. (1) uses p_theta and then the following paragraphs use q_theta. - In Eq. (1), is theta a random variable? - How are generative ensembles trained? All the paper says is “independently trained”. Is the parameter initialization different? Is the dataset shuffling different? Is the dataset sampled with replacement (as in bootstrapping)? - “By training an ensemble of GANs we can estimate the posterior distribution over model deciscion boundaries D_theta(x), or equivalently, the posterior distribution over alternate distributions q_theta. In other words, we can use uncertainty estimation on randomly sampled discriminators to de-correlate the OoD classification errors made by a single discriminator” Why is the discriminator parameterized by theta? What is an ensemble of GANs? Multiple generators or multiple discriminators or both? What are “randomly sampled discriminators”? What do the authors mean by "posterior distribution over alternate distributions"? With regards to the technical assessment, I have the following questions for the authors: - In Figure 1, how do the histograms look for the training distribution of CIFAR? If the histograms for train and test have an overlap much higher than the overlap between the train of CIFAR and test set of any other distribution, then ensembling seems unnecessary and anomaly detecting can simply be done via setting a maximum and a minimum threshold on the likelihood for a test point. In addition to the histograms, I'd be curious to see results with this baseline mechanism. - Why should the WAIC criteria weigh the mean and variance equally? - Did the authors actually try to fix the posterior collapse issue in Figure 3b using beta-VAEs as recommended? Given the simplicity of implementing beta-VAEs, this should be a rather easy experiment to include. Minor typos: - ODIN and VIB are not defined in the abstract - Page 3: “deciscion” - Page 2, para 2: “log_\theta p(x)” ### Review Rating 5: Marginally below acceptance threshold ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
AJTAcS7SZzf
ICLR.cc/2021/Conference
2021
AUTOSAMPLING: SEARCH FOR EFFECTIVE DATA SAMPLING SCHEDULES
["Ming Sun", "Haoxuan Dou", "Baopu Li", "Junjie Yan", "Wanli Ouyang"]
Data sampling acts as a pivotal role in training deep learning models. However, an effective sampling schedule is difficult to learn due to its inherent high-dimension as a hyper-parameter. In this paper, we propose the AutoSampling method to automatically learn sampling schedules for model training, which consists of the multi-exploitation step aiming for optimal local sampling schedules and the exploration step for the ideal sampling distribution. More specifically, we achieve sampling schedule search with shortened exploitation cycle to provide enough supervision. In addition, we periodically estimate the sampling distribution from the learned sampling schedules and perturb it to search in the distribution space. The combination of two searches allows us to learn a robust sampling schedule. We apply our AutoSampling method to a variety of image classification tasks illustrating the effectiveness of the proposed method.
["Hyper-parameter Learning", "AutoML", "Computer Vision"]
ABSTRACTData sampling acts as a pivotal role in training deep learning models. However, aneffective sampling schedule is difficult to learn due to its inherent high-dimensionas a hyper-parameter. In this paper, we propose the AutoSampling method toautomatically learn sampling schedules for model training, which consists ofthe multi-exploitation step aiming for optimal local sampling schedules and theexploration step for the ideal sampling distribution. More specifically, we achievesampling schedule search with shortened exploitation cycle to provide enoughsupervision. In addition, we periodically estimate the sampling distribution fromthe learned sampling schedules and perturb it to search in the distribution space.The combination of two searches allows us to learn a robust sampling schedule.We apply our AutoSampling method to a variety of image classification tasksillustrating the effectiveness of the proposed method.1 I NTRODUCTIONData sampling policies can greatly influence the performance of model training in computer visiontasks, and therefore finding robust sampling policies can be important. Handcrafted rules, e.g. dataresampling, reweighting, and importance sampling, promote better model performance by adjustingthe training data frequency and order (Estabrooks et al., 2004; Weiss et al., 2007; Bengio et al., 2009;Johnson & Guestrin, 2018; Katharopoulos & Fleuret, 2018; Shrivastava et al., 2016; Jesson et al.,2017). Handcrafted rules heavily rely on the assumption over the dataset and cannot adapt well todatasets with their own characteristics. To handle this issue, learning-based methods (Li et al., 2019;Jiang et al., 2017; Fan et al., 2017) were designed to automatically reweight or select training datautilizing meta-learning techniques or a policy network.However existing learning-based sampling methods still rely on human priors as proxies to optimizesampling policies, which may fail in practice. Such priors often include assumptions on policynetwork design for data selection (Fan et al., 2017), or dataset conditions like noisiness (Li et al., 2019;Loshchilov & Hutter, 2015) or imbalance (Wang et al., 2019). These approaches take images features,losses, importance or their representations as inputs and use the policy network or other learningapproaches with small amount of parameters for estimating the sampling probability. However, forexample, images with similar visual features can be redundant in training, but their losses or featuresfed into the policy network are more likely to be close, causing the same probability to be sampled forredundant samples if we rely on aforementioned priors. Therefore, we propose to directly optimizethe sampling schedule itself so that no prior knowledge is required for the dataset. Specifically, thesampling schedule refers to order by which data are selected for the entire training course. In thisway, we only rely on data themselves to determine the optimal sampling schedule without any prior.Directly optimizing a sampling schedule is challenging due to its inherent high dimension. Forexample, for the ImageNet classification dataset (Deng et al., 2009) with around one million samples,the dimension of parameters would be in the same order. 
While popular approaches such as deepreinforcement learning (Cubuk et al., 2018; Zhang et al., 2020), Bayesian optimization (Snoek et al.,2015), population-based training (Jaderberg et al., 2017) or simple random search (Bergstra & Bengio,2012) have already been utilized to tune low-dimensional hyper-parameters like augmentation sched-ules, their applications in directly finding good sampling schedules remain unexploited. For instance,the dimension of a data augmentation policy is generally only in dozens, and it needs thousandsof training runs (Cubuk et al., 2018) to sample enough rewards to find an optimal augmentation1Under review as a conference paper at ICLR 2021policy because high-quality rewards require many epochs of training to obtain. As such, optimizing asampling schedule may require orders of magnitude more rewards than data augmentation to gatherand hence training runs, which result in prohibitively slow convergence.To overcome the aforementioned challenge, we propose a data sampling policy search framework,named AutoSampling, to sufficiently learn an optimal sampling schedule in a population-basedtraining fashion (Jaderberg et al., 2017). Unlike previous methods, which focus on collecting long-term rewards and updating hyper-parameters or agents offline, our AutoSampling method collectsrewards online with a shortened collection cycle but without priors. Specifically, the AutoSamplingcollects rewards within several training iterations, tens or hundred times shorter than that in existingworks (Ho et al., 2019; Cubuk et al., 2018). In this manner, we provide the search process with muchmore frequent feedback to ensure sufficient optimization of the sampling schedule. Each time when afew training iterations pass, we collect the reward from the previous several iterations, accumulatethem and later update the sampling distribution using the rewards. Then we perturb the samplingdistribution to search in distribution space, and use it to generate new mini-batches for later iterations,which are recorded into the output sampling schedule. As illustrated in Sec. 4.1, shortened collectioncycles with less interference also can better reflect the training value of each data.Our contributions are as follows:To our best knowledge, we are the first to propose to directly learn a robust samplingschedule from the data themselves without any human prior or condition on the dataset.We propose the AutoSampling method to handle the optimization difficulty due to the highdimension of sampling schedules, and efficiently learn a robust sampling schedule throughshortened reward collection cycle and online update of the sampling schedule.Comprehensive experiments on CIFAR-10/100 and ImageNet datasets (Krizhevsky, 2009; Denget al., 2009) with different networks show that the Autosampling can increase the top-1 accuracy byup to 2.85% on CIFAR-10, 2.19% on CIFAR-100, and 2.83% on ImageNet.2 B ACKGROUND2.1 R ELATED WORKData sampling is of great significance to deep learning, and has been extensively studied. Approacheswith human-designed rules take pre-defined heuristic rules to modify the frequency and order bywhich training data is presented. 
In particular, one intuitive method is to resample or reweightdata according to their frequencies, difficulties or importance in training (Estabrooks et al., 2004;Weiss et al., 2007; Drummond et al., 2003; Bengio et al., 2009; Lin et al., 2017; Shrivastava et al.,2016; Loshchilov & Hutter, 2015; Wang et al., 2019; Johnson & Guestrin, 2018; Katharopoulos &Fleuret, 2018; Byrd & Lipton, 2018; Jesson et al., 2017). These methods have been widely used inimbalanced training or hard mining problems. However, they are often restricted to certain tasksand datasets based on which they are proposed, and their ability to generalize to a broader range oftasks with different data distribution may be limited. In another word, these methods often implicitlyassume certain conditions on the dataset, such as cleanness or imbalance. In addition, learning-basedmethods have been proposed for finding suitable sampling schemes automatically. Methods usingmeta-learning or reinforcement learning are also utilized to automatically select or reweight dataduring training (Li et al., 2019; Jiang et al., 2017; Ren et al., 2018; Fan et al., 2017), but they areonly tested on small-scale or noisy datasets. Whether or not they can generalize over tasks of otherdatasets still remain untested. In this work, we directly study the data sampling without any prior,and we also investigate its wide generalization ability across different datasets such as CIFAR-10,CIFAR-100 and ImageNet using many typical networks.As for hyper-parameter tuning, popular approaches such as deep reinforcement learning (Cubuket al., 2018; Zhang et al., 2020), Bayesian optimization (Snoek et al., 2015) or simply random search(Bergstra & Bengio, 2012) have already been utilized to tune low-dimensional hyper-parameters andproven to be effective. Nevertheless, they have not been adopted to find good sampling scheduledue to its inherent high dimensiona. Some recent works tackle the challenge of optimizing high-dimensional hyper-parameter. MacKay et al. (2019) uses structured best-response functions andJonathan Lorraine (2019) achieve this goal through the combinations of the implicit function theoremand efficient inverse Hessian approximations. However, they have not been tested on the task ofoptimizing sampling schedules, which is the major focus of our work in this paper.2Under review as a conference paper at ICLR 2021Model_1Batch1_1Model_2Batch1_2Model_NpBatch1_NpInterval1(Training&Evaluation)Batch1_2Model_1Batch2_1Model_2Batch2_2Model_NpBatch2_NpExploitMulti-Exploitation(Tintervals)RecordedOptimalSamplingScheduleInterval2(Training&Evaluation)ExploitModel_1BatchT_1Model_2BatchT_2Model_NpBatchT_NpBatchT_1ExploitIntervalT(Training&Evaluation)Worker_1Worker_2Worker_Nph* =Batch2_NpEstimatedP(D)UniformDistributionSmooththroughMixtureSamplingschedulefornextmulti-exploitationstepBatch1_1Batch2_1BatchT_1Batch1_2Batch2_2BatchT_2Batch1_NpEstimatePerturbbySamplingBatch2_NpBatchT_NpExplorationFigure 1: Overview of AutoSampling illustrated through one multi-exploitation-and-exploration cycle.a) The multi-exploitation step, illustrated by the left half, is the process of learning optimal samplingschedule locally. The same color of model for each worker indicates that the same model weight iscloned into it. Also for simplicity, in this figure we adopt the exploitation interval of length 1. b) Theexploration step, shown by the right half, is to search in the sampling distribution space. 
Specifically, we estimate the sampling distribution from the schedules collected in the multi-exploitation step and perturb it to generate new sampling schedules for all workers.
2.2 POPULATION BASED TRAINING
The hyper-parameter tuning task can be framed as a bi-level optimization problem with the following objective function,

min_{h∈H} L(θ*, h)  subject to  θ* = argmax_{θ∈Θ} eval(θ, h)   (1)

where θ represents the model weight and h = (h_1, h_2, …, h_T) is the hyper-parameter schedule for T training intervals. Population based training (PBT) (Jaderberg et al., 2017) solves the bi-level optimization problem by training a population P of child models in parallel with different hyper-parameter schedules initialized:

P = {(θ_i, h_i, t)}_{i=1}^{N_p}   (2)

where θ_i and h_i respectively represent the child model weight and the corresponding hyper-parameter schedule for the training interval t on worker i, and N_p is the number of workers. PBT proceeds in intervals, which usually consist of several epochs of training. During the interval, the population of models is trained in parallel to finish the lower-level optimization of the weights θ_i.
Between intervals, an exploit-and-explore procedure is adopted to conduct the upper-level optimization of the hyper-parameter schedule. In particular, for interval t, to exploit we evaluate child models on a held-out validation dataset:

h_t*, θ_t* = argmax_{p_i=(θ_i, h_i, t) ∈ P} eval(θ_i, h_i) → θ_i,  i = 1, …, N_p   (3)

We record the best performing hyper-parameter setting h_t* and broadcast the top-performing model θ_t* to all workers. To explore, we initialize new hyper-parameter schedules for interval t+1 with different random seeds on all workers, which can be viewed as a search in the hyper-parameter space. The next exploit-and-explore cycle will then be continued. In the end, the top-performing hyper-parameter schedule h* = (h_1*, h_2*, …, h_T*) can be obtained.
PBT is applied to tune low-dimensional hyper-parameters such as data augmentation schedules (Ho et al., 2019; Jaderberg et al., 2017). However, it cannot be directly used for finding sampling strategies due to the high dimension. Unlike PBT, our AutoSampling adopts a multi-exploitation-and-exploration structure, leading to much shorter reward collection cycles that contribute much more numerous and effective rewards for sufficient optimization within a practical computational budget.
3 AUTOSAMPLING WITH SEARCHING
The overview of our AutoSampling is illustrated in Fig. 1. AutoSampling alternately runs a multi-exploitation step and an exploration step. In the exploration step, we 1) update the sampling distribution using the rewards collected from the multi-exploitation step (the sampling distribution is initially uniform); 2) perturb the updated sampling distribution for child models so that different child models have different sampling distributions; 3) use the corresponding perturbed sampling distribution for each child model to sample mini-batches of training data. In the multi-exploitation step, we 1) train multiple child models using the mini-batches sampled from the exploration step; 2) collect short-term rewards from the child models.
Algorithm 1: The Multi-Exploitation Step
Input: training dataset D, population P = {(θ_i, h_i, t)}_{i=1}^{N_p}, number of workers N_p, number of exploitation intervals T, exploitation interval length N_s
Initialize H* ← ()
for t = 1 to T do
    for j = 1 to N_s do
        for (θ_i, h_{t,i}, t) ∈ P do
            θ_i ← θ_i − η∇L(θ_i, h_{t,i})    ▷ update the weight of child model i (η: step size)
        end for
    end for
    h_t*, θ_t* = argmax_P eval(θ_i, h_i)
    H* ← H* + h_t*    ▷ record the optimal sub-schedule for interval t
    for i = 1 to N_p do
        θ_i ← θ_t*    ▷ clone the optimal weight
    end for
end for
Return H*, P
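To make the pseudocode of Algorithm 1 more concrete, the following is a minimal Python sketch of the multi-exploitation loop; the `train_on_batch` and `evaluate` callables and the in-memory `schedules` structure are hypothetical stand-ins, not the authors' implementation.

```python
import copy

def multi_exploitation(models, schedules, train_on_batch, evaluate, T, Ns):
    """Sketch of Algorithm 1: Ns training iterations per interval,
    then an exploit step that records the best sub-schedule and
    clones the best weights into every worker.

    models:         list of Np mutable child models
    schedules:      schedules[i][t][j] is the j-th mini-batch of
                    interval t for worker i (from the exploration step)
    train_on_batch: callable(model, batch) -> None   (hypothetical)
    evaluate:       callable(model) -> validation score (hypothetical)
    """
    best_schedule = []
    for t in range(T):
        for j in range(Ns):
            for i, model in enumerate(models):
                train_on_batch(model, schedules[i][t][j])
        scores = [evaluate(m) for m in models]
        best = max(range(len(models)), key=lambda i: scores[i])
        best_schedule.append(schedules[best][t])              # record h_t*
        for i in range(len(models)):
            if i != best:
                models[i] = copy.deepcopy(models[best])       # clone theta_t*
    return best_schedule, models
```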
AutoSampling finishes with a recorded top-performing sampling schedule, which can be transferred to other models.
3.1 MULTI-EXPLOITATION BY SEARCHING IN THE DATA SPACE
In the multi-exploitation step, we aim to search locally in the data space by collecting short-term rewards and sub-schedules. Specifically, we wish to learn a sampling schedule for T exploitation intervals. In each interval, there is a population P of N_p child models. Denote h_{t,i} as the training data sub-schedule in the t-th interval for the i-th child model. When all of the T exploitation intervals for the i-th child model are considered, we have H_i = {h_{t,i} | t = 1, …, T} = {x_1, …, x_N}, where N is the number of training data for the multi-exploitation step. Each interval consists of N_s training iterations, which is also equivalent to N_s training mini-batches, where N_s is the length of the interval. AutoSampling is expected to produce a sequence of training samples, denoted by H*, so that a given model is optimally trained. The population {H_i} forms the local search space, from which we aim to search for an optimal sampling schedule H*.
Given the population P, we train them in parallel on N_p workers. Once an interval of data h_{t,i} containing N_s training batches has been used for training, we evaluate all child models and use the top evaluation performance as the reward. According to the reward, we record the top-performing weight and sub-schedule for the current interval t, in particular,

h_t*, θ_t* = argmax_{p_i=(θ_i, h_i, t) ∈ P} eval(θ_i, h_{t,i})   (4)

On the other hand, we update all child model weights of P by cloning into them the top-performing weight θ_t*, so we can continue searching based on the more promising child. We continue the exploit steps through the whole training process, and output the recorded optimal sampling schedule H* = {h_1*, h_2*, …, h_T*}. By using an exploitation interval of mini-batches rather than the epochs or even entire training runs adopted by earlier methods, AutoSampling may yield a better and more robust sampling schedule. It should be pointed out that even though in AutoSampling rewards are collected within a much shorter interval, they remain effective. As we directly optimize the sampling schedule, we are concerned with only the data themselves. The short-term rewards reflect the training value of data from the exploitation interval in which they are collected. But for global hyper-parameters such as augmentation schedules, short-term rewards may lead to inferior performance as these hyper-parameters are concerned with the overall training outcome. We describe the multi-exploitation step in detail in Alg. 1.
Algorithm 2: Search-based AutoSampling
Input: training dataset D, population size N_p
Initialize H ← (), P(D) ← uniform(D), and initialize child models θ_1, …, θ_{N_p}
while not end of training do
    for i = 1 to N_p do
        Sample h_i from Mixture(log(P(D) + ε), N_u · uniform(D))
    end for
    Initialize P = {(θ_i, h_i, t)}_{i=1}^{N_p}
    H*, P ← Alg. 1
    Estimate P(D) according to Equation (5)
    Update P(D) according to Equation (6)
    H ← H + H*
end while
Return H, P(D)
3.2 EXPLORATION BY SEARCHING IN SAMPLING DISTRIBUTION SPACE
In the exploration step, we search in the sampling distribution space by updating and perturbing the sampling distribution. We first estimate the underlying sampling distribution P(D) from the top sampling schedule H* produced in the multi-exploitation step, that is, for x ∈ D,

P(x) = count(x ∈ H*) / Σ_{x∈D} count(x ∈ H*)   (5)

where count(x ∈ H*) denotes the number of x's appearances in H*. We further perturb P(D) and generate the sampling schedules on each worker for the later multi-exploitation.
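As a rough illustration of the frequency estimate in Eq. (5), together with the log smoothing and uniform mixture that Eq. (6) introduces next, the sketch below shows one plausible implementation; the `eps` constant, the min-shift renormalization of the log-smoothed vector, and the 0.5 mixture weight are assumptions, since the paper does not pin these details down.

```python
from collections import Counter
import numpy as np

def estimate_sampling_distribution(best_schedule, dataset_size, eps=1e-6):
    """Estimate P(D) from the learned schedule H*, then smooth it.

    best_schedule: flat iterable of example indices appearing in H*.
    Returns a probability vector over all dataset_size examples.
    """
    counts = Counter(best_schedule)
    freq = np.array([counts.get(i, 0) for i in range(dataset_size)], dtype=float)
    p = freq / freq.sum()                      # Eq. (5): empirical frequencies
    smoothed = np.log(p + eps)                 # log smoothing, cf. Eq. (6)
    smoothed -= smoothed.min()                 # shift to non-negative (assumed renormalization)
    smoothed /= smoothed.sum()
    uniform = np.full(dataset_size, 1.0 / dataset_size)
    return 0.5 * smoothed + 0.5 * uniform      # mix with uniform so every example can be sampled

# Toy usage: a short schedule over a 6-example dataset.
print(estimate_sampling_distribution([0, 0, 1, 2, 2, 2, 5], dataset_size=6))
```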
We introduce perturbations into the generated schedules by simply sampling from the multinomial distribution P(D) using different random seeds. However, in our experiments, we observe that the distribution produced by P(D) tends to be extremely skewed and a majority of the data actually have zero frequencies. Such skewness causes highly imbalanced training mini-batches, and therefore destabilizes subsequent model training.
Distribution Smoothing. To tackle the above issue, we first smooth P(D) through the logarithmic function, and then apply a probability mixture with uniform distributions. In particular, for the dataset D,

P′(D) = Mixture(log(P(D) + ε), N_u · uniform(D))   (6)

where ε ≪ 1 is the smoothing factor and N_u · uniform(D) denotes N_u uniform multinomial distributions on the dataset D. The smoothing through the log function can greatly reduce the skewness; however, log(P(D) + ε) may still contain zero probabilities for some training data, resulting in unstable training. Therefore, we further smooth it through a probability mixture with N_u uniform distributions uniform(D) to ensure the presence of all data. This is equivalent to combining N_u epochs of training data with the training batches sampled from P(D), and shuffling the union. Once we have new diverse sampling schedules for the population, we proceed to the next multi-exploitation step.
We continue this alternation between multi-exploitation and exploration steps until the end of training. Note that to generate the sampling schedule for the first multi-exploitation run, we initialize P(D) to be a uniform multinomial distribution. In the end, we output a sequence of optimal sampling schedules H = (H_1, …, H_n) for n alternations. The entire process is illustrated in detail in Alg. 2.
4 EXPERIMENTS
In this section, we present comprehensive experiments on various datasets to illustrate the performance of AutoSampling, and also demonstrate the process of progressively learning better sampling distributions.
4.1 ABLATION STUDY
For this part, we gradually build up and test components of AutoSampling on CIFAR-100, and then examine their performances on CIFAR-10 and ImageNet datasets.
The training implementationdetails and computational complexity can be found in Appendix A.1.5Under review as a conference paper at ICLR 2021Table 1: Performance on CIFAR-100 using different configurations of AutoSampling and baselines.Worker is the number of workers used and Interval is the exploitation interval in terms of batches.NETWORK WORKER INTERVAL EXPLORATION TYPE TOP 1(%)RESNET 18 (Z HANG ET AL ., 2019) - - - 78.34 0:05RESNET 18 1 - U NIFORM 78.460:035RESNET 18 20 80 B ATCHES RANDOM 78.760:003RESNET 18 20 20 B ATCHES RANDOM 78.990:003RESNET 18 80 20 B ATCHES RANDOM 79.090:017RESNET 18 20 20 B ATCHES MIXTURE 79.440:020RESNET 50 (J IN ET AL ., 2019) - - - 79.34RESNET 50 1 - U NIFORM 79.700:023RESNET 50 20 80 B ATCHES RANDOM 80.550:129RESNET 50 20 20 B ATCHES RANDOM 81.050:064RESNET 50 80 20 B ATCHES RANDOM 81.190:072RESNET 50 20 20 B ATCHES MIXTURE 81.530:088DENSE NET121 1 - U NIFORM 80.130:028DENSE NET121 20 80 B ATCHES RANDOM 80.620:694DENSE NET121 20 20 B ATCHES RANDOM 81.110:127DENSE NET121 80 20 B ATCHES RANDOM 81.080:021DENSE NET121 20 20 B ATCHES MIXTURE 80.970:006Table 2: Experiments on CIFAR-10.NETWORK EXPLORATION TYPE TOP1(%)RESNET 18 UNIFORM 93.010:009RESNET 18 R ANDOM 95.860:003RESNET 18 M IXTURE 95.800:018RESNET 50 UNIFORM 93.600:004RESNET 50 R ANDOM 96.100:002RESNET 50 M IXTURE 96.090:070Table 3: Experiments on ImageNet.NETWORK EXPLORATION TYPE TOP1(%)RESNET 18 UNIFORM 70.38RESNET 18 R ANDOM 72.07RESNET 18 M IXTURE 72.91RESNET 34 UNIFORM 74.09RESNET 34 R ANDOM 76.11RESNET 34 M IXTURE 76.92Adding Workers To look into the influence of the worker numbers, we conduct experiments usingworker numbers of 1, 20, 80 respectively with the same setting ( Ns= 20 with random exploration).With the worker number of 1, the experiment is simply the normal model training using stochasticgradient descent. To show the competitiveness of our baselines, we also include state-of-the-artresults on CIFAR-100 with ResNet-18 and ResNet-50 (Zhang et al., 2019; Jin et al., 2019). Wenotice significant performance gain using the worker number of 20 for ResNet-18, ResNet-50 andDenseNet-121 (He et al., 2015; Huang et al., 2017), as illustrated in Table 1. However, we note thatincreasing worker number from 20 to 80 only brings marginal performance gains across variousmodel structures, as shown in Table 1. Therefore, we set the worker number to be 20 for the rest ofthe experiments.Shortening Exploitation Intervals To study the effects of the shortened exploitation interval, werun experiments using different exploitation intervals of 20 and 80 batches(iterations) respectively. Asshown in Table 1, models with the shorter exploitation interval of 20 batches(iterations) perform betterthan the one with the longer exploitation interval across all three network structures, conforming toour assumptions that the reward collected reflects value of each data used in the exploitation interval.This result adheres to our intuition that shorter exploitation interval can encourage the sampler toaccumulate more rewards to learn better sampling schedules. For the rest of this section we keep theexploitation interval of 20.Adding Exploration Type We further add mixture as the exploration type to see the effects oflearning the underlying sampling distribution, and completing the proposed method. 
As shown inTable 1, with ResNet-18 and ResNet-50 we push performance higher with the mixture exploration,and outperform the baseline method by about 1 and 1.8 percentage on CIFAR-100 respectively.However, we found that it is not true in the case of DenseNet-121 and this case may be attributed tothe bigger capacity of DenseNet-121.Generalization Over Datasets In addition, we experiment on other datasets. We report the results onCIFAR10 in Table 2 and the results of ResNet-18, ResNet-34 on ImageNet in Table 3. For CIFAR-10,we notice that the mixture and random exploration methods are comparable while both outperformingthe uniform baseline, and we believe it is due to the simplicity of the dataset. In the more challenging6Under review as a conference paper at ICLR 20210 10000 20000 30000 40000 5000060708090100110120130 Epoch-80Epoch-160Epoch-240Figure 2: The comparison between histograms estimated from the sampling schedules of Epoch80, 160 and 240 from CIFAR-100 with ResNet-18. We divide the 50000 training images into 500segments of 100 images, and calculate the histograms of total data counts of all segments. We reorderthex-axis based on the ranking of data counts for epoch 240 for easier comparison.Table 4: Static vs dynamic sampling schedule on CIFAR-100 (%)NETWORK SAMPLING TYPEUNIFORM STATIC DYNAMICRESNET 18 78.46 0:035 78.800:007 79.440:020RESNET 50 79.70 0:023 80.210:014 81.530:088ImageNet, the mixture exploration outperforms the random exploration by a clear margin. We alsocompare our AutoSampling with some recent non-uniform sampling methods on CIFAR-100, whichcan be found in Appendix A.2.4.2 S TATIC VS DYNAMIC SCHEDULESWe aim to see if the final sampling distribution estimated by our AutoSampling is sufficient to producerobust sampling schedules. In another word, we wish to know training with the AutoSampling iseither a process of learning a robust sampling distribution, or a process of dynamically adjusting thesampling schedule for optimal training. To this end, we conduct training using different samplingschedules. First, we calculate the sampling distribution estimated throughout the learning steps ofAutoSampling, and use it to generate the sampling schedule of a full training process, which wedenote as STATIC . Moreover, we denote the sampling schedule learned using AutoSampling asDYNAMIC , since AutoSampling dynamically adjust the sampling schedule alongside the trainingprocess. Finally, we denote the baseline method as UNIFORM , which uses the sampling schedulegenerated from uniform distribution.We report results on CIFAR-100 with ResNet-18 and ResNet-50 in Table 4. Model trained withSTATIC sampling schedules exceeds the baseline UNIFORM significantly, indicating the superiority ofthe learned sampling distribution over the uniform distribution. It shows the ability of AutoSamplingto learn good sampling distribution. Nonetheless, note that models trained with DYNAMIC samplingschedules outperform models trained with STATIC , by a margin bigger than the one between STATICand UNIFORM . This result shows the fact that despite the AutoSampling’s capability of learning goodsampling distribution, its flexibility during training matters even more. Moreover, this phenomenonalso indicates that models at different stages of learning process may require different samplingdistributions to achieve optimal training. One single sampling distribution, even gradually estimatedusing AutoSampling, seems incapable of covering the needs from different learning stages. 
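For the STATIC baseline described in this subsection, a full-training sampling schedule can be drawn once from a fixed learned distribution; the sketch below is a minimal illustration with made-up argument names, not the authors' code.

```python
import numpy as np

def generate_static_schedule(p, epochs, batches_per_epoch, batch_size, seed=0):
    """Sample a STATIC schedule: every mini-batch for the whole run is
    drawn from one fixed distribution p over dataset indices."""
    rng = np.random.default_rng(seed)
    n_batches = epochs * batches_per_epoch
    return rng.choice(len(p), size=(n_batches, batch_size), replace=True, p=p)

# Toy usage with a skewed distribution over a 6-example dataset.
p = np.array([0.30, 0.25, 0.20, 0.15, 0.07, 0.03])
schedule = generate_static_schedule(p, epochs=2, batches_per_epoch=3, batch_size=4)
print(schedule)  # schedule[b] lists the example indices of mini-batch b
```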
Weplot the histograms of data counts in training estimated from schedules of different learning stageswith ResNet-18 on CIFAR-100 in Fig.2, showing the great differences between optimized samplingdistributions from different epochs.4.3 A NALYZING SAMPLING SCHEDULES LEARNED BY AUTOSAMPLINGTo further investigate the sampling schedule learned by AutoSampling, we review the images at thetail and head part of the sampling spectrum. In particular, given a sampling schedule learned we rankall images based on their appearances in training. Training images at the top and bottom of the orderare extracted, corresponding to high and low probabilities of being sampled respectively. In Fig.3, weshow 4 classes of exemplary images. The images of low probability tend to have clearer imagery7Under review as a conference paper at ICLR 2021BabyBikeChimpCamalLowProbabilityHighProbabilityFigure 3: Example images on the head and tail of the sampling spectrum. The images on the left arethe ones with low sampling probability, while the images on the right more likely to be sampled. Weobtain these images using AutoSampling with the ResNet-18 model on CIFAR-100.Table 5: Transfer of sampling distributions learned by three model structures to ResNet-50 onCIFAR-100 (%). UNIFORM denotes the baseline result using uniform sampling distribution.NETWORK SAMPLING SCHEDULE SOURCEUNIFORM RESNET 18 R ESNET 50 D ENSENET 121RESNET 50 79.70 0:023 80.270:014 80.210:014 80.470:194features enabling easy recognition, while the images of high probability tend to be more obscure,indicating that the sampling schedule may show hard samples mining effects. However, as shown inA.3 and Fig. 4, the loss values and probabilities of being sampled seem to be not highly correlated,which indicates more potential of AutoSampling beyond visually hard example mining. In addition,we notice the images of low probability also contain low quality images. For instance, in Fig.3 theleftmost image of CAMAL class contains only legs. This shows that AutoSampling may potentiallyrule out problematic training data for better training.Furthermore, we examine the transfer ability of sampling distributions learned by AutoSampling toother network structures. Specifically, we run training on ResNet-50 (He et al., 2015) using STATICsampling schedule generated by three distributions learned by AutoSampling on 3 different models.As shown in Table 5, using sampling schedules learned by AutoSampling from other models, wedemonstrate similar improvements over the UNIFORM baseline. This result, combined with the aboveobservations on images of different sampling probability, indicates that there may exist a commonoptimal sampling schedule determined by the intrinsic property of the data rather than the modelbeing optimized. Our AutoSampling is an effort to gradually converge to such an optimal schedule.4.4 D ISCUSSIONSThe experimental results and observations from Section 4.2 and 4.3 shed light on the possibleexistence of an optimal sampling schedule, which relies only on the intrinsic property of the data andthe learning stage of the model, regardless of the specific model structure or any prior knowledge. Thelearned sampling schedule may provide enough rewards in the searching process, leading to sufficientconvergence compared to other related works. Once obtained, the optimal sampling schedule mayalso be generalized over other model structures for robust training. 
Although AutoSampling requires a relatively large amount of computing resources to find a robust sampler, we want to point out that the efficiency of our method can be improved through better training techniques. Moreover, the possibility of an optimal sampling schedule relying solely on the data themselves may indicate more efficient sampling policy search algorithms, if one can quickly and effectively determine data value based on its properties.

5 CONCLUSIONS

In this paper, we introduce a new search-based AutoSampling scheme to overcome the issue of insufficient rewards for optimizing the high-dimensional sampling hyper-parameter by utilizing a shorter period of reward collection. We use a shortened exploitation interval to search in the local data space and provide sufficient rewards. For the exploration step, we estimate the sampling distribution from the searched sampling schedule and perturb it to search in the distribution space. We test our method and it consistently outperforms the baseline methods across different benchmarks.
HcUj9JCXDX
A paper marginally above average
6: Marginally above acceptance threshold
The authors mainly concentrate on data sampling. To address the issue of optimizing high-dimensional sampling hyper-parameters in data sampling, and to relax the requirement of prior knowledge in current methods, the authors introduce a search-based method named AutoSampling. This method is comprised of an exploration step and an exploitation step which are conducted alternately. The exploitation step trains multiple child models with the current sampling strategy and saves the best model for the next iteration, while the exploration step estimates the sampling distribution according to the data sampled in the exploitation step and rectifies it so that all data can possibly be sampled. The authors have conducted sufficient experiments to verify the superiority of their method, especially its effectiveness and generalizability.

Advantages:
- The exploitation step and exploration step in AutoSampling are interesting; it is straightforward that this method can work well, as the sampling strategy is updated dynamically according to the current state of the model.
- The proposed AutoSampling is simple and effective; one can implement it without much effort.
- This method has great generalizability and does not require any prior knowledge.
- This paper is well organized and written.

Disadvantages:
- In Table 1, the number of workers does have an influence on performance, and this influence is positively correlated in my opinion; however, we can see a performance degradation for DenseNet-121. The authors did not explain this.
- The transferability of the obtained optimal sampling schedule is discussed in Section 4.4; a simple experiment is recommended.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title AUTOSAMPLING: SEARCH FOR EFFECTIVE DATA SAMPLING SCHEDULES ### Paper Abstract Data sampling plays a pivotal role in training deep learning models. However, an effective sampling schedule is difficult to learn due to its inherently high dimension as a hyper-parameter. In this paper, we propose the AutoSampling method to automatically learn sampling schedules for model training, which consists of the multi-exploitation step aiming for optimal local sampling schedules and the exploration step for the ideal sampling distribution. More specifically, we achieve sampling schedule search with a shortened exploitation cycle to provide enough supervision. In addition, we periodically estimate the sampling distribution from the learned sampling schedules and perturb it to search in the distribution space. The combination of the two searches allows us to learn a robust sampling schedule. We apply our AutoSampling method to a variety of image classification tasks, illustrating the effectiveness of the proposed method. ### Paper Keywords ["Hyper-parameter Learning", "AutoML", "Computer Vision"] ### Paper Content

ABSTRACT

Data sampling plays a pivotal role in training deep learning models. However, an effective sampling schedule is difficult to learn due to its inherently high dimension as a hyper-parameter. In this paper, we propose the AutoSampling method to automatically learn sampling schedules for model training, which consists of the multi-exploitation step aiming for optimal local sampling schedules and the exploration step for the ideal sampling distribution. More specifically, we achieve sampling schedule search with a shortened exploitation cycle to provide enough supervision. In addition, we periodically estimate the sampling distribution from the learned sampling schedules and perturb it to search in the distribution space. The combination of the two searches allows us to learn a robust sampling schedule. We apply our AutoSampling method to a variety of image classification tasks, illustrating the effectiveness of the proposed method.

1 INTRODUCTION

Data sampling policies can greatly influence the performance of model training in computer vision tasks, and therefore finding robust sampling policies can be important. Handcrafted rules, e.g. data resampling, reweighting, and importance sampling, promote better model performance by adjusting the training data frequency and order (Estabrooks et al., 2004; Weiss et al., 2007; Bengio et al., 2009; Johnson & Guestrin, 2018; Katharopoulos & Fleuret, 2018; Shrivastava et al., 2016; Jesson et al., 2017). Handcrafted rules rely heavily on assumptions about the dataset and cannot adapt well to datasets with their own characteristics. To handle this issue, learning-based methods (Li et al., 2019; Jiang et al., 2017; Fan et al., 2017) were designed to automatically reweight or select training data utilizing meta-learning techniques or a policy network.

However, existing learning-based sampling methods still rely on human priors as proxies to optimize sampling policies, which may fail in practice. Such priors often include assumptions on the policy network design for data selection (Fan et al., 2017), or dataset conditions like noisiness (Li et al., 2019; Loshchilov & Hutter, 2015) or imbalance (Wang et al., 2019). These approaches take image features, losses, importance values or their representations as inputs and use a policy network or other learning approaches with a small number of parameters to estimate the sampling probability. However, for example, images with similar visual features can be redundant in training, but their losses or features fed into the policy network are likely to be close, causing redundant samples to be sampled with the same probability if we rely on the aforementioned priors. Therefore, we propose to directly optimize the sampling schedule itself, so that no prior knowledge is required for the dataset. Specifically, the sampling schedule refers to the order by which data are selected over the entire training course. In this way, we rely only on the data themselves to determine the optimal sampling schedule, without any prior.

Directly optimizing a sampling schedule is challenging due to its inherently high dimension. For example, for the ImageNet classification dataset (Deng et al., 2009) with around one million samples, the dimension of the parameters would be of the same order. While popular approaches such as deep reinforcement learning (Cubuk et al., 2018; Zhang et al., 2020), Bayesian optimization (Snoek et al., 2015), population-based training (Jaderberg et al., 2017) or simple random search (Bergstra & Bengio, 2012) have already been utilized to tune low-dimensional hyper-parameters like augmentation schedules, their application to directly finding good sampling schedules remains unexplored. For instance, the dimension of a data augmentation policy is generally only in the dozens, and it needs thousands of training runs (Cubuk et al., 2018) to sample enough rewards to find an optimal augmentation policy, because high-quality rewards require many epochs of training to obtain. As such, optimizing a sampling schedule may require orders of magnitude more rewards than data augmentation, and hence more training runs, resulting in prohibitively slow convergence.

To overcome the aforementioned challenge, we propose a data sampling policy search framework, named AutoSampling, to sufficiently learn an optimal sampling schedule in a population-based training fashion (Jaderberg et al., 2017). Unlike previous methods, which focus on collecting long-term rewards and updating hyper-parameters or agents offline, our AutoSampling method collects rewards online with a shortened collection cycle and without priors. Specifically, AutoSampling collects rewards within several training iterations, tens or hundreds of times shorter than in existing works (Ho et al., 2019; Cubuk et al., 2018). In this manner, we provide the search process with much more frequent feedback to ensure sufficient optimization of the sampling schedule. Each time a few training iterations pass, we collect the rewards from the previous several iterations, accumulate them, and later update the sampling distribution using the rewards. Then we perturb the sampling distribution to search in the distribution space, and use it to generate new mini-batches for later iterations, which are recorded into the output sampling schedule. As illustrated in Sec. 4.1, shortened collection cycles with less interference can also better reflect the training value of each data point.

Our contributions are as follows:

- To our best knowledge, we are the first to propose to directly learn a robust sampling schedule from the data themselves, without any human prior or condition on the dataset.
- We propose the AutoSampling method to handle the optimization difficulty due to the high dimension of sampling schedules, and to efficiently learn a robust sampling schedule through a shortened reward collection cycle and online updates of the sampling schedule.

Comprehensive experiments on the CIFAR-10/100 and ImageNet datasets (Krizhevsky, 2009; Deng et al., 2009) with different networks show that AutoSampling can increase the top-1 accuracy by up to 2.85% on CIFAR-10, 2.19% on CIFAR-100, and 2.83% on ImageNet.

2 BACKGROUND

2.1 RELATED WORK

Data sampling is of great significance to deep learning, and has been extensively studied. Approaches with human-designed rules take pre-defined heuristic rules to modify the frequency and order by which training data are presented. In particular, one intuitive method is to resample or reweight data according to their frequencies, difficulties or importance in training (Estabrooks et al., 2004; Weiss et al., 2007; Drummond et al., 2003; Bengio et al., 2009; Lin et al., 2017; Shrivastava et al., 2016; Loshchilov & Hutter, 2015; Wang et al., 2019; Johnson & Guestrin, 2018; Katharopoulos & Fleuret, 2018; Byrd & Lipton, 2018; Jesson et al., 2017). These methods have been widely used in imbalanced training or hard mining problems. However, they are often restricted to the specific tasks and datasets for which they were proposed, and their ability to generalize to a broader range of tasks with different data distributions may be limited. In other words, these methods often implicitly assume certain conditions on the dataset, such as cleanness or imbalance. In addition, learning-based methods have been proposed for finding suitable sampling schemes automatically. Methods using meta-learning or reinforcement learning have been utilized to automatically select or reweight data during training (Li et al., 2019; Jiang et al., 2017; Ren et al., 2018; Fan et al., 2017), but they were only tested on small-scale or noisy datasets. Whether they can generalize over tasks on other datasets remains untested. In this work, we directly study data sampling without any prior, and we also investigate its generalization ability across different datasets such as CIFAR-10, CIFAR-100 and ImageNet using many typical networks.

As for hyper-parameter tuning, popular approaches such as deep reinforcement learning (Cubuk et al., 2018; Zhang et al., 2020), Bayesian optimization (Snoek et al., 2015) or simple random search (Bergstra & Bengio, 2012) have already been utilized to tune low-dimensional hyper-parameters and proven to be effective. Nevertheless, they have not been adopted to find good sampling schedules due to their inherently high dimension. Some recent works tackle the challenge of optimizing high-dimensional hyper-parameters: MacKay et al. (2019) use structured best-response functions, and Jonathan Lorraine (2019) achieves this goal through the combination of the implicit function theorem and efficient inverse Hessian approximations.
However, they have not been tested on the task of optimizing sampling schedules, which is the major focus of our work in this paper.

Figure 1: Overview of AutoSampling illustrated through one multi-exploitation-and-exploration cycle. a) The multi-exploitation step, illustrated by the left half, is the process of learning the optimal sampling schedule locally. The same color of model for each worker indicates that the same model weight is cloned into it. Also for simplicity, in this figure we adopt an exploitation interval of length 1. b) The exploration step, shown by the right half, searches in the sampling distribution space. Specifically, we estimate the sampling distribution from the schedules collected in the multi-exploitation step and perturb it to generate new sampling schedules for all workers.

2.2 POPULATION BASED TRAINING

The hyper-parameter tuning task can be framed as a bi-level optimization problem with the following objective function:

min_{h∈H} L(θ*; h)   subject to   θ* = argmax_{θ∈Θ} eval(θ; h)   (1)

where θ represents the model weights and h = (h_1, h_2, ..., h_T) is the hyper-parameter schedule for T training intervals. Population based training (PBT) (Jaderberg et al., 2017) solves the bi-level optimization problem by training a population P of child models in parallel with different initialized hyper-parameter schedules:

P = {(θ^i, h^i, t)}_{i=1}^{Np}   (2)

where θ^i and h^i respectively represent the child model weights and the corresponding hyper-parameter schedule for training interval t on worker i, and Np is the number of workers. PBT proceeds in intervals, each of which usually consists of several epochs of training. During an interval, the population of models is trained in parallel to carry out the lower-level optimization of the weights θ^i.

Between intervals, an exploit-and-explore procedure is adopted to conduct the upper-level optimization of the hyper-parameter schedule. In particular, for interval t, to exploit we evaluate the child models on a held-out validation dataset:

(h*_t, θ*_t) = argmax_{p_i = (θ^i, h^i, t) ∈ P} eval(θ^i, h^i),   θ*_t → θ^i,  i = 1, ..., Np   (3)

We record the best-performing hyper-parameter setting h*_t and broadcast the top-performing model θ*_t to all workers. To explore, we initialize new hyper-parameter schedules for interval t + 1 with different random seeds on all workers, which can be viewed as a search in the hyper-parameter space. The next exploit-and-explore cycle is then continued. In the end, the top-performing hyper-parameter schedule h* = (h*_1, h*_2, ..., h*_T) can be obtained.

PBT has been applied to tune low-dimensional hyper-parameters such as data augmentation schedules (Ho et al., 2019; Jaderberg et al., 2017). However, it cannot be directly used for finding sampling strategies due to the high dimension.
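For readers unfamiliar with PBT, the exploit step in Eq. (3) can be sketched in a few lines. The `evaluate` helper and the (model, schedule) population representation below are assumptions for illustration, not the authors' implementation.

```python
import copy

def pbt_exploit(population, evaluate):
    """One exploit step: keep the best (weights, schedule) pair for this
    interval and broadcast the winning weights to every worker.

    `population` is a list of (model, schedule) pairs; `evaluate` maps a
    model to its held-out validation performance.
    """
    scores = [evaluate(model) for model, _ in population]
    best = max(range(len(population)), key=scores.__getitem__)
    _, best_schedule = population[best]
    # Clone the top-performing weights into all workers; schedules are
    # re-initialized afterwards in the explore step.
    population = [(copy.deepcopy(population[best][0]), sched)
                  for _, sched in population]
    return population, best_schedule
```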
Unlike PBT, our AutoSampling adopts a multi-exploitation-and-exploration structure, leading to much shorter reward collection cycles that contribute many more effective rewards for sufficient optimization within a practical computational budget.

3 AUTOSAMPLING WITH SEARCHING

The overview of our AutoSampling is illustrated in Fig. 1. AutoSampling alternately runs the multi-exploitation step and the exploration step. In the exploration step, we 1) update the sampling distribution using the rewards collected from the multi-exploitation step (the sampling distribution is initially uniform); 2) perturb the updated sampling distribution for the child models so that different child models have different sampling distributions; 3) use the corresponding perturbed sampling distribution of each child model to sample mini-batches of training data. In the multi-exploitation step, we 1) train multiple child models using the mini-batches sampled in the exploration step; 2) collect short-term rewards from the child models. AutoSampling finishes with a recorded top-performing sampling schedule, which can be transferred to other models.

Algorithm 1: The Multi-Exploitation Step
Input: training dataset D, population P = {(θ^i, h^i, t)}_{i=1}^{Np}, number of workers Np, number of exploitation intervals T, exploitation interval length Ns
Initialize H* ← ()
for t = 1 to T do
    for j = 1 to Ns do
        for (θ^i, h_{t,i}, t) ∈ P do
            θ^i ← ∇L(θ^i, h_{t,i})    # update the weights of child model i
        end for
    end for
    (h*_t, θ*_t) ← argmax_P eval(θ^i, h^i)    # select the best child for interval t
    H* ← H* + h*_t                             # record the winning sub-schedule
    for i = 1 to Np do
        θ^i ← θ*_t                             # clone the optimal weights
    end for
end for
Return H*, P

3.1 MULTI-EXPLOITATION BY SEARCHING IN THE DATA SPACE

In the multi-exploitation step, we aim to search locally in the data space by collecting short-term rewards and sub-schedules. Specifically, we wish to learn a sampling schedule for T exploitation intervals. In each interval, there is a population P of Np child models. Denote h_{t,i} as the training data sub-schedule in the t-th interval for the i-th child model. When all of the T exploitation intervals for the i-th child model are considered, we have H_i = {h_{t,i} | t = 1, ..., T} = {x_1, ..., x_N}, where N is the number of training data for the multi-exploitation step. Each interval consists of Ns training iterations, equivalent to Ns training mini-batches, where Ns is the length of the interval. AutoSampling is expected to produce a sequence of training samples, denoted by H*, such that a given model is optimally trained. The population {H_i} forms the local search space, from which we aim to search for an optimal sampling schedule H*.

Given the population P, we train the child models in parallel on Np workers. Once an interval of data h_{t,i} containing Ns training batches has been used for training, we evaluate all child models and use the top evaluation performance as the reward. According to the reward, we record the top-performing weights and sub-schedule for the current interval t; in particular,

(h*_t, θ*_t) = argmax_{p_i = (θ^i, h^i, t) ∈ P} eval(θ^i, h_{t,i})   (4)

We then update all child model weights of P by cloning into them the top-performing weights θ*_t, so we can continue searching based on the more promising child. We continue the exploit steps through the whole training process, and output the recorded optimal sampling schedule H* = {h*_1, h*_2, ..., h*_T}. By using exploitation intervals of mini-batches rather than epochs or even entire training runs, as adopted by earlier methods, AutoSampling may yield a better and more robust sampling schedule.
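The multi-exploitation loop of Alg. 1 can be sketched as follows. This is a minimal sketch assuming `train_step`, `evaluate`, and `clone` helpers and a per-worker list of pre-sampled batches; none of these names come from the paper.

```python
def multi_exploitation(population, batches_per_worker, T, Ns,
                       train_step, evaluate, clone):
    """Sketch of Alg. 1: T intervals of Ns batches each; after every
    interval, the best worker's weights are cloned to all workers and its
    sub-schedule is appended to the output schedule H."""
    H = []
    for t in range(T):
        intervals = [wb[t * Ns:(t + 1) * Ns] for wb in batches_per_worker]
        for model, h in zip(population, intervals):
            for batch in h:
                train_step(model, batch)        # lower-level weight update
        best = max(range(len(population)),
                   key=lambda i: evaluate(population[i]))
        H.extend(intervals[best])               # record the winning sub-schedule
        population = [clone(population[best]) for _ in population]
    return H, population
```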
It should be pointed out that even though in AutoSampling rewards are collected within a much shorter interval, they remain effective. As we directly optimize the sampling schedule, we are concerned only with the data themselves. The short-term rewards reflect the training value of the data from the exploitation interval in which they are collected. For global hyper-parameters such as augmentation schedules, by contrast, short-term rewards may lead to inferior performance, as these hyper-parameters are concerned with the overall training outcome. We describe the multi-exploitation step in detail in Alg. 1.

Algorithm 2: Search-based AutoSampling
Input: training dataset D, population size Np
Initialize H ← (), P(D) ← uniform(D), and initialize child models θ^1, ..., θ^{Np}
while not end of training do
    for i = 1 to Np do
        Sample h^i from Mixture(log(P(D) + ε), Nu × uniform(D))
    end for
    Initialize P = {(θ^i, h^i, t)}_{i=1}^{Np}
    H*, P ← Alg. 1
    Estimate P(D) according to Equation (5)
    Update P(D) according to Equation (6)
    H ← H + H*
end while
Return H, P(D)

3.2 EXPLORATION BY SEARCHING IN THE SAMPLING DISTRIBUTION SPACE

In the exploration step, we search in the sampling distribution space by updating and perturbing the sampling distribution. We first estimate the underlying sampling distribution P(D) from the top sampling schedule H* produced in the multi-exploitation step; that is, for x ∈ D,

P(x) = count(x ∈ H*) / Σ_{x'∈D} count(x' ∈ H*)   (5)

where count(x ∈ H*) denotes the number of appearances of x in H*. We further perturb P(D) and generate the sampling schedules on each worker for the subsequent multi-exploitation. We introduce perturbations into the generated schedules by simply sampling from the multinomial distribution P(D) using different random seeds. However, in our experiments, we observe that the distribution produced by P(D) tends to be extremely skewed, and a majority of the data actually have zero frequency. Such skewness causes highly imbalanced training mini-batches, and therefore destabilizes subsequent model training.

Distribution Smoothing To tackle the above issue, we first smooth P(D) through the logarithmic function, and then apply a probability mixture with uniform distributions. In particular, for the dataset D,

P'(D) = Mixture(log(P(D) + ε), Nu × uniform(D))   (6)

where ε ≪ 1 is the smoothing factor and Nu × uniform(D) denotes Nu uniform multinomial distributions on the dataset D. The smoothing through the log function can greatly reduce the skewness; however, log(P(D) + ε) may still assign zero probability to some training data, resulting in unstable training. Therefore, we further smooth it through a probability mixture with Nu uniform distributions uniform(D) to ensure the presence of all data. This is equivalent to combining Nu epochs of training data with the training batches sampled from P(D), and shuffling the union. Once we have new, diverse sampling schedules for the population, we proceed to the next multi-exploitation step.

We continue this alternation between multi-exploitation and exploration steps until the end of training. Note that to generate the sampling schedule for the first multi-exploitation run, we initialize P(D) to be a uniform multinomial distribution. In the end, we output a sequence of optimal sampling schedules H = (H*_1, ..., H*_n) for n alternations.
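One possible realization of Eqs. (5) and (6) is sketched below. The paper does not fully specify how the log-smoothed weights are renormalized or how the mixture weight is set, so the shift-to-positive step and the Nu/(Nu+1) mixture weight are assumptions of this sketch.

```python
import numpy as np

def estimate_p(counts):
    """Eq. (5): empirical sampling distribution from appearance counts in H*."""
    c = np.asarray(counts, dtype=np.float64)
    return c / c.sum()

def smooth_p(p, eps=1e-3, num_uniform=1):
    """Eq. (6): log-smooth P(D), then mix in Nu uniform distributions."""
    w = np.log(p + eps)
    w = w - w.min() + 1e-12        # assumed: shift log-weights to be non-negative
    w = w / w.sum()
    u = np.full_like(p, 1.0 / p.size)
    lam = num_uniform / (num_uniform + 1.0)   # assumed mixture weight
    return lam * u + (1.0 - lam) * w
```

The mixture term guarantees every example keeps a nonzero sampling probability, which matches the paper's motivation of avoiding zero-frequency data.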
The entire process is illustrated in detail in Alg. 2.

4 EXPERIMENTS

In this section, we present comprehensive experiments on various datasets to illustrate the performance of AutoSampling, and also demonstrate the process of progressively learning better sampling distributions.

4.1 ABLATION STUDY

For this part, we gradually build up and test the components of AutoSampling on CIFAR-100, and then examine their performance on the CIFAR-10 and ImageNet datasets. The training implementation details and computational complexity can be found in Appendix A.1.

Table 1: Performance on CIFAR-100 using different configurations of AutoSampling and baselines. Worker is the number of workers used and Interval is the exploitation interval in terms of batches.

Network                         | Worker | Interval   | Exploration type | Top-1 (%)
ResNet-18 (Zhang et al., 2019)  | -      | -          | -                | 78.34 ± 0.05
ResNet-18                       | 1      | -          | Uniform          | 78.46 ± 0.035
ResNet-18                       | 20     | 80 batches | Random           | 78.76 ± 0.003
ResNet-18                       | 20     | 20 batches | Random           | 78.99 ± 0.003
ResNet-18                       | 80     | 20 batches | Random           | 79.09 ± 0.017
ResNet-18                       | 20     | 20 batches | Mixture          | 79.44 ± 0.020
ResNet-50 (Jin et al., 2019)    | -      | -          | -                | 79.34
ResNet-50                       | 1      | -          | Uniform          | 79.70 ± 0.023
ResNet-50                       | 20     | 80 batches | Random           | 80.55 ± 0.129
ResNet-50                       | 20     | 20 batches | Random           | 81.05 ± 0.064
ResNet-50                       | 80     | 20 batches | Random           | 81.19 ± 0.072
ResNet-50                       | 20     | 20 batches | Mixture          | 81.53 ± 0.088
DenseNet-121                    | 1      | -          | Uniform          | 80.13 ± 0.028
DenseNet-121                    | 20     | 80 batches | Random           | 80.62 ± 0.694
DenseNet-121                    | 20     | 20 batches | Random           | 81.11 ± 0.127
DenseNet-121                    | 80     | 20 batches | Random           | 81.08 ± 0.021
DenseNet-121                    | 20     | 20 batches | Mixture          | 80.97 ± 0.006

Table 2: Experiments on CIFAR-10.

Network   | Exploration type | Top-1 (%)
ResNet-18 | Uniform          | 93.01 ± 0.009
ResNet-18 | Random           | 95.86 ± 0.003
ResNet-18 | Mixture          | 95.80 ± 0.018
ResNet-50 | Uniform          | 93.60 ± 0.004
ResNet-50 | Random           | 96.10 ± 0.002
ResNet-50 | Mixture          | 96.09 ± 0.070

Table 3: Experiments on ImageNet.

Network   | Exploration type | Top-1 (%)
ResNet-18 | Uniform          | 70.38
ResNet-18 | Random           | 72.07
ResNet-18 | Mixture          | 72.91
ResNet-34 | Uniform          | 74.09
ResNet-34 | Random           | 76.11
ResNet-34 | Mixture          | 76.92

Adding Workers To look into the influence of the number of workers, we conduct experiments using 1, 20 and 80 workers respectively with the same setting (Ns = 20 with random exploration). With one worker, the experiment is simply normal model training using stochastic gradient descent. To show the competitiveness of our baselines, we also include state-of-the-art results on CIFAR-100 with ResNet-18 and ResNet-50 (Zhang et al., 2019; Jin et al., 2019). We notice a significant performance gain using 20 workers for ResNet-18, ResNet-50 and DenseNet-121 (He et al., 2015; Huang et al., 2017), as illustrated in Table 1. However, we note that increasing the number of workers from 20 to 80 only brings marginal performance gains across the model structures, as shown in Table 1. Therefore, we set the number of workers to 20 for the rest of the experiments.

Shortening Exploitation Intervals To study the effects of the shortened exploitation interval, we run experiments using exploitation intervals of 20 and 80 batches (iterations) respectively. As shown in Table 1, models with the shorter exploitation interval of 20 batches perform better than those with the longer exploitation interval across all three network structures, conforming to our assumption that the collected reward reflects the value of each data point used in the exploitation interval. This result adheres to our intuition that a shorter exploitation interval can encourage the sampler to accumulate more rewards and learn better sampling schedules.
For the rest of this section we keep the exploitation interval of 20.

Adding Exploration Type We further add mixture as the exploration type to see the effects of learning the underlying sampling distribution, completing the proposed method. As shown in Table 1, with ResNet-18 and ResNet-50 we push performance higher with the mixture exploration, and outperform the baseline method by about 1 and 1.8 percentage points on CIFAR-100 respectively. However, we found that this does not hold for DenseNet-121, which may be attributed to the larger capacity of DenseNet-121.

Generalization Over Datasets In addition, we experiment on other datasets. We report the results on CIFAR-10 in Table 2 and the results of ResNet-18 and ResNet-34 on ImageNet in Table 3. For CIFAR-10, we notice that the mixture and random exploration methods are comparable while both outperform the uniform baseline, which we believe is due to the simplicity of the dataset. On the more challenging ImageNet, the mixture exploration outperforms the random exploration by a clear margin. We also compare our AutoSampling with some recent non-uniform sampling methods on CIFAR-100, which can be found in Appendix A.2.

Figure 2: The comparison between histograms estimated from the sampling schedules of epochs 80, 160 and 240 from CIFAR-100 with ResNet-18. We divide the 50000 training images into 500 segments of 100 images, and calculate the histograms of total data counts of all segments. We reorder the x-axis based on the ranking of data counts for epoch 240 for easier comparison.

Table 4: Static vs dynamic sampling schedule on CIFAR-100 (%)

Network   | Uniform        | Static         | Dynamic
ResNet-18 | 78.46 ± 0.035  | 78.80 ± 0.007  | 79.44 ± 0.020
ResNet-50 | 79.70 ± 0.023  | 80.21 ± 0.014  | 81.53 ± 0.088

4.2 STATIC VS DYNAMIC SCHEDULES

We aim to see if the final sampling distribution estimated by our AutoSampling is sufficient to produce robust sampling schedules. In other words, we wish to know whether training with AutoSampling is a process of learning a robust sampling distribution, or a process of dynamically adjusting the sampling schedule for optimal training. To this end, we conduct training using different sampling schedules. First, we calculate the sampling distribution estimated throughout the learning steps of AutoSampling, and use it to generate the sampling schedule of a full training process, which we denote as STATIC. Moreover, we denote the sampling schedule learned using AutoSampling as DYNAMIC, since AutoSampling dynamically adjusts the sampling schedule alongside the training process. Finally, we denote the baseline method as UNIFORM, which uses the sampling schedule generated from the uniform distribution.

We report results on CIFAR-100 with ResNet-18 and ResNet-50 in Table 4. Models trained with STATIC sampling schedules exceed the baseline UNIFORM significantly, indicating the superiority of the learned sampling distribution over the uniform distribution. This shows the ability of AutoSampling to learn a good sampling distribution. Nonetheless, note that models trained with DYNAMIC sampling schedules outperform models trained with STATIC, by a margin bigger than the one between STATIC and UNIFORM. This result shows that despite AutoSampling's capability of learning a good sampling distribution, its flexibility during training matters even more. Moreover, this phenomenon also indicates that models at different stages of the learning process may require different sampling distributions to achieve optimal training. One single sampling distribution, even gradually estimated using AutoSampling, seems incapable of covering the needs of different learning stages. We plot the histograms of data counts in training estimated from schedules of different learning stages with ResNet-18 on CIFAR-100 in Fig. 2, showing the great differences between optimized sampling distributions from different epochs.

4.3 ANALYZING SAMPLING SCHEDULES LEARNED BY AUTOSAMPLING

To further investigate the sampling schedule learned by AutoSampling, we review the images at the tail and head parts of the sampling spectrum. In particular, given a learned sampling schedule, we rank all images based on their appearances in training. Training images at the top and bottom of the order are extracted, corresponding to high and low probabilities of being sampled respectively. In Fig. 3, we show 4 classes of exemplary images. The images of low probability tend to have clearer imagery features enabling easy recognition, while the images of high probability tend to be more obscure, indicating that the sampling schedule may show hard sample mining effects. However, as shown in A.3 and Fig. 4, the loss values and the probabilities of being sampled seem not to be highly correlated, which indicates more potential of AutoSampling beyond visually hard example mining. In addition, we notice the images of low probability also contain low-quality images. For instance, in Fig. 3 the leftmost image of the Camel class contains only legs. This shows that AutoSampling may potentially rule out problematic training data for better training.

Figure 3: Example images on the head and tail of the sampling spectrum (classes Baby, Bike, Chimp, Camel). The images on the left are the ones with low sampling probability, while the images on the right are more likely to be sampled. We obtain these images using AutoSampling with the ResNet-18 model on CIFAR-100.

Table 5: Transfer of sampling distributions learned by three model structures to ResNet-50 on CIFAR-100 (%). UNIFORM denotes the baseline result using the uniform sampling distribution; the remaining columns give the source of the sampling schedule.

Network   | Uniform        | ResNet-18      | ResNet-50      | DenseNet-121
ResNet-50 | 79.70 ± 0.023  | 80.27 ± 0.014  | 80.21 ± 0.014  | 80.47 ± 0.194

Furthermore, we examine the transfer ability of sampling distributions learned by AutoSampling to other network structures. Specifically, we run training on ResNet-50 (He et al., 2015) using STATIC sampling schedules generated by three distributions learned by AutoSampling on 3 different models. As shown in Table 5, using sampling schedules learned by AutoSampling from other models, we demonstrate similar improvements over the UNIFORM baseline. This result, combined with the above observations on images of different sampling probability, indicates that there may exist a common optimal sampling schedule determined by the intrinsic property of the data rather than the model being optimized. Our AutoSampling is an effort to gradually converge to such an optimal schedule.

4.4 DISCUSSIONS

The experimental results and observations from Sections 4.2 and 4.3 shed light on the possible existence of an optimal sampling schedule, which relies only on the intrinsic property of the data and the learning stage of the model, regardless of the specific model structure or any prior knowledge. The learned sampling schedule may provide enough rewards in the searching process, leading to sufficient convergence compared to other related works. Once obtained, the optimal sampling schedule may also be generalized over other model structures for robust training. Although AutoSampling requires a relatively large amount of computing resources to find a robust sampler, we want to point out that the efficiency of our method can be improved through better training techniques. Moreover, the possibility of an optimal sampling schedule relying solely on the data themselves may indicate more efficient sampling policy search algorithms, if one can quickly and effectively determine data value based on its properties.

5 CONCLUSIONS

In this paper, we introduce a new search-based AutoSampling scheme to overcome the issue of insufficient rewards for optimizing the high-dimensional sampling hyper-parameter by utilizing a shorter period of reward collection. We use a shortened exploitation interval to search in the local data space and provide sufficient rewards. For the exploration step, we estimate the sampling distribution from the searched sampling schedule and perturb it to search in the distribution space. We test our method and it consistently outperforms the baseline methods across different benchmarks.<|im_end|> <|im_start|>assistant ### Review Title A paper marginally above average ### Review Text The authors mainly concentrate on data sampling. To address the issue of optimizing high-dimensional sampling hyper-parameters in data sampling, and to relax the requirement of prior knowledge in current methods, the authors introduce a search-based method named AutoSampling. This method is comprised of an exploration step and an exploitation step which are conducted alternately. The exploitation step trains multiple child models with the current sampling strategy and saves the best model for the next iteration, while the exploration step estimates the sampling distribution according to the data sampled in the exploitation step and rectifies it so that all data can possibly be sampled. The authors have conducted sufficient experiments to verify the superiority of their method, especially its effectiveness and generalizability. Advantages: - The exploitation step and exploration step in AutoSampling are interesting; it is straightforward that this method can work well, as the sampling strategy is updated dynamically according to the current state of the model. - The proposed AutoSampling is simple and effective; one can implement it without much effort. - This method has great generalizability and does not require any prior knowledge. - This paper is well organized and written. Disadvantages: - In Table 1, the number of workers does have an influence on performance, and this influence is positively correlated in my opinion; however, we can see a performance degradation for DenseNet-121. The authors did not explain this. - The transferability of the obtained optimal sampling schedule is discussed in Section 4.4; a simple experiment is recommended. ### Review Rating 6: Marginally above acceptance threshold ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
r1f78iAcFm
ICLR.cc/2019/Conference
2019
GRAPH TRANSFORMATION POLICY NETWORK FOR CHEMICAL REACTION PREDICTION
["Kien Do", "Truyen Tran", "Svetha Venkatesh"]
We address a fundamental problem in chemistry known as chemical reaction product prediction. Our main insight is that the input reactant and reagent molecules can be jointly represented as a graph, and the process of generating product molecules from reactant molecules can be formulated as a sequence of graph transformations. To this end, we propose Graph Transformation Policy Network (GTPN) - a novel generic method that combines the strengths of graph neural networks and reinforcement learning to learn the reactions directly from data with minimal chemical knowledge. Compared to previous methods, GTPN has some appealing properties such as: end-to-end learning, and making no assumption about the length or the order of graph transformations. In order to guide model search through the complex discrete space of sets of bond changes effectively, we extend the standard policy gradient loss by adding useful constraints. Evaluation results show that GTPN improves the top-1 accuracy over the current state-of-the-art method by about 3% on the large USPTO dataset. Our model's performances and prediction errors are also analyzed carefully in the paper.
["Chemical Reaction", "Graph Transformation", "Reinforcement Learning"]
ABSTRACT

We address a fundamental problem in chemistry known as chemical reaction product prediction. Our main insight is that the input reactant and reagent molecules can be jointly represented as a graph, and the process of generating product molecules from reactant molecules can be formulated as a sequence of graph transformations. To this end, we propose the Graph Transformation Policy Network (GTPN), a novel generic method that combines the strengths of graph neural networks and reinforcement learning to learn reactions directly from data with minimal chemical knowledge. Compared to previous methods, GTPN has some appealing properties such as end-to-end learning and making no assumption about the length or order of graph transformations. In order to guide model search through the complex discrete space of sets of bond changes effectively, we extend the standard policy gradient loss by adding useful constraints. Evaluation results show that GTPN improves the top-1 accuracy over the current state-of-the-art method by about 3% on the large USPTO dataset. Our model's performance and prediction errors are also analyzed carefully in the paper.

1 INTRODUCTION

Chemical reaction product prediction is a fundamental problem in organic chemistry. It paves the way for planning syntheses of new substances (Chen & Baldi, 2009). For decades, huge effort has been spent to solve this problem. However, most methods still depend on handcrafted reaction rules (Chen & Baldi, 2009; Kayala & Baldi, 2011; Wei et al., 2016) or heuristically extracted reaction templates (Segler & Waller, 2017; Coley et al., 2017), and thus are not well generalizable to unseen reactions.

A reaction can be regarded as a set (or unordered sequence) of graph transformations in which reactants represented as molecular graphs are transformed into products by modifying the bonds between some atom pairs (Jochum et al., 1980; Ugi et al., 1979). See Fig. 1 for an illustration. We call an atom pair (u, v) that changes its connectivity during a reaction, together with its new bond b, a reaction triple (u, v, b). The reaction product prediction problem now becomes predicting a set of reaction triples given the input reactants and reagents. We argue that in order to solve this problem well, an intelligent system should have two key capabilities: (a) understanding the molecular graph structure of the input reactants and reagents so that it can identify possible reactivity patterns (i.e., atom pairs with changing connectivity); (b) knowing how to choose from these reactivity patterns a correct set of reaction triples to generate the desired products.

Recent state-of-the-art methods (Jin et al., 2017; Bradshaw et al., 2018) have built the first capability by leveraging graph neural networks (Duvenaud et al., 2015; Hamilton et al., 2017; Pham et al., 2017; Gilmer et al., 2017). However, these methods are either unaware of the valid sets of reaction triples (Jin et al., 2017) or limited to sequences of reaction triples with a predefined order (Bradshaw et al., 2018). The main challenge is that the space of all possible configurations of reaction triples is extremely large and non-differentiable. Moreover, a small change in the predicted set of reaction triples can lead to very different reaction products, and a small mistake can produce an invalid prediction.

In this paper, we propose a novel method called the Graph Transformation Policy Network (GTPN) that addresses the aforementioned challenges. Our model consists of three main components: a graph neural network (GNN), a node pair prediction network (NPPN) and a policy network (PN). Starting from the initial graph of reactant and reagent molecules, our model iteratively alternates between modeling an input graph using the GNN and predicting a reaction triple using the NPPN and PN to generate a new intermediate graph as input for the next step, until it decides to stop. The final generated graph is considered as the predicted products of the reaction.

Figure 1: A sample reaction represented as a set of graph transformations from reactants (leftmost), through intermediate molecules, to products (rightmost). Atoms are labeled with their type (Carbon, Oxygen, ...) and their index (1, 2, ...) in the molecular graph (e.g., O:1, C:2, Br:8, C:10, N:12). The atom pairs that change connectivity and their new bonds (if existing) are highlighted in green. There are two bond changes in this case: 1) the double bond between O:1 and C:2 becomes single; 2) a new single bond between C:2 and C:10 is added.

Importantly, GTPN does not assume any fixed number or order of bond changes, but learns these properties itself. One can view GTPN as a reinforcement learning (RL) agent that operates on a complex and non-differentiable space of sets of reaction triples. To guide our model towards learning a diverse yet robust-to-small-changes policy, we customize our loss function by adding some useful constraints to the standard policy gradient loss (Mnih et al., 2016).

To the best of our knowledge, GTPN is the most generic approach to the reaction product prediction problem so far, in the sense that: i) it combines graph neural networks and reinforcement learning into a unified framework and trains everything end-to-end; ii) it does not use any handcrafted or heuristically extracted reaction rules/templates to predict the products; instead, it automatically learns various types of reactions from the training data and can generalize to unseen reactions; iii) it can interpret how the products are formed via the sequence of reaction triples it generates.

We evaluate GTPN on two large public datasets named USPTO-15k and USPTO. Our method significantly outperforms all baselines in top-1 accuracy, achieving new state-of-the-art results of 82.39% and 83.20% on USPTO-15k and USPTO, respectively. In addition, we provide comprehensive analyses of the performance of GTPN and of the different types of errors our model can make.

2 METHOD

2.1 CHEMICAL REACTION AS A MARKOV DECISION PROCESS OF GRAPH TRANSFORMATIONS

A reaction occurs when reactant molecules interact with each other in the presence (or absence) of reagent molecules to form new product molecules by breaking or adding some of their bonds. Our main insight is that reaction product prediction can be formulated as predicting a sequence of such bond changes given the reactant and reagent molecules as input. A bond change is characterized by the atom pair (where the change happens) and the new bond type (what the change is).
We call this atom pair a reaction atom pair, and the atom pair together with its new bond type a reaction triple.

More formally, we represent the entire system of input reactant and reagent molecules as a labeled graph G = (V, E) with multiple connected components, each of which corresponds to a molecule. Nodes in V are atoms labeled with their atomic numbers, and edges in E are bonds labeled with their bond types. Given G as input, we predict a sequence of reaction triples that transforms G into a graph of product molecules G'.

As reactions vary in their number of transformation steps, we represent the sequence of reaction triples as (ω, u, v, b)_0, (ω, u, v, b)_1, ..., (ω, u, v, b)_{T−1}, or (ω, u, v, b)_{0:T} for short. Here T is the maximum number of steps, (u, v) is a pair of nodes, b is the new edge type of (u, v), and ω is a binary signal that indicates the end of the sequence. If the sequence ends at T_end < T, then ω_0, ..., ω_{T_end−1} are 1 and ω_{T_end}, ..., ω_{T−1} are 0. At every step τ, if ω_τ = 1, we apply the predicted edge change (u, v, b)_τ to the current graph G_τ to create a new intermediate graph G_{τ+1} as input for the next step τ + 1.

Figure 2: Workflow of a Graph Transformation Policy Network (GTPN). At every step of the forward pass, our model performs 7 major functions: 1) computing the atom representation vectors; 2) computing the K most probable reaction atom pairs; 3) predicting the continuation signal ω; 4) predicting the reaction atom pair (u, v); 5) predicting a new bond b for this atom pair; 6) updating the atom representation vectors; and 7) updating the recurrent state.

This iterative process of graph transformation can be formulated as a Markov Decision Process (MDP) characterized by a tuple (S, A, P, R, γ), in which S is a set of states, A is a set of actions, P is a state transition function, R is a reward function, and γ is a discount factor. Since the process is finite and contains no loops, we set the discount factor to 1. The rest of the MDP tuple is defined as follows:

State: A state s_τ ∈ S is an intermediate graph G_τ generated at step τ (0 < τ ≤ T). When τ = 0, we denote s_0 = G_0 = G.

Action: An action a_τ ∈ A performed at step τ is the tuple (ω, u, v, b)_τ. The action is composed of three consecutive sub-actions: ω_τ, (u, v)_τ, and b_τ. If ω_τ = 0, our model ignores the next sub-actions (u, v)_τ and b_τ, and all future actions (ω, u, v, b)_{τ+1:T}. Note that setting ω to be the first sub-action is useful in case a reaction does not happen, i.e., ω_0 = 0.

State Transition: If ω_τ = 1, the current graph G_τ is modified based on the reaction triple (u, v, b)_τ to generate a new intermediate graph G_{τ+1}. We do not incorporate chemical rules such as valency checks during state transition, because the current bond change may result in invalid intermediate molecules G_τ which later bond changes may compensate for to create valid final products G_{T_end}.

Reward: We use both immediate rewards and delayed rewards to encourage our model to learn the optimal policy faster. At every step τ, if the model predicts ω_τ, (u, v)_τ or b_τ correctly, it receives a positive reward for each correct sub-action; otherwise, a negative reward is given. After the prediction process has terminated, if the generated products are exactly the same as the ground-truth products, we give the model a positive reward, otherwise a negative reward.
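The state transition above is just an edge edit on the molecular graph. The following is a minimal sketch using a plain dictionary from unordered atom pairs to bond types (an illustrative representation, not the authors' data structure); note that, as in the paper, no valence check is applied.

```python
def apply_triple(bonds, u, v, b):
    """Set the bond of atom pair (u, v) to type b; b == "NULL" removes it."""
    key = (min(u, v), max(u, v))
    new_bonds = dict(bonds)
    if b == "NULL":
        new_bonds.pop(key, None)   # bond is broken
    else:
        new_bonds[key] = b         # bond is added or its type is changed
    return new_bonds

# The two bond changes of Figure 1: O:1=C:2 becomes single; C:2-C:10 is added.
g = {(1, 2): "double", (2, 3): "single"}
g = apply_triple(g, 1, 2, "single")
g = apply_triple(g, 2, 10, "single")
```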
The concrete reward values are provided in Appendix A.3.

2.2 GRAPH TRANSFORMATION POLICY NETWORK

In this section, we describe the architecture of our model, the Graph Transformation Policy Network (GTPN). GTPN has three main components, namely a Graph Neural Network (GNN), a Node Pair Prediction Network (NPPN), and a Policy Network (PN). Each component is responsible for one or several of the key functions shown in Fig. 2: the GNN performs functions 1 and 6; the NPPN performs function 2; and the PN performs functions 3, 4 and 5. Apart from these components, GTPN also has a Recurrent Neural Network (RNN) to keep track of past transformations. The hidden state h of this RNN is used by the NPPN and PN to make accurate predictions.

2.2.1 GRAPH NEURAL NETWORK

To model the intermediate graph G_τ at step τ, we compute the node state vector x_i^τ of every node i in G_τ using a variant of Message Passing Neural Networks (Gilmer et al., 2017):

x_i^τ = MessagePassing^m(x_i^{τ−1}, v_i, N(i))   (1)

where m is the number of message passing steps; v_i is the feature vector of node i; N(i) is the set of all neighbor nodes of node i; and x_i^{τ−1} is the state vector of node i at the previous step. When τ = 0, x_i^{τ−1} is initialized from v_i using a neural network. Details about the MessagePassing(·) function are provided in Appendix A.1.

2.2.2 NODE PAIR PREDICTION NETWORK

In order to predict how likely an atom pair (i, j) of the intermediate graph G_τ is to change its bond, we assign (i, j) a score s_ij^τ ∈ R. If s_ij^τ is high, (i, j) is more probably a reaction atom pair; otherwise, less probably. Similar to (Jin et al., 2017), we use two different networks, called the "local" network and the "global" network, for this task. In the case of the "local" network, s_ij^τ is computed as:

z_ij^τ = σ(W_1 [h^{τ−1}, (x_i^τ + x_j^τ), e_ij^τ] + b_1)   (2)
s_ij^τ = f_atompair(z_ij^τ)   (3)

where f_atompair is a neural network; σ is a nonlinear activation function (e.g., ReLU); [·] denotes vector concatenation; W_1 and b_1 are parameters; h^{τ−1} is the hidden state of the RNN at the previous step; and e_ij^τ is the representation vector of the bond between (i, j). If there is no bond between (i, j), we assume that its bond type is "NULL". We consider z_ij^τ the representation vector of the atom pair (i, j).

The "global" network leverages self-attention (Vaswani et al., 2017; Wang et al., 2018) to detect compatibility between atom i and all other atoms before computing the scores:

r_ij^τ = σ(V_1 [(x_i^τ + x_j^τ), e_ij^τ] + c_1)
a_ij^τ = softmax(V_2 r_ij^τ + c_2)
c_i^τ = Σ_{j∈V} a_ij^τ x_j^τ
z_ij^τ = σ(W_1 [h^{τ−1}, (x_i^τ + x_j^τ), (c_i^τ + c_j^τ), e_ij^τ] + b_1)   (4)
s_ij^τ = f_atompair(z_ij^τ)   (5)

where a_ij^τ is the attention score from node i to every other node j, and c_i^τ is the context vector of atom i that summarizes the information from all other atoms.

During experiments, we tried both options mentioned above and saw that the "global" network clearly outperforms the "local" network, so we set the "global" network as the default module in our model. In addition, since reagents never change their form during a reaction, we explicitly exclude all atom pairs that have either atom belonging to the reagents. This leads to better results than not using reagent information. Detailed analyses are provided in Appendix A.5.

Top-K atom pairs Because the number of atom pairs that actually participate in a reaction is very small (usually fewer than 10) compared to the total number of atom pairs in the input molecules (usually hundreds or thousands), it is much more efficient to identify reaction triples from a small subset of highly probable reaction atom pairs. For that reason, we extract K (K ≪ |V|²) atom pairs with the highest scores.
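The top-K extraction over a score matrix can be sketched as follows; this is a minimal numpy sketch under the assumption of a dense |V| × |V| score matrix and an optional boolean mask for reagent-touching pairs, not the authors' implementation.

```python
import numpy as np

def topk_pairs(scores, K=10, mask=None):
    """Return the K highest-scoring unordered atom pairs and their scores."""
    s = np.array(scores, dtype=float)
    n = s.shape[0]
    s[np.tril_indices(n)] = -np.inf   # count each unordered pair (i < j) once
    if mask is not None:
        s[mask] = -np.inf             # drop pairs with an atom in a reagent
    flat = np.argsort(s, axis=None)[::-1][:K]
    pairs = np.column_stack(np.unravel_index(flat, s.shape))
    return pairs, s[pairs[:, 0], pairs[:, 1]]
```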
Later, we will predict reaction triples taken from these Katom pairs only.We denote the set of top- Katom pairs, their corresponding scores, and representation vectors as (uk;vk)jk=1;K,sukvkjk=1;KandZK=zukvkjk=1;K, respectively.2.2.3 P OLICY NETWORKPredicting continuation signal To account for varying number of transformation steps, PN gener-ates a continuation signal 2f0;1gto indicate whether prediction should continue or terminate.4Under review as a conference paper at ICLR 2019is drawn from a Bernoulli distribution:p(= 1) = sigmoidfsignalh1;g(ZK)(6)whereh1is the previous RNN state; ZKis the set of representation vectors of the top Katompairs at the current step; fsignalis a neural network; gis a function that maps an unordered set ofinputs to an output vector. For simplicity, we use a mean function:z1K=g(ZK) =1KKXk=1Wz1ukvkPredicting atom pair At the next sub-step, PN predicts which atom pair changes its bond duringthe reaction by sampling from the top- Katom pairs with probability:p((uk;vk)) =softmaxKsukvk(7)wheresukvkis the score of the atom pair (uk;vk)computed in Eq. (5). After predicting the atompair(u;v), we will mask it to ensure that it could not be in the top Kagain at future steps.Predicting bond type Given an atom pair (u;v)sampled from the previous sub-step, we predicta new bond type bbetweenuandvto get a complete reaction triple (u;v;b )using the probability:p(bj(u;v)) =softmaxBfbondh1;zuv;(ebebold)(8)whereBis the total number of bond types; zuvis the representation vector of (u;v)computed inEq. (4);boldis the old bond of (u;v);eboldandebare the embedding vectors corresponding to thebond typeboldandb, respectively; and fbondis a neural network.2.3 U PDATING STATESAfter predicting a complete reaction triple (u;v;b ), our model updates: i) the new recurrent hiddenstateh, and ii) the new node representation vectors x+1iof the new intermediate graph G+1fori2V. These updates are presented in Appendix A.2.2.4 T RAININGLoss function plays a central role in achieving fast training and high performance. We design thefollowing loss:L=1LA2C+2Lvalue+3Latom pair+4Lover length+5Lin topKwhereLA2Cis the Advantage Actor-Critic (A2C) loss (Mnih et al., 2016) to account for the correctsequence of reaction triples; Lvalueis the loss for estimating the value function used in A2C; Latom pairaccounts for binary change in the bond of an atom pair; Lover lengthpenalizes long predicted sequences;andLin topKis the rank loss to force a ground-truth reaction atom pair to appear in the top- K; and1;:::; 5>0are tunable coefficients. The component losses are explained in the following.2.4.1 R EACTION TRIPLE LOSSThe loss follows a policy gradient method known as Advantage Actor-Critic (A2C):LA2C=Tend1X=0Asignallogp() +Aatom pair logp((u;v)) +Abondlogp(b)ATendsignallogTend(9)whereTendis the first step that = 0;Asignal,Aatom pair andAbondare called advantages . To computethese advantages, we use the unbiased estimations called Temporal Different errors, defined as:5Under review as a conference paper at ICLR 2019Asignal =rsignal+VZ+1KV(ZK) (10)Aatom pair =ratom pair +VZ+1KV(ZK) (11)Abond =rbond+VZ+1KV(ZK) (12)wherersignal,ratom pair ,rbondare immediate rewards at step ; at the final step =Tend, the modelreceives additional delayed rewards; is the discount factor; and Vis the parametric value function.We trainVusing the following mean square error loss:Lvalue=TendX=0kV(ZK)Rk2(13)whereRis the return at step .Episode termination during training Although the loss defined in Eq. 
(9) is correct, it is not goodto use in practice because: i) If our model selects a wrong sub-action at any sub-step of the stepTwrong (Twrong< T end), the whole predicted sequence will be incorrect regardless of what will bepredicted from Twrong+ 1toTend. Therefore, computing the loss for actions from Twrong+ 1toTendisredundant. ii) More importantly, the incorrect updates of the graph structure at subsequent steps fromTwrong+ 1toTendwill lead to cumulative prediction errors which make the training of our modelmuch more difficult.To resolve this issue, during training, we use a binary vector 2f0;1g3Tto keep track of the firstwrong sub-action: t=1ifttfirst wrong0ift>t first wrongwheretfirst wrong denotes the sub-step at which ourmodel chooses a wrong sub-action the first time. The actor-critic loss in Eq. (9) now becomes:LA2C=TX=0Asignallogp() +(+1)Aatom pair logp((u;v)) +(+2)Abondlogp(b)(14)whereTis the maximum number of steps. Similarly, we change the value loss into:Lvalue=TX=0kV(ZK)Rk22.4.2 R EACTION ATOM PAIR LOSSTo train our model to assign higher scores to reaction atom pairs and lower to non-reaction atompairs, we use the following cross-entropy loss function:Latom pair=Tfirst wrongX=0Xi2VXj2V;j6=iij(yijlogpij+ (1yij) log(1pij)) (15)whereTfirst wrong =jtfirst wrong3k;ijt2f0;1gis a mask of the atom pair (i;j)at step;yij2f0;1gisthe label indicating whether the atom pair (i;j)is a reaction atom pair or not; pij=sigmoid (sij)(see Eq. (5)).2.4.3 C ONSTRAINT ON THE SEQUENCE LENGTHOne major difficulty of the chemical reaction prediction problem is to know exactly when to stopprediction so we can make accurate inference. By forcing the model to stop immediately whenmaking wrong prediction, we can prevent cumulative error and significantly reduce variance duringtraining. But it also comes with a cost: The model cannot learn (because it does not have to learn)when to stop. This phenomenon can be visualized easily as the model predicts 1for the signal at6Under review as a conference paper at ICLR 2019Dataset #reactions #changes #molecules #atoms #bondsUSPTO-15ktrain 10,500 1 | 11 | 2.3 1 | 20 | 3.6 4 | 100 | 34.9 3 | 110 | 34.7valid 1,500 1 | 11 | 2.3 1 | 20 | 3.6 7 | 94 | 34.5 5 | 99 | 34.2test 3,000 1 | 11 | 2.3 1 | 16 | 3.6 7 | 98 | 34.9 5 | 102 | 34.7USPTOtrain 409,035 1 | 6 | 2.2 2 | 29 | 4.8 9 | 150 | 39.7 6 | 165 | 38.6valid 30,000 1 | 6 | 2.2 2 | 25 | 4.8 9 | 150 | 39.6 7 | 158 | 38.5test 40,000 1 | 6 | 2.2 2 | 22 | 4.8 9 | 150 | 39.8 7 | 162 | 38.7Table 1: Statistics of USPTO-15k andUSPTO datasets. “changes” means bond changes, “molecules”means reactants and reagents in a reaction; “atoms” and“bonds” are defined for a molecule. Apartfrom “#reactions” , other columns are presented in the format “min | max | mean”.every stepduring inference. In order to make the model aware of the correct sequence length duringtraining, we define a loss that punishes the model if it produces a longer sequence than the groundtruth sequence:Lover length=XTgtend<T endlogp(= 0) (16)whereTgtendis the end step of the ground-truth sequence. Note that the loss in Eq. (16) is not appliedwhenTendTgtend. The reason is that forcing = 1withTend <Tgtendis not theoretically correctbecause all the signals after Tendare assumed to be 0. The incentive to force Tendclose toTgtendwhenit is smaller than Tgtendhas already been included in the advantages in Eq. (14).2.4.4 C ONSTRAINT ON THE TOP -KATOM PAIRSIdeally, the loss from Eq. (15) pushes a reaction atom pair (~u;~v)into the top-Katom pairs at eachstep <Tgtend. 
2.4.4 CONSTRAINT ON THE TOP-K ATOM PAIRS

Ideally, the loss from Eq. (15) pushes a reaction atom pair $(\tilde{u}, \tilde{v})$ into the top-$K$ atom pairs at each step $\tau < T^{\mathrm{gt}}_{\mathrm{end}}$. However, this is not guaranteed, especially when $\tau$ comes close to $T^{\mathrm{gt}}_{\mathrm{end}}$. To encourage the ground-truth reaction atom pair $(\tilde{u}, \tilde{v})$ with the highest score to appear in the top $K$, we introduce an additional rank-based loss:

$$\mathcal{L}_{\mathrm{in\,topK}} = -\sum_{\tau=0}^{T_{\mathrm{first\,wrong}}} \log p^{\tau}\big((\tilde{u}, \tilde{v}) \text{ in top} K\big)$$

where $p^{\tau}((\tilde{u}, \tilde{v}) \text{ in top} K)$ is computed as:

$$p^{\tau}\big((\tilde{u}, \tilde{v}) \text{ in top} K\big) = \frac{\exp(s^{\tau}_{\tilde{u}\tilde{v}})}{\exp(s^{\tau}_{\tilde{u}\tilde{v}}) + \sum_{k=1}^{K} \exp(s^{\tau}_{u_k v_k})} \quad (17)$$

3 EXPERIMENTS

3.1 DATASET

We evaluate our model on two standard datasets, USPTO-15k (15K reactions) and USPTO (480K reactions), which have been used in previous works (Jin et al., 2017; Schwaller et al., 2018; Bradshaw et al., 2018). Details about these datasets are given in Table 1. The USPTO dataset contains reactant, reagent and product molecules represented as SMILES strings. Using RDKit (https://www.rdkit.org/), we convert the SMILES strings into molecule objects and store them as graphs. For each reaction, every atom in the reactant and reagent molecules is identified with a unique "atom map number"; this identity is the same in the products. Using this knowledge, we compare every atom pair in the input molecules with its correspondent in the product molecules to obtain a ground-truth set of reaction triples for training. In USPTO-15k, the ground-truth sets of reaction triples were precomputed by (Jin et al., 2017).

Table 1: Statistics of the USPTO-15k and USPTO datasets. "changes" means bond changes; "molecules" means reactants and reagents in a reaction; "atoms" and "bonds" are counted per molecule. Apart from #reactions, cells are given as min/max/mean.

Dataset, split | #reactions | #changes | #molecules | #atoms | #bonds
USPTO-15k, train | 10,500 | 1/11/2.3 | 1/20/3.6 | 4/100/34.9 | 3/110/34.7
USPTO-15k, valid | 1,500 | 1/11/2.3 | 1/20/3.6 | 7/94/34.5 | 5/99/34.2
USPTO-15k, test | 3,000 | 1/11/2.3 | 1/16/3.6 | 7/98/34.9 | 5/102/34.7
USPTO, train | 409,035 | 1/6/2.2 | 2/29/4.8 | 9/150/39.7 | 6/165/38.6
USPTO, valid | 30,000 | 1/6/2.2 | 2/25/4.8 | 9/150/39.6 | 7/158/38.5
USPTO, test | 40,000 | 1/6/2.2 | 2/22/4.8 | 9/150/39.8 | 7/162/38.7

3.2 REACTION ATOM PAIR PREDICTION

In this section, we test our model's ability to identify reaction atom pairs by formulating it as a ranking problem over the scores computed in Eq. (5). Similar to (Jin et al., 2017), we use Coverage@k as the evaluation metric: the proportion of reactions for which all ground-truth reaction atom pairs appear among the top k predicted atom pairs.

Table 2: Results for reaction atom pair prediction. C@k is coverage at k. Our GNN achieves the best result in every column. WLN* is the original model from (Jin et al., 2017), while WLN is our re-implemented version. Except for WLN*, all models explicitly use reagent information.

Model | USPTO-15k C@6 | C@8 | C@10 | USPTO C@6 | C@8 | C@10
WLN* (Jin et al., 2017) | 81.6 | 86.1 | 89.1 | 89.8 | 92.0 | 93.3
WLN (Jin et al., 2017) | 88.45 | 91.65 | 93.34 | 90.97 | 93.98 | 95.26
CLN (Pham et al., 2017) | 88.68 | 91.63 | 93.07 | 90.72 | 93.57 | 94.80
Our GNN | 88.92 | 92.00 | 93.57 | 91.24 | 94.17 | 95.33

Figure 3: Coverage@k and Recall@k with respect to k for the USPTO dataset (x-axis: k from 1 to 20; y-axis: metric value; one curve each for coverage@k and recall@k).

We compare our proposed graph neural network (GNN) with the Weisfeiler-Lehman Network (WLN) (Jin et al., 2017) and the Column Network (CLN) (Pham et al., 2017). Since our GNN explicitly uses reagent information to compute the scores of atom pairs, we modify the implementations of WLN and CLN accordingly for a fair comparison. From Table 2, we observe that our GNN clearly outperforms WLN and CLN in all cases. We attribute this improvement to the use of a separate node state vector $x^{\tau}_i$ (different from the node feature vector $v_i$) for updating the structural information of a node (see Eq. (21)). The other two models, in contrast, use a single vector to store both the node features and the structure, so some information may be lost. In addition, using explicit reagent information boosts prediction accuracy, improving WLN by 1-7% depending on the metric: the presence of reagent information reduces the number of atom pairs to be searched over and contributes to the likelihood of reaction atom pairs. Further results are presented in Appendix A.5.
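For reference, the Coverage@k metric used above is a direct transcription of its definition. The sketch below assumes scores stored as one dict per reaction and is illustrative rather than the evaluation code used in the paper.

```python
def coverage_at_k(all_scores, all_true_pairs, k):
    """Fraction of reactions whose ground-truth reaction atom pairs ALL
    appear among the k highest-scoring predicted atom pairs.
    all_scores: list of dicts {(i, j): s_ij}, one per reaction
    all_true_pairs: list of sets of ground-truth (i, j) pairs"""
    hits = 0
    for scores, true_pairs in zip(all_scores, all_true_pairs):
        top_k = set(sorted(scores, key=scores.get, reverse=True)[:k])
        hits += true_pairs <= top_k        # set inclusion: all pairs covered
    return hits / len(all_scores)
```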
3.3 TOP-K ATOM PAIR EXTRACTION

The performance of our model depends on the number of selected top atom pairs, K, which presents a trade-off between coverage and efficiency. In addition to the Coverage@k metric from Sec. 3.2, we use Recall@k, the proportion of correct atom pairs that appear in the top k, to find a good K. Fig. 3 shows Coverage@k and Recall@k for the USPTO dataset with respect to k. Both curves increase rapidly when k < 10 and stabilize when k > 10. We also ran experiments with k = 10, 15, 20 and observed that their prediction results are quite similar. Hence, in what follows we select K = 10 for efficiency.

Table 3: Results for reaction prediction. P@k is precision at k. Results from (Schwaller et al., 2018) are marked with * and were computed on a slightly different version of USPTO that contains only single-product reactions; WLDN (Jin et al., 2017) was the previous state of the art. GTPN variants: +B = beam search (beam width 20), +I = invalid product removal, +D = duplicated product removal.

Model | USPTO-15k P@1 | P@3 | P@5 | USPTO P@1 | P@3 | P@5
WLDN (Jin et al., 2017) | 76.7 | 85.6 | 86.8 | 79.6 | 87.7 | 89.2
Seq2Seq (Schwaller et al., 2018) | - | - | - | 80.3* | 86.2* | 87.5*
GTPN | 72.31 | - | - | 71.26 | - | -
GTPN +B | 74.56 | 82.62 | 84.23 | 73.25 | 80.56 | 83.53
GTPN +B+D | 74.56 | 83.19 | 84.97 | 73.25 | 84.31 | 85.76
GTPN +B+I | 82.39 | 85.60 | 86.68 | 83.20 | 84.97 | 85.90
GTPN +B+I+D | 82.39 | 85.73 | 86.78 | 83.20 | 86.03 | 86.48

3.4 REACTION PRODUCT PREDICTION

This experiment validates GTPN on full reaction product prediction against recent state-of-the-art methods (Jin et al., 2017; Schwaller et al., 2018) using the accuracy metric. The recent method ELECTRO (Bradshaw et al., 2018) is not directly comparable here because it was only evaluated on a subset of USPTO limited to linear chain topology; a separate comparison against ELECTRO is reported in Appendix A.6. Table 3 shows the prediction results. We produce multiple reaction product candidates using beam search decoding with beam width N = 20; details about beam search and its behavior are presented in Appendix A.4.

In brief, we compute the length-normalized log-probabilities of the N predicted sequences of reaction triples and sort these values in descending order to obtain a ranked list of N possible reaction outcomes. Given a predicted sequence of reaction triples $(u, v, b)_{0:T}$, we generate the reaction products from the input reactants simply by replacing the old bond of each $(u, v)$ with $b$. However, these products are not guaranteed to be valid (e.g., the maximum-valence constraint may be violated, or aromatic molecules may fail to be kekulized), so we post-process the outputs by removing all invalid products. This removal increases the top-1 accuracy by about 8% and 10% on USPTO-15k and USPTO, respectively. Due to the permutation invariance of the predicted sequence of reaction triples, some product candidates are duplicates and are also removed; this does not change P@1 but slightly improves P@3 and P@5 by about 0.5-1% on the two datasets.

Overall, GTPN with beam search and post-processing outperforms both WLDN (Jin et al., 2017) and Seq2Seq (Schwaller et al., 2018) in top-1 accuracy. For top-3 and top-5, our model's performance is comparable to WLDN's on USPTO-15k and worse than WLDN's on USPTO. This is not surprising, since our model is trained to accurately predict the top-1 outcome rather than to rank the candidates directly like WLDN.
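The ranking and post-processing just described amount to sorting by length-normalized log-probability and filtering. A minimal sketch with assumed data structures follows; the `is_valid` callback stands in for a chemistry-level sanity check (e.g., RDKit sanitization) and is not the paper's code.

```python
import numpy as np

def rank_and_filter(candidates, is_valid):
    """candidates: list of (product, step_log_probs) pairs from beam search.
    Sort by length-normalized log-probability, then drop invalid and
    duplicated products, as in Sec. 3.4."""
    scored = sorted(candidates,
                    key=lambda c: np.mean(c[1]),   # normalized over length
                    reverse=True)
    seen, ranked = set(), []
    for product, _ in scored:
        if not is_valid(product) or product in seen:
            continue
        seen.add(product)
        ranked.append(product)
    return ranked
```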
It is important to emphasize that we did not tune the model hyper-parameters when training on USPTO but reused the optimal settings from USPTO-15k (which is 25 times smaller than USPTO), so the results may not be optimal (see Appendix A.3 for more training details).

4 RELATED WORK

4.1 LEARNING TO PREDICT CHEMICAL REACTION

In chemical reaction prediction, machine learning has replaced rule-based methods (Chen & Baldi, 2009) for better generalizability and scalability. Existing machine learning-based techniques are either template-free (Kayala & Baldi, 2011; Jin et al., 2017; Fooshee et al., 2018) or template-based (Wei et al., 2016; Segler & Waller, 2017; Coley et al., 2017). Both groups share the same mechanism: running multiple stages with the aid of reaction templates or rules. For example, (Wei et al., 2016) proposed a two-stage model that first classifies reactions into different types based on the neural fingerprint vectors (Duvenaud et al., 2015) of the reactant and reagent molecules; it then applies pre-designed SMARTS transformations to the reactants, according to the most suitable predicted reaction type, to generate the reaction products.

The work of (Jin et al., 2017) treats a reaction as a set of bond changes: in the first step, they predict which atom pairs are likely to be reactive, using a variant of graph neural networks called Weisfeiler-Lehman Networks (WLNs). In the next step, they proceed much like (Coley et al., 2017), modifying the bond type between the selected atom pairs (with chemical rules satisfied) to create product candidates and ranking them (with the reactant molecules as additional input) using another kind of WLN called Weisfeiler-Lehman Difference Networks (WLDNs). To the best of our knowledge, (Jin et al., 2017) is the first work to achieve remarkable results (Precision@1 of about 79.6%) on the large USPTO dataset containing more than 480 thousand reactions.

The works of (Nam & Kim, 2016) and (Schwaller et al., 2018) avoid multi-stage prediction by building a seq2seq model that generates the (canonical) SMILES string of the single product from the concatenated SMILES strings of the reactants and reagents in an end-to-end manner. However, their methods can neither handle sets of reactants/reagents/products properly nor provide a concrete reaction mechanism for each reaction. The most recent work on this topic is (Bradshaw et al., 2018), which solves the reaction prediction problem by predicting a sequence of bond changes given input reactants and reagents represented as graphs. To handle ordering, they only select reactions with a predefined topology. Our method, by contrast, is order-free and can be applied to almost any kind of reaction.

4.2 GRAPH NEURAL NETWORKS FOR MODELING MOLECULES

Recent years have seen rapid development of graph neural networks (GNNs) for modeling molecules. These models address different problems in chemistry, including toxicity prediction (Duvenaud et al., 2015), drug activity classification (Shervashidze et al., 2011; Dai et al., 2016; Pham et al., 2018), protein interface prediction (Fout et al., 2017), and drug generation (Simonovsky & Komodakis, 2018; Jin et al., 2018).
Most of them can be regarded as variants of message-passing graph neural networks (MPGNNs) (Gilmer et al., 2017).

4.3 REINFORCEMENT LEARNING FOR STRUCTURAL REASONING

Reinforcement learning (RL) has become a standard approach to many structural reasoning problems (structural reasoning being the problem of inferring or generating new structure, e.g., objects with relations), because it allows agents to perform discrete actions. A typical example of using RL for structural reasoning is drug generation (Li et al., 2018; You et al., 2018). Both (Li et al., 2018) and (You et al., 2018) learn the same kind of generative policy, whose action set includes: i) adding a new atom or a molecular scaffold to the intermediate graph, ii) connecting an existing pair of atoms with a bond, and iii) terminating generation. However, (You et al., 2018) uses an adversarial loss to enforce global chemical constraints on the generated molecules as a whole, instead of the common reconstruction loss used in (Li et al., 2018). Other examples are path-based relational reasoning in knowledge graphs (Das et al., 2018) and learning combinatorial optimization over graphs (Khalil et al., 2017).

5 DISCUSSION

We have introduced a novel method named Graph Transformation Policy Network (GTPN) for predicting the products of a chemical reaction. GTPN uses graph neural networks to represent the input reactant and reagent molecules, and uses reinforcement learning to find an optimal sequence of bond changes that transforms the reactants into products. We train GTPN using the Advantage Actor-Critic (A2C) method, with appropriate constraints that account for notable aspects of chemical reactions. Experiments on real datasets have demonstrated the competitiveness of our model. Although GTPN was proposed to solve the chemical reaction problem, it is a generic solution to the graph transformation problem, which can be useful for reasoning about relations (e.g., see (Zambaldi et al., 2018)) and changes in relations. Open directions include addressing dynamic graphs over time, and extending towards full chemical planning and structural reasoning using RL.
BylmJnLcnX
Nice application. Some edits are needed.
6: Marginally above acceptance threshold
Update: Score increased.
___________________________________
Original review:

The paper presents an approach to predict the products of chemical reactions, given the reactants and reagents. It works by stepwise predicting the atom pairs that change their bonds in the course of the reaction, and then adjusting the bonds between them. This can be interpreted as a stepwise graph transformation. I think this is an interesting applied ML paper with fair results. The presentation is clear and understandable. The experimental setup is reasonable. However, the paper is not ready yet to be accepted in my opinion. I think a higher score is justified if the authors address the following points:

- Relation to previous work, originality: In contrast to what the authors claim, what is predicted here is not exactly the reaction mechanism, but an implementation of the principle of minimal chemical distance, which was already described by Ugi and coworkers in 1980 [see Jochum, Gasteiger, Ugi, The Principle of Minimum Chemical Distance, Angew. Chem. Int. Ed. Engl. 1980, 19, p. 495-505]. The "insight" the authors have about treating reagents and reactants jointly is Organic Chemistry 101, and that reactions are stepwise graph transformations was also reported by Ugi et al., already in 1979! [Ugi et al. "New applications of computers in chemistry." Angewandte Chemie International Edition in English 18.2 (1979): 111-123.] I assume the authors were not aware of these papers, but now they are, so this needs to be modified accordingly, and these papers need to be referred to in the introduction.

- Questions: The authors suggest that graph neural networks are more generic than so-called heuristic features (fingerprints), which, as Duvenaud et al. have elaborated, can themselves be interpreted as graph neural networks with fixed weights. Also, there are results by the Hochreiter group which show that graph neural networks perform worse than classical chemical features under rigorous testing {DOI: 10.1039/C8SC00148K}. Do the authors think their models could also improve if they used the classical fingerprints? Is the GRU really needed to encode the past bond changes? What happens if you remove it? The statement that the method has the advantage of not relying on handcrafted reaction templates is somewhat overselling, because it instead uses a handcrafted, complex neural network architecture. How complicated is it to train the network? If you remove some of the "tricks" of shaping the loss function, does it still train well? To what degree is the ranking of the different models just a matter of hyperparameter tuning or different architectures? If you used a different graph neural net instead of an MPNN on top of your GTPN method, what would you expect? Are the differences between the models significant? During prediction, you apply a flag to the starting molecules indicating whether they are a reagent or a reactant. How do you know upfront what is a reagent or a reactant during inference? On pages 5 and 7, you speak of the correct sequence of reaction triples (which implies an ordering), even though earlier you claim the algorithm is order-invariant? Where do you get the ground-truth labels from? I assume these are already annotated in the data. In the appendix, please replace the pie chart with a bar chart.

- Language: I would suggest that the authors adapt the language of their paper towards a more academic tone.
Science is not a sports competition about getting slightly higher numbers on benchmarks; it is about providing insights and explanations. Words like "beating" or "record" are locker-room talk, and should be avoided.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
3SqrRe8FWQ-
ICLR.cc/2021/Conference
2021
WrapNet: Neural Net Inference with Ultra-Low-Precision Arithmetic
["Renkun Ni", "Hong-min Chu", "Oscar Castaneda", "Ping-yeh Chiang", "Christoph Studer", "Tom Goldstein"]
Low-precision neural networks represent both weights and activations with few bits, drastically reducing the cost of multiplications. Meanwhile, these products are accumulated using high-precision (typically 32-bit) additions. Additions dominate the arithmetic complexity of inference in quantized (e.g., binary) nets, and high precision is needed to avoid overflow. To further optimize inference, we propose WrapNet, an architecture that adapts neural networks to use low-precision (8-bit) additions while achieving classification accuracy comparable to their 32-bit counterparts. We achieve resilience to low-precision accumulation by inserting a cyclic activation layer that makes results invariant to overflow. We demonstrate the efficacy of our approach using both software and hardware platforms.
["quantization", "efficient inference"]
ABSTRACT

Low-precision neural networks represent both weights and activations with few bits, drastically reducing the cost of multiplications. Meanwhile, these products are accumulated using high-precision (typically 32-bit) additions. Additions dominate the arithmetic complexity of inference in quantized (e.g., binary) nets, and high precision is needed to avoid overflow. To further optimize inference, we propose WrapNet, an architecture that adapts neural networks to use low-precision (8-bit) additions while achieving classification accuracy comparable to their 32-bit counterparts. We achieve resilience to low-precision accumulation by inserting a cyclic activation layer that makes results invariant to overflow. We demonstrate the efficacy of our approach using both software and hardware platforms.

1 INTRODUCTION

Significant progress has been made in quantizing (or even binarizing) neural networks, and numerous methods have been proposed that reduce the precision of weights, activations, and even gradients while retaining high accuracy (Courbariaux et al., 2016; Hubara et al., 2016; Li et al., 2016; Lin et al., 2017; Rastegari et al., 2016; Zhu et al., 2016; Dong et al., 2017; Zhu et al., 2018; Choi et al., 2018a; Zhou et al., 2016; Li et al., 2017; Wang et al., 2019; Jung et al., 2019; Choi et al., 2018b; Gong et al., 2019). Such quantization strategies make neural networks more hardware-friendly by leveraging fast, integer-only arithmetic, replacing multiplications with simple bit-wise operations, and reducing memory requirements and bandwidth.

Unfortunately, the gains from quantization are limited because quantized networks still require high-precision arithmetic. Even if weights and activations are represented with just one bit, deep feature computation requires the summation of hundreds or even thousands of products. Performing these summations with low-precision registers results in integer overflow, contaminating downstream computations and destroying accuracy. Moreover, as multiplication costs are slashed by quantization, high-precision accumulation starts to dominate the arithmetic cost. Indeed, our own hardware implementations show that an 8-bit × 8-bit multiplier consumes comparable power and silicon area to a 32-bit accumulator. When reducing the precision to a 3-bit × 1-bit multiplier, a 32-bit accumulator consumes more than 10× higher power and area; see Section 4.5. Evidently, low-precision accumulators are the key to further accelerating quantized nets.

In custom hardware, low-precision accumulators reduce area and power requirements while boosting throughput. On general-purpose processors, where registers have fixed size, low-precision accumulators are exploited through bit-packing, i.e., by representing multiple low-precision integers side-by-side within a single high-precision register (Pedersoli et al., 2018; Rastegari et al., 2016; Bulat & Tzimiropoulos, 2019). Then, a single vector instruction is used to perform the same operation across all of the packed numbers. For example, a 64-bit register can be used to execute eight parallel 8-bit additions, thus increasing the throughput of software implementations. Hence, the use of low-precision accumulators is advantageous for both hardware and software implementations, provided that integer overflow does not contaminate results.

We propose WrapNet, a network architecture with extremely low-precision accumulators.
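The overflow problem is easy to reproduce numerically by emulating an 8-bit accumulator with numpy. The bit-widths below mirror the 3-bit-activation/binary-weight setting discussed later; everything else (sizes, seed) is an arbitrary illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.integers(-1, 2, size=512).astype(np.int8)   # ternary-style weights in {-1, 0, 1}
x = rng.integers(0, 8, size=512).astype(np.int8)    # 3-bit activations in {0, ..., 7}

prods = w * x                                       # each product fits in int8
acc32 = prods.astype(np.int32).sum()                # safe 32-bit accumulation
acc8 = prods.sum(dtype=np.int8)                     # 8-bit accumulation: wraps silently
print(acc32, acc8, (acc32 - int(acc8)) % 256)       # acc8 equals acc32 modulo 256
```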
We propose WrapNet, a network architecture with extremely low-precision accumulators. WrapNet exploits the fact that integer computer arithmetic is cyclic, i.e., numbers are accumulated until they reach the maximum representable integer and then "wrap around" to the smallest representable integer. To deal with such integer overflows, we place a differentiable cyclic (periodic) activation function immediately after the convolution (or linear) operation, with period equal to the difference between the maximum and minimum representable integer. This strategy makes neural networks resilient to overflow, as the activations of neurons are unaffected by overflows during convolution.

We explore several directions with WrapNet. On the software side, we consider the use of bit-packing for processors with or without dedicated vector instructions. In the absence of vector instructions, overflows in one packed integer may produce a carry bit that contaminates its neighboring value. We propose training regularizers that minimize the effects of such contamination artifacts, resulting in networks that leverage bit-packed computation with very little impact on final accuracy. For processors with vector instructions, we modify the Gemmlowp library (Jacob et al., 2016) to operate with 8-bit accumulators. Our implementation achieves up to a 2.4x speed-up compared to a 32-bit accumulator implementation, even when lacking specialized instructions for 8-bit multiply-accumulate. We also demonstrate the efficacy of WrapNet in terms of cycle time, area, and energy efficiency when considering custom hardware designs in a commercial 28 nm CMOS technology.

2 RELATED WORK AND BACKGROUND

2.1 NETWORK QUANTIZATION
Network quantization aims at accelerating inference by using low-precision arithmetic. In its most extreme form, weights and activations are both quantized using binary or ternary quantizers. The binary quantizer Q_b corresponds to the sign function, whereas the ternary quantizer Q_t maps some values to zero. Multiplications in binarized or ternarized networks (Hubara et al., 2016; Courbariaux et al., 2015; Lin et al., 2017; Rastegari et al., 2016; Zhu et al., 2016) can be implemented using bit-wise logic, leading to impressive acceleration. However, training such networks is challenging since fewer than 2 bits are used to represent activations and weights, resulting in a dramatic impact on accuracy compared to full-precision models.

Binary and ternary networks are generalized to higher precision via uniform quantization, which has been shown to result in efficient hardware (Jacob et al., 2018). The multi-bit uniform quantizer Q_u is given by Q_u(x) = round(x / Δ_x) · Δ_x, where Δ_x denotes the quantization step-size. The output of the quantizer is a floating-point number x that can be expressed as x = Δ_x x_q, where x_q is the fixed-point representation of x. The fixed-point number x_q has a "precision" or "bitwidth," which is the number of bits used to represent it. Note that the range of floating-point numbers representable by the uniform quantizer Q_u depends on both the quantization step-size Δ_x and the quantization precision. Nonetheless, the number of different values that can be represented by the same quantizer depends only on the precision.

Applying uniform quantization to both weights w = Δ_w w_q and activations x = Δ_x x_q simplifies computations, as an inner product simply becomes

    z = Σ_i w_i x_i = Σ_i (Δ_w (w_q)_i)(Δ_x (x_q)_i) = (Δ_w Δ_x) Σ_i (w_q)_i (x_q)_i = Δ_z z_q.    (1)

The key advantage of uniform quantization is that the core computation Σ_i (w_q)_i (x_q)_i can be carried out using fixed-point (i.e., integer) arithmetic only.
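As a quick illustration of Eq. (1), the following numpy sketch (with hypothetical step-sizes) checks that an integer-only accumulation plus a single rescale reproduces the floating-point inner product:

    import numpy as np

    # Eq. (1): compute a float inner product with integer-only accumulation.
    dw, dx = 0.05, 0.02                             # hypothetical step-sizes
    wq = np.array([1, 1, -1, 1], dtype=np.int32)    # binary weights, fixed point
    xq = np.array([3, 1,  0, 2], dtype=np.int32)    # 2-bit activations, fixed point

    zq = int(np.sum(wq * xq))                       # integer-only accumulation
    z = (dw * dx) * zq                              # one rescale: z = Delta_z * zq
    assert np.isclose(z, np.sum((dw * wq) * (dx * xq)))

The computation touches floating point only once, in the final rescale; it is the integer sum z_q whose dynamic range forces high-precision accumulators.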
Results in (Gong et al., 2019; Choi et al., 2018b; Jung et al., 2019; Wang et al., 2019; Mishra et al., 2017; Mishra & Marr, 2017) have shown that high classification accuracy is attainable with low-bitwidth uniform quantization, such as 2 or 3 bits. Although (w_q)_i, (x_q)_i, and their product may have extremely low precision, the accumulated result z_q of many of these products has very high dynamic range. As a result, high-precision accumulators are typically required to avoid overflows, which is the bottleneck for further arithmetic speedups.

2.2 LOW-PRECISION ACCUMULATION
Several approaches have been proposed that use accumulators with fewer bits to obtain speed-ups. For example, reference (Khudia et al., 2021) splits the weights into two separate matrices, one with small- and another with large-magnitude entries. If the latter matrix is sparse, acceleration is attained as most computations rely on fast, low-precision operations. However, to significantly reduce the accumulator's precision, one would need to severely decrease the magnitude of the entries of the first matrix, which would, in turn, prevent the second matrix from being sufficiently sparse to achieve acceleration. Recently, (de Bruin et al., 2020) proposed using layer-dependent quantization parameters to avoid overflowing accumulators with fixed precision. Fine-tuning is then used to improve performance. However, if the accumulator precision is too low (e.g., 8 bits or less), the optimized precision of activations and weights is too coarse to attain satisfactory performance. Another line of work (Sakr et al., 2019; Micikevicius et al., 2017; Wang et al., 2018) uses 16-bit floating-point accumulators for training and inference; such approaches typically require higher complexity than methods based on fixed-point arithmetic.

2.3 THE IMPACT OF INTEGER OVERFLOW
Overflow is a major problem, especially in highly quantized networks. Table 1 demonstrates that overflows occur in around 11% of the neurons in a network with 3-bit activations (A) and binary weights (W) that is using 8-bit accumulators for inference after being trained on CIFAR-10 with standard precision. Clearly, overflow has a significant negative impact on accuracy. Table 1 shows that if we use an 8-bit (instead of a 32-bit) accumulator, then the accuracy of a binary-weight network with 2-bit activations drops by more than 40%, even when only 1.72% of neurons overflow. If we repeat the experiment with 3-bit activations and binary weights, the accuracy is only marginally better than a random guess. Therefore, existing methods try to avoid integer overflow by using accumulators with relatively high precision, and pay a correspondingly high price when doing arithmetic.

Table 1: Average overflow rate (in 8 bits) of each layer for a low-precision network and corresponding test accuracy using either 32-bit or 8-bit accumulators during inference on CIFAR-10.

    Bit (A/W)        Overflow rate (8-bit)   Accuracy (32-bit)   Accuracy (8-bit)
    full precision   -                       92.45%              -
    3/1              10.84%                  91.08%              10.06%
    2/1              1.72%                   88.46%              44.04%
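The wrap-around failure mode is easy to reproduce. In this small numpy sketch (ours; sizes chosen to mirror Table 1's 3-bit/binary setting), an 8-bit accumulation of a 512-term dot product lands far from the true sum:

    import numpy as np

    rng = np.random.default_rng(0)
    wq = rng.choice([-1, 1], size=512)              # binary weights
    xq = rng.integers(0, 8, size=512)               # 3-bit activations

    exact = int(np.sum(wq * xq))                    # 32-bit-style result
    acc8 = 0
    for p in (wq * xq):                             # 8-bit accumulator: wrap
        acc8 = ((acc8 + int(p) + 128) % 256) - 128  # into [-128, 127]

    print(exact, acc8)
    assert acc8 == ((exact + 128) % 256) - 128      # wrap = modular arithmetic

Every intermediate wrap is harmless in one specific sense: the final 8-bit value equals the exact sum modulo 2^8. This is the observation WrapNet builds on in the next section.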
3 WRAPNET: DEALING WITH INTEGER OVERFLOWS
We now introduce WrapNet, which includes a cyclic activation function and an overflow penalty, enabling neural networks to use low-precision accumulators. We also present a modified quantization step-size selection strategy for activations, which retains high classification accuracy. Finally, we show how further speed-ups can be achieved on processors with or without specialized vector instructions.

We propose training a network with layers that emulate integer overflows on the fixed-point pre-activations z_q to maintain high accuracy. However, directly training a quantized network with an overflowing accumulator diverges (see Table 2) due to the discontinuity of the modulo operation. To facilitate training, we insert a cyclic "smooth modulo" activation immediately after every linear/convolutional layer, which not only captures the wrap-around behavior of overflows, but also ensures that the activation is continuous everywhere. The proposed smooth modulo activation c is a composite function of a modulo function m and a basis function f that ensures continuity. Specifically, given a b-bit accumulator, our smooth modulo c for fixed-point inputs is as follows:

    f(m) = m                     if -(k/(k+1)) 2^(b-1) <= m <= (k/(k+1)) 2^(b-1),
         = -k (2^(b-1) + m)      if m < -(k/(k+1)) 2^(b-1),
         = k (2^(b-1) - m)       if m > (k/(k+1)) 2^(b-1),

    c(z_q) = f( mod(z_q + 2^(b-1), 2^b) - 2^(b-1) ),

where k is a hyper-parameter that controls the slope of the transition. Note that we apply constant shifts to keep the input of f in [-2^(b-1), 2^(b-1)). Figure 1a illustrates the smooth modulo function with two different slopes, k = 1 and k = 4. As k increases, the cyclic activation becomes more similar to the modulo operator and has a greater range, but the transition becomes more abrupt. Since our cyclic activation is continuous and differentiable almost everywhere, standard gradient-based learning can be applied easily. A convolutional block with the cyclic activation layer is shown in Figure 1b. After the convolution result goes into the cyclic activation, the result is multiplied by Δ_z to compute a floating-point number, which is then processed through BatchNorm and ReLU. A fixed per-layer quantization step-size is then used to convert the floating-point output of the ReLU into a fixed-point input for the next layer. We detail the procedure to find this step-size in Section 3.2.

Figure 1: (a) Example of the proposed cyclic activation with different slopes k and the original modulo operator for a 4-bit accumulator. (b) Convolutional block with the proposed cyclic activation.

3.1 OVERFLOW PENALTY
An alternative way to adapt quantized networks to low-precision accumulators is to directly reduce the amount of overflows. To achieve this, we propose a regularizer which penalizes outputs that exceed the bitwidth of the accumulation register. Concretely, for a b-bit accumulator, we define an overflow penalty for the l-th layer of the network as follows:

    R_o^l = (1/N) Σ_i max{ |z_q^i| - 2^(b-1), 0 }.

Here, z_q^i is the fixed-point result in (1) for the i-th neuron of the l-th layer, and N is the total number of neurons in the l-th layer. The overflow penalty is imposed after every quantized linear layer and before the cyclic activation. All these penalties are combined into one regularizer R_o = Σ_l R_o^l.
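A minimal numpy sketch of the two ingredients just defined (our rendering of the formulas above, not the authors' code) may make the shapes concrete:

    import numpy as np

    def smooth_modulo(zq, b=8, k=2):
        """Cyclic activation c(zq): identity on the central region,
        linear ramps of slope -k near the wrap points, and periodic
        with period 2^b, hence invariant to accumulator overflow."""
        half = 2 ** (b - 1)
        m = np.mod(zq + half, 2 ** b) - half      # wrap into [-2^(b-1), 2^(b-1))
        t = k / (k + 1) * half                    # transition boundary
        return np.where(np.abs(m) <= t, m,
                        np.where(m > t, k * (half - m), -k * (half + m)))

    def overflow_penalty(zq, b=8):
        """R_o for one layer: mean excess of |zq| over the b-bit range."""
        return np.mean(np.maximum(np.abs(zq) - 2 ** (b - 1), 0))

    # c depends only on zq mod 2^b, so exact and overflowed sums agree:
    assert smooth_modulo(np.array([300]))[0] == smooth_modulo(np.array([300 - 256]))[0]

During training, one would apply smooth_modulo to the integer pre-activations and add a weighted sum of the per-layer penalties to the loss; Section 4.2 ablates both pieces.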
3.2 SELECTION OF ACTIVATION QUANTIZATION STEP-SIZE
To keep multiplication simple, the floating-point output of the ReLU must be quantized before it is fed into the following layer. However, as shown in Table 1, a significant number of overflows occur even with 3-bit activations. From our experiments (see Table 3), we have observed that if overflow occurs too frequently (i.e., on more than 10% of the neurons), then WrapNet starts to suffer significant accuracy degradation. However, if we reduce the activation precision so that no overflows happen at all, several layers will have 1-bit activations (see Table 3), thereby increasing quantization errors and degrading accuracy. To balance accumulation and quantization errors, we adjust the quantization step-size Δ_x of each layer based on the overflow rate, i.e., the percentage p% of neurons that overflow in the network. If the overflow rate p% is too large, then we increase Δ_x to reduce it. The selected quantization step-size is then fixed for further fine-tuning.

3.3 ADAPTING TO BIT-PACKING
Most modern processors provide vector instructions that enable parallel operation on multiple 8-bit numbers. For instance, the AVX2 (NEON) instruction set on x86 (ARM) processors provides parallel processing with 32 (16) 8-bit numbers. Vector instructions provide a clean implementation of bit-packing, which WrapNet can leverage to attain significant speed-ups. While some embedded processors and legacy chips do not provide vector instructions, bit-packing can still be applied. Without vector instructions for multiplication, binary/ternary weights must be used to replace multiplication with bit-wise logic (Bulat & Tzimiropoulos, 2019; Pedersoli et al., 2018). Furthermore, bit-packing of additions is more delicate: each integer overflow not only results in wrap-around behavior, but also generates a carry bit that contaminates the adjacent number; specialized vector instructions avoid such contamination. We propose the following strategies to minimize the impact of carry propagation.

Reducing variance in the number of carries. The number of carries generated during a convolution operation can be large. Nevertheless, if we can keep the number of carries approximately the same for all the neurons among a batch of images, the estimated number of carries can be subtracted from the result to correct the outputs of a bit-packed convolution operation. To achieve this, during training, we calculate the number of carries for each neuron and impose a regularizer, R_c, to keep the variance of the number of carries small. The detailed formulation of R_c can be found in Appendix A.1.

Using a buffer bit. Alternatively, since each addition can generate at most one carry bit, we can place a buffer bit between every low-bit number in the bit-packing. For example, instead of packing eight 8-bit representations into a 64-bit number, we pack eight 7-bit numbers with one buffer bit between each of them. These buffer bits absorb the carry bits, and are cleared using bit-wise logic after each addition. Buffering makes representations 1 bit smaller, which potentially degrades accuracy.

A hybrid approach. To get the benefits of both strategies, we use a variance penalty on layers that have a small standard deviation to begin with, and equip the remaining layers with a buffer bit. A sketch of the buffer-bit scheme follows.
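The sketch below (ours; a pure-Python stand-in for register arithmetic) packs eight 7-bit lanes plus buffer bits into one 64-bit word; each lane wraps modulo 2^7, and the buffer bits that absorb the carries are cleared after every addition:

    # Eight 7-bit payloads, each followed by one buffer bit, in a 64-bit word.
    PAYLOAD = 0x7F7F7F7F7F7F7F7F           # low 7 bits of every byte-sized lane

    def pack(vals):                        # vals: eight ints in [0, 128)
        acc = 0
        for i, v in enumerate(vals):
            acc |= (v & 0x7F) << (8 * i)
        return acc

    def packed_add(a, b):
        s = (a + b) & 0xFFFFFFFFFFFFFFFF   # one 64-bit add updates all lanes
        return s & PAYLOAD                 # clear buffer bits (absorbed carries)

    def unpack(acc):
        return [(acc >> (8 * i)) & 0x7F for i in range(8)]

    a = pack([100, 3, 77, 5, 0, 1, 2, 3])
    b = pack([50, 4, 60, 9, 0, 1, 2, 3])
    print(unpack(packed_add(a, b)))        # [22, 7, 9, 14, 0, 2, 4, 6] (mod 128)

Each lane thus behaves like a tiny wrap-around accumulator, which is exactly the behavior the cyclic activation is trained to tolerate.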
4 EXPERIMENTS
We compare the accuracy and efficiency of WrapNet to networks with full-precision accumulators using the CIFAR-10 and ImageNet datasets. Most experiments use binary or ternary weights for WrapNet, as AVX2 lacks 8-bit multiplication instructions but supports the 8-bit additions and logic operations needed for binary/ternary convolutions.

4.1 TRAINING PIPELINE
We first pre-train a network with quantized weights and no cyclic layers, while keeping full-precision activations. Then, we select the quantization step-sizes of the activations (see Section 3.2) such that each layer has an overflow rate of around p% (a hyper-parameter) with respect to the desired accumulator bitwidth. Given the selected quantization step-size for each layer and the pre-trained network, we insert our proposed cyclic activation layer. We then warm up our WrapNet by fine-tuning with full-precision activations for several epochs. Finally, we further fine-tune the network with both activations and weights quantized. Both the overflow and carry variance regularizers are only applied in the final fine-tuning step, except when training ResNet for ImageNet, where the regularizers are also included during warm-up.

4.2 ADAPTING TO LOW-PRECISION ACCUMULATORS
We conduct ablation studies on the following factors: the type of cyclic function, the initial overflow rate for quantization step-size and precision selection, and the coefficient of the overflow penalty regularizer. These experiments are conducted on VGG-7 (Li et al., 2016), which is commonly used in the quantization literature for CIFAR-10. We binarize the weights as in (Rastegari et al., 2016), and we train WrapNet to adapt to an 8-bit accumulator. As our default setting, we use k = 2 as the transition slope, p = 5% as the initial overflow rate, and 0 as the coefficient of the regularizer.

Cyclic activation function. We compare the performance of various transition slopes k of our cyclic function c in Table 2, and we achieve the best performance when k = 2. If k is too small, then the accuracy decreases due to a narrower effective bitwidth (only half of the bitwidth is used when k = 1). Meanwhile, the abrupt transition for large k hurts the performance as well. In the extreme case where the cyclic function degenerates to modulo (k → ∞), WrapNet diverges to random guessing, which highlights the importance of training with a "smooth" cyclic non-linearity to assimilate integer overflow. We also find that placing a ReLU after batch norm yields the best performance, even though the cyclic function is already non-linear. More experimental results can be found in Appendix B.1.

Table 2: Results for different transition slopes of the cyclic function; "div." denotes divergence.

    k          1        2        4        10       ∞
    Accuracy   90.24%   90.52%   90.25%   89.16%   div.

Quantization step-size. As described in Section 3.2, the quantization step-sizes are selected to balance the rounding error of the activations and accumulation errors due to overflow. We compare the classification performance when we choose different step-sizes to control the overflow rate, as in Table 3. If the initial overflow rate is large, then the quantization step-size will be finer, but training is less stable. We obtain the best performance when the initial overflow rate is around 5%. The median bitwidths of the activations across layers are also reported in Table 3. Note that if we want to suppress all overflows, we can only use 1-bit activations. We also observe that WrapNet can attain reasonable accuracy (85%) even with a large overflow rate (around 30%), which demonstrates that our proposed cyclic activation provides resilience against integer overflows.

Table 3: Results for different quantization step-sizes based on overflow rate p (%); "div." denotes divergence.

    p    Bits   Accuracy     p    Bits   Accuracy
    0    1      90.07%       20   4      88.25%
    2    3      90.51%       30   5      85.30%
    5    3      90.52%       40   5      36.11%
    10   4      89.92%       50   5      div.
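The step-size search behind Table 3 can be pictured as a simple calibration loop. The sketch below is our hypothetical rendering of the rule from Section 3.2 (the paper does not spell out the exact search), with a stand-in for the forward pass:

    import numpy as np

    def calibrate_step_size(zq_of, b=8, target=0.05, factor=1.25, dx=1.0):
        """Grow the activation step-size dx until at most `target` of the
        fixed-point pre-activations overflow b bits. `zq_of(dx)` stands in
        for a forward pass over a calibration batch at step-size dx."""
        while np.mean(np.abs(zq_of(dx)) >= 2 ** (b - 1)) > target:
            dx *= factor                 # coarser activations -> smaller |zq|
        return dx

    rng = np.random.default_rng(0)
    pre = rng.normal(0.0, 100.0, size=10_000)     # toy pre-activations
    print(calibrate_step_size(lambda d: np.round(pre / d)))

Increasing Δ_x trades rounding error for fewer overflows, which is exactly the tension Table 3 quantifies.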
Overflow penalty. The overflow penalty regularizer improves stability with respect to step-size selection. More specifically, in Table 4, the difference in accuracy between two step-size selections decreases from 2.27% to 0.76% after adding the regularizer. The overflow penalty also complements our cyclic activation, as we achieve the best performance when using both of them together during the fine-tuning stage. Moreover, in Appendix B.2, we compare our results to fine-tuning the pre-trained network using the overflow regularizer only. In the absence of a cyclic layer, neural networks still suffer from low accuracy (as in Section 2.3) unless a very strong penalty is imposed.

Table 4: Results for fine-tuning with the overflow penalty (R_o).

    R_o coeff.   p%   Accuracy   Difference
    0            20   88.25%     -
    0            5    90.52%     2.27%
    0.01         20   90.05%     -
    0.01         5    90.81%     0.76%

4.3 ADAPTING TO BIT-PACKING
We now show the efficacy of WrapNet for bit-packing without vector operations. We use the same architecture, binary weights, 8-bit accumulators, and hyper-parameters as in Section 4.2. The training details can be found in Appendix A.2. We consider CIFAR-10, and we compare with the best result of WrapNet from the previous section as a baseline. Without specific vector instructions, accuracy degenerates to a random guess because of undesired carry contamination during inference. Surprisingly, with the carry variance regularizer, WrapNet works well even with abundant carry contamination during inference (384 carries per neuron on average over the whole dataset). The regularizer drops the standard deviation of the per-neuron carry contamination by 90%. When we use the hybrid approach, the accuracy is further improved (89.43%) and close to the best result (90.81%) we can achieve with vector instructions that do not propagate carries across different numbers (see Table 5).

Table 5: Results for adaptation to bit-packing with an 8-bit accumulator. (v) denotes no carry contamination, as with a vector instruction; (c) denotes carry propagation between different numbers.

    Method       Accuracy (v)   Accuracy (c)   Carry    Carry Std
    Baseline     90.81%         10.03%         254.91   159.55
    Buffer Bit   -              88.22%         -        -
    R_c          -              87.86%         384.42   17.91
    Hybrid       -              89.43%         482.4    16.18

4.4 BENCHMARK RESULTS
In this section, we compare our WrapNet, when there is no carry contamination, with the following 32-bit accumulator baselines: a full-precision network (FP), a network trained with binary/ternary weights but with full-precision activations (BWN/TWN), and a network where both weights and activations are quantized to the same precision as our WrapNet (BWN/TWN-QA). We benchmark our results on both CIFAR-10 and ImageNet. We use VGG7 and ResNet20 for our CIFAR-10 experiments, and we use AlexNet (Krizhevsky et al., 2012; Simon et al., 2016), ResNet18, and ResNet50 (He et al., 2016) for our ImageNet experiments. Details of training can be found in Appendix B.3.

For CIFAR-10, even with an 8-bit accumulator, our results are comparable to both BWN and TWN. When adapting to a 12-bit accumulator, we further achieve performance on par with TWN and better than BWN (see Table 6). For ImageNet, our WrapNet can achieve accuracy as good as BWN when adapting to a 12-bit accumulator, where we can use binary weights and roughly 7-bit quantized activations. However, in the extreme low-precision case (8-bit), the accuracy of our binary WrapNet drops by around 8% due to the limited bitwidth we can use for activations. As reported in Table 6, the median activation bitwidth is roughly 3 bits, and for some layers in AlexNet, we can only use 1-bit activations. Despite the gap from BWN, we observe that our model can achieve performance comparable to BWN-QA, where the same precision is used for activations. When using ternary weights and an 8-bit accumulator, our WrapNet only drops by 3% and 2% from TWN for ResNet18 and ResNet50, respectively.
In addition, in the case of adapting to a 12-bit accumulator, our ternary WrapNet with roughly 7-bit activations is even slightly better than TWN for ResNet50. Note that, without the cyclic activation function, all the results for networks using an 8-bit accumulator are as poor as random guessing, which is consistent with Table 1.

Table 6: Top-1 test accuracy for both CIFAR-10 and ImageNet with different architectures. Here, "Acc" represents the accumulator bitwidth, and "QA" represents quantized activations. Activation bitwidths marked "≈" are medians across layers.

    Method    Activation   Weight   Acc   VGG7     ResNet20   AlexNet   ResNet18   ResNet50
    FP        32           32       32    92.45%   91.78%     60.61%    69.59%     76.15%
    BWN       32           1        32    91.55%   90.03%     56.56%    63.55%     72.88%
    BWN-QA    ≈3           1        32    91.30%   89.86%     46.30%    57.54%     66.85%
    WrapNet   ≈3           1        8     90.81%   89.78%     44.88%    55.60%     64.30%
    WrapNet   ≈7           1        12    91.59%   90.17%     56.62%    63.11%     72.37%
    TWN       32           2        32    91.56%   90.36%     57.57%    65.70%     73.31%
    TWN-QA    ≈4           2        32    91.49%   90.12%     55.84%    63.67%     72.50%
    WrapNet   ≈4           2        8     91.14%   89.56%     52.24%    62.13%     71.62%
    WrapNet   ≈7           2        12    91.53%   90.88%     57.60%    63.84%     73.93%

4.5 EFFICIENCY ANALYSIS
We conduct an efficiency analysis of parallelization by bit-packing, both with and without vector operations, on an Intel i7-7700HQ CPU operating at 2.80 GHz. We also conduct a detailed study of improvements that can be obtained using custom hardware.

AVX2 instruction efficiency analysis. We study the empirical efficiency of WrapNet when vector operations are available. We extended Gemmlowp (Jacob et al., 2016) to implement matrix multiplications using 8-bit accumulators with AVX2 instructions. To demonstrate the efficiency of low-precision accumulators, we compare our implementation with the AVX2 version of Gemmlowp, which uses 32-bit accumulators. We report the execution speed of both on various convolution kernels of ResNet18 in Table 7. From Table 7 we observe significant speed-ups, ranging from 2x to 2.4x among different blocks. In addition, comparing the entire inference time of ResNet18, WrapNet (234.74 ms) gains a 33% speed-up over a quantized network with 32-bit accumulators (312.42 ms). These results provide solid evidence for the efficiency advantage of using low-precision accumulators. We remark that, on average, the time cost of the cyclic activation is only around 10% of the time cost of the GEMM kernel. We also remark that AVX2 lacks a single instruction that performs both multiplication and accumulation for 8-bit data, but it does have such an instruction for 32-bit data. Thus, further acceleration can be achieved on systems like ARM where such combined instructions for 8-bit data are available.

Table 7: Time cost (ms) for typical 3x3 convolution kernels in ResNet using different accumulator bitwidths.

    Input size   Output   8-bit   32-bit
    64x56x56     64       3.467   8.339
    128x28x28    128      2.956   6.785
    256x14x14    256      2.499   5.498
    512x7x7      512      2.710   5.520

Bit-packing results without vector operations. We implement a naïve for-loop-based matrix multiplication as the baseline, and a bit-packed version that packs four 8-bit integers into 32 bits using the buffer bit and logical operations introduced in Section 3.3. We report the execution speed of both implementations on various convolution kernels of ResNet18 in Table 8. The results show significant speed-ups, ranging from 2.8x to 4.3x.

Table 8: Time cost (ms) for 3x3 convolution kernels in ResNet with no vector instructions, using bit-packing.

    Input size   Output   bit-packing   naïve
    64x56x56     64       29.80         83.705
    128x28x28    128      23.86         80.557
    256x14x14    256      21.71         86.753
    512x7x7      512      20.41         87.671
Such observations demonstrate that our proposed approach to handling extra carry bits makes bit-packing viable and efficient, even when vector instructions are not available.

Hardware analysis. To illustrate the potential benefits of WrapNet for custom hardware accelerators, we have implemented a multiply-accumulate (MAC) unit in a commercial 28 nm CMOS technology. The MAC unit consists of (i) a multiplier with an output register, (ii) an accumulator with its corresponding register, and (iii) auxiliary circuitry. Please refer to Appendix C for the details. We have considered 8-bit x 8-bit and 3-bit x 1-bit multipliers, as well as 32-bit and 8-bit accumulators, where the latter option is enabled by our WrapNet approach and its cyclic activation function. We consider a slope k = 2 for the cyclic activation. Figure 2 shows our post-layout results. Figure 2a shows that reducing the multiplier bitwidth decreases the cycle time by 7%; reducing the accumulator precision from 32-bit to 8-bit further decreases the cycle time by 16%. Figures 2b and 2c highlight the importance of reducing the accumulator's precision. When using an 8-bit x 8-bit multiplier, the 32-bit accumulator already constitutes more than 40% of the area and energy of a MAC unit. Once the multiplier's precision is reduced, the accumulator dominates area and energy. Thanks to WrapNet, we can reduce the accumulator precision from 32-bit to 8-bit, thus reducing the accumulator's area and energy by more than 5x and 4x, respectively. WrapNet requires the implementation of the cyclic activation, which has area and energy costs comparable to (although lower than) those of the accumulator. In spite of this overhead, WrapNet is still able to reduce the total MAC unit's area and energy by up to 3x and 2x, respectively. While our hardware implementation only uses one adder per inner product, we note that WrapNet can also be applied to spatial architectures, such as systolic arrays, which use several adders per inner product. For such spatial architectures, WrapNet avoids an increase in the adders' bitwidth, normalizing all adders to the same low bitwidth. Moreover, the use of several adders per inner product amortizes the overhead of the cyclic activation, of which only one instance is needed per inner product. Finally, we note that this analysis only considers the computation part of a hardware accelerator, as this is where WrapNet has a significant impact; the memory sub-system remains virtually the same, as existing methods already quantize the output activations to low bitwidth before storing them in memory.

Figure 2: (a) Cycle time, (b) area, and (c) energy efficiency for different MAC units implemented in 28 nm CMOS. We consider 8-bit x 8-bit or 3-bit x 1-bit multipliers with 32-bit or 8-bit accumulators.

5 CONCLUSION
We have proposed WrapNet, a novel method to render neural networks resilient to integer overflow, which enables the use of low-precision accumulators. We have demonstrated the effectiveness of our adaptation on both CIFAR-10 and ImageNet. In addition, our custom GEMM kernel achieves a 2.4x acceleration over its standard library version, and our hardware exploration shows significant improvements in area and energy efficiency. Our hope is that hardware-aware architectures will enable deep learning applications on a wide range of platforms and mobile devices.
Furthermore, with future innovations in GPU and data center technologies, we hope that WrapNet can provide further speed-ups by enabling inference using quarter-precision, a step forward in performance from the half-precision standard currently available on emerging GPUs.

ACKNOWLEDGEMENT
The University of Maryland team was supported by the ONR MURI program, the AFOSR MURI program, and the National Science Foundation DMS division. Additional support was provided by DARPA GARD, DARPA QED4RML, and DARPA YFA.
EWkBW2PUhIN
An interesting idea on an important issue
7: Good paper, accept
This paper explores an often-ignored issue in quantization: accumulation precision. As the bit-width of the inputs scales down, the area/energy cost of the accumulator starts to dominate. The cyclic method proposed by the authors is, at first glance, not intuitive. However, it's surprising that the surveyed models could be tuned to live with significant overflows---as long as the network can be tuned at all, which is enabled by the "differentiable overflow" brought by the cyclic method. There are several issues to be addressed before the paper can be accepted: (1) In the equations on page 3, the boundary in the 2nd line of f(m) has a 'c(zq)'; is that a typo? (2) Since the paper only covers CIFAR-10 and some ImageNet models that are on the easier side to quantize, such as ResNet18 and VGG, the cyclic method could meet its limitations on ResNet50 and MobileNet. The authors didn't discuss an important concept: accumulation length. As the accumulation length increases, overflow events could rise sharply, and training could fail without room for the cyclic method to optimize the slope k. (3) Some comments on the relation between accumulation length and bit-packing would also be helpful. For example, if an accumulator with 8-way bit-packing is working on the same GEMM, the accumulation length would be reduced by 8---which would be desirable, although a higher-level reduction would then be required. In general, the paper is well written and brings attention to the important topic of accumulation for reduced-precision inference. The paper attempts to solve the overflow problem, although not perfectly, with a differentiable "failure" approach. The paper provides great hardware insights for hardware/software co-design. Therefore, I recommend that the paper be accepted on the condition that the authors address my comments fairly.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title WrapNet: Neural Net Inference with Ultra-Low-Precision Arithmetic ### Paper Abstract Low-precision neural networks represent both weights and activations with few bits, drastically reducing the cost of multiplications. Meanwhile, these products are accumulated using high-precision (typically 32-bit) additions. Additions dominate the arithmetic complexity of inference in quantized (e.g., binary) nets, and high precision is needed to avoid overflow. To further optimize inference, we propose WrapNet, an architecture that adapts neural networks to use low-precision (8-bit) additions while achieving classification accuracy comparable to their 32-bit counterparts. We achieve resilience to low-precision accumulation by inserting a cyclic activation layer that makes results invariant to overflow. We demonstrate the efficacy of our approach using both software and hardware platforms. ### Paper Keywords ["quantization", "efficient inference"] ### Paper Content ABSTRACTLow-precision neural networks represent both weights and activations with fewbits, drastically reducing the cost of multiplications. Meanwhile, these productsare accumulated using high-precision (typically 32-bit) additions. Additions dom-inate the arithmetic complexity of inference in quantized (e.g., binary) nets, andhigh precision is needed to avoid overflow. To further optimize inference, we pro-pose WrapNet, an architecture that adapts neural networks to use low-precision(8-bit) additions while achieving classification accuracy comparable to their 32-bit counterparts. We achieve resilience to low-precision accumulation by insertinga cyclic activation layer that makes results invariant to overflow. We demonstratethe efficacy of our approach using both software and hardware platforms.1 I NTRODUCTIONSignificant progress has been made in quantizing (or even binarizing) neural networks, and numer-ous methods have been proposed that reduce the precision of weights, activations, and even gradi-ents while retaining high accuracy (Courbariaux et al., 2016; Hubara et al., 2016; Li et al., 2016; Linet al., 2017; Rastegari et al., 2016; Zhu et al., 2016; Dong et al., 2017; Zhu et al., 2018; Choi et al.,2018a; Zhou et al., 2016; Li et al., 2017; Wang et al., 2019; Jung et al., 2019; Choi et al., 2018b;Gong et al., 2019). Such quantization strategies make neural networks more hardware-friendly byleveraging fast, integer-only arithmetic, replacing multiplications with simple bit-wise operations,and reducing memory requirements and bandwidth.Unfortunately, the gains from quantization are limited because quantized networks still requirehigh-precision arithmetic. Even if weights and activations are represented with just one bit, deepfeature computation requires the summation of hundreds or even thousands of products. Perform-ing these summations with low-precision registers results in integer overflow, contaminating down-stream computations and destroying accuracy. Moreover, as multiplication costs are slashed byquantization, high-precision accumulation starts to dominate the arithmetic cost. Indeed, our ownhardware implementations show that an 8-bit 8-bit multiplier consumes comparable power andsilicon area to a 32-bit accumulator. When reducing the precision to a 3-bit 1-bit multiplier, a32-bit accumulator consumes more than 10higher power and area; see Section 4.5. 
Evidently,low-precision accumulators are the key to further accelerating quantized nets.In custom hardware, low-precision accumulators reduce area and power requirements while boost-ing throughput. On general-purpose processors, where registers have fixed size, low-precision ac-cumulators are exploited through bit-packing , i.e., by representing multiple low-precision integersside-by-side within a single high-precision register (Pedersoli et al., 2018; Rastegari et al., 2016;Bulat & Tzimiropoulos, 2019). Then, a single vector instruction is used to perform the same oper-ation across all of the packed numbers. For example, a 64-bit register can be used to execute eightparallel 8-bit additions, thus increasing the throughput of software implementations. Hence, the useof low-precision accumulators is advantageous for both hardware and software implementations,provided that integer overflow does not contaminate results.1Published as a conference paper at ICLR 2021We propose WrapNet, a network architecture with extremely low-precision accumulators. WrapNetexploits the fact that integer computer arithmetic is cyclic, i.e, numbers are accumulated until theyreach the maximum representable integer and then “wrap around” to the smallest representableinteger. To deal with such integer overflows, we place a differentiable cyclic (periodic) activationfunction immediately after the convolution (or linear) operation, with period equal to the differencebetween the maximum and minimum representable integer. This strategy makes neural networksresilient to overflow as the activations of neurons are unaffected by overflows during convolution.We explore several directions with WrapNet. On the software side, we consider the use of bit-packing for processors with or without dedicated vector instructions. In the absence of vector in-structions, overflows in one packed integer may produce a carry bit that contaminates its neighboringvalue. We propose training regularizers that minimize the effects of such contamination artifacts,resulting in networks that leverage bit-packed computation with very little impact on final accuracy.For processors with vector instructions, we modify the Gemmlowp library (Jacob et al., 2016) tooperate with 8-bit accumulators. Our implementation achieves up to 2:4speed-up compared to a32-bit accumulator implementation, even when lacking specialized instructions for 8-bit multiply-accumulate. We also demonstrate the efficacy of WrapNet in terms of cycle time, area, and energyefficiency when considering custom hardware designs in a commercial 28 nm CMOS technology.2 R ELATED WORK AND BACKGROUND2.1 N ETWORK QUANTIZATIONNetwork quantization aims at accelerating inference by using low-precision arithmetic. In its mostextreme form, weights and activations are both quantized using binary or ternary quantizers. Thebinary quantizer Qbcorresponds to the sign function, whereas the ternary quantizer Qtmaps somevalues to zero. Multiplications in binarized or ternarized networks (Hubara et al., 2016; Courbariauxet al., 2015; Lin et al., 2017; Rastegari et al., 2016; Zhu et al., 2016) can be implemented using bit-wise logic, leading to impressive acceleration. 
However, training such networks is challenging sincefewer than 2bits are used to represent activations and weights, resulting in a dramatic impact onaccuracy compared to full-precision models.Binary and ternary networks are generalized to higher precision via uniform quantization, which hasbeen shown to result in efficient hardware (Jacob et al., 2018). The multi-bit uniform quantizer Quis given by: Qu(x) = round (x=x)x;where xdenotes the quantization step-size. The outputof the quantizer is a floating-point number xthat can be expressed as x= xxq, wherexqis thefixed-point representation of x. The fixed-point number xqhas a “precision” or “bitwidth,” which isthe number of bits used to represent it. Note that the range of floating-point numbers representableby the uniform quantizer Qudepends on both the quantization step-size xand the quantizationprecision. Nonetheless, the number of different values that can be represented by the same quantizerdepends only on the precision.Applying uniform quantization to both weights w= wwqand activations x= xxqsimplifiescomputations, as an inner-product simply becomesz=Xiwixi=Xi(w(wq)i)(x(xq)i) = ( wx)Xi(wq)i(xq)i= zzq: (1)The key advantage of uniform quantization is that the core computationPi(wq)i(xq)ican be carriedout using fixed-point (i.e., integer) arithmetic only. Results in (Gong et al., 2019; Choi et al., 2018b;Jung et al., 2019; Wang et al., 2019; Mishra et al., 2017; Mishra & Marr, 2017) have shown thathigh classification accuracy is attainable with low-bitwidth uniform quantization, such as 2 or 3 bits.Although (wq)i;(xq)i, and their product may have extremely low-precision, the accumulated resultzqof many of these products has very high dynamic range. As a result, high-precision accumulatorsare typically required to avoid overflows, which is the bottleneck for further arithmetic speedups.2.2 L OW-PRECISION ACCUMULATIONSeveral approaches have been proposed that use accumulators with fewer bits to obtain speed-ups.For example, reference (Khudia et al., 2021) splits the weights into two separate matrices, one with2Published as a conference paper at ICLR 2021Table 1: Average overflow rate (in 8 bits) of each layer for a low-precision network and correspond-ing test accuracy using either 32-bit or 8-bit accumulators during inference on CIFAR10.Bit (A/W) Overflow rate (8-bit) Accuracy (32-bit) Accuracy (8-bit)full precision – 92.45% –3/1 10.84% 91.08% 10.06%2/1 1.72% 88.46% 44.04%small- and another with large-magnitude entries. If the latter matrix is sparse, acceleration is attainedas most computations rely on fast, low-precision operations. However, to significantly reduce theaccumulator’s precision, one would need to severely decrease the magnitude of the entries of the firstmatrix, which would, in turn, prevent the second matrix from being sufficiently sparse to achieveacceleration. Recently, (de Bruin et al., 2020) proposed using layer-dependent quantization param-eters to avoid overflowing accumulators with fixed precision. Fine-tuning is then used to improveperformance. However, if the accumulator precision is too low (e.g., 8 bits or less), the optimizedprecision of activations and weights is too coarse to attain satisfactory performance. 
Another lineof work (Sakr et al., 2019; Micikevicius et al., 2017; Wang et al., 2018) uses 16-bit floating-pointaccumulators for training and inference—such approaches typically require higher complexity thanmethods based on fixed-point arithmetic.2.3 T HEIMPACT OF INTEGER OVERFLOWOverflow is a major problem, especially in highly quantized networks. Table 1 demonstrates thatoverflows occur in around 11% of the neurons in a network with 3-bit activations (A) and binaryweights (W) that is using 8-bit accumulators for inference after being trained on CIFAR-10 withstandard precision. Clearly, overflow has a significant negative impact on accuracy. Table 1 showsthat if we use an 8-bit (instead of a 32-bit) accumulator, then the accuracy of a binary-weight networkwith 2-bit activations drops by more than 40%, even when only 1.72% neurons overflow. If we repeatthe experiment with 3-bit activations and binary weights, the accuracy is only marginally better thana random guess. Therefore, existing methods try to avoid integer overflow by using accumulatorswith relatively high precision, and pay a correspondingly high price when doing arithmetic.3 W RAPNET: DEALING WITH INTEGER OVERFLOWSWe now introduce WrapNet, which includes a cyclic activation function and an overflow penalty,enabling neural networks to use low-precision accumulators. We also present a modified quantiza-tion step-size selection strategy for activations, which retains high classification accuracy. Finally,we show how further speed-ups can be achieved on processors with or without specialized vectorinstructions.We propose training a network with layers that emulate integer overflows on the fixed-point pre-activationszqto maintain high accuracy. However, directly training a quantized network with anoverflowing accumulator diverges (see Table 2) due to the discontinuity of the modulo operation.To facilitate training, we insert a cyclic “smooth modulo” activation immediately after every lin-ear/convolutional layer, which not only captures the wrap-around behavior of overflows, but alsoensures that the activation is continuous everywhere. The proposed smooth modulo activation cis acomposite function of a modulo function mand a basis function fthat ensures continuity. Specifi-cally, given a b-bit accumulator, our smooth-modulo cfor fixed-point inputs is as follows:f(m) =8><>:m; forkk+12b1mkk+12b1k2b1km; form<kk+12b1k2b1km; form>kk+12b1c(zq) =f(mod(zq+ 2b1;2b)2b1);wherekis a hyper-parameter that controls the slope of the transition. Note that we apply constantshifts to keep the input of fin[2b1;2b1). Figure 1a illustrates the smooth modulo function with3Published as a conference paper at ICLR 2021(a) (b)Figure 1: (a) Example of the proposed cyclic activation with different slopes kand the originalmodulo operator for a 4-bit accumulator. (b) Convolutional block with proposed cyclic activation.two different slopes k= 1;4. Askincreases, the cyclic activation becomes more similar to themodulo operator and has a greater range, but the transition becomes more abrupt. Since our cyclicactivation is continuous and differentiable almost everywhere, standard gradient-based learning canbe applied easily. A convolutional block with cyclic activation layer is shown in Figure 1b. Afterthe convolution result goes into the cyclic activation, the result is multiplied by zto computea floating-point number, which is then processed through BatchNorm and ReLU. 
A fixed per-layerquantization step-size is then used to convert the floating-point output of the ReLU into a fixed-pointinput for the next layer. We detail the procedure to find this step-size in Section 3.2.3.1 O VERFLOW PENALTYAn alternative way to adapt quantized networks to low-precision accumulators is to directly reducethe amount of overflows. To achieve this, we propose a regularizer which penalizes outputs thatexceed the bitwidth of the accumulation register. Concretely, for a b-bit accumulator, we define anoverflow penalty for the l-th layer of the network as follows: Rol= (1=N)Pimaxfjziqj2b1;0g:Here,ziqis the fixed-point result in (1) for the i-th neuron of the l-th layer, and Nis the total numberof neurons in the l-th layer. The overflow penalty is imposed after every quantized linear layer andbefore the cyclic activation. All these penalties are combined into one regularizer Ro=PlRol.3.2 S ELECTION OF ACTIVATION QUANTIZATION STEP-SIZETo keep multiplication simple, the floating-point output of ReLU must be quantized before it isfed into the following layer. However, as shown in Table 1, a significant number of overflowsoccur even with 3-bit activations. From our experiments (see Table 3), we have observed that ifoverflow occurs too frequently (i.e., on more than 10% of the neurons), then WrapNet starts tosuffer significant accuracy degradation. However, if we reduce the activation precision so that nooverflows happen at all, several layers will have 1-bit activations (see Table 3), thereby increasingquantization errors and degrading accuracy. To balance accumulation and quantization errors, weadjust the quantization step-size xof each layer based on the overflow rate, i.e., the percentage p%of neurons that overflow in the network. If the overflow rate p% is too large, then we increase xtoreduce the overflow rate p%. The selected quantization step-size is then fixed for further fine-tuning.3.3 A DAPTING TO BIT-PACKINGMost modern processors provide vector instructions that enable parallel operation on multiple 8-bit numbers. For instance, the A VX2 (NEON) instruction set on x86 (ARM) processors providesparallel processing with 32 (16) 8-bit numbers. Vector instructions provide a clean implementationof bit-packing, which WrapNet can leverage to attain significant speed-ups. While some embed-ded processors and legacy chips do not provide vector instructions, bit-packing can still be applied.Without vector instructions for multiplication, binary/ternary weights must be used to replace mul-tiplication with bit-wise logic (Bulat & Tzimiropoulos, 2019; Pedersoli et al., 2018). Furthermore,bit-packing of additions is more delicate: Each integer overflow not only results in wrap-aroundbehavior, but also generates a carry bit that contaminates the adjacent number—specialized vector4Published as a conference paper at ICLR 2021instructions avoid such contamination. We propose the following strategies to minimize the impactof carry propagation.Reducing variance in the number of carries. The number of carries generated during a convo-lution operation can be large. Nevertheless, if we can keep the number of carries approximatelythe same for all the neurons among a batch of images, the estimated number of carries can be sub-tracted from the result to correct the outputs of a bit-packed convolution operation. To achieve this,during training, we calculate the number of carries for each neuron and impose a regularizer, Rc,to keep the variance of the number of carries small. 
The detailed formulation of Rccan be foundin Appendix A.1. Using a buffer bit. Alternatively, since each addition can generate at most onecarry bit, we can place a buffer bit between every low-bit number in the bit-packing. For example,instead of packing eight 8-bit representations into a 64-bit number, we pack eight 7-bit numbers withone buffer bit between each of them. These buffer bits absorb the carry bits, and are cleared usingbit-wise logic after each addition. Buffering makes representations 1-bit smaller, which potentiallydegrades accuracy. A hybrid approach. To get the benefits from both strategies, we use a variancepenalty on layers that have small standard deviation to begin with, and equip the remaining layerswith a buffer bit.4 E XPERIMENTSWe compare the accuracy and efficiency of WrapNet to networks with full-precision accumulatorsusing the CIFAR-10 and ImageNet datasets. Most experiments use binary or ternary weights forWrapNet as A VX2 lacks 8-bit multiplication instructions, but supports 8-bit additions and logicoperations needed for binary/ternary convolutions.4.1 T RAINING PIPELINEWe first pre-train a network with quantized weights and no cyclic layers, while keeping full-precisionactivations. Then, we select the quantization step-sizes of the activations (see Section 3.2) suchthat each layer has an overflow rate of around p%(a hyper-parameter) with respect to the desiredaccumulator bitwidth. Given the selected quantization step-size for each layer and the pre-trainednetwork, we insert our proposed cyclic activation layer. We then warm-up our WrapNet by fine-tuning with full-precision activation for several epochs. Finally we further fine-tune the networkwith both activations and weights quantized. Both overflow and carry variance regularizers areonly applied in the final fine-tuning step, except when training ResNet for ImageNet, where theregularizers are also included during warm-up.4.2 A DAPTING TO LOW-PRECISION ACCUMULATORSWe conduct ablation studies on the following factors: the type of cyclic function, the initial overflowrate for quantization step-size and precision selection, and the coefficient of the overflow penaltyregularizer. These experiments are conducted on VGG-7 (Li et al., 2016), which is commonly usedin the quantization literature for CIFAR-10. We binarize the weights as in (Rastegari et al., 2016),and we train WrapNet to adapt to an 8-bit accumulator. As our default setting, we use k= 2as thetransition slope, p= 5% as the initial overflow rate, and 0as the coefficient of the regularizer.Cyclic activation function. We compare the performance of various transition slopes kof ourcyclic function cin Table 2, and we achieve the best performance when k= 2. Ifkis too small,then the accuracy decreases due to a narrower effective bitwidth (only half of the bitwidth is usedwhenk= 1). Meanwhile, the abrupt transition for large khurts the performance as well. In theextreme case where the cyclic function degenerates to modulo (k! 1 ), WrapNet diverges torandom guessing, which highlights the importance of training with a “smooth” cyclic non-linearityto assimilate integer overflow. We also find that placing a ReLU after batch norm yields the bestperformance, even though the cyclic function is already non linear. More experimental results canbe found in Appendix B.1.Quantization step-size. As described in Section 3.2, the quantization step-sizes are selected tobalance the rounding error of the activations and accumulation errors due to overflow. 
We comparethe classification performance when we choose different step-sizes to control the overflow rate as in5Published as a conference paper at ICLR 2021Table 2: Results for different transition slopes for cyclic function; denotes divergence.k 1 2 4 10 1Accuracy 90.24% 90.52% 90.25% 89.16% Table 3: Results for different quantization step-sizesbased on overflow rate p(%).denotes divergence.p Bits Accuracy p Bits Accuracy0 1 90.07% 20 4 88.25%2 3 90.51% 30 5 85.30%5 3 90.52% 40 5 36.11%10 4 89.92% 50 5Table 4: Results for fine-tuning with theoverflow penalty ( Ro).Rop% Accuracy Difference0 20 88.25% –0 5 90.52% 2.27%0.01 20 90.05% –0.01 5 90.81% 0.76%Table 3. If the initial overflow rate is large, then the quantization step-size will be finer, but trainingis less stable. We obtain the best performance when the initial overflow rate is around 5%. Themedian bitwidths of the activations across layers are also reported in Table 3. Note that if we wantto suppress all overflows, we can only use 1-bit activations. We also observe that WrapNet can attainreasonable accuracy (85%) even with a large overflow rate (around 30%), which demonstrates thatour proposed cyclic activations provides resilience against integer overflows.Overflow penalty. The overflow penalty regularizer improves stability to step-size selection. Morespecifically, in Table 4, the difference in accuracy between two step-size selections decreases from2.27% to 0.76% after adding the regularizer. The overflow penalty also complements our cyclicactivation, as we achieve the best performance when using both of them together during the fine-tuning stage. Moreover, in Appendix B.2, we compare our results to fine-tuning the pre-trainednetwork using the overflow regularizer only. In the absence of a cyclic layer, neural networks stillsuffer from low accuracy (as in Section 2.3) unless a very strong penalty is imposed.4.3 A DAPTING TO BIT-PACKINGWe now show the efficacy of WrapNet for bit-packing without vector operations. We use the samearchitecture, binary weights, 8-bit accumulators, and hyper-parameters as in Section 4.2. The train-ing details can be found in Appendix A.2. We consider CIFAR-10, and we compare with the bestresult of WrapNet from the previous section as a baseline. Without specific vector instructions,accuracy degenerates to a random guess because of undesired carry contamination during inference.Surprisingly, with the carry variance regularizer, WrapNet works well even with abundant carry con-tamination during inference (for each neuron, 384 on average over all the dataset). The regularizerdrops the standard deviation of the per-neuron carry contamination by 90%. When we use the hybridapproach, the accuracy is further improved (89.43%) and close to the best result (90.81%) we canachieve with vector instructions that do not propagate carries across different numbers (see Table 5).Table 5: Results for adaptation to bit-packing with 8-bit accumulator. 
(v) denotes no carry contami-nation as in a vector instruction; (c) denotes carry propagation between different numbers.Method Accuracy (v) Accuracy (c) Carry Carry StdBaseline 90.81% 10.03% 254.91 159.55Buffer Bit – 88.22% – –Rc– 87.86% 384.42 17.91Hybrid – 89.43% 482.4 16.186Published as a conference paper at ICLR 20214.4 B ENCHMARK RESULTSIn this section, we compare our WrapNet when there is no carry contamination, with the following32-bit accumulator baselines: a full-precision network (FP), a network trained with binary/ternaryweights but with full-precision activations (BWN/TWN), and a network where both weights andactivations are quantized to the same precision as our WrapNet (BWN/TWN-QA). We benchmarkour results on both CIFAR-10 and ImageNet. We use VGG7 and ResNet20 for our CIFAR-10experiments, and we use AlexNet (Krizhevsky et al., 2012; Simon et al., 2016), ResNet18 andResNet50 (He et al., 2016) for our ImageNet experiments. Details of training can be found inAppendix B.3.For CIFAR-10, even with an 8-bit accumulator, our results are comparable to both BWN and TWN.When adapting to a 12-bit accumulator, we further achieve performance on-par with TWN andbetter than BWN (see Table 6). For ImageNet, our WrapNet can achieve accuracy as good as BWNwhen adapting to a 12-bit accumulator where we can use binary weights and roughly 7-bit quantizedactivations. However, in the extreme low-precision case (8-bit), the accuracy of our binary WrapNetdrops around 8% due to the limited bitwidth we can use for activations. As reported in Table 6,the median activation bitwidth is roughly 3-bit, and for some layers in AlexNet, we can only use1-bit activations. Despite the gap from BWN, we observe that our model can achieve comparableperformance as BWN-QA where the same precision is used for activations. When using ternaryweights and an 8-bit accumulator, our WrapNet only drops by 3% and 2% from TWN for ResNet18and ResNet50, respectively. In addition, in the case of adapting to a 12-bit accumulator, our ternaryWrapNet with roughly 7-bit activations is even slightly better than TWN for ResNet50. Note that,without cyclic activation function, all the results for networks using 8-bit accumulator are as poor asrandom guessing which is consistent with Table 1.Table 6: Top-1 test accuracy for both CIFAR-10 and ImageNet with different architectures. Here,“Acc” represents accumulator, and “QA” represents quantized activation.Bits CIFAR-10 ImageNetActivation Weight Acc VGG7 ResNet20 AlexNet ResNet18 ResNet50FP 32 32 32 92.45% 91.78% 60.61% 69.59% 76.15%BWN 32 1 32 91.55% 90.03% 56.56% 63.55% 72.88%BWN-QA3 1 32 91.30% 89.86% 46.30% 57.54% 66.85%WrapNet3 1 8 90.81% 89.78% 44.88% 55.60% 64.30%WrapNet7 1 12 91.59% 90.17% 56.62% 63.11% 72.37%TWN 32 2 32 91.56% 90.36% 57.57% 65.70% 73.31%TWN-QA4 2 32 91.49% 90.12% 55.84% 63.67% 72.50%WrapNet4 2 8 91.14% 89.56% 52.24% 62.13% 71.62%WrapNet7 2 12 91.53% 90.88% 57.60% 63.84% 73.93%4.5 E FFICIENCY ANALYSISWe conduct an efficiency analysis of parallelization by bit-packing, both with and without vectoroperations, on an Intel i7-7700HQ CPU operating at 2.80 GHz. We also conduct a detailed study ofimprovements that can be obtained using custom hardware.A VX2 instruction efficiency analysis. We study the empirical efficiency of WrapNet when vec-tor operations are available. We extended Gemmlowp (Jacob et al., 2016) to implement matrixmultiplications using 8-bit accumulators with A VX2 instructions. 
To demonstrate the efficiency of low-precision accumulators, we compare our implementation with the AVX2 version of Gemmlowp, which uses 32-bit accumulators. We report the execution speed of both on various convolution kernels of ResNet18 in Table 7. From Table 7 we observe significant speed-ups, ranging from 2× to 2.4×, across different blocks. In addition, we compare the entire inference time (ms) of ResNet18 for WrapNet (234.74) with a 32-bit-accumulator quantized network (312.42), a 33% speed-up. The result provides solid evidence for the efficiency advantage of using low-precision accumulators. We remark that, on average, the time cost of the cyclic activation is only around 10% of the time cost of the GEMM kernel. We also remark that AVX2 lacks a single instruction that performs both multiplication and accumulation for 8-bit data, although it does have such an instruction for 32-bit data. Thus, further acceleration can be achieved on systems like ARM where such combined instructions for 8-bit data are available.

Table 7: Time cost (ms) for typical 3×3 convolution kernels in ResNet using different accumulator bitwidths.
Input size | Output | 8-bit | 32-bit
64x56x56   | 64     | 3.467 | 8.339
128x28x28  | 128    | 2.956 | 6.785
256x14x14  | 256    | 2.499 | 5.498
512x7x7    | 512    | 2.710 | 5.520

Table 8: Time cost (ms) for 3×3 convolution kernels in ResNet with no vector instructions, using bit-packing.
Input size | Output | bit packing | naïve
64x56x56   | 64     | 29.80       | 83.705
128x28x28  | 128    | 23.86       | 80.557
256x14x14  | 256    | 21.71       | 86.753
512x7x7    | 512    | 20.41       | 87.671

Bit-packing results without vector operations. We implement a naïve for-loop based matrix multiplication, which uses the buffer bit and logical operations introduced in Section 3.3, to form the baseline. We then pack four 8-bit integers into 32 bits, and report the execution speed of both implementations on various convolution kernels of ResNet18 in Table 8. The results show significant speed-ups, ranging from 2.8× to 4.3×. These observations demonstrate that our proposed approach to handling extra carry bits makes bit-packing viable and efficient even when vector instructions are not available. A small sketch of why reduced accumulator bitwidths overflow in the first place is given below.
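As background for why these accumulator savings normally come at the price of overflow, here is a small illustrative numpy experiment (ours, not from the paper; the value ranges and accumulation length of 512 are assumptions): accumulating a long inner product of quantized values in an int8 register silently wraps around, while an int32 register does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# One inner product between low-bit activations and binary weights.
acts = rng.integers(0, 16, size=512).astype(np.int8)  # low-bit activations
wts = rng.choice([-1, 1], size=512).astype(np.int8)   # binary weights

# 32-bit accumulation: the reference result.
ref = np.sum(acts.astype(np.int32) * wts.astype(np.int32))

# 8-bit accumulation: every partial sum lives in an int8 register, so it
# wraps modulo 256 whenever it leaves [-128, 127]; longer accumulations
# make such wraparounds more and more likely.
acc = np.int8(0)
with np.errstate(over="ignore"):
    for a, w in zip(acts, wts):
        acc = np.int8(acc + a * w)

print(ref, int(acc), (ref - int(acc)) % 256)  # the two agree only modulo 256
```

Because the int8 result is congruent to the true sum modulo 256, a wraparound is not noise but a well-defined modular error, which is exactly the structure the cyclic activation exploits.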
Hardware analysis. To illustrate the potential benefits of WrapNet for custom hardware accelerators, we have implemented a multiply-accumulate (MAC) unit in a commercial 28nm CMOS technology. The MAC unit consists of (i) a multiplier with an output register, (ii) an accumulator with its corresponding register, and (iii) auxiliary circuitry. Please refer to Appendix C for the details. We have considered 8-bit × 8-bit and 3-bit × 1-bit multipliers, as well as 32-bit and 8-bit accumulators, where the latter option is enabled by our WrapNet approach and its cyclic activation function. We consider a slope k = 2 for the cyclic activation. Figure 2 shows our post-layout results. Figure 2a shows that reducing the multiplier bitwidth decreases the cycle time by 7%; reducing the accumulator precision from 32-bit to 8-bit further reduces the cycle time by 16%. Figures 2b and 2c highlight the importance of reducing the accumulator's precision. When using an 8-bit × 8-bit multiplier, the 32-bit accumulator already constitutes more than 40% of the area and energy of a MAC unit. Once the multiplier's precision is reduced, the accumulator dominates area and energy. Thanks to WrapNet, we can reduce the accumulator precision from 32-bit to 8-bit, thus reducing the accumulator's area and energy by more than 5× and 4×, respectively. WrapNet requires the implementation of the cyclic activation, whose area and energy cost is comparable to (although lower than) that of the accumulator. In spite of this overhead, WrapNet is still able to reduce the total MAC unit's area and energy by up to 3× and 2×, respectively. While our hardware implementation only uses one adder per inner product, we note that WrapNet can also be applied to spatial architectures, such as systolic arrays, which use several adders per inner product. For such spatial architectures, WrapNet avoids an increase in the adders' bitwidth, normalizing all adders to the same low bitwidth. Moreover, the use of several adders per inner product amortizes the overhead of the cyclic activation, of which only one is needed per inner product. Finally, we note that this analysis only considers the computation part of a hardware accelerator, as this is where WrapNet has a significant impact—the memory sub-system remains virtually the same, as existing methods already quantize the output activations to low-bit before storing them in memory.

Figure 2: (a) Cycle time, (b) area and (c) energy efficiency for different MAC units implemented in 28nm CMOS. We consider 8-bit × 8-bit or 3-bit × 1-bit multipliers with 32-bit or 8-bit accumulators.

5 CONCLUSION
We have proposed WrapNet, a novel method to render neural networks resilient to integer overflow, which enables the use of low-precision accumulators. We have demonstrated the effectiveness of our adaptation on both CIFAR-10 and ImageNet. In addition, our custom GEMM kernel achieves a 2.4× acceleration over its standard library version, and our hardware exploration shows significant improvements in area- and energy-efficiency. Our hope is that hardware-aware architectures will enable deep learning applications on a wide range of platforms and mobile devices. Furthermore, with future innovations in GPU and data center technologies, we hope that WrapNet can provide further speed-ups by enabling inference using quarter-precision—a step forward in terms of performance from the currently available half-precision standard on emerging GPUs.

ACKNOWLEDGEMENT
The University of Maryland team was supported by the ONR MURI program, the AFOSR MURI program, and the National Science Foundation DMS division. Additional support was provided by DARPA GARD, DARPA QED4RML, and DARPA YFA.<|im_end|> <|im_start|>assistant ### Review Title An interesting idea on an important issue ### Review Text This paper explores an often-ignored issue in quantization: accumulation precision. As the bit-width of the inputs scales down, the area/energy cost of the accumulator starts to dominate. The cyclic method proposed by the authors is at first glance not intuitive. However, it is surprising that the surveyed models could be tuned to live with significant overflows---as long as the model can be tuned, which is enabled by the "differentiable overflow" brought by the cyclic method. There are several issues to be addressed before the paper can be accepted: (1) In the equations of page 3, the boundary in the 2nd line of f(m) has a 'c(zq)'; is that a typo? (2) The paper only covers CIFAR-10 and ImageNet models that are on the easier side to quantize, such as ResNet18 and VGG; the cyclic method could meet its limitation on ResNet50 and MobileNet. The authors didn't discuss an important concept: accumulation length.
As the accumulation length increases, overflow events could rise sharply, and training could fail without room for the cyclic method to optimize the slope k. (3) Some comments on the relation between the accumulation length and bit-packing would also be helpful. For example, if an accumulator with 8-way bit-packing is working on the same GEMM, the accumulation length would be reduced by 8---that would be desirable, although a higher-level reduction would then be required. In general, the paper is well written and brings attention to the important topic of accumulation for reduced-precision inference. The paper attempts to solve the overflow problem, although not perfectly, with a differentiable "failure" approach. The paper provides great hardware insights for hardware/software co-design. Therefore, I recommend the paper be accepted on the condition that the authors address my comments fairly. ### Review Rating 7: Good paper, accept ### Review Confidence 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature<|im_end|> <|im_end|>
S7KQa2XYpiQ
ICLR.cc/2021/Conference
2021
Improve Novel Class Generalization By Adaptive Feature Distribution for Few-Shot Learning
["Ran Tao", "Marios Savvides"]
In this work, we focus on improving the novel class generalization of few-shot learning. By addressing the difference between feature distributions of base and novel classes, we propose the adaptive feature distribution method which is to finetune one scale vector using the support set of novel classes. The scale vector is applied on the normalized feature distribution and by using one scale vector to reshape the feature space manifold, we obtain consistent performance improvement for both in-domain and cross-domain evaluations. By simply finetuning one scale vector using 5 images, we observe a $2.23\%$ performance boost on 5-way 1-shot cross-domain evaluation with CUB over statistics results of 2000 episodes. This approach is simple yet effective. By just finetuning a single scale vector we provide a solution of reducing number of parameters while still obtain generalization ability for few-shot learning. We achieve the state-of-the-art performance on mini-Imagenet, tiered-Imagenet as well as cross-domain evaluation on CUB.
["Novel Class Generalization", "Finetuning One Scale Vector", "Adaptive Feature Distribution", "Cross-Domain"]
ABSTRACT
In this work, we focus on improving the novel class generalization of few-shot learning. By addressing the difference between the feature distributions of base and novel classes, we propose the adaptive feature distribution method, which finetunes one scale vector using the support set of the novel classes. The scale vector is applied to the normalized feature distribution, and by using one scale vector to reshape the feature space manifold, we obtain consistent performance improvements for both in-domain and cross-domain evaluations. By simply finetuning one scale vector using 5 images, we observe a 2.23% performance boost on 5-way 1-shot cross-domain evaluation with CUB over statistics of 2000 episodes. This approach is simple yet effective. By just finetuning a single scale vector, we provide a solution that reduces the number of parameters while still obtaining generalization ability for few-shot learning. We achieve state-of-the-art performance on mini-ImageNet and tiered-ImageNet, as well as on cross-domain evaluation with CUB.

1 INTRODUCTION
With the plethora of available large-scale data, deep learning has achieved significant advancements. However, multiple factors such as high labelling costs, scarce availability of classes of interest, or the expensive need for experts for label generation limit the applicability of large-scale data. To address this challenge, the problem of few-shot learning was formulated, and it has received considerable attention in recent years (Vinyals et al., 2016; Snell et al., 2017; Finn et al., 2017; Ravi & Larochelle, 2016; Hariharan & Girshick, 2017).

For a supervised learning problem with data set $(x_1, y_1), \ldots, (x_n, y_n)$ ($x_i \in \mathcal{X}$ the feature space, $y_i \in \mathcal{Y}$ the label space), using a hypothesis class $h(\cdot; w)$, we want to minimize $l(h(x; w), y)$ on new samples. Under the assumption that training and test samples are i.i.d. from the same unknown distribution $\mathcal{D}$ over $\mathcal{X} \times \mathcal{Y}$, the problem is optimized through Empirical Risk Minimization (ERM). For multi-class classification with a deep neural network, the hypothesis class can be divided into two functionalities: the feature extractor $F_\theta(x_i)$ parameterized by $\theta$, and the classifier $C(\cdot \mid w)$ for a given class weight vector $w$. To achieve good classification performance over a large-scale dataset, $h(\cdot; w)$ is expected to be highly invariant, and this property endows the feature extractor $F_\theta(x_i)$ with good invariance to variations that are generally present in objects, such as shape, lighting, etc.

Few-shot learning poses a great challenge, since the distribution is hard to estimate from only a few samples. Meta-learning methods for few-shot learning lead a direction of adapting a hypothesis class with few samples, directly back-propagating the loss of the testing set through the $h(\cdot; w)$ proposed from the training set. The recent Meta-Baseline (Chen et al., 2020) proposed to conduct meta-training with a feature extractor pre-trained on the base classes, which yields a large-margin performance improvement for meta-training. Moreover, they observe that during the meta-training stage, models generalize better on the base classes while evaluation performance on novel classes largely drops.

Novel class generalization, defined as the evaluation performance on novel classes following Chen et al. (2020), is essential for bringing few-shot learning into practice. Few-shot learning algorithms are trained on base classes, which are relatively large-scale in the sense of having a large number of classes with hundreds of images each. Methods in metric-based learning (Chen et al., 2019; Gidaris & Komodakis, 2019; Wang et al., 2018; Gidaris & Komodakis, 2018) and meta-learning (Chen et al., 2020) show that training in this way benefits the capture of large variations, which is crucial for discriminative features. However, as the feature extractor is trained on the base classes under maximum likelihood, the features are also trained to be invariant for discriminating those base classes, as shown in Fig. 2. The evaluation on novel classes then suffers from the feature distribution difference between base and novel classes, and a cross-domain shift between base and novel classes can further enlarge this difference. Objects (or images) in different domains carry different aspects of information, which leads to different discriminative features or features in common among categories. A toy probe of this scattering effect is sketched below.

Figure 1: Illustration of AFD (stages: obtain features through a fixed feature extractor; pass features through the normalization layer; fine-tune only the scale vector). With samples from the support sets of novel classes, features are obtained using a pre-trained feature extractor F and then passed through the feature normalization layer, which is parameterized by a scale vector; using a non-parametric evaluation metric, gradients flow into optimizing the scale vector.
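In the spirit of the MNIST probe shown in Figure 2, the following minimal sketch (ours; the small CNN stands in for the LeNet of Figure 2, and the one-epoch training budget is an assumption) trains an embedding on "base" digits 0-6 and then compares how tightly base versus novel (7-9) class features cluster.

```python
# Toy probe of novel-class feature scattering (our illustration).
import torch, torch.nn as nn
from torchvision import datasets, transforms

base, novel = list(range(7)), [7, 8, 9]
ds = datasets.MNIST(".", train=True, download=True,
                    transform=transforms.ToTensor())

net = nn.Sequential(nn.Conv2d(1, 32, 3), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten())  # 64-d features
head = nn.Linear(64, len(base))  # classifier over the base classes only

opt = torch.optim.Adam(list(net.parameters()) + list(head.parameters()), 1e-3)
base_idx = [i for i, y in enumerate(ds.targets.tolist()) if y in base]
loader = torch.utils.data.DataLoader(torch.utils.data.Subset(ds, base_idx),
                                     batch_size=256, shuffle=True)

for x, y in loader:  # one epoch is enough for a qualitative probe
    opt.zero_grad()
    nn.functional.cross_entropy(head(net(x)), y).backward()
    opt.step()

# Average within-class feature standard deviation: in this probe the
# held-out digits 7-9 typically come out more scattered than 0-6.
with torch.no_grad():
    for cls in base + novel:
        idx = [i for i, y in enumerate(ds.targets.tolist()) if y == cls][:512]
        f = net(torch.stack([ds[i][0] for i in idx]))
        print(cls, "base" if cls in base else "novel",
              f.std(dim=0).mean().item())
```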
By just finetuning a single scale vector we provide a solution of reducing number of parameterswhile still obtain generalization ability for few-shot learning.2 P RIOR ARTThere have been many approaches to few-shot learning explored recently, namely are fast-adaptationmethods Finn et al. (2017); Rusu et al. (2018); Sun et al. (2019); Chen et al. (2020), model optimiza-tion based methods Ravi & Larochelle (2016), metric learning based methods Vinyals et al. (2016);Snell et al. (2017); Ren et al. (2018); Sung et al. (2018); Guo & Cheung (2020); Li et al. (2020) andmethods which use ridge regression and support vector machine Bertinetto et al. (2018); Lee et al.2Under review as a conference paper at ICLR 2021(a) Learned Representa-tion for Base Classes(b) Novel Class 7 (c) Novel Class 8 (d) Novel Class 9Figure 2: MNIST Illustration of Feature Distribution Difference between Base and Novel Classes.The feature extractor(Lenet) is trained with 0-6 base classes. We plot the feature distribution forbase classes and novel classes. As shown in 2-D space, novel classes features are more scatteredcompared with compact feature distribution of base classes. Meanwhile, novel class features tend toproject on the direction of base class weights(shown as the gray line).(2019). There have also been studies focusing on discovering projective feature embeddings Simonet al. (2018; 2020). Recently, a few studies utilized a variety of techniques in machine learningtowards few shot classification. Techniques like self-supervised training, self-training through semi-supervised learning and model ensembles showed a boost result when applied on few-shot learningproblem Gidaris et al. (2019); Dvornik et al. (2019); Li et al. (2019b). Modules were also inventedto enhance feature discrimination Li et al. (2019a); Hou et al. (2019). Recently approaches have alsoexplored combination with Graph Neural Networks Garcia & Bruna (2017); Kim et al. (2019).3 A DAPTIVE FEATURE DISTRIBUTION : FINE-TUNING THE SCALE OFFEATURE DISTRIBUTION ON NOVEL CLASSESIn this section, we introduce how we realize the adaptive feature distribution with a learnable scalevector and the effects on novel class feature space by only finetuning the scale vector with a fewsamples.The Few-Shot Problem Formulation. Evaluation datasets in few-shot learning are separated intobase, validation and test classes. Base classes which is used in training involves a relatively largenumber of labelled training samples. And validation classes ortest classes are treated as novelclasses , which correspondingly used for validation and testing purpose. For few-shot learning sce-narios, one episode is defined as K-wayN-shot learning where Kis the number of classes, Nisthe number of training images(support set) and Kclasses are firstly sampled from the novel classes;Nsamples in the support set as well as the query set(samples used for evaluating the episode per-formance) are sampled within each Kclasses. For one K-wayN-shot episode, we use SkandQkto denote the support and query set accordingly for k2Knovel classes.We use a pre-trained feature extractor Fto subtract features. We use fi=F(xi)to represent thefeature forxi. We first add a feature normalization layer with a scale vector s:fi=fis (1)Where:=1NPifiand=1NPi(fi)2.In this layer, features from all training samples are first normalized in a way that values of everyelement on the feature vectors are regularized by following the normal distribution. A scale vectorsis then multiplied with the normalized feature. 
sserves as the ”adaptive” part that by tuning thevalue of s, we are scaling the normalized feature distribution. sis flexible in the sense that everyelement on sscales up or down on every element of features and this in general leads to the reshapeof feature space manifold. Then by fine-tuning swith classification loss on novel classes, we expectto the reshape of feature space manifold could fast adapt the features for novel classes especially oncross-domain cases.3Under review as a conference paper at ICLR 2021In the fine-tuning stage, we first construct our evaluation metrics in an non-parametric way. We useaverage feature of the support set Skas the class weight wkwith a softmax loss:Lf=1NNXi=1logPyi=1NNXi=1logexpzyiPKk=1expzk(2)Wherezj=wTjfi=fTi1NXx2Sjf (3)By using this non-parametric metrics, we decrease the number of parameters to be trained in thefine-tuning stage while still follows the maximum likelihood estimation to predict the probabilityp(yjx). And this allows flexibility of fine-tuning the feature space with adaptive feature distribution.We analyze the gradient flow in the fine-tuning stage in the following.The derivative of zjwithfiis:@zj@fi=wj (4)For an input xi, the derivative of zjwithLfis:@Lf@zj=Pj1j=yiPjj6=yi(5)@Lf@fi= (Pyi1)wyi+KXj6=yiPjwj (6)Meanwhile as sis element-wisely multiplied with fi, the gradient at location cforsis(we omit thenotation of location cto simplify the notation):@fi@s=fi(7)Then we have the gradient for sat any location on swith sample xias:rs=fi[(Pyi1)wyi+KXj6=yiPjwj] (8)For fine-tuning only using K-way N-shot samples, Pyi'1(for 1-shot case, Pyi= 1) the gradientduring training can be approximated as:rs=fiKXj6=yiPjwj=fiKXj6=yi[PjXx2Sjfx] (9)To simplify the notation, we use the gradient for 1-shot case to conduct further discussion, which is:rs=fiKXj6=yiPjfj(10)By conducting the gradient descent, we have s=srs.To give a direct impression of how this fine-tuning changes the feature space manifold, we illustratethe change on sbrought by gradient descent intuitively. First of all, the normalization on fensuresthat the value is ”soft bounded”, which will not cause the extreme values on the gradient. For somelocations where elements are encoded ”common” information, values of fiandfjare similar. Andin the opposite way, elements in other locations are encoded ”discriminative” information wherevalues offiandfjare largely different. In this case, rscould be relatively large or negative whichleads to scaling up the feature distribution at those locations. Then the difference between featuresare further enlarged correspondingly. In this case, the manifold of the feature space will fast adaptto the shape where distinguished parts are enlarged.4Under review as a conference paper at ICLR 20214 O VERALL FRAMEWORKIn this section, we introduce the overall framework that we conduct the few-shot classification prob-lem.4.1 T RAINING CLASSIFICATION ON BASE CLASSESThe modelF(x)that is trained on the base classes Kbase. To obtain a better feature invariance,we use the l2-normalized Softmax Chen et al. (2019); Ranjan et al. (2017); Wang et al. (2017); Qiet al. (2018) with cross entropy loss, which utilize softmax under the constraint of kwyik22= 1andkF(xi)k22= 1:LSM=1NbaseNbaseXi=1logexpScos(wTyi;F(xi))PKbasek=1expScos(wTk;F(xi))(11)4.2 E VALUATION ON NOVEL CLASSESGiven anK-wayN-shot episode of few-shot classification, for each class k2Kwe have a supportsetSk= (x1;y1);;(xN;yN)and a query set Qk= (x1;y1);;(xM;yM). 
With the pretrainedfeature extractor F(x), we follow the same metric of cosine distance in equation 11 when evaluatingon novel classes; and the novel class weight wkis the average feature of the support set SkQi et al.(2018); Chen et al. (2020):wk=1NXx2SkF(x) (12)The predicted probability that x2Qbelongs to class kis:p(y=kjx) =exp cos( wTk;F(x))PKj=1exp cos( wTj;F(x))(13)4.3 F INE-TUNING SCALE VECTOR ON NORMALIZED FEATURE DISTRIBUTION .For the fine-tuning part, we conduct experiments with data augmentation and without data augmen-tation separately. With data augmentation, when we construct our the non-parametric evaluationmetrics in equation. 2 the average feature used for novel class weight are generated from sampleswithout data augmentation while features as input to the evaluation metrics are from samples afterdata augmentation. By doing this, we ensures the minimum change of the novel class prototype(classweight) and the maximum of sample variations around the class prototype. The fine-tuning withoutdata augmentation follows the methodology in Section 2.4.4 M ODEL SELECTION FOR NOVEL CLASSES .After we train the classification on base classes, we come to the model selection of using whichmodel as the feature extractor for novel classes. The feature extractor with the best classificationaccuracy or from the later epochs may not be a good choice. To obtain a high classification accuracy,features trained by supervised classification at the later stage of training may suffer the ”overfitting”to the seen classes. In other words, features would be projected precisely to directions of class weightvectors in order to get a high classification accuracy. By using these models as the feature extractorfor novel classes, features of the novel classes could be separately projected onto the directions ofthe base classes which enlarge the scattering of that feature distribution indeed. Using the few-shot performance on validation set could be one choice, however as we are approaching the adaptivefeature distribution, we consider the model selection from the perspective of measuring the quality offeature distribution. We use DB-index Davies & Bouldin (1979) as the criterion for model selection,which evaluates the clustering quality by considering the separation of the clusters and the tightnessinside the clusters. And interestingly, we found that models with lower DB-index are generallymodels around the epoch after the first time of decreasing the learning rate. In our experiments,models with lower DB-index on validation set are selected.5Under review as a conference paper at ICLR 2021ModelsBackbone mini-ImageNet tiered-ImageNet1-shot 5-shot 1-shot 5-shotFinn et al. (2017) Conv-4-64 48:701:84 63:100:92 51:671:81 70:300:08Sung et al. (2018) Conv-4-64 50:440:82 65:320:70 - -Gidaris et al. (2019) WRN-28-10 62:930:45 79:870:33 70.530:51 84:980:36Gidaris & Komodakis (2019) WRN-28-10 61:070:15 76:750:10 68:180:16 83:090:12Rusu et al. (2018) WRN-28-10 61:760:08 77:590:12 66:330:05 81:440:09Gidaris & Komodakis (2019) WRN-28-10 60:060:14 76:390:11 68:180:16 83:090:12Li et al. (2019a) ResNet18 62:050:55 78:630:06 64:780:11 81:050:52Dvornik et al. (2019) ResNet18 59:480:62 75:620:48 70:440:32 85.430:21Oreshkin et al. (2018) ResNet12 58:500:30 76:700:30 - -Ravichandran et al. (2019) ResNet-12 60:71 77:26 66:87 82:64Lee et al. (2019) ResNet12 62:640:61 78:630:46 65:990:72 81:560:53Sun et al. (2019) ResNet12 61:21:8 75:50:8 - -Simon et al. 
(2020) ResNet-12 64.600:72 79:510:50 67:390:82 82:850:56Guo & Cheung (2020) ResNet-12 63:120:08 78:400:11 67:690:11 82:820:13Li et al. (2020) ResNet-12 - - 67:100:52 79:540:60Chen et al. (2020) ResNet-12 63:170:23 79:260:17 68:620:27 83:290:18Baseline ResNet12 59:380:44 76:830:33 63:510:48 80:460:38Baseline* ResNet12 63.730:44 80:590:31 68:680:49 84:030:35AFD ResNet12 63:700:44 80.810:31 68.720:49 84.230:35Table 1: Results on mini-ImageNet and tiered-ImageNet for 5-way evaluation. The results are theaverage accuracy with 95% confidence intervals based on the same 2000 test episodes among all ourexperiments. The 95% confidence intervals is reference to comparing with other methods.5 E XPERIMENTAL VALIDATIONWe evaluate the our adaptive feature distribution method in both in-domain case and cross-domaincase. In-domain case is defined as the base and novel classes are from the same datsets and cross-domain case refers to the situation that base and novel classes are from different datasets and gener-ally the datasets have domain difference.5.1 E VALUATION DATASETS AND IMPLEMENTATION DETAILS5.1.1 E VALUATION DATASETSDataset 1: mini-ImageNet Vinyals et al. (2016) is a standard benchmark for few-shot image clas-sification benchmark, which consists of 100 randomly chosen classes from ILSVRC-2012 Rus-sakovsky et al. (2015). And these classes are randomly split into 64, 16 and 20 classes for meta-training, meta-validation and meta-test set respectively. Each class contains 600 images of size8484. We use the common split used in Lee et al. (2019).Dataset 2: tiered-ImageNet Ren et al. (2018) is a larger subset of ILSVRC-2012 Russakovsky et al.(2015), composed of 608 classes which are split into meta-training, meta-validation and meta-testingset with 351, 97 and 160 classes respectively. All images are of the size 8484.Dataset 3: CUB-200-2011 Wah et al. (2011) contains 200 classes and 11,788 images in total.Following the evaluation protocol of Hilliard et al. (2018), the dataset is split into 100 base, 50validation and 50 novel classes. We use the same splits as Chen et al. (2019) for testing. This datasetserves as the test set for the cross-domain evaluation.5.1.2 A RCHITECTURE AND TRAINING DETAILSBaseline Network Architecture. We utilize the ResNet-12 network architecture following Lee et al.(2019) to train the baseline and backbone classification model. However in contrast to Lee et al.(2019), we use a global average pooling after the last residual block following which the featurelength becomes 640 and the feature layer is followed by a 1-d batchnorm layer without affine.6Under review as a conference paper at ICLR 2021method 5-way 1-shot 5-way 5-shotMatchingNetsVinyals et al. (2016) - 53:070:74MAMLFinn et al. (2017) - 51:340:72ProtoNetSnell et al. (2017) - 62:020:70Linear Classifier(Chen et al. (2019)) - 65:570:7Cosine Classifier(Chen et al. (2019)) - 62:040:76Diverse 20 FullDvornik et al. (2019) - 66:170:55Baseline 46:310:43 64:150:38Baseline* 49:260:43 69:560:39AFD 50.990:43 70.640:38Table 2: Domain Difference Testing on CUB Dataset using the mini-ImageNet Trained Model.Results for MatchingNets, MAML and ProtoNet are fromChen et al. (2019).Training hyperparameters. All networks were trained with SGD along with Nesterov momentumof 0.9 and weight decay of 5104. The initial learning rate is set as 0.1 which was decreasedby a factor of 10 every 50 epochs for a total of 150 epochs. The batch size was kept at 256. Dataargumentation was applied for baseline classification following Lee et al. 
(2019), which includedhorizontal flip, random crop, and color (brightness, contrast, and saturation) jitter.Fine Tuning on the Novel Class Support Set. We finetune on the novel class training set usingAdam with learning rate 5103by back-propagating the gradient from the whole batch, andearly stop in this case is crucial that we finetune 3 epochs for 1-shot and 5 epochs for 5-shot case incross-domain cases and 3 epochs for both 1-shot and 5-shot in in-domain cases. The scale vector isinitialized as 1.In our experiments, Baseline refers to the pre-trained feature extractor of the last epoch for training;Baseline* refers to the pre-trained feature extractor selected using density based clustering index.And we use Baseline* as the feature extractor for all our finetuning experiments.5.2 C OMPARING PERFORMANCE ON IN-DOMAIN AND CROSS -DOMAIN CASESPerformance of model selection are consistent. We observe that using the DB-index to selectfeature extractors gain consistent performance improvement among all three evaluation datasets.And this could serve as a good sign of studying the feature transfer-ability from the perspective offeature distribution.AFD Shows Improvement on In-Domain Evaluations. Shown in Table.1, AFD improves theperformance on 5-shot with 0:22% and0:2%separately for miniImagenet and tieredImagenet. Onething to notice is that the performance of our Baseline* already surpass performance of most works.AFD still leads to performance improvement while using a well presumed feature extractor.AFD shows superior generalization to cross-domain evaluations. Shown in Table.2, by simplyfinetuning using 5 images for 1-shot case and 25 images for 5-shot case, we observe 1:73% and1:08% performance improvement from statistical results among 2000 episodes.5.3 A BLATION STUDIES ON FINE-TUNINGThe results of ablation studies are shown in Table.3.Effects of Applying Data Augmentation during Finetuning: The major obstacle of few-shotlearning is the lack of samples which is essential for improving the novel class generalization. Al-though we only train 3 epochs, the effects of data augmentation are still obvious. Only for 1-shotcase with miniImagenet-trained feature extractor the performance is worse than without using dataaugmentation. This could be caused with the reason that the feature extractor is trained with a rel-atively small data, features abstracted then are not stable to optimize which is serve when addingdata augmentation with only 5 training samples. Otherwise, we observe performance improvementof0:47% for 5-shot with miniImagenet-trained feature extractor and 0:18%,0:65% for 1-shot and5-shot with tieredImagenet-trained feature extractor. As we only use the basically simple data aug-7Under review as a conference paper at ICLR 2021ModelsComponents mini-ImageNet tiered-ImageNetdot-product cosine data-aug 1-shot 5-shot 1-shot 5-shotBaseline 46:31 64:15 46:52 65:59Baseline* 49:26 69:56 54:67 74:94finetune-weight X X 48:60 68:64 54:25 74:87X 51.49 70:17 55:00 74:56X X 50:99 70:60 55:18 75:21AFD X X 50:99 70.64 55.18 75.21Table 3: Ablation Studies on CUB. All experiments are in 5-way evaluations. The results are theaverage accuracy based on 2000 test episodes. Episodes are the same over all experiments. 
Effects of Different Feature Extractors: First, by using a feature extractor trained on a larger dataset, the performance in the cross-domain cases improves considerably, which indicates the importance of a good feature embedding. For AFD, the 1-shot performance improvements are 1.73% and 1.08% for the mini-ImageNet and tiered-ImageNet models, and the 5-shot improvements are 0.51% and 0.27% respectively. This illustrates that AFD is able to quickly adapt features, especially when the quality of the feature embedding is poor. At the same time, a better feature extractor allows a larger improvement from using data augmentation in AFD, as discussed above.

Effects of Different Metrics: We compare the results of using different metrics (dot-product, and the cosine metric with scale (Wang et al., 2017)) in our non-parametric evaluation for fine-tuning. The performance is almost the same. Different metrics affect how well the predicted probability can be attained, and in our case, as illustrated in Section 3, the predicted probability is already around 1; different metrics therefore serve similarly in adapting features for novel classes.

The Importance of Fine-tuning Features: We compare AFD with finetuning only the novel class weights. For finetuning the novel class weights, we use the average features of the support set as the weight initialization, with the same hyper-parameter settings as above. We observe a performance drop from finetuning only the novel class weights: 0.66% and 0.92% for 1-shot and 5-shot with the mini-ImageNet-trained feature extractor, and 0.42% and 0.07% for 1-shot and 5-shot with the tiered-ImageNet-trained feature extractor. In cross-domain cases, the features of the novel classes are not well discriminated or constrained within each class; since the features themselves are not optimized, finetuning only the novel class weights, which relate linearly to the features, actually drops performance. This illustrates the importance of adapting the features of the novel classes. With AFD, we obtain consistent and essential performance improvements: 1.73% and 0.51% for 1-shot and 5-shot with the mini-ImageNet-trained feature extractor, and 1.08% and 0.27% for 1-shot and 5-shot with the tiered-ImageNet-trained feature extractor. This showcases the powerful effect of AFD in cross-domain cases, given the simplicity of the method.

6 CONCLUSION
We propose finetuning an adaptive feature distribution to improve novel class generalization for few-shot learning. The performance improvements on both in-domain and cross-domain evaluations showcase the superior generalization brought by this simple yet effective method. With the proposed AFD method, we also highlight the importance of further understanding and analyzing the feature distribution of novel classes.
KPFFE34fbz3
Proposes an interesting approach but suffers from clarity issues
4: Ok but not good enough - rejection
Summary ======== This paper proposes a simple approach to few-shot classification which first pre-trains a feature extractor via the classification loss on all base classes, as is usually done. Then, to solve each test task, they propose to learn a task-specific scaling co-efficient for each (normalized) feature of the pre-trained embedding function. This learning is done via a few steps of gradient descent on the support set of the given test task, for the objective of correctly classifying the support examples using a simple readout layer that is set to the prototypes of the different classes. This approach is parameter efficient, as it only requires optimizing the scaling co-efficients in the inner loop. Pros ==== [+] I really like the idea of isolating a small set of parameters to tune for the purpose of achieving task adaptation. This allows avoiding the expensive fine-tuning of the entire feature extractor, and may be more expressive than simply learning a per-task readout layer. [+] I also liked the intuitive explanation of the role of the scaling factor by inspecting the gradients of the task-specific loss with respect to the scaling parameters. It’s indeed interesting (though not unexpected) to see that this mechanism can amplify dimensions of the embedding space that are most discriminative for each task at hand. [+] I really like the proposed approach for model-selecting for the pre-trained model. I think this is a valuable contribution and the strong performance of this component alone is very interesting. Cons ==== [-] A major weakness of this paper in my opinion is the lack of clarity. I found parts of the paper hard to follow, both relating to the proposed framework as well as the experimental setup. It would additionally be useful to proof-read the paper and correct typos, grammar errors and incorrect usage of words throughout. Specific clarity issues are outlined below: A. Clarity issues relating to the proposed framework. a) The loss in Equation 2 is only summing over N points, where N is the number of examples per class. I was expecting it to sum over the entire support set (total of K*N examples). Is this a typo or is this meant to represent the loss of the examples of a given class only? If so, that should be indicated. b) In Equations 2 and 3, it would be clearer to rename z_j to z_{ij}, since it is the logit of example i belonging to class j. Similar minor notation issues apply in all following equations. c) In Equations 9 and 10, instead of \nabla s, it would be clearer to write this as the partial derivative of the loss of the particular example i w.r.t s. d) I agree that for 1-shot P_{y_i} == 1 (because the single example is exactly the same as the prototype in that case) but I’m not sure why that quantity is approximately 1 more generally. e) In Equation 11, what is the symbol S between the \exp and \cos? f) In Equations 12 and 13, shouldn’t the normalized features and scale be used instead of the original features F? B. Clarity issues relating to the experiments. a) What exactly are the models referred to as “Baseline” and “Baseline*” in the tables? I understand the difference between them is the model selection method from the pre-training round. But it’s not clear what is the algorithm they use to solve the downstream few-shot tasks. Are they simply learning a readout layer? If so, is that layer initialized from the prototypes as is done for AFD? b) I found the ablation section very hard to follow. 
First of all, the caption of Table 3 is “Ablation Studies on CUB”, but the results in that Table are on mini-ImageNet and tiered-ImageNet. Is this then a case of cross-domain study where the models were trained on CUB and evaluated on those two datasets? The description of the results of this table (e.g. in the paragraph “The Importance of Fine-tuning Features”) sometimes refers to a “mini-ImageNet trained feature extractor” (or analogously for tiered-ImageNet) and sometimes refers to “cross-domain cases” which makes it very hard to understand how the results in Table 3 are actually obtained. c) In Table 3, rows 4 and 5 don’t have the “Models” entry filled in. Are these additional AFD variants or “finetune-weight” variants? d) There is a paragraph titled “Effects of Different Feature Extractor” that refers to studying the effect of a feature extractor trained on a larger dataset. I couldn’t figure out which results this paragraph is referring to from the table. It would be useful to explain which entries of Table 3 these observations are describing. e) In the paragraph titled “The Importance of Fine-tuning Features”, again I was unsure which results some of the descriptions are referring to as the reported percentages in that paragraph don’t all correspond to numbers in Table 3. It would be useful to be clear about which entries in the table each of the findings refers to. [-] Another weakness of the paper is the limited discussion of related work. There have been various attempts at approaches that fine-tune only a cleverly-selected subset of the feature extractor parameters in each test task. Two examples that come to mind are: [1] and [2]. Additionally, this approach is related to FiLM-based models [3, 4] that learn a scaling and shifting transformation for each task (based on its support set). These are all different from the proposed approach in that they are meta-learning models (as opposed to algorithms that are applied at test-time only on top of pre-trained features), but they are otherwise similar in spirit, so they should be discussed and compared against. Further, [6] should also be cited alongside [7] in the context of conducting meta-training with a pre-trained feature extractor. [-] The proposed approach does not actually seem to outperform the baseline on the “in-domain” cases (Table 1) as the increase in performance over the baseline is very small and the confidence intervals overlap. It seems that the cross-domain experiments better showcase the ability of this method (since in that scenario adapting features is more important), so perhaps future experiments should further explore that scenario. [-] Finally, I felt that important baselines were missing, that would reflect alternative ways of fine-tuning subsets of the feature extractor less selectively than the proposed approach (see the section below for some suggestions). Overall ====== Overall, I vote for rejecting this paper due to the weaknesses discussed above. To reiterate, a major drawback in my opinion relates to the clarity of the presentation of this work. Other weak points include the lack of discussion and comparison to relevant literature and baselines and the weak performance of the proposed approach over the reported baseline. Suggestion for additional experiments ================================= The authors show experimentally that learning only a per-task readout layer is not expressive enough compared to their method that modifies the features as well via their proposed scaling mechanism. 
There are, however, other “baselines” that should be compared against that also modify the features in different ways. Specifically: while learning the readout layer (possibly initialized from the prototypes as is done in the proposed approach), we could also fine-tune the feature extractor. This baseline is used in the literature in some older work, e.g. the “Baseline-finetune” model in [5], as well as in more recent works in more challenging settings [6]. The drawback associated with fine-tuning the entire backbone is the potential overfitting to the support set for low shots, so it would also be useful to try variants that fine-tune the top X layers of the network. These explorations should happen on the features obtained by Baseline* for an apples-to-apples comparison with AFD. Further, perhaps the performance of AFD can be improved by fine-tuning the output layer too (which is initialized from the prototypes) during the inner loop that tunes the scale parameters? My understanding is that the output layer is currently fixed to the prototypes and isn’t optimized at all which might be restrictive. Finally, given that the proposed model only appears to significantly outperform the baseline in the cross-domain evaluation scenario (Table 2), a good test-bed for additional experiments with this method might be the Meta-Dataset benchmark [6] that is comprised of diverse datasets and therefore really necessitates a flexible model that can adapt to different distributions. References ========= [1] Fast Context Adaptation via Meta-Learning. Zintgraf et al. ICML 2019. [2] Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace. Lee et al. ICML 2018. [3] Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes. Requeima et al. NeurIPS 2019. [4] Improved Few-shot Visual Classification. Bateni et al. CVPR 2020. [5] Optimization as a Model for Few-shot Learning. Ravi and Larochelle. ICLR 2017. [6] Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples. Triantafillou et al. ICLR 2020. [7] A New Meta-Baseline for Few-shot Learning. Chen et al. 2020.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Improve Novel Class Generalization By Adaptive Feature Distribution for Few-Shot Learning ### Paper Abstract In this work, we focus on improving the novel class generalization of few-shot learning. By addressing the difference between feature distributions of base and novel classes, we propose the adaptive feature distribution method which is to finetune one scale vector using the support set of novel classes. The scale vector is applied on the normalized feature distribution and by using one scale vector to reshape the feature space manifold, we obtain consistent performance improvement for both in-domain and cross-domain evaluations. By simply finetuning one scale vector using 5 images, we observe a $2.23\%$ performance boost on 5-way 1-shot cross-domain evaluation with CUB over statistics results of 2000 episodes. This approach is simple yet effective. By just finetuning a single scale vector we provide a solution of reducing number of parameters while still obtain generalization ability for few-shot learning. We achieve the state-of-the-art performance on mini-Imagenet, tiered-Imagenet as well as cross-domain evaluation on CUB. ### Paper Keywords ["Novel Class Generalization", "Finetuning One Scale Vector", "Adaptive Feature Distribution", "Cross-Domain"] ### Paper Content ABSTRACTIn this work, we focus on improving the novel class generalization of few-shotlearning. By addressing the difference between feature distributions of base andnovel classes, we propose the adaptive feature distribution method which is tofinetune one scale vector using the support set of novel classes. The scale vectoris applied on the normalized feature distribution and by using one scale vector toreshape the feature space manifold, we obtain consistent performance improve-ment for both in-domain and cross-domain evaluations. By simply finetuning onescale vector using 5 images, we observe a 2:23% performance boost on 5-way1-shot cross-domain evaluation with CUB over statistics results of 2000 episodes.This approach is simple yet effective. By just finetuning a single scale vector weprovide a solution of reducing number of parameters while still obtain generaliza-tion ability for few-shot learning. We achieve the state-of-the-art performance onmini-Imagenet, tiered-Imagenet as well as cross-domain evaluation on CUB.1 I NTRODUCTIONWith the plethora of available large-scale data, deep learning has achieved significant advancements.However multiple factors such as high labelling costs, scarce availability of classes of interest or theexpensive need for experts for label generation set limits of applying large-scale data. To addressthis challenge, the problem of few-shot learning was formulated which has received considerable at-tention in recent years Vinyals et al. (2016); Snell et al. (2017); Finn et al. (2017); Ravi & Larochelle(2016); Hariharan & Girshick (2017).For a supervised learning problem with data set (x1;y1);:::(xn;yn)(xi2X feature space, yi2Ylabel space), by using the hypothesis class( h(:;w)), we want to minimize l(h(x;w);y)on newsamples. 
With the assumption that training samples and test samples are i.i.d from the same unknowndistribution DoverXY , the problem is optimized over the Empirical Risk Minimization(ERM).For multi-class classification with deep neural network, the hypothesis class related to the scenariocan be divided into two functionalities: the feature extractor F(xi)parameterized by , the classifierC(jw)for a given class weight vector w. Basically to achieve a good classification performanceover the large-scale dataset, h(:;w)is expected to be highly invariant and this property empowersthe feature extractor F(xi)with good feature invariance ability if we consider variations that aregenerally in the objects such as shapes, lights and etc.Few-shot learning proposes a great challenge as the estimation of the distribution is hard to achievewith a few samples. Meta-learning methods on few-shot learning lead a direction of adapting toa hypothesis class with few samples, which directly back-propagates the loss between testing setwith theh(:;w)proposed with the training set. Recent work meta-Baseline Chen et al. (2020)pro-posed to conducts meta-training with a pre-trained feature extractor on base classes which leadsto a large-margin performance improvement of meta-training. Moreover, they observe that duringmeta-training stage, models are better generalized on the base classes while evaluation performanceon novel classes largely dropped.The novel class generalization which is defined as evaluation performance on novel classes followingChen et al. (2020) is essential for improving few-shot learning into practice. Training of algorithmson few-shot learning are conducted with base classes which are relatively large-scale in the sense1Under review as a conference paper at ICLR 2021F f1f2f3L Obtain features through a fixed feature extractor Pass features through the normalization layer Only fine-tuning the scale vector Figure 1: Illustration on AFD: with samples from support sets of novel classes, features are obtainedby using a pre-trained feature extractor Fand then these features are passed through the featurenormalization layer which is parameterized by a scale vector; with using a non-parametric evaluationmetrics gradients flow into optimizing the scale vector.of plenty number of classes with hundreds of images. Methods in metric-based learning Chen et al.(2019); Gidaris & Komodakis (2019); Wang et al. (2018); Gidaris & Komodakis (2018) and meta-learning Chen et al. (2020) prove that training in this way benefits the capture of large variationswhich is crucial for discriminative features. However, as the feature extractor on base classes istrained under maximum likelihood, features are also trained to be invariant for discriminating thesebase classes, as shown in Fig.2. Then the evaluation on novel classes would suffer from the featuredistribution difference between base and novel classes, and cross-domain between base and novelclasses could enlarge this feature distribution difference. Objects(or images) in different domainscarry different aspects of information which leads to different discriminative features or features incommon among categories.Attempts of improving novel class generalization include finetuning method proposed in Chen et al.(2019). In Chen et al. (2019), they proposed to finetune the novel class weights using the supportset of novel classes with competitive results. 
However if feature distribution of novel classes suffersfrom scattering, even with a plenty of data finetuning the novel class weights without any optimiza-tion on the feature side is not promising for finding a good decision boundary, not to mention withonly a few samples.In our work, we propose the adaptive feature distribution to improve the novel class generalization.Following the idea of finetuning using a handful of samples, we apply a non-parametric distancefirst to construct the hypothesis class and then by only finetuning a scale vector which applied onthe normalized feature distribution, we achieve the effects of adaptive feature distribution on novelclasses.Our Contributions: 1) We address the importance of further understanding the feature distributionfor novel classes. Using DB-index which measures the quality of feature distributions for novelclasses to select feature extractors, we observe a consistent performance boost on all three evalu-ation datasets. We believe introducing analysis on feature distributions and clustering quality ofnovel classes is informative to the community. 2)We propose to improve novel class generalizationthrough adapting the feature distribution of novel classes. And by only finetuning one scale vectorusing support sets of novel classes, we showcase the supreme generalization of this method espe-cially on cross-domain evaluations. We achieve the state-of-the-art performance on mini-Imagenet,tiered-Imagenet as well as cross-domain evaluation on CUB. 3)This approach is simple yet effec-tive. By just finetuning a single scale vector we provide a solution of reducing number of parameterswhile still obtain generalization ability for few-shot learning.2 P RIOR ARTThere have been many approaches to few-shot learning explored recently, namely are fast-adaptationmethods Finn et al. (2017); Rusu et al. (2018); Sun et al. (2019); Chen et al. (2020), model optimiza-tion based methods Ravi & Larochelle (2016), metric learning based methods Vinyals et al. (2016);Snell et al. (2017); Ren et al. (2018); Sung et al. (2018); Guo & Cheung (2020); Li et al. (2020) andmethods which use ridge regression and support vector machine Bertinetto et al. (2018); Lee et al.2Under review as a conference paper at ICLR 2021(a) Learned Representa-tion for Base Classes(b) Novel Class 7 (c) Novel Class 8 (d) Novel Class 9Figure 2: MNIST Illustration of Feature Distribution Difference between Base and Novel Classes.The feature extractor(Lenet) is trained with 0-6 base classes. We plot the feature distribution forbase classes and novel classes. As shown in 2-D space, novel classes features are more scatteredcompared with compact feature distribution of base classes. Meanwhile, novel class features tend toproject on the direction of base class weights(shown as the gray line).(2019). There have also been studies focusing on discovering projective feature embeddings Simonet al. (2018; 2020). Recently, a few studies utilized a variety of techniques in machine learningtowards few shot classification. Techniques like self-supervised training, self-training through semi-supervised learning and model ensembles showed a boost result when applied on few-shot learningproblem Gidaris et al. (2019); Dvornik et al. (2019); Li et al. (2019b). Modules were also inventedto enhance feature discrimination Li et al. (2019a); Hou et al. (2019). Recently approaches have alsoexplored combination with Graph Neural Networks Garcia & Bruna (2017); Kim et al. 
(2019).3 A DAPTIVE FEATURE DISTRIBUTION : FINE-TUNING THE SCALE OFFEATURE DISTRIBUTION ON NOVEL CLASSESIn this section, we introduce how we realize the adaptive feature distribution with a learnable scalevector and the effects on novel class feature space by only finetuning the scale vector with a fewsamples.The Few-Shot Problem Formulation. Evaluation datasets in few-shot learning are separated intobase, validation and test classes. Base classes which is used in training involves a relatively largenumber of labelled training samples. And validation classes ortest classes are treated as novelclasses , which correspondingly used for validation and testing purpose. For few-shot learning sce-narios, one episode is defined as K-wayN-shot learning where Kis the number of classes, Nisthe number of training images(support set) and Kclasses are firstly sampled from the novel classes;Nsamples in the support set as well as the query set(samples used for evaluating the episode per-formance) are sampled within each Kclasses. For one K-wayN-shot episode, we use SkandQkto denote the support and query set accordingly for k2Knovel classes.We use a pre-trained feature extractor Fto subtract features. We use fi=F(xi)to represent thefeature forxi. We first add a feature normalization layer with a scale vector s:fi=fis (1)Where:=1NPifiand=1NPi(fi)2.In this layer, features from all training samples are first normalized in a way that values of everyelement on the feature vectors are regularized by following the normal distribution. A scale vectorsis then multiplied with the normalized feature. sserves as the ”adaptive” part that by tuning thevalue of s, we are scaling the normalized feature distribution. sis flexible in the sense that everyelement on sscales up or down on every element of features and this in general leads to the reshapeof feature space manifold. Then by fine-tuning swith classification loss on novel classes, we expectto the reshape of feature space manifold could fast adapt the features for novel classes especially oncross-domain cases.3Under review as a conference paper at ICLR 2021In the fine-tuning stage, we first construct our evaluation metrics in an non-parametric way. We useaverage feature of the support set Skas the class weight wkwith a softmax loss:Lf=1NNXi=1logPyi=1NNXi=1logexpzyiPKk=1expzk(2)Wherezj=wTjfi=fTi1NXx2Sjf (3)By using this non-parametric metrics, we decrease the number of parameters to be trained in thefine-tuning stage while still follows the maximum likelihood estimation to predict the probabilityp(yjx). 
This construction allows flexibility in fine-tuning the feature space through the adaptive feature distribution. We analyze the gradient flow in the fine-tuning stage in the following.

The derivative of $z_j$ with respect to $\bar{f}_i$ is:

$\frac{\partial z_j}{\partial \bar{f}_i} = w_j$   (4)

For an input $x_i$, the derivative of $L_f$ with respect to $z_j$ is:

$\frac{\partial L_f}{\partial z_j} = \begin{cases} P_j - 1 & j = y_i \\ P_j & j \neq y_i \end{cases}$   (5)

$\frac{\partial L_f}{\partial \bar{f}_i} = (P_{y_i} - 1)\, w_{y_i} + \sum_{j \neq y_i}^{K} P_j w_j$   (6)

Meanwhile, since $s$ is multiplied element-wise with the normalized feature, the gradient at any location $c$ of $s$ is (we omit the location index $c$ to simplify the notation):

$\frac{\partial \bar{f}_i}{\partial s} = \hat{f}_i$   (7)

where $\hat{f}_i = (f_i - \mu)/\sigma$ denotes the normalized feature before scaling. The gradient of the loss with respect to $s$ for sample $x_i$ is then:

$\nabla s = \hat{f}_i \Big[ (P_{y_i} - 1)\, w_{y_i} + \sum_{j \neq y_i}^{K} P_j w_j \Big]$   (8)

When fine-tuning with only K-way N-shot samples, $P_{y_i} \approx 1$ (for the 1-shot case, $P_{y_i} = 1$), so the gradient during training can be approximated as:

$\nabla s \approx \hat{f}_i \sum_{j \neq y_i}^{K} P_j w_j = \hat{f}_i \sum_{j \neq y_i}^{K} \Big[ P_j \sum_{x \in S_j} \bar{f}_x \Big]$   (9)

To simplify the notation, we use the gradient for the 1-shot case in the discussion below:

$\nabla s = \hat{f}_i \sum_{j \neq y_i}^{K} P_j \bar{f}_j$   (10)

Gradient descent then updates $s \leftarrow s - \eta\, \nabla s$ with learning rate $\eta$.

To give a direct impression of how this fine-tuning changes the feature-space manifold, we describe the change in $s$ brought about by gradient descent intuitively. First of all, the normalization of $f$ ensures that the values are "soft bounded", which prevents extreme values in the gradient. At locations where the elements encode "common" information, the values of $\bar{f}_i$ and $\bar{f}_j$ are similar; conversely, at other locations the elements encode "discriminative" information, and the values of $\bar{f}_i$ and $\bar{f}_j$ differ substantially. In the latter case, $\nabla s$ can be relatively large in magnitude (positive or negative), which scales up the feature distribution at those locations, so the differences between features are further enlarged. The feature-space manifold therefore quickly adapts to a shape in which the distinguishing parts are enlarged.

4 OVERALL FRAMEWORK

In this section, we introduce the overall framework with which we conduct the few-shot classification task.

4.1 TRAINING CLASSIFICATION ON BASE CLASSES

The model $F(x)$ is trained on the base classes $K_{base}$. To obtain better feature invariance, we use the $l_2$-normalized softmax Chen et al. (2019); Ranjan et al. (2017); Wang et al. (2017); Qi et al. (2018) with cross-entropy loss, which applies the softmax under the constraints $\|w_{y_i}\|_2^2 = 1$ and $\|F(x_i)\|_2^2 = 1$:

$L_{SM} = -\frac{1}{N_{base}}\sum_{i=1}^{N_{base}} \log \frac{\exp\big(S \cos(w_{y_i}, F(x_i))\big)}{\sum_{k=1}^{K_{base}} \exp\big(S \cos(w_k, F(x_i))\big)}$   (11)

where $S$ is a scale factor.

4.2 EVALUATION ON NOVEL CLASSES

Given a K-way N-shot episode of few-shot classification, for each class $k \in K$ we have a support set $S_k = \{(x_1, y_1), \dots, (x_N, y_N)\}$ and a query set $Q_k = \{(x_1, y_1), \dots, (x_M, y_M)\}$. With the pretrained feature extractor $F(x)$, we follow the same cosine-distance metric as in Equation 11 when evaluating on novel classes, and the novel class weight $w_k$ is the average feature of the support set $S_k$ Qi et al. (2018); Chen et al. (2020):

$w_k = \frac{1}{N}\sum_{x \in S_k} F(x)$   (12)

The predicted probability that $x \in Q$ belongs to class $k$ is:

$p(y = k \mid x) = \frac{\exp\big(\cos(w_k, F(x))\big)}{\sum_{j=1}^{K} \exp\big(\cos(w_j, F(x))\big)}$   (13)

4.3 FINE-TUNING THE SCALE VECTOR ON THE NORMALIZED FEATURE DISTRIBUTION

For the fine-tuning part, we conduct experiments with and without data augmentation separately. With data augmentation, when constructing the non-parametric evaluation metric of Equation 2, the average features used as the novel class weights are generated from samples without data augmentation, while the features fed into the metric come from samples with data augmentation. By doing this, we ensure minimal change to the novel class prototypes (class weights) and maximal sample variation around each prototype. The fine-tuning without data augmentation follows the methodology of Section 3.
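Putting the pieces of Sections 3 and 4.2-4.3 together, the adapt-then-predict loop might be sketched as follows. This reuses `afd_normalize` from the sketch above; the frozen extractor, optimizer choice, learning rate, and step counts anticipate values reported later in Section 5.1.2, and everything else is our own assumption.

```python
import torch
import torch.nn.functional as F

def adapt_and_predict(support_feats, support_labels, query_feats,
                      num_classes, steps=3, lr=5e-3):
    # Only the scale vector s is trained; the feature extractor is frozen.
    s = torch.ones(support_feats.size(1), requires_grad=True)
    opt = torch.optim.Adam([s], lr=lr)
    for _ in range(steps):  # early stopping: 3-5 epochs in the paper
        feats = afd_normalize(support_feats, s)
        protos = torch.stack([feats[support_labels == k].mean(0)
                              for k in range(num_classes)])  # Eq. 3 weights
        loss = F.cross_entropy(feats @ protos.t(), support_labels)  # Eq. 2
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Query prediction with cosine similarity to prototypes (Eqs. 12-13);
    # for brevity the normalization statistics are recomputed per batch.
    with torch.no_grad():
        feats = afd_normalize(support_feats, s)
        protos = torch.stack([feats[support_labels == k].mean(0)
                              for k in range(num_classes)])
        q = F.normalize(afd_normalize(query_feats, s), dim=1)
        return (q @ F.normalize(protos, dim=1).t()).softmax(dim=1)
```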
4.4 MODEL SELECTION FOR NOVEL CLASSES

After training the classifier on the base classes, we must select which model to use as the feature extractor for the novel classes. The feature extractor with the best classification accuracy, or one from the later epochs, may not be a good choice: to obtain high classification accuracy, features trained by supervised classification in the later stages of training may "overfit" to the seen classes. In other words, the features are projected precisely onto the directions of the class weight vectors in order to achieve high classification accuracy. When such a model is used as the feature extractor for novel classes, the novel class features may be projected separately onto the directions of the base classes, which indeed enlarges the scattering of the feature distribution. Using few-shot performance on the validation set could be one choice; however, since we are pursuing an adaptive feature distribution, we approach model selection from the perspective of measuring the quality of the feature distribution. We use the DB-index Davies & Bouldin (1979) as the model-selection criterion, which evaluates clustering quality by considering the separation between clusters and the tightness within clusters. Interestingly, we found that the models with lower DB-index are generally those from the epochs shortly after the learning rate is decreased for the first time. In our experiments, the models with the lower DB-index on the validation set are selected.

Models | Backbone | mini-ImageNet 1-shot | mini-ImageNet 5-shot | tiered-ImageNet 1-shot | tiered-ImageNet 5-shot
Finn et al. (2017) | Conv-4-64 | 48.70 ± 1.84 | 63.10 ± 0.92 | 51.67 ± 1.81 | 70.30 ± 0.08
Sung et al. (2018) | Conv-4-64 | 50.44 ± 0.82 | 65.32 ± 0.70 | - | -
Gidaris et al. (2019) | WRN-28-10 | 62.93 ± 0.45 | 79.87 ± 0.33 | 70.53 ± 0.51 | 84.98 ± 0.36
Gidaris & Komodakis (2019) | WRN-28-10 | 61.07 ± 0.15 | 76.75 ± 0.10 | 68.18 ± 0.16 | 83.09 ± 0.12
Rusu et al. (2018) | WRN-28-10 | 61.76 ± 0.08 | 77.59 ± 0.12 | 66.33 ± 0.05 | 81.44 ± 0.09
Gidaris & Komodakis (2019) | WRN-28-10 | 60.06 ± 0.14 | 76.39 ± 0.11 | 68.18 ± 0.16 | 83.09 ± 0.12
Li et al. (2019a) | ResNet-18 | 62.05 ± 0.55 | 78.63 ± 0.06 | 64.78 ± 0.11 | 81.05 ± 0.52
Dvornik et al. (2019) | ResNet-18 | 59.48 ± 0.62 | 75.62 ± 0.48 | 70.44 ± 0.32 | 85.43 ± 0.21
Oreshkin et al. (2018) | ResNet-12 | 58.50 ± 0.30 | 76.70 ± 0.30 | - | -
Ravichandran et al. (2019) | ResNet-12 | 60.71 | 77.26 | 66.87 | 82.64
Lee et al. (2019) | ResNet-12 | 62.64 ± 0.61 | 78.63 ± 0.46 | 65.99 ± 0.72 | 81.56 ± 0.53
Sun et al. (2019) | ResNet-12 | 61.2 ± 1.8 | 75.5 ± 0.8 | - | -
Simon et al. (2020) | ResNet-12 | 64.60 ± 0.72 | 79.51 ± 0.50 | 67.39 ± 0.82 | 82.85 ± 0.56
Guo & Cheung (2020) | ResNet-12 | 63.12 ± 0.08 | 78.40 ± 0.11 | 67.69 ± 0.11 | 82.82 ± 0.13
Li et al. (2020) | ResNet-12 | - | - | 67.10 ± 0.52 | 79.54 ± 0.60
Chen et al. (2020) | ResNet-12 | 63.17 ± 0.23 | 79.26 ± 0.17 | 68.62 ± 0.27 | 83.29 ± 0.18
Baseline | ResNet-12 | 59.38 ± 0.44 | 76.83 ± 0.33 | 63.51 ± 0.48 | 80.46 ± 0.38
Baseline* | ResNet-12 | 63.73 ± 0.44 | 80.59 ± 0.31 | 68.68 ± 0.49 | 84.03 ± 0.35
AFD | ResNet-12 | 63.70 ± 0.44 | 80.81 ± 0.31 | 68.72 ± 0.49 | 84.23 ± 0.35

Table 1: Results on mini-ImageNet and tiered-ImageNet for 5-way evaluation. The results are average accuracy with 95% confidence intervals based on the same 2000 test episodes across all our experiments. The 95% confidence intervals are given as a reference for comparing with other methods.
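Returning to the selection criterion of Section 4.4: scikit-learn ships the Davies-Bouldin index as `sklearn.metrics.davies_bouldin_score`, so the checkpoint selection can be sketched as below. The checkpoint list and the feature-extraction helper are hypothetical names for illustration.

```python
import numpy as np
from sklearn.metrics import davies_bouldin_score

def select_checkpoint(checkpoints, extract_features, val_images, val_labels):
    # Pick the training epoch whose validation-class features cluster best:
    # a lower Davies-Bouldin index means tighter, better-separated clusters.
    scores = []
    for ckpt in checkpoints:
        feats = extract_features(ckpt, val_images)  # (N, d) numpy array
        scores.append(davies_bouldin_score(feats, val_labels))
    return checkpoints[int(np.argmin(scores))]
```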
5 EXPERIMENTAL VALIDATION

We evaluate our adaptive feature distribution method in both the in-domain and the cross-domain case. The in-domain case is defined as base and novel classes coming from the same dataset; the cross-domain case refers to the situation where base and novel classes come from different datasets that generally exhibit a domain difference.

5.1 EVALUATION DATASETS AND IMPLEMENTATION DETAILS

5.1.1 EVALUATION DATASETS

Dataset 1: mini-ImageNet Vinyals et al. (2016) is a standard benchmark for few-shot image classification, consisting of 100 randomly chosen classes from ILSVRC-2012 Russakovsky et al. (2015). These classes are randomly split into 64, 16, and 20 classes for the meta-training, meta-validation, and meta-test sets respectively. Each class contains 600 images of size 84x84. We use the common split of Lee et al. (2019).

Dataset 2: tiered-ImageNet Ren et al. (2018) is a larger subset of ILSVRC-2012 Russakovsky et al. (2015), composed of 608 classes split into meta-training, meta-validation, and meta-testing sets of 351, 97, and 160 classes respectively. All images are of size 84x84.

Dataset 3: CUB-200-2011 Wah et al. (2011) contains 200 classes and 11,788 images in total. Following the evaluation protocol of Hilliard et al. (2018), the dataset is split into 100 base, 50 validation, and 50 novel classes. We use the same splits as Chen et al. (2019) for testing. This dataset serves as the test set for the cross-domain evaluation.

5.1.2 ARCHITECTURE AND TRAINING DETAILS

Baseline Network Architecture. We use the ResNet-12 network architecture following Lee et al. (2019) to train the baseline and backbone classification models. In contrast to Lee et al. (2019), however, we use global average pooling after the last residual block, after which the feature length becomes 640, and the feature layer is followed by a 1-d batchnorm layer without affine parameters.

Method | 5-way 1-shot | 5-way 5-shot
MatchingNets Vinyals et al. (2016) | - | 53.07 ± 0.74
MAML Finn et al. (2017) | - | 51.34 ± 0.72
ProtoNet Snell et al. (2017) | - | 62.02 ± 0.70
Linear Classifier (Chen et al. (2019)) | - | 65.57 ± 0.70
Cosine Classifier (Chen et al. (2019)) | - | 62.04 ± 0.76
Diverse 20 Full Dvornik et al. (2019) | - | 66.17 ± 0.55
Baseline | 46.31 ± 0.43 | 64.15 ± 0.38
Baseline* | 49.26 ± 0.43 | 69.56 ± 0.39
AFD | 50.99 ± 0.43 | 70.64 ± 0.38

Table 2: Domain-difference testing on the CUB dataset using the mini-ImageNet-trained model. Results for MatchingNets, MAML, and ProtoNet are from Chen et al. (2019).

Training hyperparameters. All networks were trained with SGD with Nesterov momentum 0.9 and weight decay 5 x 10^-4. The initial learning rate is 0.1 and is decreased by a factor of 10 every 50 epochs, for a total of 150 epochs. The batch size is 256. Data augmentation was applied for baseline classification following Lee et al. (2019), including horizontal flips, random crops, and color (brightness, contrast, and saturation) jitter.

Fine-Tuning on the Novel Class Support Set. We fine-tune on the novel class support set using Adam with learning rate 5 x 10^-3, back-propagating the gradient from the whole batch. Early stopping is crucial here: we fine-tune for 3 epochs for 1-shot and 5 epochs for 5-shot in the cross-domain case, and 3 epochs for both 1-shot and 5-shot in the in-domain case. The scale vector is initialized to 1.
In our experiments, Baseline refers to the pre-trained feature extractor from the last training epoch, while Baseline* refers to the pre-trained feature extractor selected using the density-based clustering index. We use Baseline* as the feature extractor for all our fine-tuning experiments.

5.2 COMPARING PERFORMANCE ON IN-DOMAIN AND CROSS-DOMAIN CASES

Performance of model selection is consistent. We observe that using the DB-index to select feature extractors yields consistent performance improvements on all three evaluation datasets. This is a good sign for studying feature transferability from the perspective of feature distributions.

AFD shows improvement on in-domain evaluations. As shown in Table 1, AFD improves 5-shot performance by 0.22% and 0.20% for mini-ImageNet and tiered-ImageNet respectively. Notably, the performance of our Baseline* already surpasses that of most prior work; AFD still brings an improvement on top of this already strong feature extractor.

AFD shows superior generalization in cross-domain evaluations. As shown in Table 2, by simply fine-tuning with 5 images in the 1-shot case and 25 images in the 5-shot case, we observe 1.73% and 1.08% performance improvements, based on statistics over 2000 episodes.

5.3 ABLATION STUDIES ON FINE-TUNING

The results of the ablation studies are shown in Table 3.

Models | Components (dot-product / cosine / data-aug) | mini-ImageNet 1-shot | mini-ImageNet 5-shot | tiered-ImageNet 1-shot | tiered-ImageNet 5-shot
Baseline | | 46.31 | 64.15 | 46.52 | 65.59
Baseline* | | 49.26 | 69.56 | 54.67 | 74.94
finetune-weight | ✓ ✓ | 48.60 | 68.64 | 54.25 | 74.87
 | ✓ | 51.49 | 70.17 | 55.00 | 74.56
 | ✓ ✓ | 50.99 | 70.60 | 55.18 | 75.21
AFD | ✓ ✓ | 50.99 | 70.64 | 55.18 | 75.21

Table 3: Ablation studies on CUB. All experiments are 5-way evaluations. The results are average accuracy based on 2000 test episodes; the episodes are the same across all experiments. The 95% confidence intervals are approximately the same (within 0.01 difference among experiments), and their values for the four result columns are 0.43, 0.38, 0.48, and 0.39 respectively.

Effects of Applying Data Augmentation during Fine-tuning: The major obstacle in few-shot learning is the lack of samples, which are essential for improving novel class generalization. Although we fine-tune for only 3 epochs, the effect of data augmentation is still evident. Only in the 1-shot case with the mini-ImageNet-trained feature extractor is performance worse than without data augmentation. A possible reason is that this feature extractor is trained on relatively little data, so the extracted features are unstable to optimize, which becomes severe when adding data augmentation with only 5 training samples. Otherwise, we observe performance improvements of 0.47% for 5-shot with the mini-ImageNet-trained feature extractor, and of 0.18% and 0.65% for 1-shot and 5-shot with the tiered-ImageNet-trained feature extractor. As we only use a basic data augmentation strategy,
further work exploring effective data augmentation for fine-tuning on novel classes is promising.

Effects of Different Feature Extractors: First, using a feature extractor trained on a larger dataset boosts performance considerably in the cross-domain case, which indicates the importance of a good feature embedding. For AFD, the performance improvements on 1-shot are 1.73% and 1.08% for the mini-ImageNet and tiered-ImageNet models; on 5-shot they are 0.51% and 0.27% respectively. This illustrates that AFD is able to quickly adapt features, especially when the quality of the feature embedding is poor. At the same time, a better feature extractor allows a larger improvement from data augmentation in AFD, as discussed above.

Effects of Different Metrics: We compare the results of using different metrics (dot-product, and cosine with scale Wang et al. (2017)) in our non-parametric evaluation for fine-tuning. The performance is almost the same. Different metrics affect how sharp the predicted probabilities can become, and in our case, as illustrated in Section 3, the predicted probability is already around 1; the metrics therefore contribute similarly to adapting the features for novel classes.

The Importance of Fine-tuning Features: We compare AFD with fine-tuning only the novel class weights. For weight fine-tuning, we use the average features of the support set as the weight initialization, with the same hyperparameter settings as above. We observe a performance drop when fine-tuning only the novel class weights: 0.66% and 0.92% for 1-shot and 5-shot with the mini-ImageNet-trained feature extractor, and 0.42% and 0.07% for 1-shot and 5-shot with the tiered-ImageNet-trained feature extractor. In the cross-domain case, novel class features are neither well discriminated nor tightly constrained within a class. Since the features themselves are not optimized, fine-tuning only the novel class weights, which relate linearly to the features, actually degrades performance. This illustrates the importance of adapting the features of the novel classes. With AFD, we obtain consistent and substantial performance improvements: 1.73% and 0.51% for 1-shot and 5-shot with the mini-ImageNet-trained feature extractor, and 1.08% and 0.27% for 1-shot and 5-shot with the tiered-ImageNet-trained feature extractor. This showcases the strength of AFD in the cross-domain case, given the method's simplicity.

6 CONCLUSION

We propose fine-tuning an adaptive feature distribution to improve novel class generalization in few-shot learning. The performance improvements in both in-domain and cross-domain evaluations showcase the superior generalization brought by this simple yet effective method. With the proposed AFD method, we also highlight the importance of further understanding and analyzing the feature distribution of novel classes.<|im_end|> <|im_start|>assistant ### Review Title
Proposes an interesting approach but suffers from clarity issues
### Review Text
Summary
========
This paper proposes a simple approach to few-shot classification which first pre-trains a feature extractor via the classification loss on all base classes, as is usually done. Then, to solve each test task, they propose to learn a task-specific scaling co-efficient for each (normalized) feature of the pre-trained embedding function. This learning is done via a few steps of gradient descent on the support set of the given test task, for the objective of correctly classifying the support examples using a simple readout layer that is set to the prototypes of the different classes. This approach is parameter efficient, as it only requires optimizing the scaling co-efficients in the inner loop.

Pros
====
[+] I really like the idea of isolating a small set of parameters to tune for the purpose of achieving task adaptation. This allows avoiding the expensive fine-tuning of the entire feature extractor, and may be more expressive than simply learning a per-task readout layer.
[+] I also liked the intuitive explanation of the role of the scaling factor by inspecting the gradients of the task-specific loss with respect to the scaling parameters. It’s indeed interesting (though not unexpected) to see that this mechanism can amplify dimensions of the embedding space that are most discriminative for each task at hand. [+] I really like the proposed approach for model-selecting for the pre-trained model. I think this is a valuable contribution and the strong performance of this component alone is very interesting. Cons ==== [-] A major weakness of this paper in my opinion is the lack of clarity. I found parts of the paper hard to follow, both relating to the proposed framework as well as the experimental setup. It would additionally be useful to proof-read the paper and correct typos, grammar errors and incorrect usage of words throughout. Specific clarity issues are outlined below: A. Clarity issues relating to the proposed framework. a) The loss in Equation 2 is only summing over N points, where N is the number of examples per class. I was expecting it to sum over the entire support set (total of K*N examples). Is this a typo or is this meant to represent the loss of the examples of a given class only? If so, that should be indicated. b) In Equations 2 and 3, it would be clearer to rename z_j to z_{ij}, since it is the logit of example i belonging to class j. Similar minor notation issues apply in all following equations. c) In Equations 9 and 10, instead of \nabla s, it would be clearer to write this as the partial derivative of the loss of the particular example i w.r.t s. d) I agree that for 1-shot P_{y_i} == 1 (because the single example is exactly the same as the prototype in that case) but I’m not sure why that quantity is approximately 1 more generally. e) In Equation 11, what is the symbol S between the \exp and \cos? f) In Equations 12 and 13, shouldn’t the normalized features and scale be used instead of the original features F? B. Clarity issues relating to the experiments. a) What exactly are the models referred to as “Baseline” and “Baseline*” in the tables? I understand the difference between them is the model selection method from the pre-training round. But it’s not clear what is the algorithm they use to solve the downstream few-shot tasks. Are they simply learning a readout layer? If so, is that layer initialized from the prototypes as is done for AFD? b) I found the ablation section very hard to follow. First of all, the caption of Table 3 is “Ablation Studies on CUB”, but the results in that Table are on mini-ImageNet and tiered-ImageNet. Is this then a case of cross-domain study where the models were trained on CUB and evaluated on those two datasets? The description of the results of this table (e.g. in the paragraph “The Importance of Fine-tuning Features”) sometimes refers to a “mini-ImageNet trained feature extractor” (or analogously for tiered-ImageNet) and sometimes refers to “cross-domain cases” which makes it very hard to understand how the results in Table 3 are actually obtained. c) In Table 3, rows 4 and 5 don’t have the “Models” entry filled in. Are these additional AFD variants or “finetune-weight” variants? d) There is a paragraph titled “Effects of Different Feature Extractor” that refers to studying the effect of a feature extractor trained on a larger dataset. I couldn’t figure out which results this paragraph is referring to from the table. 
It would be useful to explain which entries of Table 3 these observations are describing. e) In the paragraph titled “The Importance of Fine-tuning Features”, again I was unsure which results some of the descriptions are referring to as the reported percentages in that paragraph don’t all correspond to numbers in Table 3. It would be useful to be clear about which entries in the table each of the findings refers to. [-] Another weakness of the paper is the limited discussion of related work. There have been various attempts at approaches that fine-tune only a cleverly-selected subset of the feature extractor parameters in each test task. Two examples that come to mind are: [1] and [2]. Additionally, this approach is related to FiLM-based models [3, 4] that learn a scaling and shifting transformation for each task (based on its support set). These are all different from the proposed approach in that they are meta-learning models (as opposed to algorithms that are applied at test-time only on top of pre-trained features), but they are otherwise similar in spirit, so they should be discussed and compared against. Further, [6] should also be cited alongside [7] in the context of conducting meta-training with a pre-trained feature extractor. [-] The proposed approach does not actually seem to outperform the baseline on the “in-domain” cases (Table 1) as the increase in performance over the baseline is very small and the confidence intervals overlap. It seems that the cross-domain experiments better showcase the ability of this method (since in that scenario adapting features is more important), so perhaps future experiments should further explore that scenario. [-] Finally, I felt that important baselines were missing, that would reflect alternative ways of fine-tuning subsets of the feature extractor less selectively than the proposed approach (see the section below for some suggestions). Overall ====== Overall, I vote for rejecting this paper due to the weaknesses discussed above. To reiterate, a major drawback in my opinion relates to the clarity of the presentation of this work. Other weak points include the lack of discussion and comparison to relevant literature and baselines and the weak performance of the proposed approach over the reported baseline. Suggestion for additional experiments ================================= The authors show experimentally that learning only a per-task readout layer is not expressive enough compared to their method that modifies the features as well via their proposed scaling mechanism. There are, however, other “baselines” that should be compared against that also modify the features in different ways. Specifically: while learning the readout layer (possibly initialized from the prototypes as is done in the proposed approach), we could also fine-tune the feature extractor. This baseline is used in the literature in some older work, e.g. the “Baseline-finetune” model in [5], as well as in more recent works in more challenging settings [6]. The drawback associated with fine-tuning the entire backbone is the potential overfitting to the support set for low shots, so it would also be useful to try variants that fine-tune the top X layers of the network. These explorations should happen on the features obtained by Baseline* for an apples-to-apples comparison with AFD. Further, perhaps the performance of AFD can be improved by fine-tuning the output layer too (which is initialized from the prototypes) during the inner loop that tunes the scale parameters? 
My understanding is that the output layer is currently fixed to the prototypes and isn’t optimized at all which might be restrictive. Finally, given that the proposed model only appears to significantly outperform the baseline in the cross-domain evaluation scenario (Table 2), a good test-bed for additional experiments with this method might be the Meta-Dataset benchmark [6] that is comprised of diverse datasets and therefore really necessitates a flexible model that can adapt to different distributions. References ========= [1] Fast Context Adaptation via Meta-Learning. Zintgraf et al. ICML 2019. [2] Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace. Lee et al. ICML 2018. [3] Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes. Requeima et al. NeurIPS 2019. [4] Improved Few-shot Visual Classification. Bateni et al. CVPR 2020. [5] Optimization as a Model for Few-shot Learning. Ravi and Larochelle. ICLR 2017. [6] Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples. Triantafillou et al. ICLR 2020. [7] A New Meta-Baseline for Few-shot Learning. Chen et al. 2020. ### Review Rating 4: Ok but not good enough - rejection ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
1uufzxsxfL
AKBC.ws/2020/Conference
2020
Revisiting Evaluation of Knowledge Base Completion Models
["Pouya Pezeshkpour", "Yifan Tian", "Sameer Singh"]
Representing knowledge graphs (KGs) by learning embeddings for entities and relations has led to accurate models for existing KG completion benchmarks. However, due to the open-world assumption of existing KGs, evaluation of KG completion uses ranking metrics and triple classification with negative samples, and is thus unable to directly assess models on the goals of the task: completion. In this paper, we first study the shortcomings of these evaluation metrics. Specifically, we demonstrate that these metrics (1) are unreliable for estimating how calibrated the models are, (2) make strong assumptions that are often violated, and (3) do not sufficiently, and consistently, differentiate embedding methods from each other, or from simpler approaches. To address these issues, we gather a semi-complete KG referred to as YAGO3-TC, using a random subgraph from the test and validation data of YAGO3-10, which enables us to compute accurate triple classification accuracy on this data. Conducting thorough experiments on existing models, we provide new insights and directions for KG completion research. Along with the dataset and the open source implementation of the models, we also provide a leaderboard for knowledge graph completion that consists of a hidden, and growing, test set, available at https://pouyapez.github.io/yago3-tc/.
["Knowledge Graph Completion", "Link prediction", "Calibration", "Triple Classification"]
Automated Knowledge Base Construction (2020) Conference paper

Revisiting Evaluation of Knowledge Base Completion Models

Pouya Pezeshkpour PEZESHKP@UCI.EDU
University of California, Irvine
Yifan Tian YIFANT@UCI.EDU
University of California, Irvine
Sameer Singh SAMEER@UCI.EDU
University of California, Irvine

Abstract

Representing knowledge graphs (KGs) by learning embeddings for entities and relations has led to accurate models for existing KG completion benchmarks. However, due to the open-world assumption of existing KGs, evaluation of KG completion uses ranking metrics and triple classification with negative samples, and is thus unable to directly assess models on the goals of the task: completion. In this paper, we first study the shortcomings of these evaluation metrics. Specifically, we demonstrate that these metrics (1) are unreliable for estimating how calibrated the models are, (2) make strong assumptions that are often violated, and (3) do not sufficiently, and consistently, differentiate embedding methods from each other, or from simpler approaches. To address these issues, we gather a semi-complete KG referred to as YAGO3-TC, using a random subgraph from the test and validation data of YAGO3-10, which enables us to compute accurate triple classification accuracy on this data. Conducting thorough experiments on existing models, we provide new insights and directions for KG completion research. Along with the dataset and the open source implementation of the models, we also provide a leaderboard for knowledge graph completion that consists of a hidden, and growing, test set, available at https://pouyapez.github.io/yago3-tc/.

1. Introduction

Knowledge graphs (KGs) are essential components of a wide range of tasks in scientific and industrial processes [Zhang et al., 2016, Zhu et al., 2018]. Most knowledge graphs, in practice, are often substantially incomplete and contain noise even in the edges they do have, prompting the need for models for knowledge graph completion (KGC), also called link prediction. In recent years, models that have led to accurate link prediction are based primarily on relational embeddings [Bordes et al., 2013a, Yang et al., 2015], where dense vectors are learned for each entity and relation in the KG. By using different scoring functions that capture the uncertainty in each fact [Trouillon et al., 2017, Dettmers et al., 2018, Sun et al., 2019a], such knowledge graph completion models have achieved incredible success on existing benchmarks.

Unfortunately, the lack of complete and accurate KGs is a problem for evaluation as well. Since it is not possible to list all possible true and false facts for a KG of interest, existing evaluation of KGC consists of gathering known true facts, and using: (1) ranking metrics, such as Hits@N and Mean Reciprocal Rank (MRR), to calculate the relative rank of these known true facts against all unknown facts (thus implicitly treated as negative), and (2) classification accuracy of individual facts, by treating random corruptions of a known true fact as negative/false facts. In spite of steady and significant progress on these models, it is not clear whether these metrics correspond to the true performance on link prediction, making it difficult to decide whether they are ready for real-world deployment.
Further, due to the strong assumptions made by these evaluation metrics, the strengths, shortcomings, and reasoning capabilities underlying these link prediction methods are difficult to determine, hindering further progress of the field.

In this paper, we study significant issues with the current evaluation metrics for knowledge graph completion models, in particular highlighting the impact of the assumptions made by these metrics on model performance. We show that the ranking metrics often do not correlate well with the actual performance of the model, make it incredibly challenging to determine whether these models are well-calibrated (an essential property for real-world deployment), and do not correlate well with the reasoning power of the models. For triple classification, upon a detailed examination of several commonly used benchmarks, we show that the metric is heavily sensitive to the choice of negative sampling, and that there is a significant mismatch between accuracy and the ranking metrics.

To address these shortcomings in existing benchmarks, we introduce YAGO3-TC, a high-quality, manually-annotated dense subgraph of the YAGO3-10 KG. Along with the true facts that are already present in the test and validation splits of the existing benchmark, YAGO3-TC also includes related facts involving the same entities that are annotated as true or false via crowdsourcing. These related facts are designed to be somewhat challenging to discriminate, since they are scored highly by recent accurate models, resulting in 28,364 labeled facts, of which 2,976 are positive. Since we ensure the quality of the annotations, classification metrics such as accuracy and precision/recall can be used to appropriately evaluate models of knowledge graph completion.

We also provide a comprehensive analysis of recent KG completion models, given the high-quality annotations in YAGO3-TC, using triple classification metrics. We are able to provide accurate calibration results for completion models, showing that they are significantly overconfident (consistent with existing results for neural networks, but different from other observations for KGC). Further, we observe that there is a significant mismatch between ranking metrics and performance on the completion task (e.g., there is a more than 20% gap between Hits@1 and precision). Most importantly, we show that the progress in performance indicated by ranking metrics does not align with the actual completion task; simple methods achieve performance similar to state-of-the-art models.

2. Background and Notation

In this section, we introduce notation, benchmarks, evaluation procedures, and a brief overview of existing relational embedding approaches to knowledge graph completion. More details on existing benchmarks and implementation details are provided in the Appendix.

Embedding-Based KGC: To represent the facts in KGs, we use triples of subject, relation, and object, $\langle s, r, o \rangle$, where $s, o \in \mathcal{E}$, the set of entities, and $r \in \mathcal{R}$, the set of relations. To model the KG for link prediction, a scoring function $\psi : \mathcal{E} \times \mathcal{R} \times \mathcal{E} \rightarrow \mathbb{R}$ is learned to evaluate whether any given fact is true. In this work, we primarily study DistMult [Yang et al., 2015] due to its simplicity and popularity, and RotatE [Sun et al., 2019a] and Tucker [Balazevic et al., 2019] because of their state-of-the-art performance. Only true facts are included in the training data; training the model thus involves, for each observed triple, sampling L negative ones (the value of L is usually treated as a hyperparameter) by randomly corrupting either the subject or the object of the triple. Using the model's probability of truth $\sigma(\psi(s, r, o))$ for $\langle s, r, o \rangle$, following Yang et al. [2015], Trouillon et al. [2016], the binary cross-entropy loss is defined as:

$\mathcal{L}(G) = -\sum_{(s,r) \in G} \sum_{o} \; y^{s,r}_{o} \log\big(\sigma(\psi(s,r,o))\big) + (1 - y^{s,r}_{o}) \log\big(1 - \sigma(\psi(s,r,o))\big)$   (1)

where $y^{s,r}_{o}$ represents whether the fact is "true" ($y^{s,r}_{o} = 1$ for observed facts, $y^{s,r}_{o} = 0$ otherwise).
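As a concrete reference for this setup, below is a minimal PyTorch sketch of the DistMult scoring function and the loss of Equation 1; the embedding dimension and the flat (subject, relation, object, label) batching are simplifying assumptions of ours, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

class DistMult(torch.nn.Module):
    def __init__(self, n_entities, n_relations, dim=200):
        super().__init__()
        self.E = torch.nn.Embedding(n_entities, dim)
        self.R = torch.nn.Embedding(n_relations, dim)

    def score(self, s, r, o):
        # psi(s, r, o) = <e_s, w_r, e_o> (trilinear dot product)
        return (self.E(s) * self.R(r) * self.E(o)).sum(dim=-1)

def bce_loss(model, s, r, o, y):
    # Eq. 1: binary cross-entropy on sigma(psi(s, r, o)), with y in {0, 1}
    # (y = 1 for observed triples, y = 0 for sampled negatives).
    return F.binary_cross_entropy_with_logits(model.score(s, r, o), y.float())
```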
Only true facts are included in the training data, thus trainingthe model involves, for each observed triple, sampling Lnegative ones (value of Lis usually treatedas a hyperparameter) by randomly corrupting either the subject or the object of the triple. Using themodel’s probability of truth as ( (s;r;o ))forhs;r;oi, following Yang et al. [2015], Trouillon et al.REVISITING KNOWLEDGE BASE COMPLETION[2016], the binary cross-entropy loss is defined as:L(G) =X(s;r)2GXoys;rolog(( (s;r;o ))) + (1ys;ro) log(1( (s;r;o ))): (1)whereys;rorepresents whther the fact is “true” ( ys;ro= 1for observed facts, ys;ro= 0otherwise).Ranking Metrics: To evaluate the performance of the KG completion models, we rank test triplesagainst all possible negative samples, generated by corrupting the subject or object of the target triple.Ranking metrics have been used since existing KGs are open-world and the ground truth label for allnegative and positive samples is not available. In the filtered setting, which we consider in this work,we only treat triples that do not appear in the training, validation, or test set, as the possible negativesamples. To quantify the ranking of target triples, we use standard evaluation metrics such as MeanReciprocal Rank (MRR), which is the average inverse rank of all test triples, and Hits@N, which isthe percentage of test triples whose rank is lower (better) than or equal to N.Triple classification: Triple classification is the task of binary classification on the KGs triples.This task is important because, if appropriately set up, it directly evaluates the capability of KGCmodels in identifying missing links. Specifically, given a target triple hs;r;oi, we want to identifyif this is a positive/true fact or a negative one. For this task, previous approaches learn a specificthresholdrfor each relation, over validation data. In order to create the negative samples in theseapproaches, for both validation and test data, they corrupt subject or object of the target triple with arandom entity form the KG. After learning thresholds r, a triple assumed to be positive if its score ishigher than the threshold for the triple’s relation.3. Issues in Existing KG Completion EvaluationIn this section, we discuss some issues prevalent in current evaluation metrics, and provide empiricalevidence for their shortcomings. First, we observe that the assumptions underlying ranking metricsare often incompatible with the goals of the completion itself. Then, we show that evaluating howwell completion models are calibrated is challenging since the results are incredibly sensitive to thesetup design choices. Finally, we show that the results of the ranking metrics are often inconsistentwith the results of the triple classification evaluation.3.1 Assumptions in Ranking MetricsIn the past few years, we have observed tremendous progress in the performance of KG completionmodels, based on ranking metrics. As these models become increasingly accurate and potentiallyready for real-world deployment, it is now useful to understand the extent to which these rankingmetrics align with the actual goals of the completion task.Let us consider a simple example. Assume we want to validate whether triple hs;r;oiis true ornot. According to the current procedure, our only option is to rank the score of all possible objects(triples of the form hs;r;o0i) and subjects (of the form hs0;r;oi) and compute the rank of our targettriple. 
[Figure 1: Calibration study on different KGs based on three negative sampling procedures. Panels: (a) Random-N on YAGO3-10, (b) Constraint-N on YAGO3-10, (c) Careful-N on YAGO3-10; each panel plots the ratio of positives against the mean score for Tucker, RotatE, and DistMult. We plot reliability diagrams of the fraction of positive triples among all triples vs. the link prediction models' score for a target triple. Being closer to the diagonal means the model is more calibrated.]

Unfortunately, on studying two commonly used KGs, WN18RR and YAGO3-10, we notice that this phenomenon occurs in a huge portion of the data, as a result of the existence of semi-inverse relations.

Semi-Inverse Relations in WN18RR: On conducting a simple statistical analysis of WN18RR, we notice that this KG is not completely free of relations that are inverses of each other (which make the completion task trivial). For more than 90% of the appearances of the three relations _derivationally_related_form, _verb_group, and _similar_to, and for more than 60% of the appearances of _also_see, another triple with the same entities but the opposite direction of the original relation appears in the training data. Together, these relations make up 37% of this KG. Moreover, around one-third of the triples in the test data contain one of these relations; KGC models achieve more than 0.95 MRR on this subset, significantly affecting overall performance.

Semi-Inverse Relations in YAGO3-10: Along similar lines, in YAGO3-10, for 75% of the triples with relation isAffiliatedTo, a triple with relation playsFor appears between the same subject and object (87% for the reverse). These relations make up 63% of the test data, 81% of which have the other relation between the same subject and object in the training data. Note that embedding methods achieve around 0.95 MRR on these triples.

The existence of these semi-inverse relations leads to two important conclusions. First, it indicates that performance based on ranking metrics on these KGs is not trustworthy, since it is very easy to predict these triples. Second, since these relations can accept many objects (and subjects) for the same subject (and object), and embedding methods score all of those objects (and subjects) very highly (since their semi-inverse versions appear in the training data), low values of the ranking metrics are clearly not an accurate assessment of completion on these triples. More specifically, for a target triple with one of the mentioned relations, as the number of true objects or subjects with semi-inverse relations increases, the chance of obtaining a worse rank increases as well.
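The semi-inverse statistics above reduce to a simple lookup; a minimal sketch is below, where triples are (subject, relation, object) tuples and the relation names are the YAGO3-10 example from the text (for WN18RR, rel_a and rel_b would be the same relation).

```python
def semi_inverse_fraction(train, test, rel_a="isAffiliatedTo",
                          rel_b="playsFor"):
    # Fraction of test triples <s, rel_a, o> whose reversed entity pair
    # <o, rel_b, s> appears in the training data.
    train_pairs = {(s, o) for s, r, o in train if r == rel_b}
    targets = [(s, o) for s, r, o in test if r == rel_a]
    hits = sum((o, s) in train_pairs for s, o in targets)
    return hits / max(len(targets), 1)
```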
3.2 Evaluating Calibration of the Models

Calibration is a very important aspect of KG completion that has only recently received attention [Tabacof and Costabello, 2019]. Treating the probability of truth of a fact ($\sigma(\psi(s,r,o))$ for triple $\langle s, r, o \rangle$) as the model's confidence in the triple, we consider a model calibrated if the confidence aligns with the rate of true facts. In other words, if the confidence is equal to 0.5, we expect around 50% of the triples with this confidence to be true.
For a comparison that takes the model complexity into account, weinclude results for models that have the same number of parameters in the appendix.3.3 Simple Models Look AccurateIn this section, we evaluate the reasoning capabilities of current link prediction methods. Morespecifically, we wanted to see how far in performance on ranking metrics we can get to by adoptingvery simple methods and see if ranking metrics can properly differentiate between SOTA modelsand these simple approaches. We first study rule-based methods that only predict ranking for triplesthat have their semi-inverse relations in the training data. Then, introducing a local score that learnssimple neighborhood patterns, we see unexpectedly high performance on ranking metrics, castingdoubt on the capability of ranking metrics to accurately evaluate KGC methods.Rule-based Link Prediction: To see the effect of semi-inverse relations on the performance of linkprediction methods, we provide a very simple rule-based method. For WN18RR and YAGO3-10target triples, we identify all objects for which the target subject appears with a semi-inverse relationin the training data, and vice versa for the subjects. Then we rank these entities based on theirpopularity (their degree in the graph) in the KG. The result of this rule-based method is provided inTable 1. As shown, for both of YAGO3-10 and WN18RR, this method achieves high performance.PEZESHKPOUR , TIAN, & S INGHTable 1: Link Prediction result for FB15k-237, WN18RR and YAGO3-10 KGs. All results generatedusing perspective models’ SOTA hyperparameters.ModelsFB15k-237 WN18RR YAGO3-10MRR Hits@1 # Param MRR Hits@1 # Param MRR Hits@1 # ParamDistMult 0.295 19.8 5.8M 0.428 39.2 8.1M 0.409 31.2 24.6MRotatE 0.331 23.4 29.3M 0.478 43.4 40.9M 0.471 38.2 123.2MTucker 0.342 25.1 11M 0.456 42.8 9.4M 0.468 37.9 63.9MRule-Based - - - 0.338 32.1 - 0.286 24.2 -Local 0.181 12.7 0.3M 0.364 33.4 2k 0.322 25.8 50kLocal Score: We also study an alternate simple model, using just the local structure around thetarget triple. For each target triple, we compute a local score by finding all paths from the subjectto object and score them in the context of the relation of target triple (a simpler version of thislocal score is studied in Toutanova and Chen [2015]). Specifically, we define the local score as:Loc(s;r;o ) =(Pp2P(s;o)Wpr), whereP(s;o)denotes the set of all the paths between sando.To learnWprwe generate negative samples by randomly corrupting the r. A visual representation oflocal score is provided in the appendix. For FB15K-237 that is denser than WN18RR and YAGO3-10,for each relation, we consider the top 5 most frequent paths with length 2 between the subject andthe object of triples with that relation1. For WN18RR and YAGO3-10, since most of the triples donot have paths with length 2 between their entities, we only score simple patterns with length 3 inour model. More specifically, these paths comprise of patterns that have one edge with the samerelation as the target sample (with the same direction). Further, they should have the same relationfor the other two edges but in a different direction. More details and visualization of these patternsare provided in the appendix. The result of link prediction on our benchmarks is provided in Table 1.As it shows, in all three KGs, our local score performs comparably to embedding methods whilehaving a much fewer number of parameters. 
The high performance of both these simple modelsraises questions about the utility of ranking metrics and existing benchmarks.3.4 Problems with Triple Classification with Negative SamplingTo demonstrate that the current approach to evaluating accuracy via triple classification is inaccurate,we create a fact classifier from completion models (that only provide a score) by learning a thresholdfor each relation (following standard practice). The performance of state-of-art embedding modelsover several KGs is provided in Table 2. As it shows, all the models achieve very high accuracyperformance (around 80%) when we choose both validation and test negative samples randomly(Random-N ). The reason behind this high performance is mostly due to the naive way of randomnegative sampling. Moreover, Distmult and RotatE outperform the Tucker performance in all threeKG (although Tucker often achieves higher or very similar performance on ranking metrics).Negative Sampling: There are a few fundamental issues with the current approach to tripleclassification: (1) randomly choosing the negative samples results in very simple classification, whichis not informative for evaluation, and (2) training classifiers (estimating thresholds) based on randomnegative samples makes the results brittle, i.e., choosing slightly more challenging negative samples1. Around 70% the test triples have at least one of these patterns in their neighborhoodREVISITING KNOWLEDGE BASE COMPLETIONTable 2: Triple classification accuracy for random and careful negative sampling.ModelsFB15k-237 WN18RR YAGO3-10Random-N Careful-N Random-N Careful-N Random-N Careful-NDistMult 95.2 47.6 83.3 39.5 94.9 45.5RotatE 94.4 49.1 84.8 42.0 86.1 42.9Tucker 77.6 57.4 72.0 55.8 75.4 45.2Table 3: Triple classification accuracy on ground truth labels. The results are averaged over 5 runs.ModelsKinship NationsAcc F1 Recall Precision Acc F1 Recall PrecisionDistmult 58.8 7.6 34.8 4.3 86 24 25.6 22.9RotatE 10.6 10 97.3 5.3 66.9 27.2 69.6 16.9Tucker 86.2 38.8 83.7 25.2 55.6 18.9 66.6 11Type Constraint 28.9 12.0 94.6 6.4 47.6 22.8 87.0 13.1can reduce the performance dramatically. Instead of random samples, we instead use the challengingCareful-N samples described in Section 3.2, and show results in Table 2. As it shows, these negativesamples dramatically reduce the accuracy. Further, although Tucker performs the worst in the randomsetting, here we see a smaller reduction in accuracy compared to the other two methods (RotatEappears better than DistMult). We suspect the reason behind this smaller reduction in accuracy isthat Tucker distinguishes between positive and hard negative samples better, i.e., on average, assignsconsiderably higher scores to positive samples in comparison to hard negatives.Mismatch between Ranking Metrics and Accuracy: We also evaluate these models on tripleclassification in Table 3 for Kinship and Nations, which have allthe true facts available (all missingfacts are false, and can be enumerated). As the negative samples, we consider the union of top10 negative objects and subjects based on trained RotatE and Tucker models. Although thesemodels achieve around 0:8MRR and 100% Hits@10 performance, they demonstrate much lowerperformance on accuracy metrics showing that these ranking metrics are not trustworthy. 
Moreover,if we classify the negative and positive samples for Kinship and Nations KGs only based on thecompatibility of their subject and object with the relation, based on our defined notion of type ( TypeConstraint ), we see that Type Constraint achieves comparable results with these embedding methods,questioning the credibility of the ranking metrics and performance of these embedding methods.4. YAGO3-TC: A New Benchmark for Evaluating KG CompletionIn this section, we first describe our procedure to gather YAGO3-TC dataset, and then, we explainour plans to continuously update YAGO3-TC as new KG completion models are proposed.4.1 Creating YAGO3-TCTo solve these issues with KG completion evaluation, we gather a dataset that contains true and falsefacts, but is also challenging for current models. Note that in this work, we are not suggesting thatPEZESHKPOUR , TIAN, & S INGHCarles PuyolMaleSpain National TeamBarcelonaCatalonia National TeamReal MadridMan UnitedCarles Puyol plays for?Catalonia National Team ✓✓×Barcelona ✓✓✓Real Madrid ✓××Man United ×××Carles Puyol plays for?Real Madrid ××Initial Annotations (Round 1)Selective Reannotation(Round 2) ✓Catalonia National Team✓Barcelona×Man United?Real MadridCarles PuyolMaleSpain National TeamBarcelonaCatalonia National TeamReal MadridMan United(a) Overview of crowdsourcing process.# Test # ValidTriples 28,364 2,946Positives 2,976 223Negatives 25,388 2,723(b) Data Statistics.Figure 2: YAGO3-TC Dataset. (a) annotation process, and (b) statistics of the resulting datawe should completely replace the ranking metrics for all use cases, but point out their shortcomings,and introduece a benchmark to compute other metrics. Since each embedding model scores triplesdifferently, we use RotatE [Sun et al., 2019a] and Tucker [Balazevic et al., 2019] as our judges foridentifying important triples. More specifically, we first sample 1000 random triples from the testset of YAGO3-10 (the relation distribution histogram of YAGO3-10 test data and our 1000 sampledtriples is very similar, provided in the appendix). To reduce the effect of semi-inverse relations, wedo not consider triples with relation isAffiliatedTo in our samples. Then, applying trained RotatE andTucker on these triples we find the 10 top scoring objects for the query hs;r;?iand 10 top scoringsubjects forh?;r;oi. Excluding repeated triples, we gather 28,364 triples/facts.We need to label these triples as negative (false) and positive (true). Before conducting crowd-sourcing to label these triples, there are few criteria to identify the true label of some of these triples:(1) if the relation of target triple is N-1 (or 1-N) we can treat every object (or subject) as a negativesample, except for the original target object (or subject), and (2) if the object (subject) of a sampledoes not have the same type as the object (subject) of the original test triple, we can treat that sampleas negative. Filtering these identifiable samples, we label the rest through crowd-sourcing.For labeling the samples, we ask the users to search the information on the Wikipedia page of theentities and use the Google search engine. For example, we ask users to choose all the correct teamsfor the query “Carles Puyol plays for?” from provided options. We use Amazon Mechanical Turk asthe framework for gathering the labels and ask three users to answer each query. If more than oneuser agree on an answer, we treat that triple as a positive one. We separately reannotate the objectsthat were picked only by one of the users. 
This time, we ask two users to check these samples, and ifboth of them agree on a correctness of a choice, we treat it as a positive sample. After creating theset of positive labels, we treat everything else as negative samples. An overview of our user studyis depicted in Figure 2a. Further, after randomly choosing 100 samples from the validation data ofYAGO3-10, we use the same procedure for gathering labels for our validation data. Since we intendto use these labels to find the thresholds of the models, we ensure that at least one triple for eachrelation that appears in this set. The dataset statistics are provided in Table 2b. To check the qualityof our labels, we also include 100 true facts from the original test data in our study, and find that 96%of these triples were annotated to be positive, demonstrating the high quality of our labels.REVISITING KNOWLEDGE BASE COMPLETIONModels Acc F1 R P A-ROCDistMult 29.4 20.4 86.6 11.6 0.61RotatE 27.0 19.4 83.7 10.9 0.58Tucker 63.3 22.3 50.3 14.4 0.64DistMult-valid 85.6 19.1 14.3 29.1 0.59RotatE-valid 88.6 18.9 12.8 42.1 0.61Tucker-valid 79.7 22.1 27.5 18.5 0.56Random 80.9 10.9 11.1 10.6 0.51Type Constraint 32.2 20.8 84.8 11.8 0.61Local 61.0 19.0 43.8 12.2 0.6(a) Triple classification accuracy on ground truth labels.The results are averaged over 5 runs.(b) Calibration plot for YAGO3-TC.Figure 3: Triple classification on YAGO3-TC. (a) provides average performance of embeddingmethods and our baselines. (b) Depicts the calibration study of embedding models.4.2 Continuously Updated, Hidden BenchmarkAlthough we are confident about the quality of the true/false annotations, we select the candidatesbased on the output of the two recent models, RotatE and Tucker. As new models will be proposed,they may be able to differentiate between our gathered true and false facts, but may highly rank factsthat are not in our dataset. In order to maintain a benchmark that is useful for evaluating knowledgebase completion in the long run, we propose a web-hosted evaluation platform. The online platform,available at https://pouyapez.github.io/yago3-tc/ , includes a hidden test set, and a leaderboardof model submissions. The test set, initialized with the YAGO3-TC dataset described here, willcontinuously be updated as we receive model predictions from the submissions, thus identifyingmore challenging candidate facts, via the same crowdsourcing pipeline described above.5. Evaluation Using YAGO3-TCIn this section, we investigate the triple classification evaluation using our new dataset. We first studythe performance of several embedding models on the dataset and introduce new simple techniquesto improve current models. Then, comparing the calibration of the triple classification task withthe ranking scenario, we demonstrate this task is better defined. Finally, we study the per-relationbreakdown of accuracy to better assess model performance.5.1 Performance of Existing KGC Models on YAGO3-TCTo provide a better evaluation procedure for KG completion we study accuracy metrics on ourgathered data. The averaged result of SOTA methods in YAGO3-TC over 5 runs is provided in Table3a. As it shows, except for recall, Tucker outperforms RotatE. We note that, in this experiment,accuracy and precision are the metrics we care about the most because it is important to avoidlabeling a triple as positive incorrectly. 
5.2 Calibration

As we showed, a calibration study on the existing evaluation metrics is not a well-defined task. YAGO3-TC provides us with an opportunity to study calibration in a more controlled and representative environment. The evaluation of calibration on YAGO3-TC is depicted in Figure 3b, with the histogram plot of the scores in the appendix. As shown, Tucker produces a more calibrated plot compared to RotatE. Moreover, the previous calibration curves suggested that the models are under-confident, whereas here the calibration reveals that they are overconfident, which is consistent with calibration studies on neural network models [Guo et al., 2017].
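As a sketch of how a reliability diagram such as Figure 3b can be computed (binning model confidences and measuring the fraction of true triples per bin), the helper below is illustrative and assumes arrays of confidence scores and binary labels; it is not the authors' exact plotting code.

```python
import numpy as np

def reliability_curve(confidences, labels, n_bins=10):
    """Return (mean confidence, fraction of positives) per bin; a perfectly
    calibrated model lies on the diagonal of the resulting plot."""
    c = np.asarray(confidences, dtype=float)
    y = np.asarray(labels, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # assign each confidence to one of n_bins equal-width bins on [0, 1]
    bin_ids = np.clip(np.digitize(c, edges[1:-1]), 0, n_bins - 1)
    mean_conf, frac_pos = [], []
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():  # skip empty bins
            mean_conf.append(c[mask].mean())
            frac_pos.append(y[mask].mean())
    return np.array(mean_conf), np.array(frac_pos)
```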
6. Related Work

There is a rich literature on representing knowledge bases using fixed-size embedding vectors. In the past few years, a number of techniques have proposed models that first assign an embedding to each entity and relation, and then use these embeddings to predict facts. These methods, which primarily differ only in the scoring function for the link prediction task, include tensor multiplication [Nickel et al., 2011, Socher et al., 2013, Yang et al., 2015, Balazevic et al., 2019], algebraic operations [Bordes et al., 2011, 2013b, Dasgupta et al., 2018, Sun et al., 2019a], and complex neural models [Dettmers et al., 2018, Nguyen et al., 2018]. Furthermore, a number of studies have examined the incorporation of extra types of evidence to achieve more informative embeddings, with extra modalities consisting of numerical values [Garcia-Duran and Niepert, 2017], images [Oñoro-Rubio et al., 2017], text [Toutanova et al., 2015, 2016, Tu et al., 2017], and their combinations [Pezeshkpour et al., 2018]. Utilizing the analysis in this work, we hope to shed more light on better integrating extra modalities in the vector space to obtain more informed embeddings.

Although these methods provide accurate models for a variety of KG tasks, only a few works try to provide a better understanding of these models, such as by addressing issues in training [Kadlec et al., 2017, Jain et al., 2020, Ruffinelli et al., 2020], investigating particular triples in the data [Akrami et al., 2020], studying the sparsity and unreliability of KGs [Pujara et al., 2017], analyzing interpretability in the embedding space [Sharma et al., 2018, Pezeshkpour et al., 2019, Allen et al., 2019], and identifying existing issues in KG completion models [Sun et al., 2019b]. Although Tabacof and Costabello [2019] also study the calibration of link prediction models, there are several differences from our study of calibration: 1) we show the effect of different negative sampling procedures on the calibration of link prediction methods, and 2) we further provide a well-defined and explicit environment for calibration study using our proposed YAGO3-TC. Moreover, Safavi et al. [2020] study the utility of KG embedding methods in real-world completion tasks by proposing to calibrate these embedding models to output reliable confidence estimates for predicted triples. It is worth mentioning that developing more appropriate and challenging datasets as a way to address the shortcomings of existing benchmarks has been used in other machine learning tasks, such as visual reasoning [Johnson et al., 2017, Kottur et al., 2019], semantic parsing [Nangia and Bowman, 2018], and textual entailment [Zellers et al., 2018], amongst others.

7. Conclusion

In this work, we set out to investigate whether ranking metrics are appropriate measures for evaluating link prediction models. Upon studying the shortcomings and strengths of the currently adopted procedure, we first show existing issues with ranking metrics: they do not evaluate completion, are difficult to use for calibration, and are not able to consistently differentiate between different models. Facing these issues, after redefining the triple classification task, we gather a new dataset, YAGO3-TC, consisting of a dense subgraph annotated with both true and false facts. Exploring several SOTA embedding models on this dataset, we further provide insights and directions for future work. We hope that this research and dataset will bridge the gap toward better adoption of link prediction models in real-world scenarios. The datasets, a leaderboard with a continuously updated benchmark, and the open-source implementation of the models are available at https://pouyapez.github.io/yago3-tc/. We hope this annotation methodology is used for existing, and future, evaluation benchmarks in KG completion.

Acknowledgements

We would like to thank Matt Gardner and the anonymous reviewers for their feedback. This work is supported in part by the DARPA MCS program under Contract No. N660011924033 with the United States Office of Naval Research, and in part by NSF award #IIS-1817183. The views expressed are those of the authors and do not reflect the official policy or position of the funding agencies.

References

Farahnaz Akrami, Mohammed Samiul Saeef, Qingheng Zhang, Wei Hu, and Chengkai Li. Realistic re-evaluation of knowledge graph completion methods: An experimental study. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, pages 1995–2010, 2020.

Carl Allen, Ivana Balazevic, and Timothy M Hospedales. On understanding knowledge graph representation. arXiv preprint arXiv:1909.11611, 2019.

Ivana Balazevic, Carl Allen, and Timothy Hospedales. Tucker: Tensor factorization for knowledge graph completion. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5188–5197, 2019.

Antoine Bordes, Jason Weston, Ronan Collobert, Yoshua Bengio, et al. Learning structured embeddings of knowledge bases. In AAAI, 2011.

Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data.
In Neural Information Processing Systems (NIPS), 2013a.

Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, pages 2787–2795, 2013b.

Shib Sankar Dasgupta, Swayambhu Nath Ray, and Partha Talukdar. Hyte: Hyperplane-based temporally aware knowledge graph embedding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2001–2011, 2018.

Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. Convolutional 2d knowledge graph embeddings. Proceedings of the 32nd Conference on Artificial Intelligence (AAAI), 2018.

Alberto Garcia-Duran and Mathias Niepert. Kblrn: End-to-end learning of knowledge base representations with latent, relational, and numerical features. arXiv preprint arXiv:1709.04676, 2017.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pages 1321–1330. JMLR.org, 2017.

Prachi Jain, Sushant Rathi, Soumen Chakrabarti, et al. Knowledge base completion: Baseline strikes back (again). arXiv preprint arXiv:2005.00804, 2020.

Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2901–2910, 2017.

Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. Knowledge base completion: Baselines strike back. ACL 2017, page 69, 2017.

Satwik Kottur, José MF Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach. Clevr-dialog: A diagnostic dataset for multi-round reasoning in visual dialog. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 582–595, 2019.

Farzaneh Mahdisoltani, Joanna Biega, and Fabian M Suchanek. Yago3: A knowledge base from multilingual wikipedias. 2013.

Nikita Nangia and Samuel R Bowman. Listops: A diagnostic dataset for latent tree learning. NAACL HLT 2018, page 92, 2018.

Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. A novel embedding model for knowledge base completion based on convolutional neural network. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 327–333, 2018.

Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 809–816, 2011.

Daniel Oñoro-Rubio, Mathias Niepert, Alberto García-Durán, Roberto González-Sánchez, and Roberto J López-Sastre. Representation learning for visual-relational knowledge graphs. arXiv preprint arXiv:1709.02314, 2017.

Pouya Pezeshkpour, Liyan Chen, and Sameer Singh. Embedding multimodal relational data for knowledge base completion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3208–3218, 2018.

Pouya Pezeshkpour, Yifan Tian, and Sameer Singh. Investigating robustness and interpretability of link prediction via adversarial modifications.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3336–3347, 2019.

Jay Pujara, Eriq Augustine, and Lise Getoor. Sparsity and noise: Where knowledge graph embeddings fall short. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1751–1756, 2017.

Daniel Ruffinelli, Samuel Broscheit, and Rainer Gemulla. You can teach an old dog new tricks! On training knowledge graph embeddings. In International Conference on Learning Representations, 2020.

Tara Safavi, Danai Koutra, and Edgar Meij. Improving the utility of knowledge graph embeddings with calibration. arXiv preprint arXiv:2004.01168, 2020.

Aditya Sharma, Partha Talukdar, et al. Towards understanding the geometry of knowledge graph embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 122–131, 2018.

Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems, pages 926–934, 2013.

Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. Rotate: Knowledge graph embedding by relational rotation in complex space. arXiv preprint arXiv:1902.10197, 2019a.

Zhiqing Sun, Shikhar Vashishth, Soumya Sanyal, Partha Talukdar, and Yiming Yang. A re-evaluation of knowledge graph completion methods. arXiv preprint arXiv:1911.03903, 2019b.

Pedro Tabacof and Luca Costabello. Probability calibration for knowledge graph embedding models. arXiv preprint arXiv:1912.10000, 2019.

Kristina Toutanova and Danqi Chen. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57–66, 2015.

Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. Representing text for joint embedding of text and knowledge bases. In EMNLP, volume 15, pages 1499–1509, 2015.

Kristina Toutanova, Victoria Lin, Wen-tau Yih, Hoifung Poon, and Chris Quirk. Compositional learning of embeddings for relation paths in knowledge base and text. In ACL (1), 2016.

Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. Complex embeddings for simple link prediction. In International Conference on Machine Learning, pages 2071–2080, 2016.

Théo Trouillon, Christopher R Dance, Éric Gaussier, Johannes Welbl, Sebastian Riedel, and Guillaume Bouchard. Knowledge graph completion via complex tensor factorization. The Journal of Machine Learning Research, 18(1):4735–4772, 2017.

Cunchao Tu, Han Liu, Zhiyuan Liu, and Maosong Sun. Cane: Context-aware network embedding for relation modeling. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1722–1731, 2017.

Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. In ICLR, 2015.

Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. Swag: A large-scale adversarial dataset for grounded commonsense inference. In EMNLP 2018, 2018.

Fuzheng Zhang, Nicholas Jing Yuan, Defu Lian, Xing Xie, and Wei-Ying Ma. Collaborative knowledge base embedding for recommender systems.
In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 353–362. ACM, 2016.

Yongjun Zhu, Olivier Elemento, Jyotishman Pathak, and Fei Wang. Drug knowledge bases and their applications in biomedical informatics research. Briefings in Bioinformatics, 2018.

Appendix A. Scoring Functions and Implementation Details

Here we first describe the different scoring functions adopted in this work and then elaborate on the implementation details.

Scoring Functions: In DistMult, $\psi(s, r, o) = \mathbf{e}_s^{\top} R_r \mathbf{e}_o$, where $\mathbf{e}_s, \mathbf{e}_o \in \mathbb{R}^d$ are the embeddings of the subject and object, and $R_r \in \mathbb{R}^{d \times d}$ is a diagonal matrix representing the relation $r$. Moreover, the RotatE scoring function is defined as $\psi(s, r, o) = -\lVert \mathbf{e}_s \circ R_r - \mathbf{e}_o \rVert^2$, where $\mathbf{e}_s, R_r, \mathbf{e}_o \in \mathbb{C}^d$ and $\circ$ denotes the Hadamard product. In Tucker, the score of the triple ⟨s, r, o⟩ is defined as $\psi(s, r, o) = \mathcal{W} \times_1 \mathbf{e}_s \times_2 \mathbf{r}_r \times_3 \mathbf{e}_o$, where $\mathbf{e}_s, \mathbf{e}_o \in \mathbb{R}^{d_e}$, $\mathbf{r}_r \in \mathbb{R}^{d_r}$, $\mathcal{W} \in \mathbb{R}^{d_e \times d_r \times d_e}$, and $\times_i$ denotes the tensor product along the $i$-th mode.
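To make the three scoring functions concrete, here is a minimal NumPy sketch of the formulas above. This is a simplified reading, not the official implementations; in particular, the RotatE score is written as a negative squared distance and its relation vector is assumed to have unit-modulus entries.

```python
import numpy as np

def distmult_score(e_s, r_diag, e_o):
    """DistMult: e_s^T diag(r) e_o with real d-dimensional embeddings."""
    return float(np.sum(e_s * r_diag * e_o))

def rotate_score(e_s, r, e_o):
    """RotatE: -||e_s o r - e_o||^2 with complex embeddings, where o is the
    Hadamard product and each |r_i| = 1, so r rotates e_s in the complex plane."""
    return float(-np.sum(np.abs(e_s * r - e_o) ** 2))

def tucker_score(W, e_s, r, e_o):
    """Tucker: contract the core tensor W (d_e x d_r x d_e) with e_s, r, e_o
    along modes 1, 2, and 3."""
    return float(np.einsum('ijk,i,j,k->', W, e_s, r, e_o))
```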
Implementation Details: We use the same loss and optimization for training, i.e., AdaGrad and the binary cross-entropy loss. We adopt the reported hyperparameters from previous works to reproduce their performance. To investigate the link prediction task, we study the commonly-used evaluation metrics for this task: mean reciprocal rank (MRR) and Hits@N. As our embedding methods, we consider DistMult [Yang et al., 2015] because of its simplicity and high performance, and RotatE [Sun et al., 2019a] and Tucker [Balazevic et al., 2019] because of their state-of-the-art performance. Further, we use the validation data to tune the hyperparameters, with a grid search over settings such as the regularization parameter. To evaluate our method, we conduct link prediction experiments on two small KGs, Kinship and Nations, and three more realistic KGs, FB15k-237 [Toutanova et al., 2015], WN18-RR [Dettmers et al., 2018], and YAGO3-10 [Mahdisoltani et al., 2013]. A statistical analysis of our benchmarks is provided in Table 4.

Appendix B. Entity Types

Definition B.1. In this work, we define a generic notion of type for entities. We consider two entities to have the same type if they appear in the training data with relations that themselves have appeared several times with the same objects (subjects). More specifically, for a target triple ⟨s, r, o⟩, to find all the entities with the same type as s, we first find all the relations that, some number of times, appear with the same subject entities as the relation r. Then we consider the union of all entities that appear as the subject for those relations in the training data as the set of same-type entities for s. Throughout the paper, we use this notion of type to identify the type of each entity.

Table 4: Data statistics of the benchmarks.

                 # Rel    # Ent     # Training   # Test   # Valid
    WN18RR          18    40,768        86,835    3,134     3,034
    FB15k-237      237    14,541       272,115   20,466    17,535
    YAGO3-10        37   123,170     1,079,040    5,000     5,000
    Nations         56        14         1,592      200       200
    Kinship         26       104         8,544    1,074     1,068

Figure 4: The score of each triple includes a local score, which captures the paths between the subject and object entities of the target triple. (a) the KG, with the target prediction; (b) the local score, Loc.

Appendix C. Local Score

In this section, we analyze the scoring function and the simple patterns that we incorporate into our model. A simple representation of our local model is depicted in Figure 4. Moreover, the simple patterns of length 3 that we consider for WN18RR and YAGO3-10 are depicted in Figure 5. The reason for choosing these patterns is the fact that they are very easy to learn. To learn these patterns, a translation-based embedding method such as RotatE just needs to learn that if a path contains two edges with the same relation but in the reverse direction, these edges cancel each other out; this is a direct result of the definition of the translation-based scoring function. For a multiplicative embedding such as DistMult, if we assume that $|\mathbf{e}_o| = |\mathbf{e}_s| = |\mathbf{e}_s R_r| = 1$, the scoring function can be considered a translation-based embedding by using the angle between vectors as the metric of similarity instead of the Euclidean distance.

Figure 5: Simple patterns of length 3 which we incorporate to represent WN18RR and YAGO3-10; panels (a)-(d) show Patterns 1-4 over entities 1-4 and relations 1-2.
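Below is a minimal sketch of the Local baseline's score, Loc(s, r, o) = σ(Σ_{p∈P(s,o)} W_{pr}), described in Section 3.3; the path-extraction helper and index mappings are hypothetical assumptions, not released code.

```python
import numpy as np

def local_score(s, r, o, extract_patterns, W, pattern_index, relation_index):
    """Loc(s, r, o): sigmoid of the summed learned weights W[p, r] over the
    path patterns p (e.g., the length-2 or length-3 patterns of Figure 5)
    connecting s and o in the training graph. `extract_patterns(s, o)` is a
    hypothetical helper returning the pattern ids found between s and o."""
    total = sum(W[pattern_index[p], relation_index[r]]
                for p in extract_patterns(s, o))
    return 1.0 / (1.0 + np.exp(-total))  # sigmoid
```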
Appendix D. Calibration Study

The calibration plots for WN18RR and FB15k-237 over our three defined negative sampling procedures are depicted in Figure 6. The histogram plots of the score distributions for WN18RR, FB15k-237, and YAGO3-10, using DistMult, Tucker, and RotatE as the link prediction models and adopting the studied negative sampling procedures, are depicted in Figure 7. Moreover, the histogram plot of the score distribution for YAGO3-TC is depicted in Figure 8.

Figure 6: Calibration study on different KGs based on the three negative sampling procedures; panels (a)-(c) show Random-N, Constraint-N, and Careful-N on FB15k-237, and panels (d)-(f) the same on WN18RR (axes: mean score vs. ratio of positives, for Tucker, RotatE, and DistMult).

Appendix E. Number of Parameters and Calibration

In this section, we reproduce the calibration plots by fixing the number of parameters across the different models. We consider DistMult's number of parameters with a hidden dimension of 200 as our benchmark. The MRR performance of the different models with the same number of parameters is provided in Table 5. Moreover, the calibration plots using these models are depicted in Figure 9. As they show, the results appear very similar to the previously reported ones. The reason behind this similar behavior is the fact that the link prediction models' performance tends to saturate as the hidden dimension increases.

Figure 7: Histograms of score distributions on different KGs based on the three negative sampling procedures; panels (a)-(c) show Random-N, Constraint-N, and Careful-N on FB15k-237, (d)-(f) on WN18RR, and (g)-(i) on YAGO3-10 (axes: mean score vs. count, for Tucker, RotatE, and DistMult).

Table 5: Link prediction results for the FB15k-237, WN18RR, and YAGO3-10 KGs. All results are generated by restricting the number of parameters to equal DistMult's with dimension 200.

                FB15k-237        WN18RR           YAGO3-10
    Models      MRR    Hits@1    MRR    Hits@1    MRR    Hits@1
    DistMult    0.279  17.9      0.390  36.4      0.423  33.8
    RotatE      0.300  20.9      0.434  40.7      0.459  36.5
    Tucker      0.339  25.0      0.423  40.4      0.417  33.4

Appendix F. YAGO3-TC Relation Distribution

The relation distributions of the YAGO3-10 test data and of our 1000 randomly sampled triples are depicted in Figure 10. As it shows, except for the relation affiliatedTo (relation 16), which we did not consider in our sampling, the relations demonstrate similar distributions.

Figure 8: Histogram plot of calibration on YAGO3-TC.

Table 6: Per-relation breakdown.

                    DistMult                RotatE                  Tucker
    Relation      Acc   F1    R     P     Acc   F1    R     P     Acc   F1    R     P
    playsFor      25.9  23.2  85.6  13.4  20.6  22.8  89.8  13.0  73.5  29.1  41.6  22.4
    isLocatedIn   35.4  23.2  83.8  13.5  21.7  20.3  85.6  11.5  45.0  23.7  73.4  14.1
    wasBornIn     22.9   5.5  75.3   2.8  15.3   5.6  84.3   2.9  62.4   3.6  23.4   1.9
    hasGender     78.4  32.4  92.6  19.7  94.7  45.3  38.9  54.4  97.9  82.2  85.2  79.4

Appendix G. Per-Relation Breakdown

We perform a per-relation breakdown analysis on the YAGO3-TC dataset to gain a deeper understanding of how the models' performance is distributed across different relations. This kind of analysis can help us identify the shortcomings and strengths of our embedding methods. Table 6 compares RotatE and Tucker on the four most frequent relations. As shown, RotatE outperforms Tucker in recall except for the relation hasGender, and loses on the other metrics except for F1 and precision on the relation wasBornIn. The relations playsFor and isLocatedIn show similar performance over all metrics for RotatE (and almost for Tucker), demonstrating that these models learn similar patterns for these relations. Moreover, both models perform very poorly on the relation wasBornIn, suggesting the difficulty of predicting this type of relation. In contrast, both models predict the relation hasGender with much more confidence, emphasizing the simplicity of predicting this relation.
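A small sketch of how such a per-relation breakdown can be computed from binary predictions; the inputs and names are illustrative, not the authors' exact analysis code.

```python
from collections import defaultdict

def per_relation_breakdown(preds, labels, relations):
    """Group predictions by relation and report Acc/F1/Recall/Precision
    per relation, as in Table 6."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0, "tn": 0})
    for p, y, r in zip(preds, labels, relations):
        key = ("tp" if y else "fp") if p else ("fn" if y else "tn")
        counts[r][key] += 1
    report = {}
    for r, c in counts.items():
        n = sum(c.values())
        prec = c["tp"] / max(c["tp"] + c["fp"], 1)
        rec = c["tp"] / max(c["tp"] + c["fn"], 1)
        f1 = 2 * prec * rec / max(prec + rec, 1e-12)
        report[r] = {"acc": (c["tp"] + c["tn"]) / n,
                     "f1": f1, "recall": rec, "precision": prec}
    return report
```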
Figure 9: Calibration study on different KGs based on the three negative sampling procedures, with the number of parameters fixed across models; panels (a)-(c) show Random-N, Constraint-N, and Careful-N on FB15k-237, (d)-(f) on WN18RR, and (g)-(i) on YAGO3-10 (axes: mean score vs. ratio of positives, for Tucker, RotatE, and DistMult).

Figure 10: Distribution of relations in YAGO3-10 and in our 1000 randomly sampled triples: (a) YAGO3-10 test data; (b) randomly sampled data. Except for the relation affiliatedTo (relation 16), which we did not consider in our sampling, the relations demonstrate similar distributions.
18RI3Qm8S
A good analysis on popular KB-completion datasets plus a carefully labeled triple classification dataset
7: Good paper, accept
This paper first analyzes several popular KB-completion datasets and their evaluation methods. Several issues have been highlighted and discussed, such as the assumptions made in the ranking metrics, skewed distributions on semi-inverse relations (in WN18RR & YAGO3-10), and the fact that confidence scores from popular methods are not calibrated. In addition, the authors also suggest that some simple baselines are actually quite robust. Based on their findings, the authors create a binary triple classification dataset. Effectively, every triple in their dataset is examined by multiple Turkers to ensure the label quality and also to avoid the potential error due to the "closed-world" assumption behind some existing datasets.

General comments: I'm happy to see that the authors revisit the problem of how KB-completion is evaluated. Although the various potential issues of existing datasets and/or evaluation metrics are not necessarily secrets to KB-completion researchers, it is still good to identify and discuss them. While I agree with most of the analysis and findings, I have to argue that the reason behind those issues is often that the use case was not carefully discussed and defined first. As a result, it is very easy to find special biases or skewed distributions of some triples, which may be exploited by different models. The proposed YAGO3-TC dataset, in my opinion, is one step towards the right direction. Setting it up as a simple binary classification problem of whether a triple is correct avoids the implicit and incorrect "closed-world" assumption, and thus ensures the label correctness. The task is more like fact-checking or simple question answering. However, the potential issue of this dataset is the distribution of the triples. Because it is somewhat selected by two existing methods, it could be sub-optimal compared to, say, triples generated by some human users with a specific scenario in mind.

Detailed comments:
1. It is a common and well-known issue that the probabilities or confidence scores of ML models are not calibrated. It is not surprising to see that this problem also exists in KB-completion models. However, given that dev sets are available, why didn't the authors apply existing calibration methods (e.g., those mentioned in Guo et al., ICML-17) to the output of the existing models?
2. Similarly, the type information can be used in conjunction with the existing models, even as a post-processing step (e.g., see [1]). The performance of existing models may be improved substantially.
3. For imbalanced class distributions, the "accuracy" metric is not very meaningful; Precision/Recall/F1 are better. Another alternative is ROC analysis (false-positive rate vs. true-positive rate) if the task can be cast as an anomaly detection problem. Again, the choice of evaluation metrics depends on the underlying use-case scenario.
4. Probably too many details and discussions are put in the Appendix.

[1] Chang et al., Typed Tensor Decomposition of Knowledge Bases for Relation Extraction. EMNLP-2014.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Revisiting Evaluation of Knowledge Base Completion Models ### Paper Abstract Representing knowledge graphs (KGs) by learning embeddings for entities and relations has led to accurate models for existing KG completion benchmarks. However, due to the open-world assumption of existing KGs, evaluation of KG completion uses ranking metrics and triple classification with negative samples, and is thus unable to directly assess models on the goals of the task: completion. In this paper, we first study the shortcomings of these evaluation metrics. Specifically, we demonstrate that these metrics (1) are unreliable for estimating how calibrated the models are, (2) make strong assumptions that are often violated, and 3) do not sufficiently, and consistently, differentiate embedding methods from each other, or from simpler approaches. To address these issues, we gather a semi-complete KG referred as YAGO3-TC, using a random subgraph from the test and validation data of YAGO3-10, which enables us to compute accurate triple classification accuracy on this data. Conducting thorough experiments on existing models, we provide new insights and directions for the KG completion research. Along with the dataset and the open source implementation of the models, we also provide a leaderboard for knowledge graph completion that consists of a hidden, and growing, test set, available at https://pouyapez.github.io/yago3-tc/. ### Paper Keywords ["Knowledge Graph Completion", "Link prediction", "Calibration", "Triple Classification"] ### Paper Content Automated Knowledge Base Construction (2020) Conference paperRevisiting Evaluation of Knowledge Base Completion ModelsPouya Pezeshkpour PEZESHKP @UCI.EDUUniversity of California, IrvineYifan Tian YIFANT @UCI.EDUUniversity of California, IrvineSameer Singh SAMEER @UCI.EDUUniversity of California, IrvineAbstractRepresenting knowledge graphs (KGs) by learning embeddings for entities and relations hasled to accurate models for existing KG completion benchmarks. However, due to the open-worldassumption of existing KGs, evaluation of KG completion uses ranking metrics and triple classifi-cation with negative samples, and is thus unable to directly assess models on the goals of the task:completion . In this paper, we first study the shortcomings of these evaluation metrics. Specifically,we demonstrate that these metrics (1) are unreliable for estimating how calibrated the models are,(2) make strong assumptions that are often violated, and 3) do not sufficiently, and consistently,differentiate embedding methods from each other, or from simpler approaches. To address theseissues, we gather a semi-complete KG referred as YAGO3-TC, using a random subgraph from thetest and validation data of YAGO3-10, which enables us to compute accurate triple classificationaccuracy on this data. Conducting thorough experiments on existing models, we provide newinsights and directions for the KG completion research. Along with the dataset and the open sourceimplementation of the models, we also provide a leaderboard for knowledge graph completion thatconsists of a hidden, and growing, test set, available at https://pouyapez.github.io/yago3-tc/ .1. IntroductionKnowledge graphs (KGs) are essential components of a wide range of tasks in scientific and industrialprocesses [Zhang et al., 2016, Zhu et al., 2018]. 
Most knowledge graphs, in practice, are oftensubstantially incomplete and contain noise even in the edges they do have, prompting the need formodels for knowledge graph completion (KGC), also called link prediction. In recent years, modelsthat have led to accurate link prediction are based primarily on relational embeddings [Bordes et al.,2013a, Yang et al., 2015], where dense vectors are learned for each entity and relation in the KG.By using different scoring functions that capture the uncertainty in each fact [Trouillon et al., 2017,Dettmers et al., 2018, Sun et al., 2019a], such knowledge graph completion models have achievedincredible success on existing benchmarks.Unfortunately, the lack of a complete and accurate KGs is a problem for evaluation as well. Sinceit is not possible to list all possible true and false facts for a KG of interest, existing evaluation ofKGC consists of gathering known truefacts, and using: (1) ranking metrics , such as Hits@N andMean Reciprocal Rank (MRR), to calculate the relative rank of these known true facts against allunknown facts (thus implicitly treated as negative), and (2) classification accuracy of individualfacts, by treating random corruptions of a known true fact as negative/false facts. In spite of steadyand significant progress on these models, it is not clear whether these metrics correspond to the trueperformance on link prediction, making it difficult to decide whether they are ready for real-worldPEZESHKPOUR , TIAN, & S INGHdeployment. Further, due to the strong assumptions made by these evaluation metrics, the strengths,shortcomings, and reasoning capabilities underlying these link prediction methods is difficult todetermine, hindering further progress of the field.In this paper, we study significant issues with the current evaluation metrics for knowledge graphcompletion models, in particular, highlighting the impact of the assumptions made by these metricson model performance. We show that the ranking metrics often do not correlate well with the actualperformance of the model, make it incredibly challenging to determine whether these models arewell-calibrated or not (an essential property for real-world deployment), and do not correlate wellwith the reasoning power of the models. For triple classification, upon a detailed examination ofseveral commonly used benchmarks, we show that the metric is heavily sensitive to the choice ofnegative sampling, and that there is a significant mismatch between accuracy and the ranking metrics.To address these shortcomings in existing benchmarks, we introduce YAGO3-TC , a high-quality,manually-annotated dense sub-graph of the YAGO3-10 KG. Along with the true facts that are alreadypresent in test and validation splits of the existing benchmark, YAGO3-TC also includes related factsinvolving the same entities that are annotated to be true or false via crowdsourcing. These relatedfacts are designed to be somewhat challenging to discriminate since they are high-scoring by recentaccurate models, resulting in 28,364 labeled facts out of which 2,976 are positive. Since we ensurethe quality of the annotations, classification metrics such as accuracy, precision/recall, etc. can beused to appropriately evaluate models of knowledge graph completion.We also provide a comprehensive analysis of recent KG completion models, given the high-quality annotations in YAGO3-TC, using triple classification metrics. 
We are able to provide accuratecalibration results for completion models, showing that they are significantly overconfident (consistentwith existing results for neural networks, but different from other observations for KGC). Further,we observe that there is a significant mismatch between ranking metrics and performance on thecompletion task (e.g., there is more than 20% gap between Hits@1 and Precision). Most importantly,we show that the progress in performance indicated by ranking metrics does not align with actualcompletion task; simple methods achieve similar performance to state-of-art models.2. Background and NotationIn this section, we introduce notations, benchmarks, evaluation procedures, and a brief overview ofexisting relational embedding approaches to knowledge graph completion. More details on existingbenchmarks and implementation details are provided in the Appendix.Embedding Based KGC: To represent the facts in the KGs, we use triples of subject, relation,and object,hs;r;oi, wheres;o2, the set of entities, and r2R, the set of relations. To modelthe KG for link prediction, a scoring function :R!Ris learned to evaluate whetherany given fact is true. In this work, we will primarily study DistMult [Yang et al., 2015] due to itssimplicity and popularity, and RotatE [Sun et al., 2019a] and Tucker [Balazevic et al., 2019] becauseof their state-of-the-art performance. Only true facts are included in the training data, thus trainingthe model involves, for each observed triple, sampling Lnegative ones (value of Lis usually treatedas a hyperparameter) by randomly corrupting either the subject or the object of the triple. Using themodel’s probability of truth as ( (s;r;o ))forhs;r;oi, following Yang et al. [2015], Trouillon et al.REVISITING KNOWLEDGE BASE COMPLETION[2016], the binary cross-entropy loss is defined as:L(G) =X(s;r)2GXoys;rolog(( (s;r;o ))) + (1ys;ro) log(1( (s;r;o ))): (1)whereys;rorepresents whther the fact is “true” ( ys;ro= 1for observed facts, ys;ro= 0otherwise).Ranking Metrics: To evaluate the performance of the KG completion models, we rank test triplesagainst all possible negative samples, generated by corrupting the subject or object of the target triple.Ranking metrics have been used since existing KGs are open-world and the ground truth label for allnegative and positive samples is not available. In the filtered setting, which we consider in this work,we only treat triples that do not appear in the training, validation, or test set, as the possible negativesamples. To quantify the ranking of target triples, we use standard evaluation metrics such as MeanReciprocal Rank (MRR), which is the average inverse rank of all test triples, and Hits@N, which isthe percentage of test triples whose rank is lower (better) than or equal to N.Triple classification: Triple classification is the task of binary classification on the KGs triples.This task is important because, if appropriately set up, it directly evaluates the capability of KGCmodels in identifying missing links. Specifically, given a target triple hs;r;oi, we want to identifyif this is a positive/true fact or a negative one. For this task, previous approaches learn a specificthresholdrfor each relation, over validation data. In order to create the negative samples in theseapproaches, for both validation and test data, they corrupt subject or object of the target triple with arandom entity form the KG. 
After learning thresholds r, a triple assumed to be positive if its score ishigher than the threshold for the triple’s relation.3. Issues in Existing KG Completion EvaluationIn this section, we discuss some issues prevalent in current evaluation metrics, and provide empiricalevidence for their shortcomings. First, we observe that the assumptions underlying ranking metricsare often incompatible with the goals of the completion itself. Then, we show that evaluating howwell completion models are calibrated is challenging since the results are incredibly sensitive to thesetup design choices. Finally, we show that the results of the ranking metrics are often inconsistentwith the results of the triple classification evaluation.3.1 Assumptions in Ranking MetricsIn the past few years, we have observed tremendous progress in the performance of KG completionmodels, based on ranking metrics. As these models become increasingly accurate and potentiallyready for real-world deployment, it is now useful to understand the extent to which these rankingmetrics align with the actual goals of the completion task.Let us consider a simple example. Assume we want to validate whether triple hs;r;oiis true ornot. According to the current procedure, our only option is to rank the score of all possible objects(triples of the form hs;r;o0i) and subjects (of the form hs0;r;oi) and compute the rank of our targettriple. In this case, the ranking metrics such as Hits@N can only tell us whether this triple appears inthe top N possible triples. If the relation of our target triple can accept only one true object, i.e. therelation is N-1, this ranking is meaningful, however, if multiple objects can be true for our targetsubject and relation, this ranking is incomplete since does not capture other triples that are rankedhigher than the target fact are themselves true or not. A similar observation holds for relations forPEZESHKPOUR , TIAN, & S INGH0.0 0.2 0.4 0.6 0.8 1.0Mean Score0.00.20.40.60.81.0Ratio of positivesTuckerRotatEDistMult(a) Random-N on YAGO3-100.0 0.2 0.4 0.6 0.8 1.0Mean Score0.00.20.40.60.81.0Ratio of positivesTuckerRotatEDistMult (b) Constraint-N on YAGO3-100.0 0.2 0.4 0.6 0.8 1.0Mean Score0.00.20.40.60.81.0Ratio of positivesTuckerRotatEDistMult (c) Careful-N on YAGO3-10Figure 1: Calibration study on different KGs based on three negative sampling procedures. We plotreliability diagrams of the fraction of positive triples to all the triples vs the link prediction models’score for a target triple. Being closer to the diagonal means the model is more calibrated.which multiple subjects can be true for the same object, i.e. a 1-N relation. Unfortunately, on studyingtwo commonly used KGs WN18RR and YAGO3-10, we notice that this phenomenon happens in ahuge portion of the data, as a result of the existence of semi-inverse relations.Semi-Inverse Relations in WN18RR: On conducting simple statistical analysis on WN18RR,we notice that this KG is not completely free of relations that are inverse of each other (whichmake the completion task trivial). We notice in more than 90% appearance of three relations_derivationally_related_form ,_verb_group , and _similar_to , and also more than 60% appearanceof_also_see , another triple with the same entities but the opposite direction of the original relationappears in the training data. Together, these relations consist 37% of this KG. 
Moreover, aroundone-third of triples in the test data contain one of these relations; the KGC models achieve more than0:95MRR performance on this subset, significantly affecting the overall performance.Semi-Inverse Relations in YAGO3-10: Along the similar lines, in YAGO3-10, for 75% of thetriples with relation isAffiliatedTo , the triple with relation playsFor appears between the same objectand subject ( 87% for reverse). These relations consist of 63% of test data, 81% of which have theother relation between the same subject and object in the training data. Note that embedding methodsachieve around 0:95of MRR performance on these triples.The existence of these semi-inverse relations results in two important conclusions. First, itindicates that the performance based on ranking metrics on these KGs is not trustworthy since it isvery easy to predict these triples. Second, since these relations can accept many objects (and subjects)for the same subject (and object), and embedding methods score all of those objects (and subject)very highly (since their semi-inverse version of them appear in the training data), low values ofranking metrics are clearly not accurate assessment of completion on these triples. More specifically,for a target triple with one of the mentioned relations, as the number of trueobjects or subjects withsemi-inverse relation increases, the chance of obtaining a worse ranking increases as well.3.2 Evaluating Calibration of the ModelsCalibration is a very important aspect of KG completion that has only recently received atten-tion [Tabacof and Costabello, 2019]. Treating the probability of truth of a fact ( ( (s;r;o ))fortriplehs;r;oi) as the confidence of the model for the triple, we consider our model to be calibratedif the confidence aligns with the rate of true facts. In other words, if confidence is equal to 0:5, weREVISITING KNOWLEDGE BASE COMPLETIONexpect to have around 50% of triples with this confidence to be true. If this proportion is far from50%, then the model is not calibrated (the model is under-confident if the proportion is higher, andover-confident if it is lower). Since the evaluation only consists of true facts, we need to obtainnegative/false facts by sampling. We use three different negative sampling procedures: (1) randomlyreplacing subject or object with an entity from all possible ones ( Random-N ), commonly used in KGcompletion literature, (2) randomly replacing subject or object with an entity that has appeared atleast once with the target relation in the training data ( Constraint-N ), which was used by Socher et al.[2013] to generate more challenging negative samples, and (3) choose the highest scoring negativesample that has the object (or subject) with a different type than the target triple object ( Careful-N ).By choosing the object (or subject) that has a different type than the target triple entities we enforcethe chosen negative sample be a true negative. We define entity type as the set of entities that haveappeared with similar relations (see appendix for a precise definition).The result of calibration study based on above negative samples is shown for YAGO3-10 inFigure 1 (calibration plot for WN18RR and FB15k-237 and histogram plot of score distributions isprovided in the appendix). 
Note that even though negative samples for these three negative samplingmethods are different, we generate each plot on the same set of negative samples for the three models.Although these results show that RotatE provides more calibrated models compared to Tucker inall the negative sampling procedures, the advantage of RotatE over DistMult changes with differentnegative samples (in Random-N , DistMult appears better than RotatE). Further, we suspect thereason behind the peculiar behavior of Tucker in Figure 1c is due to the fact that Tucker tends toscore many triples (both positive and negative triples) very highly for specific relations, such ashasGender andisLocatedIn . Moreover, as we make the negative sampling more challenging, we seeextremely different behavior from the models, some result in a much more calibrated plot comparedto others (e.g. RotatE looks calibrated for Careful-N , but not for the rest), making these benchmarksinconclusive for calibration. For WN18RR and FB15k-237, although DistMult outperforms the othertwo methods completely, we observe similar behaviors in calibration plots. Last, these plots alsoindicate that the models are under-confident, which is inconsistent with similar studies on neuralnetworks [Guo et al., 2017]. For a comparison that takes the model complexity into account, weinclude results for models that have the same number of parameters in the appendix.3.3 Simple Models Look AccurateIn this section, we evaluate the reasoning capabilities of current link prediction methods. Morespecifically, we wanted to see how far in performance on ranking metrics we can get to by adoptingvery simple methods and see if ranking metrics can properly differentiate between SOTA modelsand these simple approaches. We first study rule-based methods that only predict ranking for triplesthat have their semi-inverse relations in the training data. Then, introducing a local score that learnssimple neighborhood patterns, we see unexpectedly high performance on ranking metrics, castingdoubt on the capability of ranking metrics to accurately evaluate KGC methods.Rule-based Link Prediction: To see the effect of semi-inverse relations on the performance of linkprediction methods, we provide a very simple rule-based method. For WN18RR and YAGO3-10target triples, we identify all objects for which the target subject appears with a semi-inverse relationin the training data, and vice versa for the subjects. Then we rank these entities based on theirpopularity (their degree in the graph) in the KG. The result of this rule-based method is provided inTable 1. As shown, for both of YAGO3-10 and WN18RR, this method achieves high performance.PEZESHKPOUR , TIAN, & S INGHTable 1: Link Prediction result for FB15k-237, WN18RR and YAGO3-10 KGs. All results generatedusing perspective models’ SOTA hyperparameters.ModelsFB15k-237 WN18RR YAGO3-10MRR Hits@1 # Param MRR Hits@1 # Param MRR Hits@1 # ParamDistMult 0.295 19.8 5.8M 0.428 39.2 8.1M 0.409 31.2 24.6MRotatE 0.331 23.4 29.3M 0.478 43.4 40.9M 0.471 38.2 123.2MTucker 0.342 25.1 11M 0.456 42.8 9.4M 0.468 37.9 63.9MRule-Based - - - 0.338 32.1 - 0.286 24.2 -Local 0.181 12.7 0.3M 0.364 33.4 2k 0.322 25.8 50kLocal Score: We also study an alternate simple model, using just the local structure around thetarget triple. For each target triple, we compute a local score by finding all paths from the subjectto object and score them in the context of the relation of target triple (a simpler version of thislocal score is studied in Toutanova and Chen [2015]). 
Specifically, we define the local score as:Loc(s;r;o ) =(Pp2P(s;o)Wpr), whereP(s;o)denotes the set of all the paths between sando.To learnWprwe generate negative samples by randomly corrupting the r. A visual representation oflocal score is provided in the appendix. For FB15K-237 that is denser than WN18RR and YAGO3-10,for each relation, we consider the top 5 most frequent paths with length 2 between the subject andthe object of triples with that relation1. For WN18RR and YAGO3-10, since most of the triples donot have paths with length 2 between their entities, we only score simple patterns with length 3 inour model. More specifically, these paths comprise of patterns that have one edge with the samerelation as the target sample (with the same direction). Further, they should have the same relationfor the other two edges but in a different direction. More details and visualization of these patternsare provided in the appendix. The result of link prediction on our benchmarks is provided in Table 1.As it shows, in all three KGs, our local score performs comparably to embedding methods whilehaving a much fewer number of parameters. The high performance of both these simple modelsraises questions about the utility of ranking metrics and existing benchmarks.3.4 Problems with Triple Classification with Negative SamplingTo demonstrate that the current approach to evaluating accuracy via triple classification is inaccurate,we create a fact classifier from completion models (that only provide a score) by learning a thresholdfor each relation (following standard practice). The performance of state-of-art embedding modelsover several KGs is provided in Table 2. As it shows, all the models achieve very high accuracyperformance (around 80%) when we choose both validation and test negative samples randomly(Random-N ). The reason behind this high performance is mostly due to the naive way of randomnegative sampling. Moreover, Distmult and RotatE outperform the Tucker performance in all threeKG (although Tucker often achieves higher or very similar performance on ranking metrics).Negative Sampling: There are a few fundamental issues with the current approach to tripleclassification: (1) randomly choosing the negative samples results in very simple classification, whichis not informative for evaluation, and (2) training classifiers (estimating thresholds) based on randomnegative samples makes the results brittle, i.e., choosing slightly more challenging negative samples1. Around 70% the test triples have at least one of these patterns in their neighborhoodREVISITING KNOWLEDGE BASE COMPLETIONTable 2: Triple classification accuracy for random and careful negative sampling.ModelsFB15k-237 WN18RR YAGO3-10Random-N Careful-N Random-N Careful-N Random-N Careful-NDistMult 95.2 47.6 83.3 39.5 94.9 45.5RotatE 94.4 49.1 84.8 42.0 86.1 42.9Tucker 77.6 57.4 72.0 55.8 75.4 45.2Table 3: Triple classification accuracy on ground truth labels. The results are averaged over 5 runs.ModelsKinship NationsAcc F1 Recall Precision Acc F1 Recall PrecisionDistmult 58.8 7.6 34.8 4.3 86 24 25.6 22.9RotatE 10.6 10 97.3 5.3 66.9 27.2 69.6 16.9Tucker 86.2 38.8 83.7 25.2 55.6 18.9 66.6 11Type Constraint 28.9 12.0 94.6 6.4 47.6 22.8 87.0 13.1can reduce the performance dramatically. Instead of random samples, we instead use the challengingCareful-N samples described in Section 3.2, and show results in Table 2. As it shows, these negativesamples dramatically reduce the accuracy. 
Further, although Tucker performs the worst in the randomsetting, here we see a smaller reduction in accuracy compared to the other two methods (RotatEappears better than DistMult). We suspect the reason behind this smaller reduction in accuracy isthat Tucker distinguishes between positive and hard negative samples better, i.e., on average, assignsconsiderably higher scores to positive samples in comparison to hard negatives.Mismatch between Ranking Metrics and Accuracy: We also evaluate these models on tripleclassification in Table 3 for Kinship and Nations, which have allthe true facts available (all missingfacts are false, and can be enumerated). As the negative samples, we consider the union of top10 negative objects and subjects based on trained RotatE and Tucker models. Although thesemodels achieve around 0:8MRR and 100% Hits@10 performance, they demonstrate much lowerperformance on accuracy metrics showing that these ranking metrics are not trustworthy. Moreover,if we classify the negative and positive samples for Kinship and Nations KGs only based on thecompatibility of their subject and object with the relation, based on our defined notion of type ( TypeConstraint ), we see that Type Constraint achieves comparable results with these embedding methods,questioning the credibility of the ranking metrics and performance of these embedding methods.4. YAGO3-TC: A New Benchmark for Evaluating KG CompletionIn this section, we first describe our procedure to gather YAGO3-TC dataset, and then, we explainour plans to continuously update YAGO3-TC as new KG completion models are proposed.4.1 Creating YAGO3-TCTo solve these issues with KG completion evaluation, we gather a dataset that contains true and falsefacts, but is also challenging for current models. Note that in this work, we are not suggesting thatPEZESHKPOUR , TIAN, & S INGHCarles PuyolMaleSpain National TeamBarcelonaCatalonia National TeamReal MadridMan UnitedCarles Puyol plays for?Catalonia National Team ✓✓×Barcelona ✓✓✓Real Madrid ✓××Man United ×××Carles Puyol plays for?Real Madrid ××Initial Annotations (Round 1)Selective Reannotation(Round 2) ✓Catalonia National Team✓Barcelona×Man United?Real MadridCarles PuyolMaleSpain National TeamBarcelonaCatalonia National TeamReal MadridMan United(a) Overview of crowdsourcing process.# Test # ValidTriples 28,364 2,946Positives 2,976 223Negatives 25,388 2,723(b) Data Statistics.Figure 2: YAGO3-TC Dataset. (a) annotation process, and (b) statistics of the resulting datawe should completely replace the ranking metrics for all use cases, but point out their shortcomings,and introduece a benchmark to compute other metrics. Since each embedding model scores triplesdifferently, we use RotatE [Sun et al., 2019a] and Tucker [Balazevic et al., 2019] as our judges foridentifying important triples. More specifically, we first sample 1000 random triples from the testset of YAGO3-10 (the relation distribution histogram of YAGO3-10 test data and our 1000 sampledtriples is very similar, provided in the appendix). To reduce the effect of semi-inverse relations, wedo not consider triples with relation isAffiliatedTo in our samples. Then, applying trained RotatE andTucker on these triples we find the 10 top scoring objects for the query hs;r;?iand 10 top scoringsubjects forh?;r;oi. Excluding repeated triples, we gather 28,364 triples/facts.We need to label these triples as negative (false) and positive (true). 
Before conducting crowd-sourcing to label these triples, there are few criteria to identify the true label of some of these triples:(1) if the relation of target triple is N-1 (or 1-N) we can treat every object (or subject) as a negativesample, except for the original target object (or subject), and (2) if the object (subject) of a sampledoes not have the same type as the object (subject) of the original test triple, we can treat that sampleas negative. Filtering these identifiable samples, we label the rest through crowd-sourcing.For labeling the samples, we ask the users to search the information on the Wikipedia page of theentities and use the Google search engine. For example, we ask users to choose all the correct teamsfor the query “Carles Puyol plays for?” from provided options. We use Amazon Mechanical Turk asthe framework for gathering the labels and ask three users to answer each query. If more than oneuser agree on an answer, we treat that triple as a positive one. We separately reannotate the objectsthat were picked only by one of the users. This time, we ask two users to check these samples, and ifboth of them agree on a correctness of a choice, we treat it as a positive sample. After creating theset of positive labels, we treat everything else as negative samples. An overview of our user studyis depicted in Figure 2a. Further, after randomly choosing 100 samples from the validation data ofYAGO3-10, we use the same procedure for gathering labels for our validation data. Since we intendto use these labels to find the thresholds of the models, we ensure that at least one triple for eachrelation that appears in this set. The dataset statistics are provided in Table 2b. To check the qualityof our labels, we also include 100 true facts from the original test data in our study, and find that 96%of these triples were annotated to be positive, demonstrating the high quality of our labels.REVISITING KNOWLEDGE BASE COMPLETIONModels Acc F1 R P A-ROCDistMult 29.4 20.4 86.6 11.6 0.61RotatE 27.0 19.4 83.7 10.9 0.58Tucker 63.3 22.3 50.3 14.4 0.64DistMult-valid 85.6 19.1 14.3 29.1 0.59RotatE-valid 88.6 18.9 12.8 42.1 0.61Tucker-valid 79.7 22.1 27.5 18.5 0.56Random 80.9 10.9 11.1 10.6 0.51Type Constraint 32.2 20.8 84.8 11.8 0.61Local 61.0 19.0 43.8 12.2 0.6(a) Triple classification accuracy on ground truth labels.The results are averaged over 5 runs.(b) Calibration plot for YAGO3-TC.Figure 3: Triple classification on YAGO3-TC. (a) provides average performance of embeddingmethods and our baselines. (b) Depicts the calibration study of embedding models.4.2 Continuously Updated, Hidden BenchmarkAlthough we are confident about the quality of the true/false annotations, we select the candidatesbased on the output of the two recent models, RotatE and Tucker. As new models will be proposed,they may be able to differentiate between our gathered true and false facts, but may highly rank factsthat are not in our dataset. In order to maintain a benchmark that is useful for evaluating knowledgebase completion in the long run, we propose a web-hosted evaluation platform. The online platform,available at https://pouyapez.github.io/yago3-tc/ , includes a hidden test set, and a leaderboardof model submissions. The test set, initialized with the YAGO3-TC dataset described here, willcontinuously be updated as we receive model predictions from the submissions, thus identifyingmore challenging candidate facts, via the same crowdsourcing pipeline described above.5. 
4.2 Continuously Updated, Hidden Benchmark

Although we are confident about the quality of the true/false annotations, we select the candidates based on the output of two recent models, RotatE and Tucker. As new models are proposed, they may be able to differentiate between our gathered true and false facts, but may highly rank facts that are not in our dataset. In order to maintain a benchmark that is useful for evaluating knowledge base completion in the long run, we propose a web-hosted evaluation platform. The online platform, available at https://pouyapez.github.io/yago3-tc/, includes a hidden test set and a leaderboard of model submissions. The test set, initialized with the YAGO3-TC dataset described here, will continuously be updated as we receive model predictions from the submissions, thus identifying more challenging candidate facts via the same crowdsourcing pipeline described above.

5. Evaluation Using YAGO3-TC

In this section, we investigate the triple classification evaluation using our new dataset. We first study the performance of several embedding models on the dataset and introduce new simple techniques to improve current models. Then, comparing the calibration of the triple classification task with the ranking scenario, we demonstrate that this task is better defined. Finally, we study the per-relation breakdown of accuracy to better assess model performance.

5.1 Performance of Existing KGC Models on YAGO3-TC

To provide a better evaluation procedure for KG completion, we study accuracy metrics on our gathered data. The results of SOTA methods on YAGO3-TC, averaged over 5 runs, are provided in Table 3a. As it shows, except for recall, Tucker outperforms RotatE. We note that, in this experiment, accuracy and precision are the metrics we care about the most because it is important to avoid labeling a triple as positive incorrectly. Moreover, comparing the results with the ranking metrics in Table 1, we see a significant mismatch between these metrics and the ranking ones.

Figure 3: Triple classification on YAGO3-TC. (a) Average performance of embedding methods and our baselines. (b) Calibration study of the embedding models.

(a) Triple classification accuracy on ground-truth labels (results averaged over 5 runs):

Models           Acc    F1     R     P     A-ROC
DistMult         29.4   20.4   86.6  11.6  0.61
RotatE           27.0   19.4   83.7  10.9  0.58
Tucker           63.3   22.3   50.3  14.4  0.64
DistMult-valid   85.6   19.1   14.3  29.1  0.59
RotatE-valid     88.6   18.9   12.8  42.1  0.61
Tucker-valid     79.7   22.1   27.5  18.5  0.56
Random           80.9   10.9   11.1  10.6  0.51
Type Constraint  32.2   20.8   84.8  11.8  0.61
Local            61.0   19.0   43.8  12.2  0.6

We also consider 3 baselines for our benchmark: 1) randomly assigning labels using the actual ratio of positives and negatives as the probability, 2) classifying based on the compatibility of the type of the subject and the object with the relation, and 3) training a classifier using the scores of our local models from Section 3.3. All of these baselines achieve comparable results with the embedding methods, demonstrating the need for better training and models. We are also interested in investigating whether we can improve these performances with simple modifications to the learning process. First, instead of random negative sampling on validation data, we use our gathered validation data as the training data for triple classification (model-valid). As shown, upon training on our validation data, accuracy and precision increase dramatically while recall drops by a large margin. To provide a deeper understanding of the performance, a per-relation breakdown is provided in the appendix.

5.2 Calibration

As we showed, studying calibration on the existing evaluation metrics is not a well-defined task. YAGO3-TC provides us with an opportunity to study calibration in a more controlled and representative environment. The evaluation of calibration on YAGO3-TC is depicted in Figure 3b, with the histogram plot of the scores in the appendix. As shown, Tucker produces a more calibrated plot compared to RotatE. Moreover, previous calibration curves suggested the models are under-confident, whereas here the calibration reveals that they are overconfident, which is consistent with calibration studies on neural network models [Guo et al., 2017].
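For reference, a reliability diagram like the one in Figure 3b can be computed with a few lines of generic code; this is our own sketch, not the authors' plotting code, and it assumes the model scores have been squashed to probabilities in [0, 1].

```python
import numpy as np

def reliability_curve(scores, labels, n_bins=10):
    """Bin predicted probabilities and compare the mean score in each bin
    with the empirical fraction of positives (the diagonal corresponds to
    perfect calibration). scores, labels: 1-D arrays, labels in {0, 1}."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    mean_score, frac_pos = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (scores >= lo) & (scores < hi)
        if mask.any():
            mean_score.append(scores[mask].mean())
            frac_pos.append(labels[mask].mean())
    return np.array(mean_score), np.array(frac_pos)
```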
6. Related Work

There is a rich literature on representing knowledge bases using fixed-size embedding vectors. In the past few years, a number of techniques have proposed models that first assign an embedding to each entity and relation, and then use these embeddings to predict facts. These methods, which differ primarily in the scoring function used for the link prediction task, include tensor multiplication [Nickel et al., 2011, Socher et al., 2013, Yang et al., 2015, Balazevic et al., 2019], algebraic operations [Bordes et al., 2011, 2013b, Dasgupta et al., 2018, Sun et al., 2019a], and complex neural models [Dettmers et al., 2018, Nguyen et al., 2018]. Furthermore, a number of studies have examined the incorporation of extra types of evidence to achieve more informative embeddings, with extra modalities consisting of numerical values [Garcia-Duran and Niepert, 2017], images [Oñoro-Rubio et al., 2017], text [Toutanova et al., 2015, 2016, Tu et al., 2017], and their combinations [Pezeshkpour et al., 2018]. Utilizing the analysis in this work, we hope to shed more light on better integrating extra modalities into the vector space to obtain more informed embeddings.

Although these methods provide accurate models for a variety of KG tasks, only a few try to provide a better understanding of these models, such as by addressing issues in training [Kadlec et al., 2017, Jain et al., 2020, Ruffinelli et al., 2020], investigating particular triples in the data [Akrami et al., 2020], studying the sparsity and unreliability of KGs [Pujara et al., 2017], analyzing interpretability in the embedding space [Sharma et al., 2018, Pezeshkpour et al., 2019, Allen et al., 2019], and identifying existing issues in KG completion models [Sun et al., 2019b]. Although Tabacof and Costabello [2019] also study the calibration of link prediction models, our study of calibration differs in several ways: 1) we show the effect of different negative sampling procedures on the calibration of link prediction methods, and 2) we further provide a well-defined and explicit environment for studying calibration using our proposed YAGO3-TC. Moreover, Safavi et al. [2020] study the utility of KG embedding methods in real-world completion tasks by proposing to calibrate these embedding models to output reliable confidence estimates for predicted triples. It is worth mentioning that developing more appropriate and challenging datasets as a way to address the shortcomings of existing benchmarks has been used in other machine learning tasks, such as visual reasoning [Johnson et al., 2017, Kottur et al., 2019], semantic parsing [Nangia and Bowman, 2018] and textual entailment [Zellers et al., 2018], amongst others.

7. Conclusion

In this work, we set out to investigate whether ranking metrics are appropriate measures for evaluating link prediction models. After studying the shortcomings and strengths of the currently adopted procedure, we first show existing issues with ranking metrics: they do not evaluate completion, are difficult to use for calibration, and are not able to consistently differentiate between different models. Facing these issues, after redefining the triple classification task, we gather a new dataset, YAGO3-TC, consisting of a dense subgraph annotated with both true and false facts. Exploring several SOTA embedding models on this dataset, we further provide insights and directions for future work. We hope that this research and dataset will bridge the gap toward better adoption of link prediction models in real-world scenarios. The datasets, the leaderboard with a continuously updated benchmark, and the open-source implementation of the models are available at https://pouyapez.github.io/yago3-tc/. We hope this annotation methodology is used for existing, and future, evaluation benchmarks in KG completion.

Acknowledgements

We would like to thank Matt Gardner and the anonymous reviewers for their feedback. This work is supported in part by the DARPA MCS program under Contract No. N660011924033 with the United States Office of Naval Research, and in part by NSF award #IIS-1817183.
The views expressed are those of the authors and do not reflect the official policy or position of the funding agencies.

References

Farahnaz Akrami, Mohammed Samiul Saeef, Qingheng Zhang, Wei Hu, and Chengkai Li. Realistic re-evaluation of knowledge graph completion methods: An experimental study. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, pages 1995–2010, 2020.

Carl Allen, Ivana Balazevic, and Timothy M Hospedales. On understanding knowledge graph representation. arXiv preprint arXiv:1909.11611, 2019.

Ivana Balazevic, Carl Allen, and Timothy Hospedales. Tucker: Tensor factorization for knowledge graph completion. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5188–5197, 2019.

Antoine Bordes, Jason Weston, Ronan Collobert, Yoshua Bengio, et al. Learning structured embeddings of knowledge bases. In AAAI, 2011.

Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Neural Information Processing Systems (NIPS), 2013a.

Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, pages 2787–2795, 2013b.

Shib Sankar Dasgupta, Swayambhu Nath Ray, and Partha Talukdar. Hyte: Hyperplane-based temporally aware knowledge graph embedding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2001–2011, 2018.

Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. Convolutional 2d knowledge graph embeddings. In Proceedings of the 32nd Conference on Artificial Intelligence (AAAI), 2018.

Alberto Garcia-Duran and Mathias Niepert. Kblrn: End-to-end learning of knowledge base representations with latent, relational, and numerical features. arXiv preprint arXiv:1709.04676, 2017.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning, pages 1321–1330. JMLR.org, 2017.

Prachi Jain, Sushant Rathi, Soumen Chakrabarti, et al. Knowledge base completion: Baseline strikes back (again). arXiv preprint arXiv:2005.00804, 2020.

Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2901–2910, 2017.

Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. Knowledge base completion: Baselines strike back. ACL 2017, page 69, 2017.

Satwik Kottur, José MF Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach. Clevr-dialog: A diagnostic dataset for multi-round reasoning in visual dialog. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 582–595, 2019.

Farzaneh Mahdisoltani, Joanna Biega, and Fabian M Suchanek. Yago3: A knowledge base from multilingual wikipedias. 2013.

Nikita Nangia and Samuel R Bowman. Listops: A diagnostic dataset for latent tree learning. NAACL HLT 2018, page 92, 2018.
Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. A novel embedding model for knowledge base completion based on convolutional neural network. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 327–333, 2018.

Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 809–816, 2011.

Daniel Oñoro-Rubio, Mathias Niepert, Alberto García-Durán, Roberto González-Sánchez, and Roberto J López-Sastre. Representation learning for visual-relational knowledge graphs. arXiv preprint arXiv:1709.02314, 2017.

Pouya Pezeshkpour, Liyan Chen, and Sameer Singh. Embedding multimodal relational data for knowledge base completion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3208–3218, 2018.

Pouya Pezeshkpour, Yifan Tian, and Sameer Singh. Investigating robustness and interpretability of link prediction via adversarial modifications. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3336–3347, 2019.

Jay Pujara, Eriq Augustine, and Lise Getoor. Sparsity and noise: Where knowledge graph embeddings fall short. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1751–1756, 2017.

Daniel Ruffinelli, Samuel Broscheit, and Rainer Gemulla. You CAN teach an old dog new tricks! On training knowledge graph embeddings. In International Conference on Learning Representations, 2020.

Tara Safavi, Danai Koutra, and Edgar Meij. Improving the utility of knowledge graph embeddings with calibration. arXiv preprint arXiv:2004.01168, 2020.

Aditya Sharma, Partha Talukdar, et al. Towards understanding the geometry of knowledge graph embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 122–131, 2018.

Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems, pages 926–934, 2013.

Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. Rotate: Knowledge graph embedding by relational rotation in complex space. arXiv preprint arXiv:1902.10197, 2019a.

Zhiqing Sun, Shikhar Vashishth, Soumya Sanyal, Partha Talukdar, and Yiming Yang. A re-evaluation of knowledge graph completion methods. arXiv preprint arXiv:1911.03903, 2019b.

Pedro Tabacof and Luca Costabello. Probability calibration for knowledge graph embedding models. arXiv preprint arXiv:1912.10000, 2019.

Kristina Toutanova and Danqi Chen. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57–66, 2015.

Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. Representing text for joint embedding of text and knowledge bases. In EMNLP, volume 15, pages 1499–1509, 2015.

Kristina Toutanova, Victoria Lin, Wen-tau Yih, Hoifung Poon, and Chris Quirk. Compositional learning of embeddings for relation paths in knowledge base and text. In ACL (1), 2016.
Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. Complex embeddings for simple link prediction. In International Conference on Machine Learning, pages 2071–2080, 2016.

Théo Trouillon, Christopher R Dance, Éric Gaussier, Johannes Welbl, Sebastian Riedel, and Guillaume Bouchard. Knowledge graph completion via complex tensor factorization. The Journal of Machine Learning Research, 18(1):4735–4772, 2017.

Cunchao Tu, Han Liu, Zhiyuan Liu, and Maosong Sun. Cane: Context-aware network embedding for relation modeling. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1722–1731, 2017.

Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. In ICLR, 2015.

Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. Swag: A large-scale adversarial dataset for grounded commonsense inference. In EMNLP 2018, 2018.

Fuzheng Zhang, Nicholas Jing Yuan, Defu Lian, Xing Xie, and Wei-Ying Ma. Collaborative knowledge base embedding for recommender systems. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 353–362. ACM, 2016.

Yongjun Zhu, Olivier Elemento, Jyotishman Pathak, and Fei Wang. Drug knowledge bases and their applications in biomedical informatics research. Briefings in Bioinformatics, 2018.

Appendix A. Scoring Functions and Implementation Details

Here we first describe the different scoring functions adopted in this work and then elaborate on the implementation details.

Scoring Functions: In DistMult, ψ(s, r, o) = e_s^T R_r e_o, where e_s, e_o ∈ R^d are the embeddings of the subject and object, and R_r ∈ R^{d×d} is a diagonal matrix representing the relation r. The RotatE scoring function is defined as ψ(s, r, o) = −‖e_s ∘ R_r − e_o‖_2, where e_s, R_r, e_o ∈ C^d and ∘ denotes the Hadamard product. In Tucker, the score of a triple ⟨s, r, o⟩ is defined as ψ(s, r, o) = W ×_1 e_s ×_2 R_r ×_3 e_o, where e_s, e_o ∈ R^{d_e}, R_r ∈ R^{d_r}, W ∈ R^{d_e × d_r × d_e}, and ×_i represents the tensor product along the i-th mode.

Implementation Details: We use the same loss and optimization for training, i.e., AdaGrad and the binary cross-entropy loss. We adopt the reported hyperparameters from previous works to reproduce their performance. To investigate the link prediction task, we study the commonly used evaluation metrics for this task: mean reciprocal rank (MRR) and Hits@N. As our embedding methods, we consider DistMult [Yang et al., 2015] because of its simplicity and high performance, and RotatE [Sun et al., 2019a] and Tucker [Balazevic et al., 2019] because of their state-of-the-art performance. Further, we use validation data to tune the hyperparameters, employing a grid search to find the best values, such as the regularization parameter. To evaluate our method, we conduct link prediction experiments on two small KGs, Kinship and Nations, and three more realistic KGs, FB15k-237 [Toutanova et al., 2015], WN18-RR [Dettmers et al., 2018] and YAGO3-10 [Mahdisoltani et al., 2013]. A statistical analysis of our benchmarks is provided in Table 4.
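To make the three scoring functions concrete, here is a minimal PyTorch sketch; the tensor shapes follow the definitions above, but the batching and the complex-tensor representation for RotatE are our own simplifications, not the authors' implementation.

```python
import torch

def distmult(e_s, r_diag, e_o):
    # e_s, e_o: (batch, d); r_diag: (batch, d), the diagonal of R_r
    return (e_s * r_diag * e_o).sum(dim=-1)

def rotate(e_s, r_phase, e_o):
    # complex (torch.cfloat) tensors of shape (batch, d); |r_phase| = 1
    return -torch.linalg.vector_norm(e_s * r_phase - e_o, dim=-1)

def tucker(W, e_s, r, e_o):
    # W: (d_e, d_r, d_e); e_s, e_o: (batch, d_e); r: (batch, d_r)
    x = torch.einsum('idj,bi->bdj', W, e_s)   # mode-1 product with e_s
    x = torch.einsum('bdj,bd->bj', x, r)      # mode-2 product with R_r
    return (x * e_o).sum(dim=-1)              # mode-3 product with e_o
```

In practice RotatE implementations usually store separate real and imaginary parts instead of complex tensors, but the complex form keeps the correspondence to the formula above obvious.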
Appendix B. Entity Types

Definition B.1. In this work, we define a generic notion of type for entities. We consider two entities to have the same type if they appear with relations in the training data that have themselves appeared several times with the same objects (subjects). More specifically, for a target triple ⟨s, r, o⟩, to find all the entities with the same type as s, we first find all the relations that, some number of times, appear with the same subject entities as the relation r. Then we consider the union of all entities that appear as the subject of those relations in the training data as the set of same-type entities for s. Throughout the paper, we use this notion of type to identify the type of each entity.

Table 4: Data statistics of the benchmarks.

            # Rel   # Ent     # Training   # Test   # Valid
WN18RR        18    40,768       86,835     3,134     3,034
FB15k-237    237    14,541      272,115    20,466    17,535
YAGO3-10      37   123,170    1,079,040     5,000     5,000
Nations       56        14        1,592       200       200
Kinship       26       104        8,544     1,074     1,068

Figure 4: The score of each triple includes the local score, which captures paths between the subject and object entity of the target triple. (a) KG, with the target prediction; (b) the local score, Loc.

Appendix C. Local Score

In this section, we analyze the scoring function and the simple patterns that we incorporate into our model. A simple representation of our local model is depicted in Figure 4. Moreover, the simple patterns of length 3 that we consider for WN18RR and YAGO3-10 are depicted in Figure 5. The reason for choosing these patterns is that they are very easy to learn. To learn them, a translation-based embedding method such as RotatE just needs to learn that if a path contains two edges with the same relation but in reverse directions, these edges cancel each other out, which is a direct consequence of the definition of the translation-based scoring function. For a multiplicative embedding such as DistMult, if we assume that |e_o| = |e_s| = |e_s R_r| = 1, the scoring function can be viewed as a translation-based embedding that uses the angle between vectors as the similarity metric instead of Euclidean distance.

Appendix D. Calibration Study

The calibration plots for WN18RR and FB15k-237 over our three defined negative sampling procedures are depicted in Figure 6. The histogram plots of the score distributions for WN18RR, FB15k-237 and YAGO3-10, using DistMult, Tucker and RotatE as link prediction models and adopting the studied negative sampling procedures, are depicted in Figure 7. Moreover, the histogram plot of the score distribution for YAGO3-TC is depicted in Figure 8.

Appendix E. Number of Parameters and Calibration

In this section, we reproduce the calibration plots while fixing the number of parameters across the different models. We take DistMult's number of parameters with a hidden dimension of 200 as our benchmark. The MRR performance of the different models with the same number of parameters is provided in Table 5. Moreover, the calibration plot using these models is depicted in Figure 9. As it shows, the results appear very similar to the previously reported ones.
The reason behind similarREVISITING KNOWLEDGE BASE COMPLETIONEntity1Entity2Entity3Entity4Relation 1Relation 2Relation 2Relation 1(a) Pattern 1.Entity1Entity2Entity3Entity4Relation 1Relation 2Relation 2Relation 1(b) Pattern 2.Entity1Entity2Entity3Entity4Relation 1Relation 2Relation 2Relation 1(c) Pattern 3.Entity1Entity2Entity3Entity4Relation 1Relation 2Relation 2Relation 1(d) Pattern 4.Figure 5: Simple patterns with length 3 which we incorporate to represent the WN18RR andYAGO3-10.0.0 0.2 0.4 0.6 0.8 1.0Mean Score0.00.20.40.60.81.0Ratio of positivesTuckerRotatEDistMult(a) Random-N on FB15k-2370.0 0.2 0.4 0.6 0.8 1.0Mean Score0.00.20.40.60.81.0Ratio of positivesTuckerRotatEDistMult (b) Constraint-N on FB15k-2370.0 0.2 0.4 0.6 0.8 1.0Mean Score0.00.20.40.60.81.0Ratio of positivesTuckerRotatEDistMult (c) Careful-N on FB15k-2370.0 0.2 0.4 0.6 0.8 1.0Mean Score0.00.20.40.60.81.0Ratio of positivesTuckerRotatEDistMult(d) Random-N on WN18RR0.0 0.2 0.4 0.6 0.8 1.0Mean Score0.00.20.40.60.81.0Ratio of positivesTuckerRotatEDistMult (e) Constraint-N on WN18RR0.0 0.2 0.4 0.6 0.8 1.0Mean Score0.00.20.40.60.81.0Ratio of positivesTuckerRotatEDistMult (f) Careful-N on WN18RRFigure 6: Calibration study on different KGs based on three negative sampling procedures.behavior is due to the fact that the link prediction models’ performance tends to get saturated uponincreasing the hidden dimension value.PEZESHKPOUR , TIAN, & S INGH0.0 0.2 0.4 0.6 0.8 1.0Mean Score0100002000030000400005000060000CountTuckerRotatEDistMult(a) Random-N on FB15k-2370.0 0.2 0.4 0.6 0.8 1.0Mean Score01000020000300004000050000CountTuckerRotatEDistMult (b) Constraint-N on FB15k-2370.0 0.2 0.4 0.6 0.8 1.0Mean Score01000020000300004000050000CountTuckerRotatEDistMult (c) Careful-N on FB15k-2370.0 0.2 0.4 0.6 0.8 1.0Mean Score0200040006000800010000CountTuckerRotatEDistMult(d) Random-N on WN18RR0.0 0.2 0.4 0.6 0.8 1.0Mean Score0200040006000800010000CountTuckerRotatEDistMult (e) Constraint-N on WN18RR0.0 0.2 0.4 0.6 0.8 1.0Mean Score0200040006000800010000CountTuckerRotatEDistMult (f) Careful-N on WN18RR0.0 0.2 0.4 0.6 0.8 1.0Mean Score0200040006000800010000120001400016000CountTuckerRotatEDistMult(g) Random-N on YAGO3-100.0 0.2 0.4 0.6 0.8 1.0Mean Score0200040006000800010000120001400016000CountTuckerRotatEDistMult (h) Constraint-N on YAGO3-100.0 0.2 0.4 0.6 0.8 1.0Mean Score02000400060008000100001200014000CountTuckerRotatEDistMult (i) Careful-N on YAGO3-10Figure 7: Calibration study on different KGs based on three negative sampling procedures.Table 5: Link Prediction result for FB15k-237, WN18RR and YAGO3-10 KGs. All results generatedby restricting the number of parameters to be equal to the DistMult’s parameters with dimension 200.ModelsFB15k-237 WN18RR YAGO3-10MRR Hits@1 MRR Hits@1 MRR Hits@1DistMult 0.279 17.9 0.39 36.4 0.423 33.8RotatE 0.3 20.9 0.434 40.7 0.459 36.5Tucker 0.339 25 0.423 40.4 0.417 33.4Appendix F. YAGO3-TC Relation DistributionThe relation distribution of YAGO3-10 test data on our randomly 1000 random sampled is depictedin Figure 10. 
As the figure shows, except for relation affiliatedTo (relation 16), which we did not consider in our sampling, the relations demonstrate a similar distribution.

Figure 8: Histogram plot of the score distribution on YAGO3-TC.

Table 6: Per-relation breakdown.

              DistMult                 RotatE                   Tucker
Relation      Acc   F1    R     P      Acc   F1    R     P      Acc   F1    R     P
playsFor      25.9  23.2  85.6  13.4   20.6  22.8  89.8  13     73.5  29.1  41.6  22.4
isLocatedIn   35.4  23.2  83.8  13.5   21.7  20.3  85.6  11.5   45    23.7  73.4  14.1
wasBornIn     22.9   5.5  75.3   2.8   15.3   5.6  84.3   2.9   62.4   3.6  23.4   1.9
hasGender     78.4  32.4  92.6  19.7   94.7  45.3  38.9  54.4   97.9  82.2  85.2  79.4

Appendix G. Per-Relation Breakdown

We perform a per-relation breakdown analysis on the YAGO3-TC dataset to gain a deeper understanding of how the models' performance is distributed across different relations. This kind of analysis can help us identify the shortcomings and strengths of our embedding methods. Table 6 compares RotatE and Tucker on the four most frequent relations. As shown, RotatE outperforms Tucker in recall except for the relation hasGender, and otherwise loses to Tucker except on F1 and precision for the relation wasBornIn. The relations playsFor and isLocatedIn show similar performance over all metrics for RotatE (and almost for Tucker), demonstrating that these models learn similar patterns for these relations. Moreover, both models perform very poorly on the relation wasBornIn, suggesting the difficulty of predicting this type of relation, while both models predict the relation hasGender with much higher confidence, emphasizing the simplicity of predicting this relation.

Figure 9: Calibration study on different KGs based on the three negative sampling procedures, with the number of parameters fixed (Random-N, Constraint-N and Careful-N on FB15k-237, WN18RR and YAGO3-10).

Figure 10: Distribution of relations in YAGO3-10 and in our 1,000 randomly sampled triples: (a) YAGO3-10 test data; (b) randomly sampled data. Except for relation affiliatedTo (relation 16), which we did not consider in our sampling, the relations demonstrate a similar distribution.<|im_end|> <|im_start|>assistant ### Review Title A good analysis on popular KB-completion datasets plus a carefully labeled triple classification dataset ### Review Text This paper first analyzes several popular KB-completion datasets and their evaluation methods. Several issues are highlighted and discussed, such as the assumptions made in the ranking metrics, skewed distributions of semi-inverse relations (in WN18RR & YAGO3-10), and the fact that confidence scores of popular methods are not calibrated.
In addition, the authors also suggest that some simple baselines are actually quite robust. Based on their findings, the authors create a binary triple classification dataset. Effectively, every triple in their dataset is examined by multiple Turkers to ensure label quality and also to avoid the potential errors due to the "closed-world" assumption behind some existing datasets.

General comments: I'm happy to see that the authors revisit the problem of how KB-completion is evaluated. Although the various potential issues of existing datasets and/or evaluation metrics are not necessarily a secret to KB-completion researchers, it is still good to identify and discuss them. While I agree with most of the analysis and findings, I have to argue that the reason behind those issues is often that the use case was not carefully discussed and defined first. As a result, it is very easy to find special biases or skewed distributions of some triples, which may be exploited by different models. The proposed YAGO3-TC dataset, in my opinion, is one step in the right direction. Setting it up as a simple binary classification problem of whether a triple is correct avoids the implicit and incorrect "closed-world" assumption, and thus ensures label correctness. The task is more like fact-checking or simple question answering. However, a potential issue of this dataset is the distribution of the triples. Because it is somewhat selected by two existing methods, it could be sub-optimal compared to, say, triples generated by some human users with a specific scenario in mind.

Detailed comments:
1. It is a common and well-known issue that the probabilities or confidence scores of ML models are not calibrated. It is not surprising to see that this problem also exists in KB-completion models. However, given that dev sets are available, why didn't the authors apply existing calibration methods (e.g., those mentioned in Guo et al., ICML-17) to the output of the existing models?
2. Similarly, the type information can be used in conjunction with the existing models, even as a post-processing step (e.g., see [1]). The performance of existing models may be improved substantially.
3. For an imbalanced class distribution, the "accuracy" metric is not very meaningful. Precision/Recall/F1 are better. Another alternative is ROC analysis (false-positive rate vs. true-positive rate) if the task can be cast as an anomaly detection problem. Again, the choice of evaluation metrics depends on the underlying use-case scenario.
4. Probably too many details and discussions are put in the Appendix.

[1] Chang et al., Typed Tensor Decomposition of Knowledge Bases for Relation Extraction. EMNLP-2014. ### Review Rating 7: Good paper, accept ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
SJgw51HFDr
ICLR.cc/2020/Conference
2020
Sparse Weight Activation Training
["Md Aamir Raihan", "Tor M. Aamodt"]
Training convolutional neural networks (CNNs) is time consuming. Prior work has explored how to reduce the computational demands of training by eliminating gradients with relatively small magnitude. We show that eliminating small magnitude components has limited impact on the direction of high-dimensional vectors. However, in the context of training a CNN, we find that eliminating small magnitude components of weight and activation vectors allows us to train deeper networks on more complex datasets versus eliminating small magnitude components of gradients. We propose Sparse Weight Activation Training (SWAT), an algorithm that embodies these observations. SWAT reduces computations by 50% to 80% with better accuracy at a given level of sparsity versus the Dynamic Sparse Graph algorithm. SWAT also reduces memory footprint by 23% to 37% for activations and 50% to 80% for weights.
["Sparsity", "Training", "Acceleration", "Pruning", "Compression"]
ABSTRACT

Training convolutional neural networks (CNNs) is time consuming. Prior work has explored how to reduce the computational demands of training by eliminating gradients with relatively small magnitude. We show that eliminating small-magnitude components has limited impact on the direction of high-dimensional vectors. However, in the context of training a CNN, we find that eliminating small-magnitude components of weight and activation vectors allows us to train deeper networks on more complex datasets versus eliminating small-magnitude components of gradients. We propose Sparse Weight Activation Training (SWAT), an algorithm that embodies these observations. SWAT reduces computations by 50% to 80% with better accuracy at a given level of sparsity versus the Dynamic Sparse Graph algorithm. SWAT also reduces the memory footprint by 23% to 37% for activations and 50% to 80% for weights.

1 INTRODUCTION

The usage of convolutional neural networks (CNNs) has dominated a wide variety of complex computer vision tasks, such as object recognition (Krizhevsky et al., 2012; Szegedy et al., 2015), object detection (Szegedy et al., 2013; Ren et al., 2015), and image restoration (Dong et al., 2014; Zhang et al., 2017). However, CNNs are compute and memory intensive; even a moderately sized CNN model, like ResNet-50 with tens of millions of parameters, requires billions of floating-point operations and consumes tens of gigabytes to store weights and activations during training.

Previous works propose techniques for reducing computations and memory consumption during CNN training. Such techniques include quantization, where every operation is quantized to low precision during training, such as (Zhou et al., 2016; Choi et al., 2018; Wu et al., 2016; Wang et al., 2018), or the use of fixed-point integers instead of floating-point numbers (Wu et al., 2018; Das et al., 2018).

An orthogonal approach to reducing computations is sparsification, a process in which we eliminate computations involving small values. meProp (Sun et al., 2017; Wei et al., 2017) sparsifies back-propagation by selecting a subset of output gradients in each layer. Using only the top 5% of the gradients (ranked by magnitude), meProp can train a CNN and an MLP on the MNIST dataset without accuracy loss. The computational flow of meProp is shown in Figures 1a and 1b. meProp does not modify the forward pass. In the backward pass, meProp performs a "Top-K" operation on the output activation gradients, which sets components not ranked in the Top-K by magnitude to zero. It then uses the sparsified output activation gradients to (potentially more efficiently) compute the input activation and weight gradients. Our experiments suggest meProp fails to converge on larger networks and datasets.

Recently, Liu et al. (2019) proposed a method of reducing computation during training and inference by constructing a dynamic sparse graph (DSG) using random projection for dimensionality reduction. DSG loses accuracy on the ImageNet dataset.

In this work, we propose an alternative technique, Sparse Weight Activation Training (SWAT), that can train deep CNNs on complex datasets like ImageNet. Compared to DSG, SWAT is a straightforward technique which uses a less expensive Top-K operation, inspired by meProp, while achieving better accuracy than DSG on ImageNet.

This paper provides the following contributions:
- It shows that dropping gradients during back-propagation is harmful to network convergence, especially when training a deeper model on a complex dataset; in this case, the model suffers a high accuracy loss.
- It proposes SWAT, a sparse training algorithm that can train a broad range of deep CNNs with minimal accuracy loss on complex datasets like CIFAR10, CIFAR100, and ImageNet. SWAT reduces the total number of operations during training by 50%–80%. It also achieves a 23%–37% activation and 50%–80% weight footprint reduction during the backward pass.
- The SWAT algorithm uses sparse weights in both the forward and backward passes, and therefore the model learns sparse weights, i.e., a pruned architecture. If the model has been trained using SWAT with S% sparsity during training, then during inference the weights can be pruned to S% sparsity without any loss in accuracy.
- We perform empirical studies to provide insight into why SWAT performs well; we show that Top-K sparsification in general preserves direction in high-dimensional space.

2 SPARSITY INDUCED TRAINING

2.1 PRELIMINARIES

Let us consider a deep CNN with L convolutional layers trained using mini-batch stochastic gradient descent, where the l-th layer maps the input activation a_{l-1} using a function f_l from R^{N×C_{l-1}×H_{l-1}×W_{l-1}} to R^{N×C_l×H_l×W_l}. f_l computes C_l channels of output feature maps, each of dimension H_l×W_l, using C_{l-1} channels of input feature maps of dimension H_{l-1}×W_{l-1}, for each of the N samples in the mini-batch. The l-th layer has weights w_l ∈ R^{C_l×C_{l-1}×H_f×W_f}. The forward pass of the l-th layer can be defined as:

    a_l = f_l(a_{l-1}, w_l)    (1)

During back-propagation the l-th layer receives the gradient of the loss L w.r.t. its output activation (∇a_l). This is used to compute the gradient of the loss w.r.t. its input activation (∇a_{l-1}) and weights (∇w_l). Thus, the backward pass of the l-th layer can be defined as:

    ∇a_{l-1} = F_l(∇a_l, w_l)      (2)
    ∇w_l = F_l(∇a_l, a_{l-1})      (3)

Figure 1: meProp versus SWAT. (a) The forward and backward passes of meProp and SWAT for a fully connected layer. (b) Computational flow of meProp for any layer l. (c) Computational flow of SWAT for any layer l.

2.2 SPARSE WEIGHT ACTIVATION TRAINING

Our goal is to reduce the computations required during the training process. SWAT does this by effectively transforming small-magnitude components of vectors into zero values. Since multiplying any number by zero results in zero, the multiplication is not necessary. Such a sparsification process can be applied at a number of points in the backpropagation algorithm. Ideally, the modified training algorithm will retain both model accuracy and rate of convergence.

We explore the sensitivity to applying this sparsification process at different points. We look at the sensitivity of model convergence to sparse inputs, i.e., weights (w_l) and input activations (a_{l-1}) for the forward pass, and weights (w_l), input activations (a_{l-1}) and output activation gradients (∇a_l) for the backward pass, as shown in Equations 1, 2 and 3.
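To make Equations 1–3 and the candidate sparsification points concrete, here is a minimal PyTorch sketch of a fully connected layer with SWAT-style sparsification (sparse weights in the forward pass; sparse weights and activations saved for the backward pass). This is our own simplified illustration with unstructured Top-K, not the authors' implementation; biases and convolutions are omitted.

```python
import torch

def topk_mask(x, sparsity):
    """Zero out all but the top (1 - sparsity) fraction of |x|'s entries."""
    k = max(1, int(x.numel() * (1.0 - sparsity)))
    # k-th largest magnitude = (numel - k + 1)-th smallest magnitude
    threshold = x.abs().flatten().kthvalue(x.numel() - k + 1).values
    return x * (x.abs() >= threshold)

class SWATLinear(torch.autograd.Function):
    @staticmethod
    def forward(ctx, a_prev, w, sparsity):
        w_sparse = topk_mask(w, sparsity)            # sparse weights, Eq. (1)
        # Save sparsified activations and weights for the backward pass.
        ctx.save_for_backward(topk_mask(a_prev, sparsity), w_sparse)
        return a_prev @ w_sparse.t()

    @staticmethod
    def backward(ctx, grad_out):
        a_sparse, w_sparse = ctx.saved_tensors
        grad_a_prev = grad_out @ w_sparse            # Eq. (2), sparse weights
        grad_w = grad_out.t() @ a_sparse             # Eq. (3), sparse activations
        return grad_a_prev, grad_w, None
```

A layer would be invoked as `out = SWATLinear.apply(a_prev, w, 0.7)`; note that the output gradient ∇a_l is left dense, matching the design choice discussed next.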
Figure 2a shows the result of our analysis of sparsification in the forward pass. Here we train ResNet-18 on CIFAR-100 after modifying the forward pass to use sparse inputs in Equation 1 for weights (w_l) and, separately, activations (a_{l-1}), while keeping the backward pass unchanged (i.e., only the forward pass computation is sparse). The results show that network convergence is more tolerant to sparse weights (w_l) than to sparse activations (a_{l-1}). Thus, as shown in Figures 1a and 1c, SWAT uses sparse weights in the forward pass.

Figure 2b shows the result of our analysis of sparsification in the backward pass. We modify Equation 2 and Equation 3 to sparsify either the output gradient (∇a_l), as in meProp (see Figure 1b), or the activations (a_{l-1}) and weights (w_l). The results show accuracy is extremely sensitive to sparsification of output gradients. Such sparsity consistently results in networks converging to lower accuracy compared to using sparse activations and weights. Thus, as shown in Figures 1a and 1c, SWAT uses sparse weights and activations in the backward pass. The overall SWAT training algorithm is presented in Algorithm 1.

SWAT uses sparse computation in both the forward and the backward passes, while meProp (Sun et al., 2017) uses sparse computation only in the backward pass. SWAT uses sparse weights and activations in the backward pass, allowing compression of the weights and activations produced in the forward pass (more detail in Section 3.3). This effectively reduces the memory access overhead of fetching weights in the backward pass and the activation storage overhead, because only the Top-K% are saved. This memory benefit is not present for meProp, since dense weights and activations are needed in the backward pass, whereas there is no storage benefit of sparsifying the output gradients since they are temporary values generated during back-propagation.

Figure 2: Convergence Analysis: (a) Sensitivity analysis of ResNet-18 for the forward pass on the CIFAR100 dataset. (b) Sensitivity analysis of ResNet-18 for the backward pass on the CIFAR100 dataset. (c) Training curve of ResNet-18 on ImageNet for the meProp and SAW algorithms. The learning rate is reduced by 1/10 at the 30th and 40th epochs.

To compare SWAT's approach to that of meProp, we use a variant of SWAT that only sparsifies the backward pass; we shall refer to this version of SWAT as SAW (Sparse Activation and Weight back-propagation). We compare the performance of meProp and SAW with deep networks and complex datasets (the performance of SWAT itself with deep networks and complex datasets is in the results section). Figure 2c shows the convergence of SAW and meProp for ResNet-18 on the ImageNet dataset; it compares the performance of meProp at 30% and 50% sparsity to SAW at 80% sparsity. As we can see, meProp converges to a good solution at a sparsity of 30%.
On the other hand, SAWsucceeds to converge to an accuracy of 64% even at a much higher sparsity of 80%.Algorithm 1: Training anLlayer network using SWAT or SAW AlgorithmThe data: A mini-batch of inputs & targets ( a0,a), training iteration t, previous weights wt,learning rate .The result: Update weights wt+1.Step 1. Forward Computation;forl = 1 to L doifl ==‘ConvolutionLayer’ or ‘LinearLayer’thenifalgorithm ==‘SWAT’ thenwtl(fTOPK (wtl);atl(forward (wtl;atl1);atl1(fTOPK (atl1);else// algorithm == ‘SAW’ ;atl(forward (wtl;atl1)wtl(fTOPK (wtl);atl1(fTOPK (atl1);endsave forbackward l(wtl;al1;elseal(forward (wtl;al1);save forbackward l(wtl;al1;endendStep 2. Backward Computation;Compute the gradient of the output layer5aL=@loss (aL;a)@aL;forl=L to 1 dowtl;al1(save forbackward l;5al1(backward input (5al;wtl);5wl1(backward weight (5al;atl1);endStep 3. Parameter Update;forl=1 to L dowt+1l(Optimizer (wtl;5wl;);endTop-K Selection: Given CNNs operate on tensors with many dimensions, there are several optionsfor how to select which components are set to zero during sparsification. Our CNNs operate onfourth-order tensors, T2RNCHW. Below we evaluate three variants of the Top-K opera-tion illustrated in the right side of Figure 3. We also compared against a null hypothesis in whichrandomly selected components of a tensor are set to zero.50 60 70 80Sparsity4550556065707580Validation Accuracy (%)Dataset: CIFAR100HWCHWNCHWRANDOMTOP-50% TOP-50%TOPK-CHW0.8 -0.70.2 0.50.9 0.8-0.9 0.7-0.6 0.3-0.1 -0.80.1 -0.40.1 -0.2TOP-50%TOPK-NCHWHWCN0.8 -0.70.2 0.50.1 -0.40.1 -0.20.9 0.8-0.9 0.7-0.6 0.3-0.1 -0.8TOP- 50%TOPK-HW0.8 -0.70.2 0.50.9 0.8-0.9 0.7TOP- 50% TOP- 50%TOP- 50%-0.6 0.3-0.1 -0.80.1 -0.40.1 -0.2Figure 3: Different ways of performing top-k operation . ‘N’ denotes the #samples in the mini-batch or filters in the layer, ‘C’ denotes the #channels in the layer. ‘H’ and ‘W’ denote the heightand width of the filter/activation map in the layer. Color represent the selected activations/weightsby the Top-K operation.The first variant, labeled TOPK-NCHW in Figure 3, selects activations and weights to set to zeroby considering the entire mini-batch. This variant performs Top-K operation over the entire tensor,ffN;C;H;WgTOPK (T), where the superscript represents the dimension along which the Top-K operation isperformed. The second variant (TOPK-CHW) performs Top-K operation over the dimensions C;HandWi.e.,ffC;H;WgTOPK (T), i.e., selects K % of input activations from every mini-batch sample andK% of weights from every filter in the layer. The third variant (TOPK-HW) is the strictest form of4Under review as a conference paper at ICLR 2020Top-K operation. It select K% of activations or weights from all channels, and thereby performingthe Top-K operation over the dimension HandW, i.e.,ffH;WgTOPK (TH;W).The left side of Figure 3 shows the accuracy achieved on ResNet-18 for CIFAR100 when usingSAW configured with each of these Top-K variants along with a variant where a random subset ofcomponents is set to zero. The results show, first, that randomly selecting works only for low spar-sity. At high sparsity all variants of Top-K outperform random selection by a considerable margin.Second, they show that the more constrainted the Top-K operation the less accuracy achieved. Con-straining Top-K results in selecting some activations or weights which are quite small. 
Similarly,some essential activations and weights are discarded just to satisfy the constraint.3 RESULTSIn this section, we present our experimental results of SWAT algorithm on different architectures anddatasets and we quantify the theoretical reduction in compute and memory bandwidth achievableusing SWAT.3.1 E XPERIMENTAL SETUPWe implement SWAT and SAW algorithms in PyTorch Framework (Paszke et al., 2017); modelsare trained on three different datasets: CIFAR10, CIFAR100 (Krizhevsky et al., 2009) and Ima-geNet ILSVRC2012 (Deng et al., 2009) and are evaluated on four different architectures ResNet-18, 34, 50, 101 (He et al., 2016), Wide Residual Networks (Zagoruyko & Komodakis, 2016),DenseNet-BC-121 (Huang et al., 2017), and VGG-16 (Simonyan & Zisserman, 2014) with batch-normalization (Ioffe & Szegedy, 2015). Batch-Normalization statistics are computed using the run-ning average (with momentum 0:9). We use SGD with momentum as our optimization algorithmwith an initial learning rate of 0:1, momentum of 0:9and weight decay of0:0001 .For CIFAR10 and CIFAR100 dataset, ResNet, VGG, and DenseNet models are trained for150epochs, and learning rate are reduced by (1=10)that the 50-th and the 100-th epoch whereasWRN is trained for 200epochs and the learning rate is annealed by a factor of (1=5)that 60-th,120-th and 160-th epoch. ResNet, VGG, and WRN are trained using a batch-size of 128 whereasDenseNet is trained with a batch-size of 64. We run each experiment with three different seeds anduse the average value for all the plots.For training on ImageNet dataset, we use 224224random crops from the input images or itshorizontal flip and the input image is normalized by the per-color mean and standard deviation.Networks are trained for 50epochs with the mini batch-size of 256 samples, and the learning rateare reduced by (1=10)thafter 30-th and 40-th epoch.3.2 A CCURACY ANALYSISIn this section, we provide a comprehensive analysis of SWAT and SAW algorithms and show theinfluence of sparsity on validation accuracy; thereby showing the potential of reducing computationduring training with negligible accuracy loss. We furthermore discuss the impact on rate of con-vergence and the robustness of the algorithm on a wide range of models with different depths andwidths. Last, we provide an alternative to Top-K for efficient software/hardware implementation.Accuracy on CIFAR10 and CIFAR100: Figure 4 shows the accuracy of the SWAT and SAWalgorithms at different sparsity budgets on CIFAR10 and CIFAR100 dataset. From the graph, we canconclude that models can be trained using SWAT and SAW algorithm up to 60% sparsity with almostzero accuracy loss and suffer only a slight accuracy loss at 70% sparsity. For CIFAR10 dataset at70% sparsity, VGG-16 and DenseNet-121 have an accuracy loss of around 0:57% for SWAT ( 0:26%for SAW) and 0:4%for SWAT ( 0:23% for SAW) whereas ResNet-18 gains an accuracy of 0:02% forSWAT ( 0:1%for SAW). For CIFAR100 dataset at 70% sparsity, ResNet-18, VGG-16 and DenseNet-BC-121 lose an accuracy of around 0:5%,0:41% and0:68% for SAW and 0:4%,0:99% and1:78%for SWAT respectively. 
At 80% sparsity, the accuracy loss on CIFAR10 and CIFAR100 is less than 1.8% for SAW and less than 2.5% for SWAT.

Figure 4: Comprehensive analysis of the sparsity vs. accuracy trade-off: (a) Accuracy of the SWAT and SAW algorithms on the CIFAR10 dataset. (b) Accuracy of the SWAT and SAW algorithms on the CIFAR100 dataset. The dashed line represents the baseline accuracy for the corresponding model. Data points for the SAW algorithm are represented as dots, whereas for the SWAT algorithm they are represented as stars.

Figure 5: Trend of the SWAT algorithm on the ImageNet dataset: (a) Validation curve of the SWAT algorithm. (b) Validation accuracy of the SWAT, SAW and DSG algorithms under different sparsity constraints. The dotted line represents the baseline back-propagation algorithm. 'RN18' represents ResNet-18, 'DN121' represents DenseNet-BC-121, and 'DSG' denotes the results reported by the Dynamic Sparse Graph (Liu et al., 2019) algorithm.

Accuracy on ImageNet: Figure 5a shows the validation curve of the SWAT algorithm on the ImageNet dataset for three different architectures, and Figure 5b shows the accuracy obtained by the SWAT and SAW algorithms. The results show that the SWAT and SAW algorithms lose negligible accuracy at 50% sparsity for all three architectures: the solution is within an accuracy loss of 0.26–1.01% compared to the baseline solution. For a high sparsity of 70%, ResNet-18, VGG-16 and DenseNet-BC-121 lose only around 1.52%, 1.6% and 2.26% accuracy for the SWAT and 1.42%, 1.28% and 1.82% for the SAW algorithm, respectively. Both algorithms perform better than the DSG algorithm proposed by Liu et al. (2019), which accelerates training by performing a dimensionality reduction search and executing the forward and backward passes in a low-dimensional space. The accuracy loss of DSG at 50% sparsity is around 2.83% for ResNet-18 and 1.54% for VGG-16, compared to SWAT's accuracy loss of 0.27% and 0.86% for ResNet-18 and VGG-16, respectively.

Impact on rate of Convergence: We define the rate of convergence as the number of epochs it takes to reach the saturation accuracy. Figure 5a shows the validation curve of the SWAT algorithm when training ResNet-18, VGG-16 and DenseNet-BC-121 on the ImageNet dataset. As shown in the figure, when the learning rate is 0.1 (i.e., between epochs 0 and 30), the SWAT algorithm reaches its saturation accuracy around the 15th epoch, approximately the same epoch at which the baseline algorithm also saturates. Similarly, when the learning rate is 0.01 (i.e., between the 30th and 40th epoch), both SWAT and the baseline saturate at the 35th epoch. The experiments at 50% and 70% sparsity show that the SWAT algorithm converges with a slight accuracy loss but at the same rate as the baseline algorithm.

Influence of Depth and Width: Network depth (# layers) and width (# channels) are two important design criteria.
Previous studies (Lu et al., 2017; Raghu et al., 2017; Sharir & Shashua, 2018) have found that both depth and width affect network expressivity. Increasing network depth helps in learning complex abstractions, whereas increasing width helps in learning more features.
SWATachieves a computation reduction of 2x,3.3x, and 5x at 50%, 70%, and 80%sparsity respectively. Note that the over-all overhead of implementing efficientTop-K operation using BFRT/introselect+ thresholding, as described in the pre-vious section, is only 1-2% additional computation during training. Another benefit of using SWAT3Top-K operation is performed on an absolute values.7Under review as a conference paper at ICLR 2020is that the model learns a sparse architecture and therefore, sparse weights are used during Inference.Thus, the same computational benefit of 2-5x is possible for Inference as well.050708005070800507080050708005070800507080Sparsity0.00.10.20.30.40.50.6Memory Overhead (GB)1.0xRN501.0xRN1011.0xVGG161.0xVGG191.0xDN1211.0xDN1612.0x2.0x2.0x2.0x2.0x2.0x3.3x3.3x3.3x 3.3x3.2x3.2x5.0x5.0x5.0x 5.0x4.8x4.8xConv-ParameterLinear-Parameter(a)050708005070800507080050708005070800507080Sparsity0.000.050.100.150.20Memory Overheadper sample (GB)1.0xRN501.0xRN1011.0xVGG161.0xVGG191.0xDN1211.0xDN1611.3x1.3x1.3x1.3x1.3x1.3x1.5x1.5x1.4x1.4x1.5x1.5x1.6x1.7x1.5x1.5x1.6x1.6xBN-ActivationConv-Activation (b)Figure 8: Reduction in memory accesses during the backward pass . (a) Reduction in parameteraccess (b) Reduction in activation access per sample (Dataset: ImageNet)Memory Overhead Reduction: During training, most of the weights and activations are stored inDRAM and accessing DRAM consumes three orders of magnitude more energy consumption thancomputation (Horowitz). So reducing the memory access during training will directly reduce theenergy consumption. SWAT algorithm uses sparse input activation ( al1) and weight ( wl1) in thebackward, so input activation and weight can be compressed and stored in the memory in sparse for-mat thereby reducing the DRAM access in the backward pass. Figure 8a shows the reduction of 2x,3.3x, and 5x at sparsity of 50%, 70% and 80% in the parameters access in the backward pass. Theother significant memory overhead is saving the activation, and this overhead is dependent not onlyon the model size but also on the batch-size used during training. Figure 8b shows the activationmemory overhead for different architectures for a mini-batch size of 1. The graph only shows theactivation of batch-normalization and convolutional layer since memory overhead for the activationsof the linear layer is negligible compared to them. Note that SWAT algorithm sparsifies the compu-tation of the linear, and convolutional layers, so full input activation of the batch-normalization layerare saved for the backward pass. Thus, SWAT algorithm achieves activation compression of around1.3x, 1.5x, and 1.7x at sparsity of around 50%, 70%, and 80%. The activation of batch-normalizationlayer limits the overall activation compression.4 EXPERIMENTAL ANALYSIS OF SWAT BEHAVIOURIn this section, we will give some experimental evidence which explains why the SWAT algorithmis working so well in practice. The experiment shows the general behavior of vector sparsificationin high dimensional space. Let us first define two new terminologies which we are going to use inthis section: “Top-K sparsification” and “Sparsification Angle”4. Top-K sparsification of a vector vselects K% of the highest magnitude component and set the rest of the component to zero. Sparsifi-cation angle is the angle between the original and the Top-K sparsified instance of that vector.4.1 V ECTOR SPARSIFICATION IN HIGH-DIMENSIONAL SPACEA vector in high-dimensional space behaves differently from their low dimensional counterpart. 
It is well known that in high dimensions, two independent isotropic vectors tend to be orthogonal. Recently, Anderson & Berg (2017) extended the analysis of the geometry of high-dimensional vectors to binary vectors. They proved that the angle between any random vector drawn from a rotationally invariant distribution and its binarized version is concentrated around 37°. Thus, binarization of a high-dimensional vector approximately preserves its direction. We apply a similar kind of geometric analysis to high-dimensional sparsified vectors. We show that sparsification indeed preserves direction in high-dimensional space, which is contrary to our low-dimensional intuition.

The first question we need to answer is how we should sparsify a vector v ∈ R^d, so that the resulting vector (with only K% non-zero components) has the minimum sparsification angle with respect to the original vector. We found that the angle is minimized when the K% non-zero components correspond to the K% highest-magnitude components, i.e., Top-K sparsification. The proof is in the appendix.

We performed an experiment analyzing how Top-K sparsification affects the angle, as shown in Figure 9a. Here we show the sparsification angle distribution for a 1000-dimensional vector drawn from a standard normal distribution at different sparsities. The peak of the sparsification angle at 90% sparsity is concentrated around 48°, which is much less than the peak for random vectors, which is concentrated around 90°. Similarly, the peak at up to 80% sparsity is concentrated at an angle of only 36.4°. This suggests that the deviation caused by sparsification is indeed small in high dimensions.

For our next experiment, shown in Figure 9b, we study how much a vector of a given dimension, drawn from a standard normal distribution, can be maximally sparsified such that the sparsification angle is less than θ. We can see that the percentage of Top-K components needed for θ = {20°, 30°, 40°} is only around 43%, 29%, and 18% respectively, with a variance of less than 3%, as shown in Figure 9c. Thus, these experiments suggest that a high-dimensional vector can be sparsified up to 70% while incurring a deviation (θ) of only 30°. All the above experimental results depend on the distribution from which the random vectors are drawn, so in Figure 9e we calculate the sparsification angle during training of ResNet-18 on the CIFAR100 dataset at 70% sparsity. Here the sparsification angle for weights and activations is less than 36° for all the layers in the network. So the experiment suggests that the above analysis is applicable during training.

Figure 9: Vector sparsification in high dimensions approximately preserves direction. (a) Shows the sparsification angle distribution at different Top-K percentages for a 1000-dimensional vector over 10,000 trials. Random represents the angle distribution between 2 random vectors.
(b) and (c) Show the percentage of Top-K components needed for the sparsification angle to be within θ over 1000 trials, and the variance in those trials. (d) Shows the relation between Top-K sparsification and the sparsification angle over 1000 trials. (e) Shows how the sparsification angle (at 70% sparsification) varies during training for the ResNet-18 architecture on the CIFAR100 dataset.

5 RELATED WORK

We can classify most of the previous studies that focus on accelerating training or inference into the following broad categories:

Pruning: Most of the pruning work focuses on inference optimization. Weight pruning can be classified into two broad categories: structured and unstructured pruning. The idea of unstructured pruning can be traced back to LeCun et al. (1990); Hassibi & Stork (1993), which prune the network using the saliency of parameters derived from second-order information of the loss function. Han et al. (2015b;a) pruned network parameters using a magnitude-based method. There are several other unstructured pruning methods, such as Molchanov et al. (2017); Louizos et al. (2017), but the drawback of all these methods is that it is difficult to extract parallelism on hardware. In contrast, structured pruning, such as Liu et al. (2017); Li et al. (2016b); He et al. (2017); Luo et al. (2017); Wen et al. (2016); Molchanov et al. (2016), removes entire channels or filters at a time, which preserves the inherent regular computation structure and therefore makes it easy to extract parallelism on hardware.

Quantization: Quantized networks can be used to accelerate both training and inference, since energy consumption in hardware is directly proportional to the bit-width of the operands. Many works focus on quantizing weights for efficient inference, such as McDonnell (2018); Wu et al. (2016); Zhu et al. (2016); Li et al. (2016a); Courbariaux et al. (2015), whereas much other work focuses on accelerating training as well, such as Banner et al. (2018); Choi et al. (2018); Wang et al. (2018); Lin et al. (2017a); Zhou et al. (2016); Courbariaux et al. (2016); Rastegari et al. (2016); Gupta et al. (2015). Some other work, such as Zhao et al. (2019); McKinstry et al. (2018); Zhou et al. (2017); Mellempudi et al. (2017), shows that training from scratch is not necessary for finding a quantized model; one can instead obtain a quantized model from a pre-trained full-precision model. Other work focuses on discrete training and inference using integers, such as Wu et al. (2018); Das et al. (2018); Jacob et al. (2018); Lin et al. (2016), since integer adders/multipliers are more efficient than floating-point adders/multipliers. A few studies, such as Louizos et al. (2019); Jung et al. (2019); Zhang et al. (2018); Zhou et al. (2018); Hou & Kwok (2018); Hou et al. (2016), formulate quantization as an optimization problem to minimize the accuracy loss due to quantization. A few other works, such as Yang et al. (2019); De Sa et al. (2018), focus on improving the learning algorithm by proposing novel stochastic averaging of the low-precision iterates, using SVRG to reduce variance, or dynamically adjusting the precision representation using bit centering. Some works, instead of quantizing the entire model to a fixed bit-width, focus on per-tensor or per-parameter quantization, such as Sakr & Shanbhag (2019); Khoram & Li (2018).
Compared to all these works, our work is orthogonal, as we eliminate computation instead of reducing computation precision.

Tensor Decomposition and Dimensionality Reduction: There are a few works on compressing models by performing tensor decomposition or by learning a compact structure. Alvarez & Salzmann (2017) introduce a regularizer that promotes a low-rank parameter matrix; the algorithm thus encourages the model to learn a compact structure by accounting for compression during training itself. Novikov et al. (2015) showed that tensor decomposition can be used to compress a fully connected layer using only a few parameters. Later, Garipov et al. (2016) extended this to the convolutional layer; the idea was to reshape the kernel into a higher-order tensor and then factorize it.

Distributed Training: A few works (Stich et al., 2018; Lin et al., 2017b) look at reducing the communication overhead in distributed training by transferring only sparse gradients during the gradient aggregation step, but these works accumulate the remaining gradients locally for subsequent iterations. Compared to these works, our objective is different: we are concerned with accelerating single-node training, whereas their objective is minimizing communication during distributed training.

6 CONCLUSION

In this work, we propose SWAT, a robust training algorithm based on the insight that sparsifying weights and activations during training has little impact on convergence. SWAT sparsifies both the forward and the backward passes, thereby eliminating much redundant computation such as additions and multiplications by zero. SWAT is a simpler technique than the recently proposed dimensionality reduction (DSG) technique for accelerating training, and it performs better. Our experiments over various benchmarks demonstrate a significant computation reduction of up to 2-5x for training and inference, a memory footprint reduction for activations of 1.3-1.7x, and a reduction in memory access overhead for weights of 2-5x in the backward pass.
S1xw9KbyKH
Official Blind Review #3
6: Weak Accept
This paper studies training neural networks with sparse weights and sparse activations (SWAT training). By using sparse weights in forward passes as well as sparse weights and activations in backward passes, SWAT can reduce the computation overhead and also reduce the training memory footprint. The primary contributions of the paper are threefold: 1) The authors empirically compare the impact of (activation) gradient sparsity and weight + activation sparsity on the model performance---the comparison shows that weight + activation sparsity has less influence on the model accuracy; 2) Across different models on the CIFAR and ImageNet datasets, SWAT can reduce the training flops by 50% to 80% while using roughly 2 to 3x less training memory footprint (weight + activation); 3) The authors empirically study why training using top-K based sparsification can attain strong model accuracy---the magnitude-based top-K approach roughly preserves the directions of the vectors.

I think the claimed contributions are well-validated in general. The design decisions of the approach are well supported by empirical observations, and the components of the approach (different top-K methods) are studied properly. Additionally, I like the authors' synthetic-data studies that shed light on why top-K based sparsity can work well.

Given the above reasons, I give weak accept, and I am willing to raise the score if the following questions / concerns can be resolved in the rebuttal / future draft:

1. In results such as in Figure 4, we observe that using intermediate levels of sparsity can actually demonstrate better generalization performance than the dense baseline training approach. I was wondering if this is because the default hyperparameters produce better training loss in sparse training than in dense training, and consequently the sparse training test performance is also improved over dense training. Without showing this, it is not fully convincing that intermediate sparsity helps prevent overfitting and generalizes better (as the authors discussed in the text).

2. For "Impact on Convergence" in Section 3.2, it is not clear to me what the authors are using as a metric for the degree of convergence. Thus I cannot evaluate the claims here.

3. For "Efficient Top-K Implementation" in Section 3.2, the authors suggest only computing the K-th largest elements periodically to further improve efficiency. However, empirical evidence on whether this approach significantly degrades the model performance at the end of training is not provided.

4. For the GFLOPs comparison in Figure 7, could the authors elaborate on what operations are included in the count? As sparse operations require additional indexing operations, I was wondering whether the GFLOPs can realistically reflect the real latency / energy efficiency of the SWAT approach.

5. How is the memory access count calculated at the end of page 7? Is it counting the number of floating-point values (activations, activation gradients, weights) that need to be fetched for the forward and backward passes?

6. In the first paragraph on page 8 (the last paragraph above Section 4), do the authors imply that the activations of BN layers are not sparsified? Could the authors provide a bit more evidence on how (and why) sparsification of BN activations impacts the model performance.
XTPfeOoZD8
NeurIPS.cc/2022/Workshop/SVRHM
2022
Measuring the Alignment of ANNs and Primate V1 on Luminance and Contrast Response Characteristics
["Stephanie Olaiya", "Tiago Marques", "James J. DiCarlo"]
Some artificial neural networks (ANNs) are the current state-of-the-art in modeling the primate ventral stream and object recognition behavior. However, how well they align with luminance and contrast processing in early visual areas is not known. Here, we compared luminance and contrast processing in ANN models of V1 and primate V1 at the single-neuron level. Model neurons have luminance and contrast response characteristics that differ from those observed in macaque V1 neurons. In particular, model neurons have responses weakly modulated by changes in luminance and show non-saturating responses to high-contrast stimuli. While no model perfectly matches macaque V1, there is great variability in the models' V1 alignment. Variability in luminance and contrast scores is not correlated, suggesting that there are trade-offs in the space of ANN V1 models.
["Primary Visual Cortex", "Luminance", "Contrast", "Artificial Neural Networks"]
Measuring the Alignment of ANNs and Primate V1 on Luminance and Contrast Response Characteristics

Stephanie Olaiya 1,*, Tiago Marques 2,3,4,5,6,*,§, James J. DiCarlo 2,3,4
1 Department of Neuroscience, Brown University
2 Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139
3 McGovern Institute for Brain Research, MIT, Cambridge, MA 02139
4 Center for Brains, Minds and Machines, MIT, Cambridge, MA 02139
6 Champalimaud Clinical Centre, Champalimaud Foundation, Lisbon
* Joint first authors (equal contribution)
§ Corresponding author: tiago.marques@research.fchampalimaud.org

Abstract

Some artificial neural networks (ANNs) are the current state-of-the-art in modeling the primate ventral stream and object recognition behavior. However, how well they align with luminance and contrast processing in early visual areas is not known. Here, we compared luminance and contrast processing in ANN models of V1 and primate V1 at the single-neuron level. Model neurons have luminance and contrast response characteristics that differ from those observed in macaque V1 neurons. In particular, model neurons have responses weakly modulated by changes in luminance and show non-saturating responses to high-contrast stimuli. While no model perfectly matches macaque V1, there is great variability in the models' V1 alignment. Variability in luminance and contrast scores is not correlated, suggesting that there are trade-offs in the space of ANN V1 models.

1 Introduction

The primate ventral visual stream is a set of hierarchically organized cortical areas that support visually guided behaviors such as object recognition [1, 2]. Understanding the computations that give rise to this complex visual behavior has been a major goal in visual neuroscience [3]. Over the past years, some artificial neural networks (ANNs), which have also achieved human-level visual abilities in computer vision tasks [4], have been used to explain with unparalleled success the response patterns of neurons along the visual ventral stream areas [5, 6, 7, 8]. However, these model-to-brain comparisons have received some criticism due to the fitting methods, which linearly combine thousands of model features to explain the responses of a few biological neurons [9, 10].

Recently, a novel approach for comparing models to brains that does not require the traditional fitting procedure has been proposed [11]. This method explicitly maps single artificial model neurons to single biological neurons and compares the similarity of the distributions of response properties between the two. Another advantage of this approach is that it can leverage existing published studies, for which raw data is not accessible, and turn them into quantitative tests for evaluating model-to-brain similarity. Using this strategy, Marques et al. (2021) showed that model neurons have surprisingly similar response properties to neurons in primate V1 and that the population distributions in the two systems are closely matched [11]. Despite this, no model in a large pool of candidate models was able to perfectly account for all the response properties studied.

Here, we extended this approach to study the alignment of ANNs and primate V1 on luminance and contrast response characteristics.
There is a vast literature characterizing in detail the responses of neurons in early visual areas to changes in luminance (a linear measure of light, spectrally weighted for normal vision, and measured in cd/m2) and contrast (the ratio of intensity between the brightest white and the darkest black of a stimulus). These properties are particularly interesting since most ANNs struggle to generalize to distributional shifts in both the brightness and contrast of their inputs [12]. It remained to be seen whether the failure of models to deal with brightness and contrast perturbations was due to their lack of alignment with primate V1 luminance and contrast processing at the single-neuron level. We make the following novel contributions:

1. We developed 8 new benchmarks that measure the alignment of models to primate V1 on luminance and contrast response characteristics.
2. Using these benchmarks, we evaluated a pool of 15 candidate models of primate V1 built from pre-assigned layers of ANN models. Luminance scores range from 0 to 0.79 and contrast scores range from 0.51 to 0.76, showing that no model studied is able to emulate primate V1 contrast and luminance processing.
3. We compared the luminance and contrast scores with the other V1 response property scores and observed that these are on average weakly correlated, suggesting that the space of V1 models originating from ANN layers exhibits trade-offs in its alignment to V1.

2 Methods

To measure the alignment between the ANNs' luminance and contrast processing and that of primate V1, we adapted an existing methodology [11]. In summary, it consists of extracting primate V1 data related to luminance and contrast response properties from existing studies and replicating those experiments in silico in an ANN model of V1. Then, the distributions of those response properties in both primate V1 and the model are quantitatively compared using a similarity metric. We used data from Kinoshita and Komatsu (2001) for the luminance benchmarks [13] and Sclar et al. (1990) for the contrast benchmarks [14].

2.1 Visual Stimuli

We designed visual stimuli to approximate those used in the two studies that provided the primate V1 data. Since models are trained on images with RGB-encoded pixel values, when implementing our stimuli we used a gamma compression function that maps from the physical luminance space to the digital RGB-encoded image space (we used the standard range of 0 to 120 cd/m2 to correspond to the whole dynamic range of the input RGB values).

For the luminance response properties, we used seven gray uniform-luminance squares which subtended 10 degrees of visual space [13]. The luminance of the squares varied from 0.1 to 100 cd/m2 on a logarithmic scale. For the contrast response properties, we used square achromatic sinusoidal gratings [14]. To determine the preferred spatial frequency and orientation of each neuron, we presented gratings of 8 orientations (spaced by 22.5 deg) and 4 spatial frequencies (from 0.77 to 6.17 cpd on a log scale). For each neuron, we selected the spatial frequency and orientation combination that elicited the strongest response. The gratings were 2.8 by 3.4 deg and were displayed on a black background. To measure neural responses to contrast, the contrast of the gratings varied from 0.02 to 1 logarithmically, with the maximum contrast corresponding to 120 cd/m2.
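As a concrete illustration of this stimulus pipeline, the sketch below builds an achromatic sinusoidal grating in physical luminance units and gamma-compresses it into 8-bit RGB values. The paper specifies only that a gamma compression maps 0 to 120 cd/m2 onto the full RGB range; the power-law exponent of 1/2.2, the pixel resolution, and the degrees-per-pixel scale used here are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np

L_MAX = 120.0  # cd/m2 mapped to the top of the 8-bit RGB range (Section 2.1)

def luminance_to_rgb(luminance):
    """Gamma-compress physical luminance (cd/m2) into [0, 255] pixel values.

    A plain power-law encoder with exponent 1/2.2 is assumed here; the paper
    does not state the exact gamma curve it used.
    """
    normalized = np.clip(luminance / L_MAX, 0.0, 1.0)
    return np.round(255.0 * normalized ** (1.0 / 2.2)).astype(np.uint8)

def grating_luminance(size, sf_cpd, orientation_deg, contrast, deg_per_pixel=0.05):
    """Achromatic sinusoidal grating in luminance units, mean at L_MAX / 2.

    Michelson contrast: (L_max - L_min) / (L_max + L_min).
    """
    half = size // 2
    ys, xs = np.mgrid[-half:half, -half:half] * deg_per_pixel
    theta = np.deg2rad(orientation_deg)
    phase = 2.0 * np.pi * sf_cpd * (xs * np.cos(theta) + ys * np.sin(theta))
    mean_lum = L_MAX / 2.0
    return mean_lum * (1.0 + contrast * np.sin(phase))

# Example: a 0.77 cpd grating at the lowest contrast used in the experiments.
image = luminance_to_rgb(
    grating_luminance(224, sf_cpd=0.77, orientation_deg=22.5, contrast=0.02))
```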
2.2 Recording from ANN-based V1 models

For each ANN, a layer was pre-committed to V1 using a publicly available neural predictivity benchmark from the Brain-Score platform [8]. Only neurons from the pre-committed V1 layer were analyzed in this study. Stimuli were centered at a specific location near the fovea (0.5 deg eccentricity on both axes). We measured the functional receptive fields (RFs) of all V1 model neurons using a grid of small gratings, as previously described [11]. Only neurons that had their RF center at the location around the stimulus center were considered for further analysis. Model activations were transformed to neuronal firing rates using an affine transformation, as in [11].

We analyzed the responses of the neurons similarly to the corresponding studies. For the luminance experiment, neuronal activations were recorded without baseline correction. For the contrast experiment, we considered either the first harmonic of the Fourier transform of the response (AC component) or the mean response with baseline correction (DC component), depending on whether the neuron was a simple or complex cell.

2.3 Calculating response properties

We calculated four luminance response properties from the luminance tuning curves: surface responsive, bright slope, dark slope, and normalized delta slope. Surface responsive classifies neurons as surface responsive or non-responsive. Kinoshita et al. classified a neuron as surface responsive if its response to the brightest or darkest stimuli was significantly different from its response to the gray (3 cd/m2) stimulus [13]. Only surface-responsive neurons were used in creating the distributions of the other three response properties. The bright slope was the slope of the regression line fitted to responses to the 'bright' stimuli (luminance of 3 cd/m2 and higher). The dark slope was the slope of the regression line fitted to responses to the dark stimuli (3 cd/m2 and lower). The normalized delta slope quantifies the normalized difference between the bright slope and the dark slope.

We calculated four contrast response properties from the contrast tuning curves of each ANN neuron: standard neuron, maximum response, exponent, and semi-saturation constant.

[Figure 1: Measuring luminance and contrast response properties in single neurons of ANN models of V1. A. Example neuronal responses in macaque V1 [13, 14]. Left column, luminance tuning curve; right column, contrast tuning curve. Responses are represented by circles and fits to responses by gray curves. Response properties for example neurons are shown on top of each plot. B. Same as in A but for two example neurons from the same V1 model. Each row corresponds to an example neuron. Examples of stimuli used in the experiment are shown at the bottom.]

First, the responses of each neuron were fitted using the hyperbolic ratio function, which is used in Sclar et al. to characterize contrast response relationships [15, 14]:

$R = R_{\max}\,\frac{c^{n}}{c^{n} + c_{50}^{n}}$    (1)
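A minimal sketch of this fitting step, assuming a standard nonlinear least-squares fit: the initial values, parameter bounds, and the particular R-squared computation below are illustrative guesses, not the paper's settings.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic_ratio(c, r_max, n, c50):
    """Contrast response function R = r_max * c**n / (c**n + c50**n) (Eq. 1)."""
    return r_max * c**n / (c**n + c50**n)

def fit_contrast_response(contrasts, responses):
    """Fit Eq. 1 to one neuron's contrast tuning curve and report R^2.

    Initial values and bounds are illustrative, not the paper's settings.
    """
    p0 = [float(responses.max()), 2.0, 0.3]
    bounds = ([0.0, 0.1, 1e-3],
              [10.0 * max(float(responses.max()), 1e-6), 10.0, 10.0])
    params, _ = curve_fit(hyperbolic_ratio, contrasts, responses,
                          p0=p0, bounds=bounds)
    residuals = responses - hyperbolic_ratio(contrasts, *params)
    ss_tot = float(np.sum((responses - responses.mean()) ** 2))
    r_squared = 1.0 - float(np.sum(residuals ** 2)) / ss_tot if ss_tot > 0 else 0.0
    return dict(zip(("r_max", "n", "c50"), params)), r_squared

# Synthetic example: 8 contrasts from 0.02 to 1 on a log scale, as in Section 2.1.
contrasts = np.logspace(np.log10(0.02), 0.0, 8)
clean = hyperbolic_ratio(contrasts, 40.0, 2.5, 0.35)
noisy = clean + np.random.default_rng(0).normal(0.0, 0.5, 8)
params, r2 = fit_contrast_response(contrasts, noisy)
```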
The standard neuron response property was a classification describing whether an artificial neuron had standard V1-like responses to contrast. We characterized this as having a response drop at the highest contrast of less than 20% relative to the peak, and a good fit to the hyperbolic ratio function (R2 > 0.9). Only neurons classified as standard were included in the distributions of the other response properties. From the fitted parameters, we extracted the other three properties: maximum response (Rmax), exponent (n), and semi-saturation constant (c50).

2.4 Comparing response property distributions

Empirical data for the neuronal response property distributions were extracted from the two studies using WebPlotDigitizer. We replicated both the luminance and the contrast studies 1,000 times by performing in-silico neurophysiological experiments in the models. For each experiment, we randomly sampled the same number of neurons used in each study and computed the distributions of the several response properties. Primate V1 distributions and ANN V1 distributions were compared using a similarity metric that relies on the Kolmogorov-Smirnov distance (previously described in [11]). This metric provides a quantitative score of the alignment of the ANN V1 distribution to that of primate V1 as seen in the experimental study.
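A sketch of one plausible instantiation of this comparison, under stated assumptions: similarity is taken as 1 minus the two-sample Kolmogorov-Smirnov statistic, averaged over repeated subsampled in-silico experiments. The exact metric in [11] (including any ceiling normalization) may differ, and the model neuron pool is assumed to be at least as large as the primate sample.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_similarity(model_values, primate_values):
    """Similarity score from the two-sample Kolmogorov-Smirnov distance.

    1 - D is one plausible KS-based similarity; the exact metric used in
    Marques et al. [11] may differ.
    """
    d_statistic = ks_2samp(model_values, primate_values).statistic
    return 1.0 - d_statistic

def benchmark_score(model_values, primate_values, n_experiments=1000, rng=None):
    """Average similarity over repeated subsampled in-silico experiments.

    Each experiment draws as many model neurons as were recorded in the
    original primate study, mirroring Section 2.4. Assumes
    len(model_values) >= len(primate_values).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    n_primate = len(primate_values)
    scores = []
    for _ in range(n_experiments):
        sample = rng.choice(model_values, size=n_primate, replace=False)
        scores.append(ks_similarity(sample, primate_values))
    return float(np.mean(scores)), float(np.std(scores))
```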
3 Results

We analyzed 15 models in our study: AlexNet, ResNet18, ResNet34, ResNet50, ResNet50AT (adversarially trained), ResNet50SIN-IN-IN (trained with style transfer), CORnet-Z, CORnet-S, VOneResNet-50NS (non-stochastic), vgg-16, vgg-19, densenet-121, densenet-169, pnasnet, and nasnet [16, 17, 18, 19, 20, 21, 22, 23]. Most model neurons show varying luminance response profiles (Figure 1). These range from monotonic profiles, in which responses increase or decrease with increasing luminance, to V-shaped profiles, where neurons are either gray-preferring (peak at intermediate luminance values) or have their lowest responses at intermediate luminance values. In terms of contrast response characteristics, most model neurons have contrast tuning curves with non-saturating responses to high contrast.

[Figure 2: Distributions of luminance and contrast response properties in a candidate V1 model and macaque V1. A. Distributions of 6 example response properties (dark slope, bright slope, normalized delta slope, exponent, semi-saturation constant, maximum response) in macaque V1 (from published studies, black line) and the best V1 model in the pool (ResNet50AT, purple). Model distributions are obtained by performing in silico experiments; the thick line is the mean over 1,000 experiments and the shaded region is the SD. B. Similarity scores for the eight single-neuron response properties for the same V1 model (error bars represent mean and SD). Response properties are grouped into two categories: luminance and contrast.]

Comparing the distributions of response properties in the models with those in macaque V1, we observe striking differences (Figure 2 shows the distributions and corresponding scores for the best-performing model, ResNet50AT). In particular, the response properties with the lowest scores are the semi-saturation constant and the bright slope. Analyzing the tuning curves of individual neurons provides some clues for these differences. Unlike primate V1 neurons, model V1 neurons are weakly modulated by luminance: most neurons do not show large changes in response to changes in surface luminance (Figures 1 and 2), so their bright slopes are not large in magnitude. Also, unlike primate V1 neurons, most ANN V1 neurons do not show saturating responses to high-contrast stimuli, resulting in much larger c50 and Rmax values than those in primate V1 (Figures 1 and 2). Our analysis also showed that most models have a lower proportion of standard and surface-responsive neurons than is seen in primate V1. Finally, across all models analyzed, ResNet50AT is the highest-performing model on average for both the luminance and contrast benchmarks (Figure 2).

Despite all models failing to match primate V1 in all the response properties, how well they align to V1 varies considerably. Luminance scores (average of the corresponding four properties) range from 0 to 0.79, while contrast scores (average of the corresponding four properties) range from 0.51 to 0.76. Interestingly, there is no correlation between luminance and contrast scores in the models studied (Figure 3A). When comparing the scores on these benchmarks with those on the other seven V1 response property benchmark categories (corresponding to 22 individual benchmarks [11]), we observe a similar trend: while some pairs of benchmarks appear to be correlated, on average, pair-wise correlations between scores on V1 benchmarks are very weak. This suggests that there are trade-offs in this model space in terms of alignment to multiple aspects of V1 processing (Figure 3B).

[Figure 3: Luminance and contrast scores are not correlated with each other or with other V1 response property benchmarks. A. Luminance and contrast average scores for the 15 models analyzed in this study (each score is the average of the four corresponding response property benchmarks; n = 15). B. Pair-wise correlations between the nine V1 response property category scores (orientation, spatial frequency, response selectivity, RF size, surround modulation, texture modulation, response magnitude, contrast, luminance); color scale from -1.0 to 1.0. Each category is the average of multiple individual benchmarks [11].]
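A sketch of the Figure 3B analysis, under stated assumptions: given a models-by-categories score matrix, compute all pair-wise correlations of category scores across models. The use of Pearson correlation (rather than, e.g., Spearman) and the placeholder scores are assumptions; the paper does not state which correlation coefficient was used.

```python
import numpy as np

CATEGORIES = ["orientation", "spatial_freq", "resp_selectivity", "rf_size",
              "surround_mod", "texture_mod", "resp_magnitude",
              "contrast", "luminance"]

def pairwise_category_correlations(scores):
    """Pair-wise correlations between benchmark category scores across models.

    `scores` is an (n_models, n_categories) array; each entry is one model's
    average score on one category. Pearson correlation is assumed here.
    """
    assert scores.shape[1] == len(CATEGORIES)
    return np.corrcoef(scores, rowvar=False)  # (n_categories, n_categories)

# Example with random placeholder scores for 15 models (not the paper's data).
rng = np.random.default_rng(0)
corr = pairwise_category_correlations(rng.uniform(0.0, 1.0, size=(15, 9)))
mean_offdiag = (corr.sum() - np.trace(corr)) / (corr.size - len(corr))
```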
4 Discussion

In this work, we evaluated whether ANN models of V1 are aligned with macaque V1 luminance and contrast processing at the single-neuron level. While most model neurons respond to contrast and luminance stimuli, their responses differ from those observed in populations of macaque V1 neurons. In particular, model neurons have responses weakly modulated by changes in luminance and show non-saturating responses to high-contrast stimuli. Furthermore, models differ in their alignment to V1, with the best model being an adversarially-trained one, a result that is consistent with previous findings on other benchmarks [22, 11].

While we view this study as an important first step towards studying luminance and contrast processing in ANN models of V1, much work remains to be done. We sampled a considerable pool of models with varying architectures and training procedures. However, the space of V1 models is considerably larger, and future work should explore more models with more diverse architectures, as well as analyze the alignment to V1 of other layers in the same hierarchical models (no layer pre-commitment). Furthermore, comparisons of luminance and contrast scores with other brain benchmarks, particularly those in higher visual areas and behavior, may provide insight into how luminance and contrast processing in early vision relates to downstream processing.

Acknowledgments and Disclosure of Funding

This work was started during the MIT Summer Research Program (MSRP). S.O. would like to thank the director of the program, Mandana Sassanfar, and MIT for this research opportunity. S.O. would also like to thank the members of the DiCarlo lab and the other MSRP students for the critical discussions and feedback. This work was supported by the PhRMA Foundation Postdoctoral Fellowship in Informatics (T.M.), the Semiconductor Research Corporation (SRC) and DARPA (J.J.D.), Office of Naval Research grant MURI-114407 (J.J.D.), the Simons Foundation grant SCGB-542965 (J.J.D.), and the MIT-IBM Watson AI Lab grant W1771646 (J.J.D.).

References

[1] D. J. Felleman and D. C. Van Essen. "Distributed hierarchical processing in the primate cerebral cortex." In: Cerebral Cortex 1.1 (1991), pp. 1-47. URL: http://www.ncbi.nlm.nih.gov/pubmed/1822724
[2] Mortimer Mishkin, Leslie G. Ungerleider, and Kathleen A. Macko. "Object vision and spatial vision: two cortical pathways". In: Trends in Neurosciences 6 (1983), pp. 414-417. DOI: 10.1016/0166-2236(83)90190-X
[3] James J. DiCarlo and David Cox. "Untangling invariant object recognition". In: Trends in Cognitive Sciences 11.8 (2007), pp. 333-341. DOI: 10.1016/j.tics.2007.06.010
[4] Kaiming He et al. "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification". In: Proceedings of the IEEE International Conference on Computer Vision (2015), pp. 1026-1034. DOI: 10.1109/ICCV.2015.123
[5] Daniel L. K. Yamins et al. "Performance-optimized hierarchical models predict neural responses in higher visual cortex". In: Proceedings of the National Academy of Sciences 111.23 (2014), pp. 8619-8624. DOI: 10.1073/pnas.1403112111
[6] Seyed-Mahdi Khaligh-Razavi and Nikolaus Kriegeskorte. "Deep supervised, but not unsupervised, models may explain IT cortical representation". In: PLoS Computational Biology 10.11 (Nov. 2014), e1003915.
[7] Umut Güçlü and Marcel A. J. van Gerven. "Deep Neural Networks Reveal a Gradient in the Complexity of Neural Representations across the Brain's Ventral Visual Pathway". In: The Journal of Neuroscience 35.27 (2015), pp. 10005-10014. DOI: 10.1523/JNEUROSCI.5023-14.2015
[8] Martin Schrimpf et al. "Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?" In: bioRxiv (2018). DOI: 10.1101/407007
[9] Andrew Saxe, Stephanie Nelli, and Christopher Summerfield. "If deep learning is the answer, what is the question?" In: Nature Reviews Neuroscience 22.1 (2021), pp. 55-67. DOI: 10.1038/s41583-020-00395-8
[10] Thomas Serre. "Deep Learning: The Good, the Bad, and the Ugly". In: Annual Review of Vision Science 5.1 (Sept. 2019), pp. 399-426. DOI: 10.1146/annurev-vision-091718-014951
[11] Tiago Marques, Martin Schrimpf, and James J. DiCarlo. "Multi-scale hierarchical neural network models that bridge from single neurons in the primate primary visual cortex to object recognition behavior". In: bioRxiv (2021). DOI: 10.1101/2021.03.01.433495
[12] Dan Hendrycks and Thomas Dietterich. "Benchmarking neural network robustness to common corruptions and perturbations". In: 7th International Conference on Learning Representations, ICLR 2019 (2019), pp. 1-16.
[13] Masaharu Kinoshita and Hidehiko Komatsu. "Neural Representation of the Luminance and Brightness of a Uniform Surface in the Macaque Primary Visual Cortex". In: Journal of Neurophysiology 86.5 (Nov. 2001), pp. 2559-2570. DOI: 10.1152/jn.2001.86.5.2559
[14] Gary Sclar, John H. R. Maunsell, and Peter Lennie. "Coding of image contrast in central visual pathways of the macaque monkey". In: Vision Research 30.1 (Jan. 1990), pp. 1-10. DOI: 10.1016/0042-6989(90)90123-3
[15] D. G. Albrecht and D. B. Hamilton. "Striate cortex of monkey and cat: contrast response function." In: Journal of Neurophysiology 48.1 (July 1982), pp. 217-237. DOI: 10.1152/jn.1982.48.1.217
[16] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. "ImageNet Classification with Deep Convolutional Neural Networks". In: NIPS (2012), pp. 1097-1105.
[17] Kaiming He et al. "Deep Residual Learning for Image Recognition". In: (Dec. 2015). arXiv: 1512.03385 [cs.CV]
[18] Karen Simonyan and Andrew Zisserman. "Very Deep Convolutional Networks for Large-Scale Image Recognition". In: (Sept. 2014). arXiv: 1409.1556 [cs.CV]
[19] Aleksander Madry et al. "Towards Deep Learning Models Resistant to Adversarial Attacks". In: (June 2017). arXiv: 1706.06083 [stat.ML]
[20] Jonas Kubilius et al. "Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs". In: Advances in Neural Information Processing Systems 32. Curran Associates, Inc., 2019, pp. 12805-12816. URL: http://papers.nips.cc/paper/9441-brain-like-object-recognition-with-high-performing-shallow-recurrent-anns.pdf
[21] Robert Geirhos et al. "ImageNet-Trained CNNs Are Biased Towards Texture". In: International Conference on Learning Representations (2019), pp. 1-20. arXiv: 1811.12231
[22] Joel Dapello et al. "Simulating a primary visual cortex at the front of CNNs improves robustness to image perturbations". In: NeurIPS (2020), pp. 1-30. DOI: 10.1101/2020.06.16.154542
[23] Gao Huang et al. "Densely Connected Convolutional Networks". In: (Aug. 2016). arXiv: 1608.06993 [cs.CV]

Checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
[Yes] See Section 3
(b) Did you describe the limitations of your work? [Yes] See Section 4
(c) Did you discuss any potential negative societal impacts of your work? [N/A]
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [N/A]
(b) Did you include complete proofs of all theoretical results? [N/A]
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] We are preparing a code and data release for a full publication
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes]
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a URL? [No]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
88g-mj48Tbk
Moving towards psychological plausibility in addition to neural plausibility for CNNs
6: Marginally above acceptance threshold
This paper attempts to make CNNs more psychologically plausible by investigating behaviour in addition to neural plausibility. Comparisons between single-neuron recordings of non-human primate V1 spiking data and Deep Neural Network neurons are made. Specifically, luminance and contrast response characteristics are investigated & it is reported that DNN neurons have responses weakly modulated by luminance and non-saturating responses to contrast. This is an essential step towards understanding if CNNs are a plausible model for the primate ventral visual stream & how they can be improved.

* Strengths:
1. Comparisons between DNNs & single neuron recordings.
2. Good stimuli design
3. Extensive model benchmarking

* Weaknesses:
* Major:
1. Since the paper only tests CNN models, it would be a better claim if it was about CNNs rather than ANNs (in the title, abstract, etc.)
2. The NHP V1 data papers seem fairly old; it would be good to justify the reason behind selecting the data from these papers rather than newer/other papers. Is it due to availability?
3. While mentioning the selection of neurons for analysis, for example, surface-responsive neurons, it would be helpful to show the number/percentage of neurons analysed instead of stating most neurons. Specifically lines 87, 107, 116, 120, 127, 129 and 131.
4. Exact model scores are not mentioned; they should be mentioned to have a better understanding of the results.
5. It would be interesting and helpful to see between-model comparisons to demonstrate which property is correlated with scores. For example, scale (number of parameters), brain score, robustness, performance, etc.

* Minor:
1. Figure 1 title A, "right curve, contrast tuning curve" -> "right column, contrast tuning curve"
2. Line 124, mention the best performing model, i.e. ResNet50AT.
3. In Figure 2A, the first row, please mention the y-label
4. In Figure 3B, please mention "orientation" in the first row and "Luminance" in the last column.

* Conclusion: The paper attempts an important step towards psychological plausibility, but the reporting of the results should be improved to paint a better picture.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
Hyl8yANFDB
ICLR.cc/2020/Conference
2020
Assessing Generalization in TD methods for Deep Reinforcement Learning
["Emmanuel Bengio", "Doina Precup", "Joelle Pineau"]
Current Deep Reinforcement Learning (DRL) methods can exhibit both data inefficiency and brittleness, which seem to indicate that they generalize poorly. In this work, we experimentally analyze this issue through the lens of memorization, and show that it can be observed directly during training. More precisely, we find that Deep Neural Networks (DNNs) trained with supervised tasks on trajectories capture temporal structure well, but DNNs trained with TD(0) methods struggle to do so, while using TD(lambda) targets leads to better generalization.
["reinforcement learning", "deep learning", "generalization"]
ABSTRACT

Current Deep Reinforcement Learning (DRL) methods can exhibit both data inefficiency and brittleness, which seem to indicate that they generalize poorly. In this work, we experimentally analyze this issue through the lens of memorization, and show that it can be observed directly during training. More precisely, we find that Deep Neural Networks (DNNs) trained with supervised tasks on trajectories capture temporal structure well, but DNNs trained with TD(0) methods struggle to do so, while using TD(λ) targets leads to better generalization.

1 INTRODUCTION

Deep neural networks (DNNs) trained on supervised learning tasks using i.i.d. data have shown the capacity to learn quickly even from a small amount of samples (Hardt et al., 2016). Intuitively, this is due to each sample also providing information about the estimate corresponding to other samples; research suggests that DNNs first extract structures that are informative of the modes of the data (even if later on they can also memorize, see Zhang et al. (2016); Arpit et al. (2017)), and that they can transfer well (Yosinski et al., 2014; Li et al., 2015), even from relatively few samples. In contrast, in Deep Reinforcement Learning (DRL), the number of samples required for an agent to learn successfully is often very high; many modern algorithms struggle to perform well until they acquire tens of millions of samples (Mirowski et al., 2016; Vinyals et al., 2017; Hessel et al., 2018), and some even diverge to bad solutions (Anschel et al., 2017). While there are many facets to sample complexity and brittleness, we posit that a contributing factor is a lack of what we call gradient update generalization, i.e., whether performing updates at one state provides useful information about the value/policy at other states.

Generalization in RL is of two types: (a) generalization to unseen states: will an agent trained on a single MDP pick the optimal action for a state it has never seen before? (b) generalization to unseen tasks: will an agent trained on a distribution of MDPs know how to act in an MDP it has never seen before? Both of these facets are actively studied. For example, Farebrother et al. (2018) expose some generalization failures on the Atari domain (Bellemare et al., 2013) and study the impact of regularization; Zhang et al. (2018) study the generalization capabilities of DRL agents on randomized mazes; Packer et al. (2018) study the extrapolation capabilities of DRL agents trained on a distribution of environment parameters (e.g. pole mass in CartPole) outside of the training distribution; Cobbe et al. (2018) find that even on procedurally generated environments, DRL agents can easily overfit on their training set unless regularized; Oh et al. (2017) study the embedding regularizations necessary for agents to generalize to new instruction sequences on navigation tasks.

In this study, we are not interested in measuring state generalization (i.e. predictions for unseen states), nor task generalization (i.e. in terms of the quality of the behaviour), but rather generalization within the process of stochastic gradient learning. In other words, since any kind of generalization must arise through the accumulation of parameter updates, it seems useful to measure whether these parameter updates are themselves general. To this end, we propose the measure of gradient update generalization, best understood as a side-effect of neural networks sharing parameters over their entire input space. That is, updating parameters after seeing one state will change the prediction for virtually all other states; we are interested in measuring that change.
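To make "measuring that change" concrete, the sketch below probes, for a generic PyTorch network, how the loss on held-out states moves after a single gradient step on one batch; it anticipates the formal measure defined in Section 3 (Eq. 9). The `loss_fn(net, batch)` interface, the use of plain SGD, and the parameter snapshot/restore are illustrative assumptions, not the paper's code.

```python
import torch

def update_gain(loss_fn, net, update_batch, eval_batch, lr=1e-3):
    """Change in loss on held-out states after one gradient step on update_batch.

    A positive value means the single-batch update also improved the loss
    elsewhere, i.e., the update generalized. Parameters are restored afterwards
    so the probe is non-destructive.
    """
    snapshot = {k: v.clone() for k, v in net.state_dict().items()}
    with torch.no_grad():
        loss_before = loss_fn(net, eval_batch).item()
    optimizer = torch.optim.SGD(net.parameters(), lr=lr)
    optimizer.zero_grad()
    loss_fn(net, update_batch).backward()
    optimizer.step()
    with torch.no_grad():
        loss_after = loss_fn(net, eval_batch).item()
    net.load_state_dict(snapshot)  # restore the pre-update parameters
    return loss_before - loss_after
```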
That is, updating parameters after seeing one state will change the prediction for virtually all other states; we are interested in measuring that change.

TD methods are a broad class of RL algorithms that form a target for an update by utilizing the current estimate of the value function. They include TD(0) and TD(λ) methods for estimating the value of a fixed policy, as well as Sarsa and Q-learning algorithms for control. TD methods have achieved success in some challenging tasks (Tesauro, 1995; Mnih et al., 2013; Hessel et al., 2018), but they are also known to have problems when coupled with function approximation (Sutton, 1995; Baird, 1995; Tsitsiklis & Van Roy, 1997; Chung et al., 2018). Previous studies explicitly addressed problems such as leakage propagation in TD (Penedones et al., 2018), while others aimed to provide sampling improvements (Schaul et al., 2015; Andrychowicz et al., 2017; Fu et al., 2019), explicit temporal regularization (Thodoroff et al., 2018), or auxiliary tasks which push the agent to learn more about the temporal structure in the data (Jaderberg et al., 2016).

To our knowledge, no study to date has focused on the dynamics of the generalization process itself within TD-based DRL methods such as deep Q-Learning (Riedmiller, 2005; Mnih et al., 2013), Sarsa (Rummery & Niranjan, 1994), and TD(λ) (Sutton, 1988; Schulman et al., 2015). [Footnote 1: In contrast, policy-gradient algorithms such as PPO (Schulman et al., 2017), A3C (Mnih et al., 2016), and SAC (Haarnoja et al., 2018) are capable of learning good policies without necessarily having learned a good value function, and although interesting results have emerged to understand learning behaviours in policy-gradient methods (Ilyas et al., 2018), these methods build upon TD and analyzing them would add undesired confounders.] For this study, we introduce the aforementioned measure of gradient update generalization, which enables us to differentiate the learning behaviours of different methods. Overall, we find that:

1. when doing a TD(0) update for a single state, parameters change in such a way that the value prediction of other states is generally not affected, surprisingly even for states that are close either temporally or in an annotated "ground truth" state space;
2. DNNs trained with TD(0), in contrast with DNNs trained on a memorization task or using a supervised objective, do not entirely memorize their state space, yet also do not generalize in the way we would expect;
3. both the choice of optimizer and the nature of the objective impact the generalization behaviours of models; in particular, when increasing the λ parameter in TD(λ), DNNs appear to capture more temporal structure.

2 TECHNICAL BACKGROUND

A Markov Decision Process (MDP) (Bellman, 1957; Sutton & Barto, 2018) $M = \langle S, A, R, P, \gamma \rangle$ consists of a state space $S$, an action space $A$, a reward function $R: S \to \mathbb{R}$, and a transition probability distribution $P(s' \mid s, a)$. RL agents aim to optimize the expectation of the long-term return:

$G(S_t) = \sum_{k=t}^{\infty} \gamma^{k-t} R(S_k)$   (1)

where $\gamma \in [0, 1)$ is called the discount factor. Policies $\pi(a \mid s)$ map states to action distributions. Value functions $V^\pi$ and $Q^\pi$ map states/state-action pairs to expected returns, and can be expressed recursively:

$V^\pi(S_t) = \mathbb{E}[G(S_t)] = \mathbb{E}[R(S_t) + \gamma V^\pi(S_{t+1}) \mid A_t \sim \pi(S_t),\ S_{t+1} \sim P(\cdot \mid S_t, A_t)]$   (2)

$Q^\pi(S_t, A_t) = \mathbb{E}[R(S_t) + \gamma \textstyle\sum_a \pi(a \mid S_{t+1})\, Q^\pi(S_{t+1}, a) \mid S_{t+1} \sim P(\cdot \mid S_t, A_t)]$   (3)

While $V^\pi$ could also be learned via regression to observed values of $G$, these recursive equations give rise to the Temporal Difference (TD) update rules for policy evaluation, relying on current estimates of $V$ to bootstrap, e.g.:

$V(S_t) \leftarrow V(S_t) - \alpha \left( V(S_t) - (R(S_t) + \gamma V(S_{t+1})) \right)$   (4)

where $\alpha \in [0, 1)$ is the step-size.
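To make (4) concrete, the following is a minimal tabular sketch (an illustration, not code from the paper); the `env.reset()`/`env.step()` and `policy` interfaces are hypothetical stand-ins for a discrete-state environment.

```python
import numpy as np

def td0_policy_evaluation(env, policy, n_states, alpha=0.1, gamma=0.99, episodes=500):
    """Tabular TD(0) policy evaluation, i.e. update rule (4)."""
    V = np.zeros(n_states)
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            a = policy(s)                                # A_t ~ pi(. | S_t)
            s_next, r, done = env.step(a)                # S_{t+1} ~ P(. | S_t, A_t)
            target = r + gamma * V[s_next] * (not done)  # bootstrap on the current estimate
            V[s] -= alpha * (V[s] - target)              # the TD(0) step from (4)
            s = s_next
    return V
```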
Bootstrapping also leads to algorithms such as Q-Learning (Watkins & Dayan, 1992) and fitted-Q (Ernst et al., 2005; Riedmiller, 2005):

$L_{QL}(S_t, A_t, R_t, S_{t+1}) = \left[ Q(S_t, A_t) - (R_t + \gamma \max_a Q(S_{t+1}, a)) \right]^2$   (5)

Sarsa (Rummery & Niranjan, 1994):

$L_{Sarsa}(S_t, A_t, R_t, S_{t+1}, A_{t+1}) = \left[ Q(S_t, A_t) - (R_t + \gamma Q(S_{t+1}, A_{t+1})) \right]^2$ with $A_{t+1} \sim \pi(S_{t+1})$   (6)

and TD(λ), which trades off between the unbiased target $G(S_t)$ and the biased TD(0) target (biased due to relying on the estimated $V(S_{t+1})$), using a weighted averaging of future targets called a λ-return (Sutton, 1988; Munos et al., 2016):

$G^\lambda(S_t) = (1 - \lambda) \sum_{n=1}^{\infty} \lambda^{n-1} \left[ \gamma^n V(S_{t+n}) + \sum_{j=0}^{n-1} \gamma^j R(S_{t+j}) \right]$   (7)

$L_{TD(\lambda)}(S_t) = \left( V(S_t) - G^\lambda(S_t) \right)^2$   (8)

(note that the λ-return depends implicitly on the trajectory followed from $S_t$). When λ = 0, the loss is simply $(V(S_t) - (R_t + \gamma V(S_{t+1})))^2$, leading to the algorithm called TD(0) (Sutton, 1988).

3 UPDATE GENERALIZATION IN DEEP RL

We will now define the measure we propose in order to quantify the speed at which generalization to unseen states occurs, and to characterize the structure under which this generalization occurs. We define gradient update generalization as the expected improvement in the loss function $L: \Theta \times X \to \mathbb{R}$ after updating parameters $\theta \in \Theta$ on a sample $X_U \in X$, using an update function $U_L: \Theta \times X \to \Theta$ (e.g. SGD or a semi-gradient method like TD(0)):

$Y_L(X_U, \theta, U) = \mathbb{E}_X \left[ L(X, \theta) - L(X, U_L(\theta, X_U)) \right]$   (9)

If generalization from the samples in $X_U$ to $X$ is good, this measure of gain should be large, and intuitively fewer other samples should be needed to achieve a desired level of performance. On the other hand, if on average the loss only decreases for the samples $X_U$ used in training, then more data in $X \setminus X_U$ will have to be visited before the model can learn. Hence, this measure is related to both sample complexity and the speed of learning (see Fig. 15 for empirical confirmation of this phenomenon).

As computing the exact expectation is usually intractable, we empirically measure gains on different subsets $\hat{X} \subset X$. In particular, when $\hat{X}$ is chosen to be a slice around $X_U$ in the replay buffer, we write $Y^{near}$. We also subscript $Y$ with the corresponding loss' subscript, e.g. for (5), $L_{QL}$, we write $Y_{QL}$. In this study, we are interested in TD-based methods that rely heavily on bootstrapping (Q-Learning, Sarsa, and TD(λ)), and measure $Y$ using their respective losses, (5), (6), and (8); a sketch of the measurement is given below.
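As a rough illustration of how (9) can be estimated in practice (our sketch, not the paper's code; `loss_fn(model, batch)` is a hypothetical callable returning a scalar loss, and for simplicity the update uses fresh optimizer state rather than a snapshot of the running one):

```python
import copy
import torch

def empirical_gain(loss_fn, model, optimizer_cls, lr, batch_update, batch_eval):
    """One-sample estimate of the gain (9): apply the update U_L on X_U =
    `batch_update`, then measure the loss change on X = `batch_eval`."""
    with torch.no_grad():
        loss_before = loss_fn(model, batch_eval).item()   # L(X, theta)
    trial = copy.deepcopy(model)                          # keep theta itself untouched
    opt = optimizer_cls(trial.parameters(), lr=lr)        # fresh optimizer state (a simplification)
    opt.zero_grad()
    loss_fn(trial, batch_update).backward()
    opt.step()                                            # theta' = U_L(theta, X_U)
    with torch.no_grad():
        loss_after = loss_fn(trial, batch_eval).item()    # L(X, theta')
    return loss_before - loss_after                       # > 0 means the update generalized
```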
Structure in DNNs. A common intuition in deep learning (Zhang et al., 2016; Arpit et al., 2017; Zhang et al., 2018) is that DNNs first learn about the structure of their data, meaning the underlying (usually linear) factors of variation of the data being mapped into the hidden units' space via parameter sharing. These factors of variation are usually conceptualized as a low-dimensional space where each dimension explains part of the data (Bengio et al., 2013). It is commonly assumed that a model which generalizes well will naturally capture these factors in the configuration of its parameters, in which case the gradients of the prediction w.r.t. all examples sharing the same latent factors of variation will be very close; updating with only one sample will change the prediction for all the related examples. Hence, a DNN which captures structure correctly should show high gradient update generalization.

Temporal structure in RL. Data used in RL algorithms usually exhibits two additional types of structure: coherence of the inputs in a trajectory over time (e.g. pixel values in adjacent frames are often similar), and smoothness of the value function in time (in the sparse-reward case with γ close to 1, $V(S_t) \approx V(S_{t+1})$, which is smooth in time, aside from rare discontinuities upon seeing rewards). Since RL data consists of trajectories which often have strong temporal structure of both types, we hypothesize that the gain $Y^{near}$ of temporally correlated examples should increase closer in time to the sample used in the update.

Parameter sharing. Another indirect measure of update generalization related to parameter sharing is the difference since last visit, which we denote as Δ. At each update iteration k, we compute the difference between the value $V_{\theta_k}(s)$ or $Q_{\theta_k}(s, a)$ predicted from the current parameters $\theta_k$, and $V_{\theta_{last(s)}}(s)$ or $Q_{\theta_{last(s)}}(s, a)$, i.e. the prediction made the last time state s was used for a gradient update. [Footnote 2: In practice, we simply cache the value prediction for all states in a replay buffer (as states in a continuous state space are unlikely to be encountered many times), and update the cache after a minibatch update (for those states only).] To illustrate, if V was a lookup table, Δ would always be 0, while for a DNN, when states are aliased together, Δ should accurately reflect the effect of parameter sharing after performing sequences of updates (in contrast, (9) uses only a single update). A sketch of the footnote's caching scheme follows.
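A minimal version of that cache might look as follows (our illustration; `state_id` is a hypothetical integer key for replay-buffer entries):

```python
class DeltaTracker:
    """Per-state prediction cache for the difference since last visit, Delta."""
    def __init__(self):
        self.cache = {}  # state_id -> prediction at the state's last update

    def delta(self, state_id, current_prediction):
        # 0.0 by convention if the state was never used in an update before.
        return current_prediction - self.cache.get(state_id, current_prediction)

    def record(self, state_ids, predictions):
        # Call right after a minibatch update, for the minibatch states only.
        for sid, p in zip(state_ids, predictions):
            self.cache[sid] = float(p)
```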
3.1 EXPERIMENTAL SETUP

We will now perform a series of experiments aimed at assessing the amount of generalization of various bootstrapping algorithms, compared to supervised learning, in combination with DNNs. First, we test whether DNNs have a large gradient update generalization gain when trained under ideal conditions (data generated by expert policies and labelled with correct values, which can be used in supervised learning). Then, we test the policy evaluation case (using the same input data, but bootstrapped targets instead of supervised learning). We then test the usual control case, when no expert trajectories are available. Finally, we measure the effect of TD(λ) on generalization gain in policy evaluation, as well as test Q-Learning's robustness to withheld data.

We perform our experiments on the Atari environment (Bellemare et al., 2013), with the stochastic setup recommended by Machado et al. (2018). We use a standard DQN architecture (Mnih et al., 2013). In order to generate expert trajectories, we use rollouts from a policy trained with Rainbow (Hessel et al., 2018); we denote by D a dataset of transitions obtained with this agent, and by θ* the parameters after training that agent. For control experiments, we use Mnih et al. (2013)'s Q-Learning setup. When measuring $Y^{near}$ we choose the nearest 60 examples in time to a given state-action pair (30 previous and 30 following on the same trajectory).

3.2 ASSESSING TEMPORAL STRUCTURE WITH SUPERVISED LEARNING

In this experiment, we will assess if temporal structure, as described above, exists and can be captured by our architecture. To do so, we train DNNs starting from random parameters θ but with "ideal" targets coming from the expert parameters θ* and expert trajectories D; this removes all non-stationarity from the learning. We train $Q_\theta$ with 3 different objectives:

MC: $L_{MC}(s, a, \theta) = (Q_\theta(s, a) - G^{(D)}(s))^2$   (10)

Reg: $L_{reg}(s, a, \theta) = (Q_\theta(s, a) - Q_{\theta^*}(s, a))^2$   (11)

TD*: $L_{TD^*}(s, a, r, s', \theta) = (Q_\theta(s, a) - (r + \gamma \max_{a'} Q_{\theta^*}(s', a')))^2$   (12)

where by $G^{(D)}(s)$ we denote the Monte-Carlo return within the dataset D, as in (1). Note that since $L_{TD^*}$ "bootstraps" to θ*, this should be roughly equivalent to $L_{reg}$, the latter being plain supervised learning (or some sort of distillation, à la Hinton et al. (2012)). Sketches of the three objectives are given below.
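The three objectives might be written as follows (a hypothetical PyTorch sketch, assuming a Q-network that outputs one value per action; terminal masking is omitted for brevity):

```python
import torch
import torch.nn.functional as F

def q_at(q_net, s, a):
    """Q(s, a) from a network that outputs a value for every action."""
    return q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)

def mc_loss(q_net, s, a, mc_return):                             # (10)
    return F.mse_loss(q_at(q_net, s, a), mc_return)

def reg_loss(q_net, q_expert, s, a):                             # (11)
    with torch.no_grad():
        target = q_at(q_expert, s, a)                            # expert's own value
    return F.mse_loss(q_at(q_net, s, a), target)

def td_star_loss(q_net, q_expert, s, a, r, s_next, gamma=0.99):  # (12)
    with torch.no_grad():                                        # bootstrap on expert params
        target = r + gamma * q_expert(s_next).max(dim=1).values
    return F.mse_loss(q_at(q_net, s, a), target)
```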
Results are visualized in Fig. 1 for experiments run on MsPacman, Asterix, and Seaquest for 10 runs each. Results are averaged over these three environments (they have similar magnitudes and variance). Learning rates are kept constant; they affect the magnitude but not the shape of these curves.

We draw two conclusions from these results. First, as seen in Fig. 1a & 1b, all curves tend to have large gains around x = 0 (the sample used in the update), especially from indices -10 to 10, showing that there is some amount of temporal structure captured by both objectives. Since $Q_{\theta^*}$ is a good approximation, we expect that $Q_{\theta^*}(s, a) \approx r + \gamma \max_{a'} Q_{\theta^*}(s', a')$, so $L_{reg}$ and $L_{TD^*}$ have similar targets and we expect them to have similar behaviours. Indeed, in Fig. 1 their curves mostly overlap. Second, there is a clear asymmetry between training on expectations (i.e. the learned $Q_{\theta^*}(s, a)$ or $\max_{a'} Q_{\theta^*}(s', a')$) and high-variance Monte-Carlo returns (red and blue curves in Fig. 1a). We hypothesize that since the returns G are computed from the same state sequence that is used to measure the gain, G is truly informative of the expected value of future states. Strangely, this does not seem to be the case for past states, which is surprising. [Footnote 3: A possible explanation is that, due to the exponential discounting nature of returns ($V(S_t) \approx \gamma^k V(S_{t+k})$, aside from discontinuities when R ≠ 0), the correlation between the current and future returns simply has a larger magnitude than with past returns. This might push DNNs to prefer to "assign capacity" w.r.t. future returns.] On the other hand, while G appears more informative of future expected returns, it is not particularly more informative of future sampled returns than past returns, which explains the symmetric nature of the MC gain shown in Fig. 1b.

[Figure 1: Supervised learning on Atari: Gain as a function of distance in the replay buffer from the update sample, for MC, Reg, and TD* with Adam and RMSProp. (a) Near TD gain, $Y^{near}_{TD}$. (b) Near Monte-Carlo gain, $Y^{near}_{MC}$. We use dotted lines for the point at 0 distance, to emphasize that the corresponding state was used for the update. (a-b) The curve around 0 indicates the temporal structure captured by the TD and regression objectives.]

Another striking distinction in these curves appears between the Adam (Kingma & Ba, 2015) and RMSProp (Hinton et al., 2012) optimizers. [Footnote 4: It has been reported that Adam is less sensitive than RMSProp to hyperparameters in value-based methods (Hessel et al., 2018), although evidence suggests it doesn't help policy gradients (Henderson et al., 2018).] When moving far away from s, RMSProp tends to induce a negative gain, while Adam tends to induce a near-zero gain. This is seen in Fig. 1a, where RMSProp's TD gain is below 0 for states more than 10 steps away from the sample used in an update. Note that similar differences appear in virtually all following experiments, which we discuss later.

3.3 POLICY EVALUATION AND TD GAIN

We have seen that DNNs can capture some temporal structure and have good gradient update generalization when given good quality inputs and targets. We will now remove the expert targets generated using the pretrained θ*, but we will keep the expert inputs. This corresponds to policy evaluation on expert trajectories, and we would expect to see slightly worse generalization than in the previous case.

We run policy evaluation with 2 objectives, $L_{QL}$ and $L_{Sarsa}$ as defined in (5) and (6) respectively, using a frozen target to bootstrap (Mnih et al., 2013), updated after every 10k minibatches (a minimal sketch of this frozen-target setup is given below). Experiments are run on 24 Atari environments (see A.1.1) for 10 runs each. Gain results are visualized in Fig. 2, averaged over the 24 environments.

The main observation from Fig. 2a is how narrow the peak around 0 is, suggesting that whenever a state's value is updated, the prediction for other states does not change much in expectation, as if the representation were almost tabular, with estimates for encountered states being memorized. The conclusion we draw is that, with a fixed data distribution, DNNs bootstrapping to an evolving target network will not properly capture temporal structure, but will still be able to learn (at least in the sense of correctly approximating the value function).

Another worrying observation is that RMSProp consistently has negative expected gain for nearby samples (but large, larger than Adam, positive gain on $X_U$, the minibatch sample), suggesting that parameters trained with this optimizer memorize input-output pairs rather than assign capacity to generalize.

[Figure 2: Policy evaluation on Atari: Gain as a function of distance in the replay buffer of the update sample, for Q-Learning and Sarsa with Adam and RMSProp, averaged over all games. (a) Near TD gain, $Y^{near}_{TD}$; we use dotted lines for the point at 0 distance to emphasize that the corresponding state was used for the update. (b) Zoom of (a) excluding $X_U$, i.e. distance 0, to show the lack of temporal structure. (a-b) Compared to regression in Fig. 1a, almost no temporal structure is captured, which can be seen by how narrow the curve is around distance 0.]
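The frozen-target evaluation loop could look like this (our sketch, under the same hypothetical Q-network assumptions as above; `minibatches` is an iterable of (s, a, r, s_next, a_next) tensors):

```python
import copy
import torch
import torch.nn.functional as F

def evaluation_loss(q_net, q_frozen, batch, gamma=0.99, sarsa=False):
    """(5) or (6) with a frozen bootstrap network; terminal masking omitted."""
    s, a, r, s_next, a_next = batch
    with torch.no_grad():
        q_next = q_frozen(s_next)
        bootstrap = (q_next.gather(1, a_next.unsqueeze(1)).squeeze(1)
                     if sarsa else q_next.max(dim=1).values)
        target = r + gamma * bootstrap
    pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    return F.mse_loss(pred, target)

def run_policy_evaluation(q_net, optimizer, minibatches, refresh_every=10_000):
    q_frozen = copy.deepcopy(q_net)
    for step, batch in enumerate(minibatches):
        if step > 0 and step % refresh_every == 0:
            q_frozen = copy.deepcopy(q_net)   # refresh the frozen target
        optimizer.zero_grad()
        evaluation_loss(q_net, q_frozen, batch).backward()
        optimizer.step()
```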
3.4 COMPARING MEMORIZATION BEHAVIOUR IN POLICY EVALUATION

The previous results established that some amount of memorization is done during TD-based policy evaluation. Quantifying memorization is still an open problem, but in this experiment we offer an interesting qualitative inspection to confirm that TD-based methods may lie somewhere between pure memorization (acting like a lookup table) and strong generalization (capturing all latent factors).

In Zhang et al. (2016), the authors compare image classifiers trained with true labels to classifiers trained with random labels (in which case the model has to simply memorize the labels), finding that, surprisingly, both can reach 0 training error. While this suggests that DNNs may also memorize when given the true labels, further studies showed many behavioural differences between the two setups, notably that DNNs first captured structure, and only afterwards fit random noise (Arpit et al., 2017).

Taking inspiration from Zhang et al. (2016), we assign a random class in [N] to every state in D, change our Q function to be a usual classifier with N outputs, and introduce a new objective, $L_{rand}$, which is simply the cross-entropy between the random class and the prediction (a sketch of this setup follows below). Experiments are run on MsPacman, Breakout, and Seaquest. We use datasets of sizes 10k, 100k, and 500k, and use N ∈ {2, 10, 50}. Interestingly, the architecture of Mnih et al. (2013) that is reused here struggles to reach 0 error (for example, a model trained with 10k samples with N = 2 reaches 5.7% error, while a model trained with 500k and N = 50 totally fails at 85% error; see Table ??). [Footnote 5: This could be due to the particularly shallow architecture of Mnih et al. (2013), as architectures with fewer parameters but more layers are commonly assumed to have more effective capacity. It has indeed been shown that deeper models can distinguish between exponentially more linear regions (Montufar et al., 2014).]

Fig. 3 shows the evolution during training of the distribution of $\Delta(S, A) = Q(S, A; \theta_{current}) - Q(S, A; \theta_{last(S)})$, where $\theta_{last(S)}$ represents the value of the parameters when S was last used in a minibatch update, and $\theta_{current}$ represents the value of the parameters right before using S for the most recent update. If the parameters were those of a look-up table, Δ would always be 0. For losses other than $L_{rand}$ (Q-Learning, Sarsa, and MC) we reuse the results of the previous section (with a dataset size of 500k).

The difference between Fig. 3a and Fig. 3b-d is compelling, and somewhat reassuring. In Fig. 3a the log-likelihood for Δ = 0 is above -2 (white), showing that it is very unlikely for the prediction at a state to have changed by more than 0.01 when it is updated. In contrast, the distribution of Δ is more spread out in Fig. 3b-d. Combined with the fact that the memorization experiment does not reach 0 error, this allows us to confidently claim that DQN is not fully memorizing its state space. Even though the gain curve in Fig. 2 is very close to 0, except at the update sample (i.e. temporal structure is poorly captured), some structure is captured by DNNs that allows them to learn about a state without having to use it explicitly in an update.
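The random-label setup could be sketched as follows (our illustration; `classifier` stands for the usual DQN torso with a hypothetical N-way classification head):

```python
import torch
import torch.nn.functional as F

def make_random_labels(num_states, n_classes, seed=0):
    """A fixed random class in [N] for every state index of D (Section 3.4)."""
    g = torch.Generator().manual_seed(seed)
    return torch.randint(0, n_classes, (num_states,), generator=g)

def rand_loss(classifier, states, state_indices, labels):
    """L_rand: cross-entropy against the states' fixed random classes."""
    return F.cross_entropy(classifier(states), labels[state_indices])
```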
[Figure 3: Policy evaluation on Atari: evolution of the distribution of Δ, the difference since last visit, during training (log P(Δ = d_i) over iterations, for (a) $L_{rand}$, (b) $L_{QL}$, (c) $L_{Sarsa}$, (d) $L_{MC}$, (e) $L_{TD(\lambda)}$ with λ = 0.5, and (f) $L_{TD(\lambda)}$ with λ = 0.95). In (a) the DNN is forced to memorize, as such the density of Δ is concentrated around 0 (thin red/white band). In (b-c), Q-Learning and Sarsa, the density is much less peaked at 0 (larger yellow/green bands) as the DNN learns about states without visiting them. In (d) the DNN learns quickly, presumably without memorizing (the distribution of Δ is more spread out and not as concentrated around 0, seen by the larger yellow/green band), as it is trained on Monte-Carlo returns, and quickly converges, as can be seen by the high density of positive Δs early. In (e,f) we see the effect of using λ-returns (see appendix A.6 for all values of λ).]

3.5 TD GAIN IN CONTROL

Having removed θ* in section 3.3, we now additionally remove D and simply perform Q-Learning from scratch on MsPacman, Asterix, and Seaquest for 10M steps.

Results are shown in Fig. 4. Interestingly, while Q-Learning does not have as strong a gain as the regressions from Fig. 1, it has a larger gain than policy evaluation. This may have several causes, and we investigate two:

- Initially, because of the random exploratory policy, the DNN sees little data, and may be able to capture a minimal set of factors of variation; then, upon seeing new states, the extracted features are forced to be mapped onto those factors of variation, improving them, leading to a natural curriculum. By looking at the singular values of the last hidden layer's matrix after 100k steps (a sketch of this measurement follows this list), we do find that there is a consistently larger spread in the policy evaluation case than the control case (see appendix A.3), showing that in the control case fewer factors are initially captured. This effect diminishes as training progresses.
- Having run for 10M steps, control models could have been trained on more data and thus be forced to generalize better; this turns out not to be the case, as measuring the same quantities for only the first 500k steps yields very similar magnitudes (see appendix A.4).
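The paper does not spell out its exact spread statistic; a condition-number-style ratio over the layer's singular values is one plausible choice (our assumption):

```python
import numpy as np

def singular_value_spread(weight_matrix):
    """Spread of the singular values of the last hidden layer's weights."""
    s = np.linalg.svd(np.asarray(weight_matrix), compute_uv=False)
    return s.max() / max(s.min(), 1e-12)  # guard against a zero smallest value
```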
Interestingly, these results are consistent with those of Agarwal et al. (2019), who study off-policy learning. Among many other results, Agarwal et al. (2019) find that off-policy-retraining a DQN model on another DQN agent's lifetime set of trajectories yields much worse performance on average. While the authors suggest the strongly off-policy aspect of this experiment as the cause, our results still show differences between control-Q-Learning and policy-evaluation-Q-Learning, which are both done "on-policy" in our setup, suggesting there are more factors at play than only off-policyness.

[Figure 4: Q-Learning on Atari: Gain as a function of distance in the replay buffer of the update sample, for Adam, Momentum-SGD, RMSProp, and SGD, averaged over all games. (a) Near TD gain, $Y^{near}_{TD}$. (b) Zoom excluding $X_U$, i.e. distance 0, to show the magnitude of generalization; for Adam and RMSProp, we include the corresponding curves for policy evaluation in lighter shades. (a-b) Compared to policy evaluation, gain appears to be better, but not as large as for regression.]

Note that we additionally run experiments with SGD and Momentum-SGD optimizers to highlight the difference between Adam, which has a momentum component, and RMSProp, which only scales per-parameter learning rates. Predictably, Momentum-SGD's behaviour is similar to Adam's, and SGD's to RMSProp's.

3.6 TD(λ) AND RELIANCE ON BOOTSTRAPPING

TD(λ) trades off between the immediate biased estimates of the future values and the true return through its λ parameter. To observe the effect of this parameter we perform policy evaluation on D with the $L_{TD(\lambda)}$ objective on MsPacman (the λ-return (7) can be computed with a simple backward recursion, sketched at the end of this section).

Results are shown in Fig. 5, where we can observe that (1) increasing λ increases near gain without overly increasing update-sample gain, and (2) as for $L_{MC}$, there is an asymmetry: updating informs us more about the future than about the past, on average. Results for the distribution of Δ are shown in Fig. 3(e,f) (and appendix A.6), where we see that the closer λ is to 1, the more the TD(λ) objective creates updates that affect all states.

[Figure 5: TD gain for policy evaluation with TD(λ) & Adam, for λ ∈ {0.1, 0.25, 0.5, 0.75, 0.8, 0.9, 0.95, 0.99} on MsPacman. Note the larger gain as λ goes to 1, as well as the asymmetry around 0.]

These results seem to indicate that TD(λ) better captures factors of variation. One cause could be that the more one relies on a sequence of DNN predictions (i.e. the n-step returns of the λ-return depend on the successive $V(S_{t+i})$) to build a target, the more correlation there is between states and targets (due to DNN smoothness), and the more temporal coherence there is (and thus more opportunities for DNNs to capture the temporal dimension's correlations). This is hard to verify empirically, but we can proxy the correlation measure via the similarity between gradients. We do indeed find that the closer λ is to 1, the higher the average cosine similarity between gradients is (see appendix A.5). This suggests that it may be advantageous to use λ-returns in environments where generalization is important.
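For reference, the λ-return (7) satisfies the standard backward recursion $G^\lambda_t = R_t + \gamma((1-\lambda)V(S_{t+1}) + \lambda G^\lambda_{t+1})$, which gives a simple O(T) implementation (our sketch, not the paper's code):

```python
import numpy as np

def lambda_returns(rewards, next_values, lam=0.95, gamma=0.99):
    """Lambda-returns for one trajectory. `next_values[t]` holds V(S_{t+1});
    set the final entry to 0 for an episodic (terminal) tail."""
    G = np.zeros(len(rewards))
    g_next = next_values[-1]  # a truncated return bootstraps here
    for t in reversed(range(len(rewards))):
        g_next = rewards[t] + gamma * ((1 - lam) * next_values[t] + lam * g_next)
        G[t] = g_next
    return G
```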
3.7 TESTING GENERALIZATION WITH AN INTRA-TASK TEST SET

Another way to assess whether agents fail to properly generalize in the sense of statistical inference (making predictions about states without visiting them) is to create a test set to measure generalization error. We do so on the MsPacman Atari environment, as it contains many opportunities for generalization in translational invariances (locally, the optimal action only depends on the surrounding configuration of the agent, reward pellets, and ghosts). We train our agent with the usual DQN setup (Mnih et al., 2013) but prevent the insertion of a state into the replay buffer with some probability p. More specifically, we use the RAM (ground truth) state information to exclude observations from training (a sketch of this exclusion scheme is given at the end of this section). We run 5 seeds for each p ∈ {0, 0.1, 0.25, 0.5}.

Results are shown in Fig. 6, where we see that withholding only 10% of states already slightly affects agents. At 50%, performance is significantly reduced. While this is somewhat expected and consistent with the literature (Farebrother et al., 2018), it again attests that TD-based methods can struggle with generalization, as observed also by Packer et al. (2018), who study interpolation and extrapolation failures in deep RL agents.

[Figure 6: Episodic rewards over Q-Learning on Atari (MsPacman), over 10M iterations. We train an agent while withholding states from the training set with probability p ∈ {0, 0.1, 0.25, 0.5}.]
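The exclusion scheme could be implemented as follows (our sketch; `state_key` would come from the annotated RAM state, a hypothetical helper, and `withheld` is a dict shared across the whole run so the train/test split stays consistent):

```python
import random

def maybe_store(replay_buffer, transition, state_key, withheld, p, rng=random):
    """Withhold each unique (ground-truth) state from training with probability p."""
    if state_key not in withheld:
        withheld[state_key] = rng.random() < p  # decide once per unique state
    if not withheld[state_key]:
        replay_buffer.append(transition)
```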
3.8 ADDITIONAL OBSERVATIONS

On other structures. Our figures mostly show gradient update generalization gain as a function of "time" (temporal distance within a trajectory), but there might be structure elsewhere. We measured gain as a function of 3 different metrics: ground truth state distance, by reusing the Annotated Atari RAM of Anand et al. (2019); value distance (as DNNs may alias states with the same value); and feature distance. Unfortunately, we were unable to find correlations (see appendix A.2).

On convergence. Figures 1, 2, and 4 show values averaged over the course of training. We find that except in the first few iterations, these curves remain constant throughout training (see figures in A.4) and show no sign of convergence. This is also consistent with previous studies, as DQN is known to not converge on Atari (Anschel et al., 2017).

On variance. While Var($Y_L$) tends to be large, we find that the confidence interval of the mean is always small, and would barely appear on most of our plots. Additionally, although generalization gain is typically a fraction of the magnitude of the value function, it is consistently non-zero.

On optimizers. We find that the systematic differences we see between Adam and RMSProp also occur in behaviour, where control agents trained with RMSProp tend to get slightly more reward. An interpretation of our results is that RMSProp memorizes faster than Adam: it has much larger on-sample gain, it tends to make the singular values of the weight matrices larger, and it has negative near-sample gain, suggesting that capacity is spent memorizing on average. In Atari tasks, memorization can be an efficient strategy (although it is sensitive to noise, see Machado et al. (2018)). Hence, the better performance of RMSProp on Atari is consistent with our claims. This property may not be as desirable in more complex environments requiring generalization.

4 DISCUSSION

RL is generally considered a harder problem than supervised learning. Hence, the fact that TD-style methods require more samples than supervised learning when used with deep nets is not necessarily surprising. However, with the same data and the same final targets (the "true" value function), it is not clear why TD updates lead to parameters that generalize worse than supervised learning. This could be a problem, as most RL methods rely on the TD mechanism in one way or another. In particular, our results show that both Q-Learning and Sarsa generalize poorly, leading to DNNs that memorize the training data (not unlike table lookup). Our results also suggest that TD(λ), although not widely used in recent DRL, improves generalization. Finally, we find differences between Adam and RMSProp that we initially did not anticipate. Very little work has been done to understand and improve the coupling between optimizers and TD, and our results indicate that this would be an important future work direction.

Our work suggests that the RL community should pay special attention to the current research on generalization in DNNs, because approaching the TD bootstrapping mechanism as a supervised learning problem does not seem to leverage the full generalization potential of DNNs.
rJx71n4AtS
Official Blind Review #3
6: Weak Accept
The manuscript analyzes "generalization" in TD(lambda) methods. It covers supervised learning from trajectories, on-policy imitation learning, and the basic RL setting. Moreover, memorization performance has also been measured. The main conclusion is that TD(0) behaves very similarly to tabular learning, failing to transfer inductive biases between states. There are also additional surprising results about optimization.

The empirical study is rather complete and significant. It raises interesting questions for the community and states some clear open problems. Results are conclusive and interesting. I believe it is a study which practitioners using TD-based methods should be aware of. Hence, I believe it is impactful. On the other hand, the manuscript has some significant issues which need to be resolved, as follows:

- One major issue is calling the analyzed metric "generalization". Generalization by definition requires something beyond what is seen. I believe the quantity defined in (9) is generalization; however, it cannot be computed. Hence, calling its empirical version "generalization" is confusing and a clear misuse of the term. I strongly urge the authors to call the observed quantity something else. "Empirical expected improvement", "gradient regularity", "expected gain", etc. are some candidates that come to my mind.
- The optimization aspect is very interesting; however, it confuses the exposition significantly. I think giving all results using Adam first, and then showing the comparisons between Adam and RMSProp later, would be much more readable and easier to understand.
- There are some clarity issues in the explanation of the experiments. Figure 3 is very confusing and requires multiple readings to be understandable. A clearer visualization or a better explanation would improve the paper.
- I am puzzled about why the authors did not use Q_MC in the policy evaluation experiments (Section 3.3). I think it can very well be used in a straightforward manner. It would be an interesting addition to the experiments.

Minor nitpicks:

- The memorization section is not clear. The discussion on N is very confusing, as "14.4% for N = 2 and of 16.1% for N = 50" does not match any of "10.5%, 22.7%, and 34.2%". Can you give the full error table in the appendix?

Overall, I like the study and suggest to accept it, hoping the authors can fix the issues I raise during the rebuttal period.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Assessing Generalization in TD methods for Deep Reinforcement Learning ### Paper Abstract Current Deep Reinforcement Learning (DRL) methods can exhibit both data inefficiency and brittleness, which seem to indicate that they generalize poorly. In this work, we experimentally analyze this issue through the lens of memorization, and show that it can be observed directly during training. More precisely, we find that Deep Neural Networks (DNNs) trained with supervised tasks on trajectories capture temporal structure well, but DNNs trained with TD(0) methods struggle to do so, while using TD(lambda) targets leads to better generalization. ### Paper Keywords ["reinforcement learning", "deep learning", "generalization"] ### Paper Content ABSTRACTCurrent Deep Reinforcement Learning (DRL) methods can exhibit both data in-efficiency and brittleness, which seem to indicate that they generalize poorly. Inthis work, we experimentally analyze this issue through the lens of memorization ,and show that it can be observed directly during training. More precisely, we findthat Deep Neural Networks (DNNs) trained with supervised tasks on trajectoriescapture temporal structure well, but DNNs trained with TD(0) methods struggle todo so, while using TD( ) targets leads to better generalization.1 I NTRODUCTIONDeep neural networks (DNNs) trained on supervised learning tasks using i.i.d. data have shownthe capacity to learn quickly even from a small amount of samples (Hardt et al., 2016). Intuitively,this is due to each sample also providing information about the estimate corresponding to othersamples; research suggests that DNNs first extract structures that are informative of the modes of thedata (even if later on they can also memorize see Zhang et al. (2016); Arpit et al. (2017)), and thatthey can transfer well (Yosinski et al., 2014; Li et al., 2015), even from relatively few samples. Incontrast, in Deep Reinforcement Learning (DRL), the number of samples required for an agent tolearn successfully is often very high; many modern algorithms struggle to perform well until theyacquire tens of millions of samples (Mirowski et al., 2016; Vinyals et al., 2017; Hessel et al., 2018),and some even diverge to bad solutions (Anschel et al., 2017). While there are many facets to samplecomplexity and brittleness, we posit that a contributing factor is a lack of what we call gradientupdate generalization , i.e., whether performing updates at one state provides useful informationabout the value/policy at other states .Generalization in RL is of two types: (a) generalization to unseen states–will an agent trained on asingle MDP pick the optimal action for a state it has never seen before? (b) generalization to unseentasks–will an agent trained on a distribution of MDPs know how to act in an MDP it has never seenbefore? Both of these facets are actively studied. For example, Farebrother et al. (2018) exposesome generalization failures on the Atari domain (Bellemare et al., 2013) and study the impact ofregularization, Zhang et al. (2018) study the generalization capabilities of DRL agents on randomizedmazes, Packer et al. (2018) study the extrapolation capabilities of DRL agents trained on a distributionof environment parameters (e.g. pole mass in CartPole) outside of the training distribution, Cobbeet al. 
(2018) find that even on procedurally generated environments, DRL agents can easily overfit ontheir training set unless regularized, Oh et al. (2017) study the embedding regularizations necessaryfor agents to generalize to new instruction sequences on navigation tasks.In this study, we are not interested in measuring state generalization (i.e. predictions for unseenstates), nor task generalization (i.e. in terms of the quality of the behaviour), but rather generalizationwithin the process of stochastic gradient learning. In other words, since any kind of generalizationmust arise through the accumulation of parameter updates, it seems useful to measure whether theseparameter updates are themselves general . To this end, we propose the measure of gradient updategeneralization , best understood as a side-effect of neural networks sharing parameters over theirentire input space. That is, updating parameters after seeing one state will change the prediction forvirtually all other states; we are interested in measuring that change.TD methods are a broad class of RL algorithms that form a target for an update by utilizing thecurrent estimate of the value function. They include TD( 0) and TD() methods for estimating thevalue of a fixed policy, as well as Sarsa and Q-learning algorithms for control. TD methods have1Under review as a conference paper at ICLR 2020achieved success in some challenging tasks (Tesauro, 1995; Mnih et al., 2013; Hessel et al., 2018),but they are also known to have problems when coupled with function approximation (Sutton, 1995;Baird, 1995; Tsitsiklis & Van Roy, 1997; Chung et al., 2018). Previous studies explicitly addressedproblems such as leakage propagation in TD (Penedones et al., 2018), while others aimed to providesampling improvements (Schaul et al., 2015; Andrychowicz et al., 2017; Fu et al., 2019), explicittemporal regularization (Thodoroff et al., 2018), or auxiliary tasks which push the agent to learn moreabout the temporal structure in the data (Jaderberg et al., 2016).To our knowledge, no study to date has focused on the dynamics of the generalization process itself,within TD-based DRL methods1such as deep Q-Learning (Riedmiller, 2005; Mnih et al., 2013),Sarsa (Rummery & Niranjan, 1994), and TD( ) (Sutton, 1988; Schulman et al., 2015). For this study,we introduce the aforementioned measure of gradient update generalization , which enables us todifferentiate the learning behaviours of different methods. Overall, we find that:1.when doing a TD(0) update for a single state, parameters change in such a way that the valueprediction of other states is generally not affected, surprisingly even for states that are closeeither temporally or in an annotated “ground truth” state space;2.DNNs trained with TD(0), in contrast with DNNs trained on a memorization task or using asupervised objective, do not entirely memorize their state space, yet also do not generalize inthe way we would expect;3.both the choice of optimizer and the nature of the objective impact the generalization be-haviours of models; in particular, when increasing the parameter in TD( ), DNNs appear tocapture more temporal structure.2 T ECHNICAL BACKGROUNDA Markov Decision Process (MDP) (Bellman, 1957; Sutton & Barto, 2018) M=hS;A;R;P;iconsists of a state space S, an action space A, a reward function R:S!Rand a transitionprobability distribution P(s0js;a). RL agents aim to optimize the expectation of the long-term return:G(St) =1Xk=tktR(Sk): (1)where2[0;1)is called the discount factor. 
Policies (ajs)map states to action distributions.Value functions VandQmap states/states-action pairs to expected returns, and can be expressedrecursively:V(St) =E[G(St)] =E[R(St) +V(St+1)jAt(St);St+1P(St;At)] (2)Q(St;At) =E[R(St) +Xa(ajSt+1)Q(St+1;a)jSt+1P(St;At)] (3)WhileVcould also be learned via regression to observed values of G, these recursive equations giverise to the Temporal Difference (TD) update rules for policy evaluation, relying on current estimatesofVtobootstrap , e.g.:V(St) V(St)(V(St)(R(St) +V(St+1))); (4)where2[0;1)is the step-size. Bootstrapping leads also to algorithms such as Q-Learning (Watkins& Dayan, 1992) and fitted-Q (Ernst et al., 2005; Riedmiller, 2005):LQL(St;At;Rt;St+1) = [Q(St;At)(Rt+maxaQ(St+1;a))]2; (5)Sarsa (Rummery & Niranjan, 1994):LSarsa (St;At;Rt;St+1;At+1) = [Q(St;At)(Rt+Q(St+1;At+1))]2withAt(St)(6)1In contrast, policy-gradient algorithms such as PPO (Schulman et al., 2017) A3C (Mnih et al., 2016) andSAC (Haarnoja et al., 2018) are capable of learning good policies without necessarily having learned a good valuefunction, and although interesting results have emerged to understand learning behaviours in policy-gradientmethods (Ilyas et al., 2018), these methods build upon TD and analyzing them would add undesired confounders.2Under review as a conference paper at ICLR 2020andTD(), which trades off between the unbiased target G(St)and the biased TD(0) target (biaseddue to relying on the estimated V(St+1)), using a weighted averaging of future targets called a-return (Sutton, 1988; Munos et al., 2016):G(St) = (1)1Xn=1n124nV(St+n) +n1Xj=0jR(St+j)35 (7)LTD()(St) = (V(St)G(St))2(8)(note that the return depends implicitly on the trajectory followed from St). When= 0, the loss issimply (V(St)(Rt+V(St+1)))2, leading to the algorithm called TD(0) (Sutton, 1988).3 U PDATE GENERALIZATION IN DEEPRLWe will now define the measure we propose in order to quantify the speed at which generalization tounseen states occurs, and to characterize the structure under which this generalization occurs. Wedefine gradient update generalization as the expected improvement in the loss function L: X!Rafter updating parameters 2, on sample XU2X, using update function UL: X! (e.g. SGD or a semi-gradient methods like TD(0)):YL(XU;;U) =EX[L(X;)L(X;UL(;XU))]: (9)If generalization from the samples in XUtoXis good, this measure of gain should be large, andintuitively fewer other samples should be needed to achieve a desired level of performance. On theother hand, if on average the loss only decreases for the samples XUused in training, then moredata inXXUwill have to be visited before the model can learn. Hence, this measure is relatedto both sample complexity and the speed of learning (see Fig. 15 for empirical confirmation of thisphenomenon).As computing the exact expectation is usually intractable, we empirically measure gains on differentsubsetsXX . In particular, when Xis chosen to be a slice around XUin the replay buffer,we writeYnear. We also subscript Ywith the corresponding loss’ subscript, e.g. for (5),LQL, wewriteYQL. 
In this study, we are interested in TD-based methods that rely heavily on bootstrapping,Q-Learning, Sarsa, and TD( ), and measure Yusing their respective losses, (5), (6), and (8).Structure in DNNs A common intuition in deep learning (Zhang et al., 2016; Arpit et al., 2017;Zhang et al., 2018) is that DNNs first learn about the structure of their data, meaning the underlying(usually linear) factors of variation of the data being mapped into the hidden units’ space via parametersharing. These factors of variation are usually conceptualized as a low-dimensional space where eachdimension explains part of the data (Bengio et al., 2013). It is commonly assumed that a model whichgeneralizes well will naturally capture these factors in the configuration of its parameters, in whichcase the gradient of the prediction w.r.t. all examples sharing the same latent factors of variation willbe very close; updating with only one sample will change the prediction for all the related examples.Hence, a DNN which captures structure correctly should show high gradient update generalization.Temporal structure in RL Data used in RL algorithms usually exhibits two additional types ofstructure: coherence of the inputs in a trajectory over time (e.g. pixel values in adjacent frames areoften similar), and smoothness of the value function in time (in the sparse-reward case with close to1,V(St)V(St+1), which is smooth in time, aside from rare discontinuities upon seeing rewards).Since RL data consists of trajectories which often have strong temporal structure of both types, wehypothesize that the gain Ynearof temporally correlated examples should increase closer in time tothe sample used in the update.Parameter sharing Another indirect measure of update generalization related to parameter sharingis the difference since last visit , which we denote as . At each update iteration k, we computethe difference between the value Vk(s)orQk(s;a)predicted from the current parameters, k, andVlast (s)(s)orQlast (s)(s;a), i.e. the prediction made the last time state swas used for a gradientupdate.2To illustrate, if Vwas a lookup table, would always be 0, while for a DNN, when states2In practice, we simply cache the value prediction for all states in a replay buffer (as states in a continuousstate space are unlikely to be encountered many times), and update the cache after a minibatch update (for thosestates only).3Under review as a conference paper at ICLR 2020are aliased together, should accurately reflect the effect of parameter sharing after performingsequences of updates (in contrast, (9) uses only a single update).3.1 E XPERIMENTAL SETUPWe will now perform a series of experiments aimed at assessing the amount of generalization ofvarious bootstrapping algorithms, compared to supervised learning, in combination with DNNs.First, we test whether DNNs have a large gradient update generalization gain when trained underideal conditions (data generated by expert policies and labelled with correct values, which can beused in supervised learning). Then, we test the policy evaluation case (using the same input data, butbootstrapped targets instead of supervised learning). We then test the usual control case, when noexpert trajectories are available. Finally, we measure the effect of TD( ) on generalization gain inpolicy evaluation, as well as test Q-Learning’s robustness to withheld data.We perform our experiments on the Atari environment (Bellemare et al., 2013), with the stochasticsetup recommended by Machado et al. (2018). 
We use a standard DQN architecture (Mnih et al., 2013).In order to generate expert trajectories, we use rollouts from a policy trained with Rainbow (Hesselet al., 2018); we denote Da dataset of transitions obtained with this agent, and the parametersafter training that agent. For control experiments, we use Mnih et al. (2013)’s Q-Learning setup.When measuring Ynearwe choose the nearest 60 examples in time to a given state-action pair (30previous and 30 following on the same trajectory).3.2 A SSESSING TEMPORAL STRUCTURE WITH SUPERVISED LEARNINGIn this experiment, we will assess if temporal structure, as described above, exists and can be capturedby our architecture. To do so, we train DNNs starting from random parameters but with “ideal" targetscoming from the expert parameters and expert trajectories D; this removes all non-stationarityfrom the learning. We train Qwith 3 different objectives:MC: LMC(s;a;) = (Q(s;a)G(D)(s))2(10)Reg: Lreg(s;a;) = (Q(s;a)Q(s;a))2(11)TD:LTD(s;a;r;s ;) = (Q(s;a)(r+maxa0Q(s0;a0)))2(12)where byG(D)(s)we denote the Monte-Carlo return within the dataset D, as in (1). Note that sinceLTD“bootstraps” to , this should be roughly equivalent to Lreg, the latter being plain supervisedlearning (or some sort of distillation, à-laHinton et al. (2012)).Results are visualized in Fig. 1 for experiments ran on MsPacman, Asterix, and Seaquest for 10 runseach. Results are averaged over these three environments (they have similar magnitudes and variance).Learning rates are kept constant, they affect the magnitude but not the shape of these curves.We draw two conclusions from these results. First, as seen in Fig. 1a & 1b, all curves tend tohave large gains around x= 0(the sample used in the update), especially from indices -10 to 10,showing that there is some amount of temporal structure captured by both objectives. Since Qis a good approximation, we expect that Q(s;a)(r+maxa0Q(s0;a0)), soLregandLTDhave similar targets and we expect them to have similar behaviours. Indeed, in Fig. 1 their curvesmostly overlap. Second, there is a clear asymmetry between training on expectations (i.e. the learnedQ(s;a)ormaxa0Q(s0;a0)) and high-variance Monte-Carlo returns (red and blue curves in Fig. 1a).We hypothesize that since the returns Gare computed from the same state sequence that is used tomeasure the gain, Gistruly informative of the expected value of future states. Strangely, this doesnot seem to be the case for past states, which is surprising.3On the other hand, while Gappearsmore informative of future expected returns, it is not particularly more informative of future sampledreturns than past returns, which explains the symmetric nature of the MC gain shown in Fig. 1b.3A possible explanation is that, due to the exponential discounting nature of returns ( V(St)kV(St+k)aside discontinuities when R6= 0), the correlation between the current and future returns simply has a largermagnitude than with past returns. This might push DNNs to prefer to “assign capacity” w.r.t. 
future returns.4Under review as a conference paper at ICLR 2020−30−20−10 0 10 20 30Distance to minibatch sample in replay buffer−0.020.000.020.040.060.080.10TD gain ( YT D)MC+adamReg+adamTD∗+adamMC+rmspropReg+rmspropTD∗+rmsprop(a) Near TD gain, YnearTD−30−20−10 0 10 20 30Distance to minibatch sample in replay buffer0.000.050.100.150.200.250.30MC gain ( YMC)MC+adamReg+adamTD∗+adamMC+rmspropReg+rmspropTD∗+rmsprop (b) Near Monte-Carlo gain, YnearMCFigure 1: Supervised learning on Atari: Gain as a function of distance in the replay buffer from theupdate sample. We use dotted lines for the point at 0 distance, to emphasize that the correspondingstate was used for the update. (a-b) The curve around 0 indicates the temporal structure captured bythe TD and regression objectives.Another striking distinction in these curves appears between the Adam (Kingma & Ba, 2015) andRMSProp (Hinton et al., 2012) optimizers.4When moving far away from s, RMSProp tends toinduce a negative gain, while Adam tends to induce a near-zero gain. This is seen in Fig. 1a whereRMSProp’s TD gain is below 0 for states more than 10 steps away from the sample used in an update.Note that similar differences appear in virtually all following experiments, which we discuss later.3.3 P OLICY EVALUATION AND TD GAINWe have seen that DNNs can capture some temporal structure and have good gradient updategeneralization when given good quality inputs andtargets. We will now remove the expert targetsgenerated using the pretrained , but we will keep the expert inputs. This corresponds to policyevaluation on expert trajectories, and we would expect to see slightly worse generalization than in theprevious case.We run policy evaluation with 2 objectives, LQLandLSarsa as defined in (5), and (6)respectively,using a frozen target to bootstrap (Mnih et al., 2013), updated after every 10k minibatches. Experi-ments are run on 24 Atari environments (see A.1.1) for 10 runs each. Gain results are visualized inFig. 2, averaged over the 24 environments.The main observation from Fig. 2a is how narrow the peak around 0 is, suggesting that whenever astate’s value is updated, the prediction for other states does not change much in expectation, as ifthe representation were almost tabular, with estimates for encountered states being memorized. Theconclusion we draw is that, with a fixed data distribution, DNNs bootstrapping to an evolving targetnetwork will not proprely capture temporal structure, but will still be able to learn (at least in thesense of correctly approximating the value function).Another worrying observation is that RMSProp consistently has negative expected gain for nearbysamples (but large, larger than Adam, positive gain on XU, the minibatch sample), suggesting thatparameters trained with this optimizer memorize input-output pairs rather than assign capacity togeneralize.3.4 C OMPARING MEMORIZATION BEHAVIOUR IN POLICY EVALUATIONThe previous results established that some amount of memorization is done during TD-based policyevaluation. 
Quantifying memorization is still an open problem, but in this experiment we offer an4It has been reported that Adam is less sensitive than RMSProp to hyperparameters in value-based meth-ods (Hessel et al., 2018), although evidence suggests it doesn’t help policy gradients (Henderson et al., 2018).5Under review as a conference paper at ICLR 2020−30−20−10 0 10 20 30Distance to minibatch sample in replay buffer−0.250.000.250.500.751.00TD gain ( YT D)×10−2 All GamesQL+adamSarsa+adamQL+rmspropSarsa+rmsprop(a) Near TD gain, YnearTD−30−20−10 0 10 20 30Distance to minibatch sample in replay buffer−2−10TD gain ( YT D)×10−3 All GamesQL+adamSarsa+adamQL+rmspropSarsa+rmsprop (b) Near TD gain, YnearTD , zoom of (a) excluding XU, i.e.distance 0, to show the lack of temporal structureFigure 2: Policy evaluation on Atari: Gain as a function of distance in the replay buffer of the updatesample. (a) We use dotted lines for the point at 0 distance to emphasize that the corresponding statewas used for the update. (a-b) Compared to regression in Fig. 1a, almost no temporal structure iscaptured, which can be seen by how narrow the curve is around distance 0.interesting qualitative inspection to confirm that TD-based methods may lie somewhere between purememorization (acting like a lookup table) and strong generalization (capturing alllatent factors).In Zhang et al. (2016), the authors compare images classifiers trained with true labels to classifierstrained with random labels (in which case the model hasto simply memorize the labels), finding that,surprisingly, both can reach 0 training error. While this suggests that DNNs may also memorize whengiven the true labels, further studies showed many behavioural differences between the two setups,notably that DNNs first captured structure, and only afterwards fit random noise (Arpit et al., 2017).Taking inspiration from Zhang et al. (2016), we assign a random class in [N]to every state inD,change ourQfunction to be a usual classifier with Noutputs, and introduce a new objective, Lrand,which is simply the cross-entropy between the random class and the prediction. Experiments arerun on MsPacman, Breakout, and Seaquest. We use datasets of sizes 10k, 100k, and 500k, and useN2f2;10;50g. Interestingly, the architecture of Mnih et al. (2013) that is reused here struggles toreach 0 error5(for example, a model trained with 10k samples with N= 2reaches 5.7% error, whilea model trained with 500k and N= 50 totally fails at 85% error, see Table ??).Fig. 3 shows the evolution during training of the distribution of (S;A) =Q(S;A;current )Q(S;A;last (S)), wherelast (S)represents the value of the parameters when Swas last used in aminibatch update, and current represents the value of the parameters right before usingSfor themost recent update. If the parameters were those of a look-up table, would always be 0. For lossesother thanLrand (Q-Learning, Sarsa, and MC) we reuse the results of the previous section (with adataset size of 500k).The difference between Fig. 3a and Fig. 3b-d is compelling, and somewhat reassuring. In Fig. 3a thelog-likelihood for = 0 is above -2 (white) showing that it is very unlikely for the prediction at astate to have changed by more than 0:01when it is updated. In contrast, the distribution of ismore spread out in Fig. 3b-d. Combined with the fact that the memorization experiment does notreach 0 error, this allows us to confidently claim that DQN is not fully memorizing its state space .Even though the gain curve in Fig. 
2 is very close to 0, except at the update sample (i.e. temporalstructure is poorly captured), some structure is captured by DNNs that allow them to learn about astate without having to use it explicitly in an update.5This could be due to the particularly shallow architecture of Mnih et al. (2013), as architectures with lessparameters but more layers are commonly assumed to have more effective capacity. It has indeed been shownthat deeper models can distinguish between exponentially more linear regions (Montufar et al., 2014).6Under review as a conference paper at ICLR 20200 200 400Iterations (thousands)-1.0-0.8-0.6-0.4-0.20.00.20.40.60.81.0Difference ( ∆) since last visit−10−8−6−4−2log(P( ∆=di))(a)Lrand0 200 400Iterations (thousands)-1.0-0.8-0.6-0.4-0.20.00.20.40.60.81.0Difference ( ∆) since last visit−10−8−6−4−2log(P( ∆=di)) (b)LQL0 200 400Iterations (thousands)-1.0-0.8-0.6-0.4-0.20.00.20.40.60.81.0Difference ( ∆) since last visit−10−8−6−4−2log(P( ∆=di)) (c)Lsarsa0 200 400Iterations (thousands)-1.0-0.8-0.6-0.4-0.20.00.20.40.60.81.0Difference ( ∆) since last visit−10−8−6−4−2log(P( ∆=di))(d)LMC0 200 400Iterations (thousands)-1.0-0.8-0.6-0.4-0.20.00.20.40.60.81.0Difference ( ∆) since last visitl=0.5−10−8−6−4−2log(P( ∆=di)) (e)LTD();= 0:50 200 400Iterations (thousands)-1.0-0.8-0.6-0.4-0.20.00.20.40.60.81.0Difference ( ∆) since last visitl=0.95−10−8−6−4−2log(P( ∆=di)) (f)LTD();= 0:95Figure 3: Policy evaluation on Atari: evolution of the distribution of , the difference since lastvisit, during training. In (a) the DNN is forced to memorize, as such the density of is concentratedaround 0 (thin red/white band). In (b-c), Q-Learning and Sarsa, the density is much less peaked at 0(larger yellow/green bands) as the DNN learns about states without visiting them. In (d) the DNNlearns quickly presumably without memorizing (the distribution of is more spread out and not asconcentrated around 0, seen by the larger yellow/green band), as it is trained on Monte-Carlo returns,and quickly converges as can be seen by the high density of positive s early. In (e,f) we see theeffect of using returns (see appendix A.6 for all values of ).3.5 TD G AIN IN CONTROLHaving removed in section 3.3, we now additionally remove Dand simply perform Q-Learningfrom scratch on MsPacman, Asterix, and Seaquest for 10M steps.Results are shown in Fig. 4. Interestingly, while Q-Learning does not have as strong a gain as theregressions from Fig. 1, it has a larger gain than policy evaluation. This may have several causes, andwe investigate two:Initially, because of the random exploratory policy, the DNN sees little data, and may be ableto capture a minimal set of factors of variation; then, upon seeing new states, the extractedfeatures are forced to be mapped onto those factors of variation, improving them, leading to anatural curriculum. By looking at the singular values of the last hidden layer’s matrix after100k steps, we do find that there is a consistently larger spread in the policy evaluation casethan the control case (see appendix A.3), showing that in the control case fewer factors areinitially captured. This effect diminishes as training progresses.Having run for 10M steps, control models could have been trained on more data and thus beforced to generalize better; this turns out notto be the case, as measuring the same quantitiesfor only the first 500k steps yields very similar magnitudes (see appendix A.4).Interestingly, these results are consistent with those of Agarwal et al. (2019), who study off-policylearning. 
Interestingly, these results are consistent with those of Agarwal et al. (2019), who study off-policy learning. Among many other results, Agarwal et al. (2019) find that off-policy-retraining a DQN model on another DQN agent's lifetime set of trajectories yields much worse performance on average.

Figure 4 (two panels; x-axis: distance to minibatch sample in replay buffer; y-axis: TD gain Y_TD; curves: adam, msgd, rmsprop, sgd, over all games): Q-Learning on Atari: gain as a function of distance in the replay buffer of the update sample. (a) Near TD gain Y^near_TD. (b) Zoom excluding X_U, i.e. distance 0, to show the magnitude of generalization; for Adam and RMSProp, we include the corresponding curves for policy evaluation in lighter shades. (a-b) Compared to policy evaluation, gain appears to be better, but not as large as for regression.

While the authors suggest the strongly off-policy aspect of this experiment as the cause, our results still show differences between control-Q-Learning and policy-evaluation-Q-Learning, which are both done "on-policy" in our setup, suggesting there are more factors at play than only off-policyness.

Note that we also additionally run experiments with the SGD and Momentum-SGD optimizers to highlight the difference between Adam, which has a momentum component, and RMSProp, which only scales per-parameter learning rates. Predictably, Momentum-SGD's behaviour is similar to Adam's, and SGD's to RMSProp's.

3.6 TD(λ) AND RELIANCE ON BOOTSTRAPPING

TD(λ) trades off between the immediate biased estimates of the future values and the true return through its λ parameter. To observe the effect of this parameter we perform policy evaluation on D with the L_TD(λ) objective on MsPacman.

Results are shown in Fig. 5, where we can observe that (1) increasing λ increases near gain without overly increasing update-sample gain, and (2) as for L_MC, there is an asymmetry: updating informs us more about the future than about the past, on average. Results for the distribution of Δ are shown in Fig. 3(e, f) (and appendix A.6), where we see that the closer λ is to 1, the more the TD(λ) objective creates updates that affect all states.

These results seem to indicate that TD(λ) better captures factors of variation. One cause could be that the more one relies on a sequence of DNN predictions (i.e. the sequence of n-step returns of the λ-return depends on the successive V(S_{t+i})) to build a target, the more correlation there is between states and targets (due to DNN smoothness), and the more temporal coherence there is (and thus more opportunities for DNNs to capture the temporal dimension's correlations). This is hard to verify empirically, but we can proxy the correlation measure via the similarity between gradients. We do indeed find that the closer λ is to 1, the higher the average cosine similarity between gradients is (see appendix A.5). This suggests that it may be advantageous to use λ-returns in environments where generalization is important.
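For concreteness, the λ-return target underlying L_TD(λ) can be computed backwards over a trajectory as in the sketch below (toy rewards and value predictions; the episode is assumed to terminate, so no bootstrap is needed at the last step):

```python
# Backward computation of the lambda-return, which interpolates between the
# TD(0) target (lam -> 0) and the Monte-Carlo return (lam -> 1).
import numpy as np

def lambda_returns(r, v, gamma=0.99, lam=0.95):
    """G[t] = r[t] + gamma * ((1 - lam) * v[t+1] + lam * G[t+1])."""
    G = np.zeros_like(r, dtype=np.float64)
    G[-1] = r[-1]                      # terminal step: no bootstrapping
    for t in range(len(r) - 2, -1, -1):
        G[t] = r[t] + gamma * ((1 - lam) * v[t + 1] + lam * G[t + 1])
    return G

r = np.array([0.0, 0.0, 1.0, 0.0, 5.0])   # toy rewards along one trajectory
v = np.array([0.5, 0.6, 0.9, 1.2, 2.0])   # toy value predictions V(S_t)
print(lambda_returns(r, v))                # regression targets for L_TD(lambda)
```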
3.7 TESTING GENERALIZATION WITH AN INTRA-TASK TEST SET

Another way to assess whether agents fail to properly generalize in the sense of statistical inference – making predictions about states without visiting them – is to create a test set to measure generalization error. We do so on the MsPacman Atari environment, as it contains many opportunities for generalization in translational invariances (locally, the optimal action only depends on the surrounding configuration of the agent, reward pellets, and ghosts).

Figure 5 (x-axis: distance to minibatch sample in replay buffer; y-axis: TD gain Y_TD, ×10⁻²; curves for λ ∈ {0.1, 0.25, 0.5, 0.75, 0.8, 0.9, 0.95, 0.99} on MsPacman): TD gain for policy evaluation with TD(λ) & Adam. Note the larger gain as λ goes to 1, as well as the asymmetry around 0.

Figure 6 (x-axis: iterations (millions); y-axis: average episodic reward; curves for p ∈ {0, 0.1, 0.25, 0.5} on MsPacman): Episodic rewards over Q-Learning on Atari. We train an agent while withholding states from the training set with probability p.

We train our agent with the usual DQN setup (Mnih et al., 2013) but prevent the insertion of a state into the replay buffer with some probability p. More specifically, we use the RAM (ground truth) state information to exclude observations from training. We run 5 seeds for each p ∈ {0, 0.1, 0.25, 0.5}.

Results are shown in Fig. 6, where we see that withholding only 10% of states already slightly affects agents. At 50%, performance is significantly reduced. While this is somewhat expected and consistent with the literature (Farebrother et al., 2018), it again attests that TD-based methods can struggle with generalization, as observed also by Packer et al. (2018), who study interpolation and extrapolation failures in deep RL agents.
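A hypothetical sketch of this withholding mechanism is given below; `state_key` stands in for the RAM-based ground-truth identity of a state, and all names are illustrative rather than taken from the paper's code:

```python
# Replay buffer that permanently withholds a fraction p of unique states,
# turning them into an intra-task "test set" the agent never trains on.
import random

class FilteredReplay:
    def __init__(self, p, capacity=100_000, seed=0):
        self.p = p                          # withholding probability
        self.capacity = capacity
        self.buffer = []
        self.held_out = {}                  # state_key -> withheld flag
        self.rng = random.Random(seed)

    def add(self, state_key, transition):
        if state_key not in self.held_out:  # decide once per unique state
            self.held_out[state_key] = self.rng.random() < self.p
        if self.held_out[state_key]:
            return                          # withheld: never used for updates
        self.buffer.append(transition)
        if len(self.buffer) > self.capacity:
            self.buffer.pop(0)

    def sample(self, k):
        return self.rng.sample(self.buffer, k)
```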
3.8 ADDITIONAL OBSERVATIONS

On other structures. Our figures mostly show gradient update generalization gain as a function of "time" (temporal distance within a trajectory), but there might be structure elsewhere. We measured gain as a function of 3 different metrics: ground truth state distance by reusing the Annotated Atari RAM of Anand et al. (2019), value distance (as DNNs may alias states with the same value), and feature distance. Unfortunately, we were unable to find correlations (see appendix A.2).

On convergence. Figures 1, 2, and 4 show values averaged over the course of training. We find that except in the first few iterations, these curves remain constant throughout training (see figures in A.4) and show no sign of convergence. This is also consistent with previous studies, as DQN is known to not converge on Atari (Anschel et al., 2017).

On variance. While Var(Y_L) tends to be large, we find that the confidence interval of the mean is always small, and would barely appear on most of our plots. Additionally, although generalization gain is typically a fraction of the magnitude of the value function, it is consistently non-zero.

On optimizers. We find that the systematic differences we see between Adam and RMSProp also occur in behaviour, where control agents trained with RMSProp tend to get slightly more reward. An interpretation of our results is that RMSProp memorizes faster than Adam: it has much larger on-sample gain, it tends to make the singular values of the weight matrices larger, and it has negative near-sample gain, suggesting that capacity is spent memorizing on average. In Atari tasks, memorization can be an efficient strategy (although it is sensitive to noise, see Machado et al. (2018)). Hence, the better performance of RMSProp on Atari is consistent with our claims. This property may not be as desirable in more complex environments requiring generalization.

4 DISCUSSION

RL is generally considered a harder problem than supervised learning. Hence, the fact that TD-style methods require more samples than supervised learning when used with deep nets is not necessarily surprising. However, with the same data and the same final targets (the "true" value function), it is not clear why TD updates lead to parameters that generalize worse than supervised learning. This could be a problem, as most RL methods rely on the TD mechanism in one way or another. In particular, our results show that both Q-Learning and Sarsa generalize poorly, leading to DNNs that memorize the training data (not unlike table lookup). Our results also suggest that TD(λ), although not widely used in recent DRL, improves generalization. Finally, we find differences between Adam and RMSProp that we initially did not anticipate. Very little work has been done to understand and improve the coupling between optimizers and TD, and our results indicate that this would be an important future work direction.

Our work suggests that the RL community should pay special attention to the current research on generalization in DNNs, because approaching the TD bootstrapping mechanism as a supervised learning problem does not seem to leverage the full generalization potential of DNNs.
### Review Title
Official Blind Review #3
### Review Text
The manuscript analyzes "generalization" in TD(lambda) methods. It covers supervised learning from trajectories, on-policy imitation learning, and a basic RL setting. Moreover, memorization performance has also been measured. The main conclusion is that TD(0) performs very similarly to tabular learning, failing to transfer inductive biases between states. There are also additional surprising results about optimization. The empirical study is rather complete and significant. It raises interesting questions for the community and states some clear open problems. Results are conclusive and interesting. I believe it is a study which a practitioner using TD-based methods should be aware of. Hence, I believe it is impactful. On the other hand, the manuscript has some significant issues which need to be resolved, as follows: - One major issue is calling the analyzed metric "generalization". Generalization by definition requires something beyond what is seen. I believe the quantity defined in (9) is generalization. However, it cannot be computed. Hence, calling its empirical version "generalization" is confusing and a clear misuse of the term. I strongly urge the authors to call the observed quantity something else. "Empirical expected improvement", "gradient regularity", "expected gain", etc. are some candidates that come to my mind. - The optimization aspect is very interesting; however, it confuses the exposition significantly. I think giving all results using Adam first, and then showing the comparisons between Adam and RMSProp later, would be much more readable and easier to understand. - There are some clarity issues in the explanation of the experiments. Figure 3 is very confusing and requires multiple readings to be understandable. A clearer visualization or a better explanation would improve the paper. - I am puzzled about why the authors did not use Q_MC in the policy evaluation experiments (Section 3.3). I think it can very well be used in a straightforward manner. It would be an interesting addition to the experiments. Minor nitpicks: - The memorization section is not clear.
The discussion on N is very confusing, as "14.4% for N = 2 and of 16.1% for N = 50" does not match any of "10.5%, 22.7%, and 34.2%". Can you give the full error table in the appendix? Overall, I like the study and suggest accepting it, hoping the authors can fix the issues I raise during the rebuttal period.
### Review Rating
6: Weak Accept
### Review Confidence
SJNDWNOlg
ICLR.cc/2017/conference
2017
What Is the Best Practice for CNNs Applied to Visual Instance Retrieval?
["Jiedong Hao", "Jing Dong", "Wei Wang", "Tieniu Tan"]
Previous work has shown that feature maps of deep convolutional neural networks (CNNs) can be interpreted as the feature representation of a particular image region. Features aggregated from these feature maps have been exploited for image retrieval tasks and achieved state-of-the-art performances in recent years. The key to the success of such methods is the feature representation. However, the different factors that impact the effectiveness of features are still not explored thoroughly. There is much less discussion about the best combination of them. The main contribution of our paper is the thorough evaluation of the various factors that affect the discriminative ability of the features extracted from CNNs. Based on the evaluation results, we also identify the best choices for different factors and propose a new multi-scale image feature representation method to encode the image effectively. Finally, we show that the proposed method generalises well and outperforms the state-of-the-art methods on four typical datasets used for visual instance retrieval.
["Computer vision", "Deep learning"]
ABSTRACT

Previous work has shown that feature maps of deep convolutional neural networks (CNNs) can be interpreted as the feature representation of a particular image region. Features aggregated from these feature maps have been exploited for image retrieval tasks and achieved state-of-the-art performances in recent years. The key to the success of such methods is the feature representation. However, the different factors that impact the effectiveness of features are still not explored thoroughly. There is much less discussion about the best combination of them.

The main contribution of our paper is the thorough evaluation of the various factors that affect the discriminative ability of the features extracted from CNNs. Based on the evaluation results, we also identify the best choices for different factors and propose a new multi-scale image feature representation method to encode the image effectively. Finally, we show that the proposed method generalises well and outperforms the state-of-the-art methods on four typical datasets used for visual instance retrieval.

1 INTRODUCTION

Image retrieval is an important problem both for academic research and for industrial applications. Although it has been studied for many years (Sivic & Zisserman, 2003; Philbin et al., 2007; Tolias et al., 2015), it is still a challenging task. Generally, image retrieval is divided into two groups. The first one is category-level image retrieval (Sharma & Schiele, 2015), in which an image in the dataset is deemed to be similar to the query image if they share the same class or they are similar in shape and local structures. The other group is instance-level image retrieval (Tolias et al., 2015), in which an image is considered to match the query if they contain the same object or the same scene. Instance-level image retrieval is harder in that the retrieval method needs to encode local and detailed information in order to tell two images apart; e.g., the algorithm should be able to detect the differences between the Eiffel Tower and other steel towers although they have similar shapes. In this paper, we focus on instance-level image retrieval.

Traditionally, visual instance retrieval is mainly addressed by BoF (bag of features) based methods using local feature descriptors such as SIFT (Lowe, 2004). In order to boost the retrieval performances, post-processing techniques such as query expansion (Chum et al., 2007) and spatial verification (Philbin et al., 2007) are also employed.

With the decisive victory (Krizhevsky et al., 2012) over traditional models in the ImageNet (Russakovsky et al., 2015) image classification challenge, convolutional neural networks (Lecun et al., 1998) continue to achieve remarkable success in diverse fields such as object detection (Liu et al., 2015; Shaoqing Ren, 2015), semantic segmentation (Dai et al., 2016) and even image style transfer (Gatys et al., 2016). Networks trained on the ImageNet classification task can generalize quite well to other tasks, which are either used off-the-shelf (Razavian et al., 2014a) or fine-tuned on task-specific datasets (Azizpour et al., 2014; Long et al., 2015). Inspired by all these, researchers in the field of image retrieval have also shifted their interest to CNNs.
Their experiments have shown promising and surprising results (Babenko et al., 2014; Razavian et al., 2014c; Tolias et al., 2015), which are on par with or surpass the performances of conventional methods like BoF and VLAD (vector of locally aggregated descriptors) (Jégou et al., 2010; Arandjelović & Zisserman, 2013).

Despite all these previous advances (Babenko et al., 2014; Babenko & Lempitsky, 2015; Tolias et al., 2015) on using CNNs for image feature representation, the underlying factors that contribute to the success of off-the-shelf CNNs on image retrieval tasks are still largely unclear and unexplored, e.g., which layer is the best choice for instance retrieval, the convolutional layer or the fully-connected layer? What is the best way to represent the multi-scale information of an image? Clarifying these questions will help us advance a further step towards building a more robust and accurate retrieval system. Also, in situations where a large number of training samples is not available, instance retrieval using an unsupervised method is still preferable and may be the only option.

In this paper, we aim to answer these questions and make three novel contributions. Unlike previous papers, we explicitly choose five factors to study the image representations based on CNNs and conduct extensive experiments to evaluate their impacts on the retrieval performances. We also give detailed analysis of these factors and give our recommendations for combining them. During experiments, we borrow wisdom from the literature and evaluate its usefulness, but find that it is not as effective as some of the simpler design choices. Second, by combining the insights obtained during the individual experiments, we are able to propose a new multi-scale image representation, which is compact yet effective. Finally, we evaluate our method on four challenging datasets, i.e., Oxford5k, Paris6k, Oxford105k and UKB. Experimental results show that our method is generally applicable and outperforms all previous methods on compact image representations by a large margin.

2 RELATED WORK

Multi-scale image representation. Lazebnik et al. (2006) propose the spatial pyramid matching approach to encode the spatial information using BoF based methods. They represent an image using a pyramid of several levels or scales. Features from different scales are combined to form the image representation in such a way that coarser levels get less weight while finer levels get more weight. Their argument is that matches found in coarser levels may involve increasingly dissimilar image features. In our paper, we also explore the multi-scale paradigm in the same spirit, using the convolutional feature maps as the local descriptors. We find that the deep features from the convolutional feature maps are distinct from the traditional descriptors: the weighted sum of different levels of features shows no superior performance over a simple summation of them. Kaiming et al. (2014) devise an approach called SPP (spatial pyramid pooling). In SPP, feature maps of the last convolutional layer are divided into a 3 or 4 scale pyramid. First the regional features in each scale are concatenated, then the scale-level features are concatenated to a fixed-length vector to be forwarded to the next fully-connected layers.
We find that this strategy is ineffective for unsupervised instance retrieval, leading to inferior performances compared to other simple combination methods (see the part about multi-scale representation in section 5.2 for more details).

Image representation using off-the-shelf CNNs. Gong et al. (2014) propose the MOP (multi-scale orderless pooling) method to represent an image, in which VLAD is used to encode the level 2 and level 3 features. Then features from different scales are PCA-compressed and concatenated to form the image features. This method is rather complicated and time-consuming. At the same time, Babenko et al. (2014) use Alexnet (Krizhevsky et al., 2012) trained on the ImageNet 1000-class classification task and retrain the network on task-related datasets. The retraining procedure gives a boost to the retrieval performances. Instead of using the output of the fully-connected layers as the image feature representations, Babenko & Lempitsky (2015) use the output feature maps of the last convolutional layer to compute the image features. Recently, instead of sum-pooling the convolutional features, Tolias et al. (2015) use max-pooling to aggregate the deep descriptors. Their multi-scale method, called R-MAC (regional maximum activation of convolutions), further improves the previous results on four common instance retrieval datasets. Our work differs from these papers in that we explicitly explore the various factors that underpin the success of unsupervised instance retrieval, which have not been fully explored and analysed. By carefully choosing the setting for each factor and combining them in a complementary way, we show that a large improvement can be achieved without additional cost.

3 IMPACTING FACTORS

When we employ off-the-shelf CNNs for the task of instance-level image retrieval, a natural question is: what kind of design choices should we make in order to make full use of the representational power of existing models? In this section, we summarize the five factors that may greatly impact the performance of the final image retrieval system. In section 5.2, we will show our experimental results on each key factor. Before we delve into the impacting factors, we first give a brief introduction about how to represent an image using the activation feature maps of a certain layer.

3.1 CNN FEATURES FOR INSTANCE RETRIEVAL

In this paper, we are mainly interested in extracting compact and discriminative image features using off-the-shelf CNNs in an efficient way. For a given image I, we simply subtract the mean value of the RGB channels from the original image and do not do other sophisticated preprocessing. Then the image is fed into the convolutional network and goes through a series of convolutions, non-linear activations and pooling operations. The feature activation maps of a certain layer can be interpreted as the raw image features, based on which we build the final image features. These feature maps form a tensor of size K × H × W, where K is the number of feature channels, and H and W are the height and width of a feature map. Each feature map represents a specific pattern which encodes a small part of information about the original image.
If we represent the set of feature maps as F = {F_i}, i = 1, 2, ..., K, where F_i is the i-th activation feature map, then the simplest image feature is formulated as:

f = [f_1, f_2, ..., f_i, ..., f_K]^T.   (1)

In equation (1), f_i is obtained by applying the feature aggregation method (see section 3.2) over the i-th feature map F_i. Throughout this paper, we use feature maps after the non-linear activations (ReLU) so that the elements in each feature map are all non-negative. We also experiment with feature maps prior to ReLU, but find that they lead to inferior performances. After the image feature representation is obtained, post-processing techniques such as PCA and whitening can be further applied.

3.2 IMPACTING FACTORS ON PERFORMANCE

Feature aggregation and normalization. After the feature maps of a certain layer are obtained, it is still challenging to aggregate the 3-dimensional feature maps to get compact vector representations for images. Previous papers use either sum-pooling (Babenko & Lempitsky, 2015) or max-pooling (Tolias et al., 2015) followed by l2-normalization. Sum-pooling over a particular feature map F_i is expressed as

f_i = Σ_{m=1}^{H} Σ_{n=1}^{W} F_i(m, n),  i ∈ {1, 2, ..., K},   (2)

while max-pooling is given by

f_i = max_{m,n} F_i(m, n),   (3)

where m, n range over all the spatial coordinates of size H × W. In this paper, for the first time, different combinations of aggregation and normalization methods (l2, and l1 in the manner of RootSIFT (Arandjelović & Zisserman, 2012)) are evaluated and their results are reported.
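As a small illustration, equations (1)-(3) plus the two normalizations amount to the following sketch; the feature-map tensor is a random stand-in for post-ReLU activations such as conv5_4:

```python
# Aggregate a (K, H, W) feature-map tensor into a K-dim image feature via
# sum-pooling (eq. 2) or max-pooling (eq. 3), then normalize.
import numpy as np

def aggregate(fmaps, pooling="max", norm="l2"):
    """fmaps: (K, H, W) post-ReLU activations -> K-dim image feature."""
    K = fmaps.shape[0]
    flat = fmaps.reshape(K, -1)
    f = flat.sum(axis=1) if pooling == "sum" else flat.max(axis=1)
    if norm == "l1":                   # RootSIFT-style: l1-normalize, then sqrt
        return np.sqrt(f / (np.abs(f).sum() + 1e-12))
    return f / (np.linalg.norm(f) + 1e-12)

fmaps = np.maximum(np.random.randn(512, 14, 14), 0)  # fake conv5_4 activations
print(aggregate(fmaps, "max", "l2").shape)           # (512,)
```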
Output layer selection. Zeiler & Fergus (2014) have shown that image features aggregated from the feature activation maps of certain layers have interpretable semantic meanings. Gong et al. (2014) and Babenko et al. (2014) use the output of the first fully-connected layer to obtain the image features, while Babenko & Lempitsky (2015) and Tolias et al. (2015) use the output feature maps of the last convolutional layer. But these choices are somewhat subjective. In this paper, we extract dataset image features from the output feature maps of different layers and compare their retrieval performances. Based on the findings of this experiment, we choose the best-performing layer and also come up with a layer ensemble approach which outperforms state-of-the-art methods (see section 5.3).

Image resizing. Famous models such as Alexnet (Krizhevsky et al., 2012) and VGGnet (Simonyan & Zisserman, 2014) all require that the input images have a fixed size. In order to meet this requirement, previous papers (Gong et al., 2014; Babenko & Lempitsky, 2015) usually resize the input images to the fixed size. We postulate that the resizing operation may lead to the distortion of important information about the objects in natural images. Ultimately, this kind of operation may hurt the discriminative power of image features extracted from the network, thus degrading the retrieval performances. For the task of image retrieval, we think it is best to keep the images at their original sizes and feed them directly to the network whenever possible. In this paper, three image resizing strategies are explored:

• Both the height and width of the dataset images are set to the same fixed value (denoted as two-fixed).
• The minimum of each dataset image's size is set to a fixed value, keeping the aspect ratio of the original image (denoted as one-fixed).
• The images are kept at their original sizes (denoted as free).

Multi-scale feature representation. Unlike local feature descriptors such as SIFT (Lowe, 2004), the feature vector extracted from a deep convolutional network for an image is a global descriptor which encodes the holistic information. When used for image retrieval, this kind of feature still lacks the detailed and local information desired to accurately match two images. Inspired by spatial pyramid matching (Lazebnik et al., 2006) and SPP (Kaiming et al., 2014), we explore the feasibility of applying this powerful method to obtain discriminative image features. An image is represented by an L-level pyramid, and at each level, the image is divided evenly into several overlapping or non-overlapping regions. The vector representations of these small regions are computed, then the regional vectors are combined to form the image feature vectors. The single-scale representation of an image is just a special case of the multi-scale method in which the number of levels L equals 1.

Figure 1 ((a) level 1, (b) level 2, (c) level 3): An illustration of the multi-scale representation of an image. The whole image is divided into 3 levels from the coarsest (level 1) to the finest (level 3). At each level, the image is divided into a different number of equal-sized regions.

Figure 1 shows an example of a 3-level representation of an image. The time cost of re-feeding those small regions into the network to compute the regional vectors would be huge, thus unacceptable for instance retrieval tasks. Inspired by the work of Girshick (2015) and Tolias et al. (2015), we assume a linear projection between the original image regions and the regions in the feature maps of a certain layer. Then the regional feature vectors can be efficiently computed without re-feeding the corresponding image regions. In section 5.2, various settings for the multi-scale and scale-level feature combination methods are explored and their retrieval performances are reported and analysed.

PCA and whitening. Principal Component Analysis (PCA) is a simple yet efficient method for reducing the dimensionality of feature vectors and decorrelating the feature elements. Previous work (Babenko et al., 2014; Jégou et al., 2010) has shown evidence that PCA and whitened features can actually boost the performances of image retrieval. In this paper, we further investigate the usefulness of PCA and whitening within our pipeline and give some recommendations.

4 IMPLEMENTATION

We use the open source deep learning framework Caffe (Jia et al., 2014) for all our experiments. The aim of this research is to investigate the most effective ways to exploit the feature activations of existing deep convolutional models. Based on past practices for networks to go deeper (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; Szegedy et al., 2015; He et al., 2015), a consideration for moderate computational cost, and also the results from Tolias et al. (2015) that deeper networks work better than shallower ones, we decide to use the popular VGG-19 model (Simonyan & Zisserman, 2014) trained on ImageNet as our model.

Network transformation. The original VGG-19 network only accepts an image of fixed size (224 × 224), which is not the optimal choice when extracting image features for retrieval tasks. In order for the network to be able to process an image of arbitrary size (of course, the image size cannot exceed the GPU's memory limit) and for us to experiment with different input image resizing strategies, we adapt the original VGG-19 network and change the fully-connected layers to convolutional (Long et al., 2015) layers. For more details about network transformations, see appendix A.
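Although the paper defers the details to its appendix A, a transformation of this kind is commonly done by copying each fully-connected layer's weights into an equivalent convolution; the sketch below shows the general idea under standard VGG-19 shape assumptions (it is not the paper's exact procedure):

```python
# Convert fully-connected layers into equivalent convolutions so the network
# becomes fully convolutional and accepts inputs of arbitrary size.
import torch.nn as nn

def fc_to_conv(fc, kernel_size):
    """Copy an nn.Linear's weights into an equivalent nn.Conv2d."""
    out_f, in_f = fc.weight.shape
    in_ch = in_f // (kernel_size * kernel_size)
    conv = nn.Conv2d(in_ch, out_f, kernel_size)
    conv.weight.data = fc.weight.data.view(out_f, in_ch, kernel_size, kernel_size)
    conv.bias.data = fc.bias.data
    return conv

fc6 = nn.Linear(512 * 7 * 7, 4096)   # VGG fc6 expects a 7x7x512 pool5 map
fc6_conv = fc_to_conv(fc6, 7)        # becomes a 7x7 conv with 4096 channels
fc7 = nn.Linear(4096, 4096)
fc7_conv = fc_to_conv(fc7, 1)        # becomes a 1x1 conv
```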
5 EXPERIMENTS

In this section, we first introduce the datasets used and the evaluation metrics. Then we report our experimental results for different impacting factors and give detailed analysis. In the last part, we show the performance of our method considering all these impacting factors and compare our method with the state-of-the-art methods on four datasets.

5.1 DATASETS AND EVALUATION METRICS

The Oxford5k dataset (Philbin et al., 2007) contains 5062 images crawled from Flickr by using 11 Oxford landmarks as queries. A total of 11 groups of queries — each having 5 queries with their ground truth relevant image list — are provided. For each query, a bounding box annotation is also provided to denote the query region. During experiments, we report results using the full query images (denoted as full-query) and image regions within the bounding boxes of the query images (denoted as cropped-query). The performance on this dataset is measured by mAP (mean average precision) over all queries.

The Paris6k dataset (Philbin et al., 2008) includes 6412 images [1] from Flickr which contain 11 landmark buildings and general scenes from Paris. Similar to the Oxford5k dataset, a total of 55 queries belonging to 11 groups and the ground truth bounding boxes for each query are provided. The performance is reported as mAP over 55 queries.

The Oxford105k [2] dataset contains the original Oxford5k dataset and an additional 100,000 images (Philbin et al., 2007) from Flickr. The 100,000 images are disjoint from the Oxford5k dataset and are used as distractors to test the retrieval performance when the dataset scales to a larger size. We use the same evaluation protocol as for Oxford5k on this dataset.

The UKB dataset (Nistér & Stewénius, 2006) consists of 10200 photographs of 2550 objects, each object having exactly 4 images. The pictures of these objects are all taken indoors with large variation in orientation, scale, lighting and shooting angles. During experiments, each image is used to query the whole dataset. The performance is measured by the average number of same-object images in the top-4 results.

[1] Following conventions, 20 corrupted images from this dataset are removed, leaving 6392 valid images.
[2] The image named "portrait_000801.jpg" was corrupted and manually removed from this dataset.

5.2 RESULTS AND DISCUSSION

In this section, we report the results of experiments on the impact of different factors and analyse their particular impact. The experiments in this section are conducted on the Oxford5k dataset.

Table 1: Comparison between different combinations of feature aggregation and normalization methods.

Method | full-query | cropped-query
max-l1 | 52.4 | 48.0
sum-l2 | 58.0 | 52.6
sum-l1 | 60.3 | 56.3
max-l2 | 60.1 | 53.5
Table 2: Comparison between different image resizing strategies. The numbers in the parentheses denote the sizes at which the maximum mAPs are achieved.

Method | full-query | cropped-query
two-fixed | 55.5 (864) | 38.7 (896)
one-fixed | 59.0 (800) | 39.3 (737)
free | 58.0 | 52.6

Feature aggregation and normalization. In this experiment, we compare the different combinations of feature aggregation (sum-pooling and max-pooling) and normalization methods (l2 and l1) in terms of their retrieval performances. We use features from the layer conv5_4 with the free input image size. The results (%) are shown in Table 1. Sum-pooling followed by l1 normalization leads to slightly better results than the other combinations, especially for the cropped-query. However, after a preliminary experiment with multi-scale versions of sum-l1 and max-l2, we find that max-l2 is much better than sum-l1. For example, employing a 4-level representation of images in the Oxford5k dataset, for the case of full-query, we find that the mAP for the max-l2 method is 65.1, while the mAP for sum-l1 is only 51.3 (even lower than the single-scale representation). Based on these results, we stick to max-l2 in computing the final image features.

Output layer selection. In order to verify their feasibility for instance retrieval, we extract from the network the output feature maps of different layers and aggregate them to get the image feature vectors. We evaluate the performances using features from layer conv3_3 up to the highest fc7-conv layer (except the pooling layers, i.e. pool3, pool4 and pool5). Single-scale representations of the dataset images are used in this experiment.

Figure 2 shows the retrieval performances of image features corresponding to different layers. The retrieval performances for both the full and cropped queries increase as we move from the lower layer conv3_3 to higher layers, plateau at layers conv5_4 and fc6-conv, and then begin to decrease as we move up to fc7-conv. The result shows that features from lower layers such as conv3_3 and conv3_4 are too generic and lack the semantic meaning of the object in the image, thus rendering them unsuitable for instance retrieval. On the other hand, features from the highest layer (fc7-conv) contain the semantic meaning of objects but lack the detailed and local information needed to match two similar images. The best results are obtained at layers conv5_4 (0.601) and fc6-conv (0.618), where the feature vectors combine both the low-level detailed information and the high-level semantic meaning of the image. Based on these observations and the requirement of keeping the image features compact, we mainly focus on image features from the layer conv5_4 (dimensionality = 512, compared to 4096 for layer fc6-conv).

Figure 2 (x-axis: layer names from conv3_3 to fc7-conv; y-axis: mAP; curves: full-query, cropped-query): Performance comparison between different layers. This experiment is conducted using the free input image size.

Image resizing. We experiment with 3 kinds of image resizing strategies, which are detailed in section 3.2. We use grid search to find the optimal size for the two-fixed and one-fixed strategies. As is shown in Table 2, the free input strategy outperforms or is close to the other two strategies: it performs especially well in the cropped-query case. This experiment shows that changing the image aspect ratio (two-fixed) distorts the image information, thus reducing the performance dramatically. The one-fixed way is better than the two-fixed method. But information loss still occurs due to the resizing operation.
The free method is able to capture more natural and undistorted information from the images, which explains its superior performance over the other two methods. It is best to keep the images at their original sizes for instance retrieval tasks.

The benefit of multi-scale representation. In our multi-scale approach, the regional vectors from each scale are simply added together and l2-normalized to form the scale-level feature vectors. Then features from different scales are combined and l2-normalized to form the image representations. In fact, we also experimented with two methods which concatenate features from different scales. The first method is in the same vein as spatial pyramid pooling (Kaiming et al., 2014), i.e., region-level as well as scale-level features are all concatenated to form a high-dimensional vector. In the second method, region-level features are added while scale-level features are concatenated. We find that these two methods both lead to inferior results. The performance drop for the first in the case of cropped-query can be as large as 41%. The high dimensionality of the concatenated features (larger than 1.5k) will also lead to longer running times. Considering all these, we do not use concatenation of features in the following experiments.

Table 3: Multi-scale representation: comparison between different methods. "overlap" denotes whether the regions in each level (see Figure 1) have some overlapping areas; "s2", "s3" mean that overlap occurs in level 2 or 3. "weighing" denotes whether the features from each level are added using the same weight or different weights. "version" denotes the different choices of the number of regions in each scale.

     | scale | overlap | weighing | version | full-query | cropped-query
(a1) | 2 | no | no | - | 63.5 | 59.0
(a2) | 2 | no | yes | - | 63.9 | 61.0
(b1) | 3 | no | no | - | 64.2 | 60.9
(b2) | 3 | no | yes | - | 62.6 | 61.0
(b3) | 3 | s2 | no | - | 64.8 | 60.8
(c1) | 4 | s3 | no | v1 | 65.1 | 61.4
(c2) | 4 | s3 | yes | v1 | 64.8 | 60.7
(c3) | 4 | s2,s3 | no | v1 | 65.5 | 60.8
(c4) | 4 | s2,s3 | no | v2 | 65.9 | 61.5
(c5) | 4 | s2,s3 | yes | v2 | 65.4 | 61.2
(c6) | 4 | no | no | v3 | 64.5 | 61.3
(c7) | 4 | s3 | no | v3 | 65.8 | 62.2
(c8) | 4 | s2,s3 | no | v3 | 66.3 | 62.6

We conduct extensive experiments to decide the best configurations for the multi-scale approach and report our results in Table 3. First, we explore the impact of the number of scales on the retrieval performances. For the 2 and 3 scale representations, the region numbers for each level are {1×1, 2×2} and {1×1, 2×2, 3×3}. For the 4 scale representation, 3 versions are used and they differ in the number of regions in each scale: for "v1", "v2", and "v3", the numbers of regions are {1×1, 2×2, 3×3, 4×4}, {1×1, 2×2, 3×3, 5×5} and {1×1, 2×2, 3×3, 6×6}. Table 3 (a1)(b1)(c6) shows the performances of using 2, 3, and 4 scales to represent the dataset images, respectively. Clearly, more scale levels improve the results and, in the case of cropped-query, increase the performance by an absolute 2%.

We also conduct experiments to find out whether the weighing of different scales leads to improved performance. The weighing method for features from different scales is similar to the manner of spatial pyramid matching (Lazebnik et al., 2006) — features from the coarser levels are given less weight while features from the finer levels are given more weight. Suppose the features of different scales for an L-scale representation are f_1, f_2, ..., f_L; then the image representation f is expressed as:

f = (1 / 2^{L-1}) f_1 + Σ_{i=2}^{L} (1 / 2^{L-i+1}) f_i.   (4)

More details can be found in Lazebnik et al. (2006).
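To make the two combination rules concrete, the sketch below implements regional max-pooling over an n × n grid per scale, then combines scales either by the plain sum used in our final features or by the weighting of equation (4); the grid sizes, the feature map, and the omission of overlapping regions are simplifications:

```python
# Multi-scale image feature: regional max-pooled vectors are summed and
# l2-normalized per scale, then scales are combined uniformly or via eq. (4).
import numpy as np

def l2n(v):
    return v / (np.linalg.norm(v) + 1e-12)

def scale_vector(fmaps, n):
    """Split a (K, H, W) map into an n x n grid; sum the regional max-pools."""
    K, H, W = fmaps.shape
    hs = np.linspace(0, H, n + 1, dtype=int)
    ws = np.linspace(0, W, n + 1, dtype=int)
    f = np.zeros(K)
    for i in range(n):
        for j in range(n):
            region = fmaps[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
            f += region.reshape(K, -1).max(axis=1)   # regional max-pooling
    return l2n(f)

def multi_scale(fmaps, grids=(1, 2, 3), weighted=False):
    fs = [scale_vector(fmaps, n) for n in grids]
    if not weighted:                                 # the paper's final choice
        return l2n(sum(fs))
    L = len(fs)                                      # equation (4) weighting
    w = [1 / 2 ** (L - 1)] + [1 / 2 ** (L - i + 1) for i in range(2, L + 1)]
    return l2n(sum(wi * fi for wi, fi in zip(w, fs)))

fmaps = np.maximum(np.random.randn(512, 24, 30), 0)  # fake conv5_4 activations
print(multi_scale(fmaps).shape)                      # (512,)
```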
Comparing the results of rows (a1) and (a2), it seems that weighing different scales leads to better performance. But after more experiments, we find that the weighing method generally leads to inferior results as the number of scales increases, e.g., compare the results of row pairs (b1)(b2) and (c1)(c2). These results suggest that deep features are different from traditional local feature descriptors such as SIFT. We should exercise caution when we apply the traditional wisdom found in SIFT to deep convolutional descriptors, which is also suggested in Babenko & Lempitsky (2015). Based on the results of this experiment, no weighing methods are used in computing our final image feature representations.

Figure 3 (x-axis: number of principal components reserved, 16 to 528; y-axis: mAP, 0.25 to 0.75; curves: crop-paris, crop-self, full-paris, full-self): The number of principal components reserved vs. mAP. We show the results of full and cropped query using the PCA and whitening matrix learned from Oxford5k itself and from Paris6k, denoted as "full-self", "full-paris" and "crop-self", "crop-paris".

Next, we look into the issue of overlapping between different scales and try to verify its usefulness. For each scale and its different versions, we set some overlapping areas between the neighboring regions in either one or two scales of the pyramid (for the exact configurations of overlap in all cases in Table 3, see appendix B for the complete descriptions). From the row pairs (b1)(b3) and (c1)(c3), we can see that overlap increases the performance for full-query but slightly decreases the performance for cropped-query. But for 4 scale v3 (note the pair (c7)(c8)), we see a consistent improvement for both the full and cropped queries. So we decided to use overlap in levels 2 and 3 in computing our final features.

Table 4: The impact of PCA and whitening. "PCA on self" and "PCA on Paris" mean that the corresponding features are post-processed by the PCA and whitening matrices learned on the Oxford5k and Paris6k datasets, respectively. The numbers in the parentheses indicate the dimensionality of features used for obtaining the corresponding results.

Feature | full-query | cropped-query
3 scale overlap, original | 64.8 | 60.8
3 scale overlap, PCA on self | 65.4 (80) | 60.9 (112)
3 scale overlap, PCA on Paris | 70.6 (464) | 67.3 (480)
4 scale v3 overlap(s3), original | 65.1 | 61.4
4 scale v3 overlap(s3), PCA on self | 66.9 (80) | 61.9 (96)
4 scale v3 overlap(s3), PCA on Paris | 72.3 (464) | 70.8 (496)
4 scale v3 overlap(s2,s3), original | 66.3 | 62.8
4 scale v3 overlap(s2,s3), PCA on self | 69.0 (80) | 63.9 (144)
4 scale v3 overlap(s2,s3), PCA on Paris | 73.2 (496) | 71.2 (448)

PCA and whitening. We perform PCA and whitening for the features extracted from the Oxford5k dataset using the PCA and whitening matrix learned from the Oxford5k or the Paris6k dataset, and l2-normalize these features to get the final image representations. The retrieval results for 3 groups of features (from Table 3 (b3)(c1)(c8)) are shown in Table 4. Clearly, PCA and whitening lead to better performances.
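A minimal sketch of this post-processing step is shown below: the PCA-whitening projection is learned on one dataset's (l2-normalized) features, applied to another's, and the result is l2-normalized again; all arrays are random stand-ins:

```python
# PCA-whitening of image features: learn the projection on a "training"
# dataset and apply it to the query dataset's features.
import numpy as np

def fit_pca_whiten(X, d):
    """X: (n, D) features. Returns (mean, (D, d) whitening projection)."""
    mu = X.mean(axis=0)
    U, S, _ = np.linalg.svd(np.cov((X - mu).T))
    return mu, U[:, :d] / np.sqrt(S[:d] + 1e-12)   # scale axes by 1/sqrt(var)

def apply_pca_whiten(X, mu, P):
    Y = (X - mu) @ P
    return Y / (np.linalg.norm(Y, axis=1, keepdims=True) + 1e-12)

paris = np.random.randn(6000, 512)             # stand-in Paris6k features
oxford = np.random.randn(5000, 512)            # stand-in Oxford5k features
mu, P = fit_pca_whiten(paris, d=256)           # learn on a similar dataset
print(apply_pca_whiten(oxford, mu, P).shape)   # (5000, 256)
```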
For all 3 groups of features, PCA and whitening on the same dataset lead to insignificant improvements both in the case of full and cropped query. But after doing PCA and whitening on the Paris6k dataset, the results for both the full and cropped queries improve greatly. In fact, the improvement for the case of cropped-query is even more surprising. For example, for the third feature group, the improvements are 10.4% and 13.4% for the full and cropped queries. It should also be noted that as the number of principal components reserved increases, the performance for "PCA on self" and "PCA on Paris" differs greatly. As is shown in Figure 3, the performance for the former peaks at a relatively low dimension (around 100) and begins to decrease, while for the latter, the performance increases as the number of principal components gets larger and then plateaus.

Do the above results mean that we should always compute the PCA and whitening matrix from a dataset other than the query dataset itself? The short answer is no. We find that for UKB, learning the PCA and whitening matrix on the Oxford5k dataset shows inferior results compared to learning it on UKB itself (about a 2% drop in accuracy). This may be due to the large differences between the images of the two datasets, as the Oxford5k dataset mainly contains images of buildings while the images in UKB are mainly small indoor objects. We therefore recommend learning the PCA and whitening matrix on a similar dataset to achieve good performances.

5.3 COMPARISON WITH OTHER METHODS

Based on the previous experimental results and our analysis of the different impacting factors on the retrieval performances, we propose a new multi-scale image feature representation. For a given image in the dataset, the whole process of image feature representation is divided into two steps. First, the input image is fed into the network without the resizing operation (the free way) and a 4-scale feature representation is built on top of the feature maps of layer conv5_4. During the multi-scale representation step, max-pooling of feature maps is used, and regional vectors from the same scale are added together and l2-normalized. After that, features from different scales are summed and l2-normalized again. The second step involves applying the PCA and whitening operations to the features from the first step. The PCA and whitening matrix used is learned either from a different or from the same dataset: specifically, for Oxford5k and Oxford105k it is learned on Paris6k, while for Paris6k and UKB it is learned on Oxford5k and UKB respectively. The final PCA-compressed and whitened image features are used for reporting our method's performances.

Table 5: Comparison with state-of-the-art methods. "single" means multi-scale features from a single layer (conv5_4) are used. "single, compression" uses the same features but compresses them to get the best performances. "layer ensemble" combines the similarity scores from layers conv5_4 and fc6-conv. The dimensionality of the combined feature is set to 1024 for compactness considerations. All our methods use PCA and whitening.

Method | D | Oxford5k full | Oxford5k cropped | Paris6k full | Paris6k cropped | Oxford105k full | Oxford105k cropped | UKB
Jégou & Zisserman (2014) | 128 | - | 43.3 | - | - | - | 35.3 | 3.40
Arandjelović & Zisserman (2012) | 128 | - | 44.8 | - | - | - | 37.4 | -
Jégou & Zisserman (2014) | 1024 | - | 56.0 | - | - | - | 50.2 | 3.51
Razavian et al. (2014b) | 256 | 53.3 | - | 67.0 | - | 48.9 | - | 3.38
Babenko et al. (2014) | 512 | 55.7 | - | - | - | 52.2 | - | 3.56
Babenko & Lempitsky (2015) | 256 | 58.9 | 53.1 | - | - | 57.8 | 50.1 | 3.65
Arandjelović et al. (2016) | 256 | 62.5 | 63.5 | 72.0 | 73.5 | - | - | -
Tolias et al. (2015) | 512 | - | 66.8 | - | 83.0 | - | 61.6 | -
ours (single) | 512 | 73.0 | 70.6 | 82.0 | 83.3 | 68.9 | 65.3 | 3.75
ours (single, compression) | - | 73.2 | 71.2 | 83.0 | 84.0 | 68.9 | 65.8 | 3.76
ours (layer ensemble) | 1024 | 75.6 | 73.7 | 85.7 | 85.9 | 71.6 | 69.2 | 3.81

Layer ensemble. Inspired by previous work on model ensembles to boost classification performances (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014), we consider fusing the similarity scores from different layers to improve the retrieval performances. Specifically, for two images, their similarity score is computed as the weighted sum of the scores from different layers (these weights sum to 1 so that the overall similarity score between two images is still in the range [0, 1]).
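In code, the fusion could look like the following sketch; note that the equal 0.5/0.5 weights and the 512-dimensional compressed fc6-conv features are illustrative guesses (consistent with the 1024-D combined footprint in Table 5), since the paper only states that the weights sum to 1:

```python
# Layer-ensemble scoring: per-layer cosine similarities (dot products of
# l2-normalized features) are fused with weights summing to 1.
import numpy as np

def l2n(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def ensemble_score(q, db, weights):
    """q: layer -> (1, d) query feature; db: layer -> (n, d) dataset features."""
    score = 0.0
    for layer, w in weights.items():
        score = score + w * (db[layer] @ q[layer].T).ravel()
    return score                                   # (n,) fused similarities

rng = np.random.default_rng(0)
q = {"conv5_4": l2n(rng.normal(size=(1, 512))),
     "fc6_conv": l2n(rng.normal(size=(1, 512)))}   # compressed fc6-conv
db = {"conv5_4": l2n(rng.normal(size=(100, 512))),
      "fc6_conv": l2n(rng.normal(size=(100, 512)))}
ranks = np.argsort(-ensemble_score(q, db, {"conv5_4": 0.5, "fc6_conv": 0.5}))
```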
We have evaluated various combinations of layers to see their performances and find that the best performance is achieved by combining the scores from conv5_4 and fc6-conv. For the fc6-conv features of an image, we use a 3-scale representation, as the sizes of the output feature maps are already very small. The fc6-conv features are compressed to low-dimensional vectors for faster computation. Our layer ensemble achieves 75.6% and 73.7% on Oxford5k for the full and cropped queries respectively, showing a large improvement over previous methods. This suggests that features from fc6-conv and conv5_4 are complementary. See Table 5 for the complete results on all four datasets.

Comparison. We compare the performance of our method with several state-of-the-art methods which use small-footprint representations and do not employ complicated post-processing techniques such as geometric re-ranking (Philbin et al., 2007) and query expansion (Arandjelović & Zisserman, 2012). The results are shown in Table 5. In all the datasets and different scenarios (full or cropped), our method achieves the best performance with comparable cost. For the Oxford5k (cropped) and UKB datasets, the relative improvements of our best results over previous methods (from Tolias et al. (2015) and Babenko & Lempitsky (2015)) are 10.3% and 4.4%.

6 CONCLUSION

In this paper, we focus on instance retrieval based on features extracted from CNNs. We have conducted extensive experiments to evaluate the impact of five factors on the performances of image retrieval and analysed their particular impacts. Based on the insights gained from these experiments, we have proposed a new multi-scale image representation which shows superior performances over previous methods on four datasets. When combined with the "layer ensemble" technique, our method can achieve further improvements. Overall, we have provided a viable and efficient solution for applying CNNs in an unsupervised way to datasets with a relatively small number of images.
By5gfegEg
An outdated method with misleading claims.
3: Clear rejection
This paper explores different strategies for instance-level image retrieval with deep CNNs. The approach consists of extracting features from a network pre-trained for image classification (e.g. VGG) and post-processing them for image retrieval. In other words, the network is off-the-shelf and solely acts as a feature extractor. The post-processing strategies are borrowed from traditional retrieval pipelines relying on hand-crafted features (e.g. SIFT + Fisher Vectors), denoted by the authors as "traditional wisdom". Specifically, the authors examine where to extract features in the network (i.e. features are neuron activations of a convolution layer), which type of feature aggregation and normalization performs best, whether resizing images helps, whether combining multiple scales helps, and so on. While this type of experimental study is reasonable and well motivated, it suffers from a huge problem. Namely, it "ignores" 2 major recent works that are in direct contradiction with many claims of the paper ([a] "End-to-end Learning of Deep Visual Representations for Image Retrieval" by Gordo et al. and [b] "CNN Image Retrieval Learns from BoW: Unsupervised Fine-Tuning with Hard Examples" by Radenović et al., both ECCV'16 papers). These works have shown that training for retrieval can be achieved with siamese architectures and have demonstrated outstanding performance. As a result, many claims and findings of the paper are either outdated, questionable or just wrong. Here are some of the misleading claims: - "Features aggregated from these feature maps have been exploited for image retrieval tasks and achieved state-of-the-art performances in recent years." Until [a] (not cited), the state of the art was still largely dominated by methods based on sparse invariant features (see the last table in [a]). - "the proposed method [...] outperforms the state-of-the-art methods on four typical datasets". That is not true, for the same reasons as above, and also because the state of the art is now dominated by [a] and [b]. - "Also in situations where a large numbers of training samples are not available, instance retrieval using unsupervised method is still preferable and may be the only option." This is a questionable opinion. The method exposed in "End-to-end Learning of Deep Visual Representations for Image Retrieval" by Gordo et al. outperforms the state of the art on the UKB dataset (3.84 without QE or DBA) whereas it was trained for landmark retrieval and not objects, i.e. in a different retrieval context. This demonstrates that in spite of insufficient training data, training is still possible and beneficial. - Finally, most findings are not even new or surprising (e.g. aggregating several regions in a multi-scale manner was already achieved by Tolias et al., etc.). So the interest of the paper is limited overall. In addition, there are some problems in the experiments. For instance, the tuning experiments are only conducted on the Oxford dataset and using a single network (VGG-19), whereas it is not clear whether these conditions are representative of all datasets and all networks (it is well known that the Oxford dataset behaves very differently from the Holidays dataset, for instance). In addition, tuning is performed very aggressively, making it look like the authors are tuning on the test set (e.g. see Table 3). To conclude, the paper is one year too late with respect to recent developments in the state of the art.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title What Is the Best Practice for CNNs Applied to Visual Instance Retrieval? ### Paper Abstract Previous work has shown that feature maps of deep convolutional neural networks (CNNs) can be interpreted as feature representation of a particular image region. Features aggregated from these feature maps have been exploited for image retrieval tasks and achieved state-of-the-art performances in recent years. The key to the success of such methods is the feature representation. However, the different factors that impact the effectiveness of features are still not explored thoroughly. There are much less discussion about the best combination of them. The main contribution of our paper is the thorough evaluations of the various factors that affect the discriminative ability of the features extracted from CNNs. Based on the evaluation results, we also identify the best choices for different factors and propose a new multi-scale image feature representation method to encode the image effectively. Finally, we show that the proposed method generalises well and outperforms the state-of-the-art methods on four typical datasets used for visual instance retrieval. ### Paper Keywords ["Computer vision", "Deep learning"] ### Paper Content ABSTRACTPrevious work has shown that feature maps of deep convolutional neural networks(CNNs) can be interpreted as feature representation of a particular image region.Features aggregated from these feature maps have been exploited for image re-trieval tasks and achieved state-of-the-art performances in recent years. The keyto the success of such methods is the feature representation. However, the differentfactors that impact the effectiveness of features are still not explored thoroughly.There are much less discussion about the best combination of them.The main contribution of our paper is the thorough evaluations of the various fac-tors that affect the discriminative ability of the features extracted from CNNs.Based on the evaluation results, we also identify the best choices for differentfactors and propose a new multi-scale image feature representation method to en-code the image effectively. Finally, we show that the proposed method generaliseswell and outperforms the state-of-the-art methods on four typical datasets used forvisual instance retrieval.1 I NTRODUCTIONImage retrieval is an important problem both for academic research and for industrial applications.Although it has been studied for many years (Sivic & Zisserman, 2003; Philbin et al., 2007; Toliaset al., 2015), it is still a challenging task. Generally, image retrieval is divided into two groups. Thefirst one is the category-level image retrieval (Sharma & Schiele, 2015), in which an image in thedataset is deemed to be similar to the query image if they share the same class or they are similar inshape and local structures. The other group is the instance-level image retrieval (Tolias et al., 2015),in which an image is considered to match the query if they contain the same object or the samescene. The instance-level image retrieval is harder in that the retrieval method need to encode thelocal and detailed information in order to tell two images apart, e.g., the algorithm should be ableto detect the differences between the Eiffel Tower and other steel towers although they have similarshapes. 
In this paper, we focus on the instance-level image retrieval.Traditionally, visual instance retrieval is mainly addressed by the BoF (bag of features) based meth-ods using the local feature descriptors such as SIFT (Lowe, 2004). In order to boost the retrievalperformances, post-processing techniques such as query expansion (Chum et al., 2007) and spatialverification (Philbin et al., 2007) are also employed.With the decisive victory (Krizhevsky et al., 2012) over traditional models in the ImageNet (Rus-sakovsky et al., 2015) image classification challenge, convolutional neural networks (Lecun et al.,1998) continue to achieve remarkable success in diverse fields such as object detection (Liu et al.,2015; Shaoqing Ren, 2015), semantic segmentation (Dai et al., 2016) and even image style trans-fer (Gatys et al., 2016). Networks trained on the Imagenet classification task can generalize quitewell to other tasks, which are either used off-the-shelf (Razavian et al., 2014a) or fine-tuned on thetask-specific datasets (Azizpour et al., 2014; Long et al., 2015). Inspired by all these, researchersin the field of image retrieval also shift their interest to the CNNs. Their experiments have shownpromising and surprising results (Babenko et al., 2014; Razavian et al., 2014c; Tolias et al., 2015),which are on par with or surpass the performances of conventional methods like BoF and VLAD(vector of locally aggregated descriptors) (J ́egou et al., 2010; Arandjelovi ́c & Zisserman, 2013) .1Under review as a conference paper at ICLR 2017Despite all these previous advances (Babenko et al., 2014; Babenko & Lempitsky, 2015; Toliaset al., 2015) on using CNNs for image feature representation, the underlying factors that contributeto the success of off-the-shelf CNNs on the image retrieval tasks are still largely unclear and un-explored, e.g., which layer is the best choice for instance retrieval, the convolutional layer or thefully-connected layer? What is the best way to represent the multi-scale information of an image?Clarifying these questions will help us advance a further step towards building a more robust andaccurate retrieval system. Also in situations where a large numbers of training samples are not avail-able, instance retrieval using unsupervised method is still preferable and may be the only option.In this paper, we aim to answer these questions and make three novel contributions. Unlike pre-vious papers, we explicitly choose five factors to study the image representations based on CNNsand conduct extensive experiments to evaluate their impacts on the retrieval performances. We alsogive detailed analysis on these factors and give our recommendations for combining them. Dur-ing experiments, we borrow wisdoms from literatures and evaluate their usefulness, but find thatthey are not as effective as some of the simpler design choices. Second, by combining the insightsobtained during the individual experiments, we are able to propose a new multi-scale image rep-resentation, which is compact yet effective. Finally, we evaluate our method on four challengingdatasets, i.e., Oxford5k, Paris6k, Oxford105k and UKB. Experimental results show that our methodis generally applicable and outperforms all previous methods on compact image representations bya large margin.2 R ELATED WORKMulti-scale image representation . Lazebnik et al. (2006) propose the spatial pyramid matchingapproach to encode the spatial information using BoF based methods. They represent an image us-ing a pyramid of several levels or scales. 
Features from different scales are combined to form the image representation in such a way that coarser levels get less weight while finer levels get more weight. Their argument is that matches found in coarser levels may involve increasingly dissimilar image features. In our paper, we also explore the multi-scale paradigm in the same spirit using the convolutional feature maps as the local descriptors. We find that the deep features from the convolutional feature maps are distinct from traditional descriptors: a weighted sum of different levels of features shows no superior performance over a simple summation of them. Kaiming et al. (2014) devise an approach called SPP (spatial pyramid pooling). In SPP, the feature maps of the last convolutional layer are divided into a 3- or 4-scale pyramid. First, the regional features in each scale are concatenated; then the scale-level features are concatenated into a fixed-length vector to be forwarded to the next fully-connected layers. We find that this strategy is ineffective for unsupervised instance retrieval, leading to inferior performances compared to other simple combination methods (see the part about multi-scale representation in Section 5.2 for more details).
Image representation using off-the-shelf CNNs. Gong et al. (2014) propose the MOP (multi-scale orderless pooling) method to represent an image, in which VLAD is used to encode the level 2 and level 3 features. Then features from different scales are PCA-compressed and concatenated to form the image features. This method is rather complicated and time-consuming. At the same time, Babenko et al. (2014) use Alexnet (Krizhevsky et al., 2012) trained on the ImageNet 1000-class classification task and retrain the network on a task-related dataset. The retraining procedure gives a boost to the retrieval performances. Instead of using the output of the fully-connected layers as the image feature representation, Babenko & Lempitsky (2015) use the output feature maps of the last convolutional layer to compute the image features. Recently, instead of sum-pooling the convolutional features, Tolias et al. (2015) use max-pooling to aggregate the deep descriptors. Their multi-scale method, called R-MAC (regional maximum activation of convolutions), further improves the previous results on four common instance retrieval datasets. Our work differs from these papers in that we explicitly explore the various factors that underpin the success of unsupervised instance retrieval, which have not been fully explored and analysed. By carefully choosing the setting for each factor and combining them in a complementary way, we show that a large improvement can be achieved without additional cost.
3 IMPACTING FACTORS
When we employ off-the-shelf CNNs for the task of instance-level image retrieval, a natural question is: what kind of design choices should we make in order to make full use of the representational power of existing models? In this section, we summarize the five factors that may greatly impact the performance of the final image retrieval system. In Section 5.2, we will show our experimental results on each key factor. Before we delve into the impacting factors, we first give a brief introduction to how an image is represented using the activation feature maps of a certain layer.
3.1 CNN FEATURES FOR INSTANCE RETRIEVAL
In this paper, we are mainly interested in extracting compact and discriminative image features using off-the-shelf CNNs in an efficient way.
For a given image I, we simply subtract the mean value of the RGB channels from the original image and do not apply any other sophisticated preprocessing. The image is then fed into the convolutional network and goes through a series of convolutions, non-linear activations and pooling operations. The feature activation maps of a certain layer can be interpreted as the raw image features, based on which we build the final image features. These feature maps form a tensor of size $K \times H \times W$, where $K$ is the number of feature channels, and $H$ and $W$ are the height and width of a feature map. Each feature map represents a specific pattern which encodes a small part of the information about the original image. If we represent the set of feature maps as $F = \{F_i\}, i = 1, 2, \ldots, K$, where $F_i$ is the $i$-th activation feature map, then the simplest image feature is formulated as:

$f = [f_1, f_2, \ldots, f_i, \ldots, f_K]^T.$   (1)

In Equation (1), $f_i$ is obtained by applying the feature aggregation method (see Section 3.2) over the $i$-th feature map $F_i$. Throughout this paper, we use feature maps after the non-linear activations (ReLU) so that the elements in each feature map are all non-negative. We also experimented with feature maps prior to ReLU, but found that they lead to inferior performances. After the image feature representation is obtained, post-processing techniques such as PCA and whitening can be further applied.
3.2 IMPACTING FACTORS ON PERFORMANCE
Feature aggregation and normalization. After the feature maps of a certain layer are obtained, it is still challenging to aggregate the 3-dimensional feature maps to get compact vector representations for images. Previous papers use either sum-pooling (Babenko & Lempitsky, 2015) or max-pooling (Tolias et al., 2015) followed by $\ell_2$-normalization. Sum-pooling over a particular feature map $F_i$ is expressed as

$f_i = \sum_{m=1}^{H} \sum_{n=1}^{W} F_i(m, n), \quad i \in \{1, 2, \ldots, K\},$   (2)

while max-pooling is given by

$f_i = \max_{m,n} F_i(m, n),$   (3)

where $m, n$ range over the spatial coordinates of size $H \times W$. In this paper, for the first time, different combinations of aggregation and normalization methods ($\ell_2$, and $\ell_1$ in the manner of RootSIFT (Arandjelović & Zisserman, 2012)) are evaluated and their results are reported.
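To make Eqs. (2)-(3) and the two normalization variants concrete, here is a minimal NumPy sketch. It is our own illustration, not the authors' code: the function name is hypothetical, and the RootSIFT-style treatment of the $\ell_1$ variant (signed square root after normalization) is our assumption based on the cited reference.

```python
import numpy as np

def aggregate(feature_maps, pooling="max", norm="l2"):
    """Aggregate a K x H x W activation tensor into a K-dim image vector.

    pooling: "sum" (Eq. 2) or "max" (Eq. 3), applied per channel.
    norm:    "l1" (RootSIFT-style) or "l2" normalization of the result.
    """
    K = feature_maps.shape[0]
    flat = feature_maps.reshape(K, -1)
    f = flat.sum(axis=1) if pooling == "sum" else flat.max(axis=1)
    if norm == "l1":
        f = f / (np.abs(f).sum() + 1e-12)
        f = np.sign(f) * np.sqrt(np.abs(f))   # signed square root
    else:
        f = f / (np.linalg.norm(f) + 1e-12)
    return f

# Example on a random "conv5_4-like" tensor with 512 channels:
fmap = np.random.rand(512, 14, 14).astype(np.float32)
vec = aggregate(fmap, pooling="max", norm="l2")   # the max-l2 setting
```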
Output layer selection. Zeiler & Fergus (2014) have shown that image features aggregated from the feature activation maps of certain layers have interpretable semantic meanings. Gong et al. (2014) and Babenko et al. (2014) use the output of the first fully-connected layer to obtain the image features, while Babenko & Lempitsky (2015) and Tolias et al. (2015) use the output feature maps of the last convolutional layer. But these choices are somewhat subjective. In this paper, we extract dataset image features from the output feature maps of different layers and compare their retrieval performances. Based on the findings of this experiment, we choose the best-performing layer and also come up with a layer ensemble approach which outperforms state-of-the-art methods (see Section 5.3).
Image resizing. Famous models such as Alexnet (Krizhevsky et al., 2012) and VGGnet (Simonyan & Zisserman, 2014) all require the input images to have a fixed size. In order to meet this requirement, previous papers (Gong et al., 2014; Babenko & Lempitsky, 2015) usually resize the input images to the fixed size. We postulate that the resizing operation may lead to the distortion of important information about the objects in natural images. Ultimately, this kind of operation may hurt the discriminative power of the image features extracted from the network, thus degrading the retrieval performances. For the task of image retrieval, we think it is best to keep the images at their original sizes and feed them directly to the network whenever possible. In this paper, three image resizing strategies are explored:
• Both the height and width of the dataset images are set to the same fixed value (denoted as two-fixed).
• The minimum of each dataset image's size is set to a fixed value, keeping the aspect ratio of the original image (denoted as one-fixed).
• The images are kept at their original sizes (denoted as free).
Multi-scale feature representation. Unlike local feature descriptors such as SIFT (Lowe, 2004), the feature vector extracted from a deep convolutional network for an image is a global descriptor which encodes the holistic information. When used for image retrieval, this kind of feature still lacks the detailed and local information needed to accurately match two images. Inspired by spatial pyramid matching (Lazebnik et al., 2006) and SPP (Kaiming et al., 2014), we explore the feasibility of applying this powerful method to obtain discriminative image features. An image is represented by an L-level pyramid, and at each level, the image is divided evenly into several overlapping or non-overlapping regions. The vector representations of these small regions are computed, and the regional vectors are then combined to form the image feature vector. The single-scale representation of an image is just the special case of the multi-scale method in which the number of levels L equals 1. Figure 1 shows an example of a 3-level representation of an image.
[Figure 1: An illustration of the multi-scale representation of an image, with panels (a) level 1, (b) level 2 and (c) level 3. The whole image is divided into 3 levels, from the coarsest (level 1) to the finest (level 3); at each level, the image is divided into a different number of equal-sized regions.]
The time cost of re-feeding those small regions into the network to compute the regional vectors would be huge, and thus unacceptable for instance retrieval tasks. Inspired by the work of Girshick (2015) and Tolias et al. (2015), we assume a linear projection between the original image regions and the regions in the feature maps of a certain layer. The regional feature vectors can then be efficiently computed without re-feeding the corresponding image regions. In Section 5.2, various settings for the multi-scale and scale-level feature combination methods are explored and their retrieval performances are reported and analysed.
PCA and whitening. Principal Component Analysis (PCA) is a simple yet efficient method for reducing the dimensionality of feature vectors and decorrelating the feature elements. Previous work (Babenko et al., 2014; Jégou et al., 2010) has shown evidence that PCA and whitened features can actually boost the performances of image retrieval. In this paper, we further investigate the usefulness of PCA and whitening within our pipeline and give some recommendations.
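For concreteness, here is a minimal NumPy sketch of the PCA-whitening step. It follows standard practice (whitening by the square roots of the covariance eigenvalues) rather than the authors' exact code; the function names are ours, and the number of components must not exceed min(n_samples, feature_dim).

```python
import numpy as np

def fit_pca_whiten(X, n_components):
    """Learn a PCA-whitening transform from a (n_samples x dim) matrix X."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    # covariance eigenvalues are S^2 / n, so whitening divides by S / sqrt(n)
    proj = Vt[:n_components] / (S[:n_components, None] / np.sqrt(len(X)) + 1e-8)
    return mean, proj

def apply_pca_whiten(X, mean, proj):
    Z = (X - mean) @ proj.T
    return Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-12)  # final l2-norm
```

Used on cross-dataset features, e.g. learning the transform on Paris6k descriptors and applying it to Oxford5k descriptors, this mirrors the "PCA on Paris" setting evaluated in Section 5.2.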
4 IMPLEMENTATION
We use the open-source deep learning framework Caffe (Jia et al., 2014) for all our experiments. The aim of this research is to investigate the most effective ways to exploit the feature activations of existing deep convolutional models. Based on the past practice of going deeper (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; Szegedy et al., 2015; He et al., 2015), a consideration for moderate computational cost, and also the finding of Tolias et al. (2015) that deeper networks work better than shallower ones, we decide to use the popular VGG-19 model (Simonyan & Zisserman, 2014) trained on ImageNet as our model.
Network transformation. The original VGG-19 network only accepts an image of fixed size (224 × 224), which is not the optimal choice when extracting image features for retrieval tasks. In order for the network to be able to process an image of arbitrary size (of course, the image size cannot exceed the GPU's memory limit) and for us to experiment with different input image resizing strategies, we adapt the original VGG-19 network and change the fully-connected layers to convolutional (Long et al., 2015) layers. For more details about the network transformation, see Appendix A.
5 EXPERIMENTS
In this section, we first introduce the datasets used and the evaluation metrics. We then report our experimental results for the different impacting factors and give a detailed analysis. In the last part, we show the performance of our method considering all these impacting factors and compare our method with the state-of-the-art methods on four datasets.
5.1 DATASETS AND EVALUATION METRICS
The Oxford5k dataset (Philbin et al., 2007) contains 5062 images crawled from Flickr by using 11 Oxford landmarks as queries. A total of 11 groups of queries — each having 5 queries with their ground-truth relevant image lists — are provided. For each query, a bounding box annotation is also provided to denote the query region. During the experiments, we report results using the full query images (denoted as full-query) and the image regions within the bounding boxes of the query images (denoted as cropped-query). The performance on this dataset is measured by mAP (mean average precision) over all queries.
The Paris6k dataset (Philbin et al., 2008) includes 6412 images from Flickr which contain 11 landmark buildings and general scenes from Paris. (Following convention, 20 corrupted images from this dataset are removed, leaving 6392 valid images.) Similar to the Oxford5k dataset, a total of 55 queries belonging to 11 groups and the ground-truth bounding boxes for each query are provided. The performance is reported as mAP over the 55 queries.
The Oxford105k dataset contains the original Oxford5k dataset and 100,000 additional images (Philbin et al., 2007) from Flickr. (The image named "portrait_000801.jpg" was corrupted and manually removed from this dataset.) The 100,000 images are disjoint from the Oxford5k dataset and are used as distractors to test the retrieval performance when the dataset scales to a larger size. We use the same evaluation protocol as for Oxford5k on this dataset.
The UKB dataset (Nistér & Stewénius, 2006) consists of 10200 photographs of 2550 objects, each object having exactly 4 images. The pictures of these objects are all taken indoors with large variations in orientation, scale, lighting and shooting angle. During the experiments, each image is used to query the whole dataset. The performance is measured by the average number of same-object images in the top-4 results.
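For reference, a minimal sketch of the mAP metric used above. This is the generic average-precision definition; the official Oxford/Paris protocol additionally distinguishes "good", "ok" and "junk" images, which we omit here for brevity.

```python
import numpy as np

def average_precision(ranked_ids, positives):
    """AP for one query: ranked_ids is the retrieval order,
    positives is the set of relevant image ids."""
    hits, precision_sum = 0, 0.0
    for rank, img_id in enumerate(ranked_ids, start=1):
        if img_id in positives:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / max(len(positives), 1)

def mean_average_precision(all_rankings, all_positives):
    """mAP over all queries."""
    return float(np.mean([average_precision(r, p)
                          for r, p in zip(all_rankings, all_positives)]))
```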
5.2 RESULTS AND DISCUSSION
In this section, we report the results of experiments on the impact of the different factors and analyse their particular impacts. The experiments in this section are conducted on the Oxford5k dataset.
Feature aggregation and normalization. In this experiment, we compare the different combinations of feature aggregation (sum-pooling and max-pooling) and normalization methods (ℓ2 and ℓ1) in terms of their retrieval performances. We use features from the layer conv5_4 with the free input image size. The results (%) are shown in Table 1.

Table 1: Comparison between different combinations of feature aggregation and normalization methods.
Method   full-query   cropped-query
max-ℓ1   52.4         48.0
sum-ℓ2   58.0         52.6
sum-ℓ1   60.3         56.3
max-ℓ2   60.1         53.5

Sum-pooling followed by ℓ1 normalization leads to slightly better results than the other combinations, especially for the cropped-query. However, after preliminary experiments with multi-scale versions of sum-ℓ1 and max-ℓ2, we find that max-ℓ2 is much better than sum-ℓ1. For example, employing a 4-level representation of the images in the Oxford5k dataset, for the case of full-query, we find that the mAP for the max-ℓ2 method is 65.1, while the mAP for sum-ℓ1 is only 51.3 (even lower than the single-scale representation). Based on these results, we stick to max-ℓ2 in computing the final image features.
Output layer selection. In order to verify their feasibility for instance retrieval, we extract from the network the output feature maps of different layers and aggregate them to get the image feature vectors. We evaluate the performances using features from layer conv3_3 up to the highest fc7-conv layer (except the pooling layers, i.e., pool3, pool4 and pool5). Single-scale representations of the dataset images are used in this experiment.
Figure 2 shows the retrieval performances of the image features corresponding to different layers. The retrieval performances for both the full and cropped queries increase as we move from the lower layer conv3_3 to the higher layers, plateau at layers conv5_4 and fc6-conv, and then begin to decrease towards fc7-conv. This result shows that features from lower layers such as conv3_3 and conv3_4 are too generic and lack the semantic meaning of the objects in the image, thus rendering them unsuitable for instance retrieval. On the other hand, features from the highest layer (fc7-conv) contain the semantic meaning of objects but lack the detailed and local information needed to match two similar images. The best results are obtained at layers conv5_4 (0.601) and fc6-conv (0.618), where the feature vectors combine both the low-level detailed information and the high-level semantic meaning of the image. Based on these observations and the requirement of keeping the image features compact, we mainly focus on image features from the layer conv5_4 (dimensionality = 512, compared to 4096 for layer fc6-conv).
[Figure 2: Performance comparison between different layers (conv3_3 through fc7-conv), in mAP for full-query and cropped-query. This experiment is conducted using the free input image size.]
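As an illustration of the layer-extraction step, here is a sketch using torchvision's pretrained VGG-19 as a stand-in for the paper's Caffe model. Two assumptions are ours: the slice index 36 (meant to keep everything up to and including the ReLU after conv5_4 in torchvision's layer layout) and the ImageNet normalization, which only approximates the paper's plain RGB mean subtraction.

```python
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

# Keep layers up to the ReLU after conv5_4 (index 35 in torchvision's
# VGG-19 `features`; the slice below is an assumption worth verifying).
vgg = models.vgg19(pretrained=True).features[:36].eval()

def conv5_4_features(image_path):
    img = Image.open(image_path).convert("RGB")   # original size ("free")
    x = TF.to_tensor(img)
    x = TF.normalize(x, mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225])
    with torch.no_grad():
        fmap = vgg(x.unsqueeze(0))[0]             # K x H x W, K = 512
    return fmap.numpy()
```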
Image resizing. We experiment with the 3 kinds of image resizing strategies detailed in Section 3.2. We use grid search to find the optimal size for the two-fixed and one-fixed strategies. As shown in Table 2, the free input strategy outperforms or is close to the other two strategies: it performs especially well in the cropped-query case.

Table 2: Comparison between different image resizing strategies. The numbers in parentheses denote the sizes at which the maximum mAPs are achieved.
Method      full-query   cropped-query
two-fixed   55.5 (864)   38.7 (896)
one-fixed   59.0 (800)   39.3 (737)
free        58.0         52.6

This experiment shows that changing the image aspect ratio (two-fixed) distorts the image information, thus reducing the performance dramatically. The one-fixed way is better than the two-fixed method, but information loss still occurs due to the resizing operation. The free method is able to capture more natural and undistorted information from the images, which explains its superior performance over the other two methods. It is best to keep the images at their original sizes for instance retrieval tasks.
The benefit of multi-scale representation. In our multi-scale approach, the regional vectors from each scale are simply added together and ℓ2-normalized to form the scale-level feature vectors. Features from different scales are then combined and ℓ2-normalized to form the image representations (a code sketch of this combination is given at the end of this part). In fact, we also experimented with two methods which concatenate the features from different scales. The first method is in the same vein as spatial pyramid pooling (Kaiming et al., 2014), i.e., the region-level as well as the scale-level features are all concatenated to form a high-dimensional vector. In the second method, region-level features are added while scale-level features are concatenated. We find that these two methods both lead to inferior results. The performance drop for the first method in the case of cropped-query can be as large as 41%. The high dimensionality of the concatenated features (larger than 1.5k) also leads to longer running times. Considering all this, we do not use concatenation of features in the following experiments.

Table 3: Multi-scale representation: comparison between different methods. "overlap" denotes whether the regions in each level (see Figure 1) have some overlapping areas; "s2" and "s3" mean that overlap occurs in level 2 or 3. "weighing" indicates whether the features from each level are added with the same weight or different weights. "version" denotes the different choices of the number of regions in each scale.
       scale   overlap   weighing   version   full-query   cropped-query
(a1)   2       no        no         -         63.5         59.0
(a2)   2       no        yes        -         63.9         61.0
(b1)   3       no        no         -         64.2         60.9
(b2)   3       no        yes        -         62.6         61.0
(b3)   3       s2        no         -         64.8         60.8
(c1)   4       s3        no         v1        65.1         61.4
(c2)   4       s3        yes        v1        64.8         60.7
(c3)   4       s2,s3     no         v1        65.5         60.8
(c4)   4       s2,s3     no         v2        65.9         61.5
(c5)   4       s2,s3     yes        v2        65.4         61.2
(c6)   4       no        no         v3        64.5         61.3
(c7)   4       s3        no         v3        65.8         62.2
(c8)   4       s2,s3     no         v3        66.3         62.6

We conduct extensive experiments to decide the best configuration for the multi-scale approach and report our results in Table 3. First, we explore the impact of the number of scales on the retrieval performances. For the 2- and 3-scale representations, the region numbers for each level are {1×1, 2×2} and {1×1, 2×2, 3×3}. For the 4-scale representation, 3 versions are used which differ in the number of regions in each scale: for "v1", "v2" and "v3", the numbers of regions are {1×1, 2×2, 3×3, 4×4}, {1×1, 2×2, 3×3, 5×5} and {1×1, 2×2, 3×3, 6×6}. Rows (a1), (b1) and (c6) of Table 3 show the performances of using 2, 3 and 4 scales to represent the dataset images, respectively. Clearly, more scale levels improve the results and, in the case of cropped-query, increase the performance by an absolute 2%.
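Here is a minimal NumPy sketch of the multi-scale combination described above, for non-overlapping grids; the overlapping variants ("s2", "s3") would shift the region windows and are omitted for brevity. This is our own illustration, not the authors' implementation.

```python
import numpy as np

def l2n(v):
    return v / (np.linalg.norm(v) + 1e-12)

def multiscale_feature(feature_maps, levels=(1, 2, 3)):
    """Pyramid representation of a K x H x W tensor (cf. Figure 1).

    At level l the map is split into an l x l grid; each region is
    max-pooled per channel (Eq. 3), regional vectors of the same level
    are summed and l2-normalized, and the level vectors are summed and
    l2-normalized again -- the combination found to work best here.
    """
    K, H, W = feature_maps.shape
    image_vec = np.zeros(K, dtype=np.float32)
    for l in levels:
        level_vec = np.zeros(K, dtype=np.float32)
        hs = np.linspace(0, H, l + 1, dtype=int)   # region boundaries
        ws = np.linspace(0, W, l + 1, dtype=int)
        for i in range(l):
            for j in range(l):
                region = feature_maps[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                level_vec += region.reshape(K, -1).max(axis=1)
        image_vec += l2n(level_vec)
    return l2n(image_vec)
```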
We also conduct experiments to find out whether weighing the different scales leads to improved performance. The weighing method for features from different scales is similar to the manner of spatial pyramid matching (Lazebnik et al., 2006) — features from the coarser levels are given less weight while features from the finer levels are given more weight. Suppose the features of the different scales for an L-scale representation are $f_1, f_2, \ldots, f_L$; then the image representation $f$ is expressed as:

$f = \frac{1}{2^{L-1}} f_1 + \sum_{i=2}^{L} \frac{1}{2^{L-i+1}} f_i.$   (4)

More details can be found in Lazebnik et al. (2006). Comparing the results of rows (a1) and (a2), it seems that weighing the different scales leads to better performance. But after more experiments, we find that the weighing method generally leads to inferior results as the number of scales increases, e.g., compare the results of the row pairs (b1)(b2) and (c1)(c2). These results suggest that deep features are different from traditional local feature descriptors such as SIFT. We should exercise caution when applying the traditional wisdom developed for SIFT to deep convolutional descriptors, as also suggested in Babenko & Lempitsky (2015). Based on the results of this experiment, no weighing method is used in computing our final image feature representations.
[Figure 3: The number of principal components reserved vs. mAP, for the full and cropped queries, using the PCA and whitening matrices learned from Oxford5k itself and from Paris6k (denoted as "full-self", "full-paris" and "crop-self", "crop-paris").]
Next, we look into the issue of overlap between the different scales and try to verify its usefulness. For each scale and its different versions, we set some overlapping areas between the neighboring regions in either one or two scales of the pyramid (for the exact overlap configurations of all the cases in Table 3, see Appendix B for complete descriptions). From the row pairs (b1)(b3) and (c1)(c3), we can see that overlap increases the performance for full-query but slightly decreases the performance for cropped-query. But for 4-scale v3 (note the pair (c7)(c8)), we see a consistent improvement for both the full and cropped queries. We therefore decided to use overlap in levels 2 and 3 in computing our final features.

Table 4: The impact of PCA and whitening. "PCA on self" and "PCA on Paris" mean that the corresponding features are post-processed by the PCA and whitening matrices learned on the Oxford5k and Paris6k datasets, respectively. The numbers in parentheses indicate the dimensionality of the features used for obtaining the corresponding results.
Feature                                   full-query   cropped-query
3-scale overlap, original                 64.8         60.8
3-scale overlap, PCA on self              65.4 (80)    60.9 (112)
3-scale overlap, PCA on Paris             70.6 (464)   67.3 (480)
4-scale v3 overlap(s3), original          65.1         61.4
4-scale v3 overlap(s3), PCA on self       66.9 (80)    61.9 (96)
4-scale v3 overlap(s3), PCA on Paris      72.3 (464)   70.8 (496)
4-scale v3 overlap(s2,s3), original       66.3         62.8
4-scale v3 overlap(s2,s3), PCA on self    69.0 (80)    63.9 (144)
4-scale v3 overlap(s2,s3), PCA on Paris   73.2 (496)   71.2 (448)

PCA and whitening. We perform PCA and whitening for the features extracted from the Oxford5k dataset using the PCA and whitening matrices learned from the Oxford5k or Paris6k dataset, and ℓ2-normalize these features to get the final image representations. The retrieval results for 3 groups of features (from Table 3 (b3), (c1) and (c8)) are shown in Table 4. Clearly, PCA and whitening lead to better performances.
For all 3 groups of features, PCA and whitening on the same dataset lead to insignificant improvements both in the case of full and cropped query. But after doing PCA and whitening on the Paris6k dataset, the results for both the full and cropped queries improve greatly. In fact, the improvement for the case of cropped-query is even more surprising. For example, for the third feature group, the improvements are 10.4% and 13.4% for the full and cropped queries. It should also be noted that as the number of principal components reserved increases, the performances for "PCA on self" and "PCA on Paris" differ greatly. As shown in Figure 3, the performance for the former peaks at a relatively low dimension (around 100) and then begins to decrease, while for the latter, the performance increases as the number of principal components gets larger and then plateaus.
Do the above results mean that we should always compute the PCA and whitening matrix on a dataset other than the query dataset itself? The short answer is no. We find that for UKB, learning the PCA and whitening matrix on the Oxford5k dataset gives inferior results compared to learning it on UKB itself (about a 2% drop in accuracy). This may be due to the large differences between the images of the two datasets, as the Oxford5k dataset mainly contains images of buildings while the images in UKB are mainly of small indoor objects. We therefore recommend learning the PCA and whitening matrix on a similar dataset to achieve good performances.
5.3 COMPARISON WITH OTHER METHODS
Based on the previous experimental results and our analysis of the different impacting factors on the retrieval performances, we propose a new multi-scale image feature representation. For a given image in the dataset, the whole process of image feature representation is divided into two steps. First, the input image is fed into the network without the resizing operation (the free way) and a 4-scale feature representation is built on top of the feature maps of layer conv5_4. During the multi-scale representation step, max-pooling of the feature maps is used, and regional vectors from the same scale are added together and ℓ2-normalized. After that, features from different scales are summed and ℓ2-normalized again. The second step involves applying the PCA and whitening operations on the features from the first step. The PCA and whitening matrices used are learned either on a different or on the same dataset: specifically, for Oxford5k and Oxford105k they are learned on Paris6k, while for Paris6k and UKB they are learned on Oxford5k and UKB, respectively. The final PCA-whitened image features are used for reporting our method's performances.
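Putting the pieces together, here is a sketch of the two-step representation just described, reusing the hypothetical helpers from the earlier sketches (conv5_4_features, multiscale_feature, apply_pca_whiten). It uses the "v3" 4-scale grid and omits the overlap in levels 2 and 3, so it is an approximation of the full method rather than a faithful reimplementation.

```python
def represent_image(image_path, pca_mean, pca_proj):
    """Two-step representation: (1) free-size input, 4-scale max-pooled
    pyramid on conv5_4; (2) PCA-whitening + l2-normalization."""
    fmap = conv5_4_features(image_path)                   # K x H x W
    vec = multiscale_feature(fmap, levels=(1, 2, 3, 6))   # 4-scale "v3"
    vec = apply_pca_whiten(vec[None, :], pca_mean, pca_proj)[0]
    return vec

# Retrieval then ranks database images by dot product with the query
# vector (cosine similarity, since all vectors are l2-normalized).
```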
Layer ensemble. Inspired by previous work on model ensembles for boosting classification performance (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014), we consider fusing the similarity scores from different layers to improve the retrieval performances. Specifically, for two images, their similarity score is computed as the weighted sum of the scores from different layers (these weights sum to 1 so that the overall similarity score between two images is still in the range [0, 1]). We have evaluated various combinations of layers and find that the best performance is achieved by combining the scores from conv5_4 and fc6-conv. For the fc6-conv features of an image, we use a 3-scale representation, as the sizes of the output feature maps are already very small. The fc6-conv features are compressed to low-dimensional vectors for faster computation. Our layer ensemble achieves 75.6% and 73.7% on Oxford5k for the full and cropped queries respectively, showing a large improvement over previous methods. This suggests that features from fc6-conv and conv5_4 are complementary. See Table 5 for the complete results on all four datasets.
Comparison. We compare the performance of our method with several state-of-the-art methods which use small-footprint representations and do not employ complicated post-processing techniques such as geometric re-ranking (Philbin et al., 2007) and query expansion (Arandjelović & Zisserman, 2012). The results are shown in Table 5. In all the datasets and different scenarios (full or cropped), our method achieves the best performance with comparable cost. For the Oxford5k (cropped) and UKB datasets, the relative improvements of our best results over previous methods (from Tolias et al. (2015) and Babenko & Lempitsky (2015)) are 10.3% and 4.4%.

Table 5: Comparison with state-of-the-art methods. "single" means multi-scale features from a single layer (conv5_4) are used. "single, compression" uses the same features but compresses them to get the best performances. "layer ensemble" combines the similarity scores from layers conv5_4 and fc6-conv; the dimensionality of the combined feature is set to 1024 for compactness considerations. All our methods use PCA and whitening. Entries are mAP (full / cropped); UKB is the top-4 score.
method                            D      Oxford5k       Paris6k        Oxford105k     UKB
Jégou & Zisserman (2014)          128    -    / 43.3    -    / -       -    / 35.3    3.40
Arandjelović & Zisserman (2012)   128    -    / 44.8    -    / -       -    / 37.4    -
Jégou & Zisserman (2014)          1024   -    / 56.0    -    / -       -    / 50.2    3.51
Razavian et al. (2014b)           256    53.3 / -       67.0 / -       48.9 / -       3.38
Babenko et al. (2014)             512    55.7 / -       -    / -       52.2 / -       3.56
Babenko & Lempitsky (2015)        256    58.9 / 53.1    -    / -       57.8 / 50.1    3.65
Arandjelović et al. (2016)        256    62.5 / 63.5    72.0 / 73.5    -    / -       -
Tolias et al. (2015)              512    -    / 66.8    -    / 83.0    -    / 61.6    -
ours (single)                     512    73.0 / 70.6    82.0 / 83.3    68.9 / 65.3    3.75
ours (single, compression)        -      73.2 / 71.2    83.0 / 84.0    68.9 / 65.8    3.76
ours (layer ensemble)             1024   75.6 / 73.7    85.7 / 85.9    71.6 / 69.2    3.81

6 CONCLUSION
In this paper, we focus on instance retrieval based on features extracted from CNNs. We have conducted extensive experiments to evaluate the impact of five factors on the performances of image retrieval and analysed their particular impacts. Based on the insights gained from these experiments, we have proposed a new multi-scale image representation which shows superior performances over previous methods on four datasets. When combined with the "layer ensemble" technique, our method achieves further improvements. Overall, we have provided a viable and efficient solution for applying CNNs in an unsupervised way to datasets with a relatively small number of images.<|im_end|> <|im_start|>assistant ### Review Title An outdated method with misleading claims. ### Review Text This paper explores different strategies for instance-level image retrieval with deep CNNs. The approach consists of extracting features from a network pre-trained for image classification (e.g. VGG) and post-processing them for image retrieval. In other words, the network is off-the-shelf and solely acts as a feature extractor. The post-processing strategies are borrowed from traditional retrieval pipelines relying on hand-crafted features (e.g. SIFT + Fisher Vectors), denoted by the authors as "traditional wisdom".
Specifically, the authors examine where to extract features in the network (i.e., features are neuron activations of a convolutional layer), which type of feature aggregation and normalization performs best, whether resizing images helps, whether combining multiple scales helps, and so on. While this type of experimental study is reasonable and well motivated, it suffers from a huge problem: namely, it "ignores" 2 major recent works that are in direct contradiction with many claims of the paper ([a] "End-to-end Learning of Deep Visual Representations for Image Retrieval" by Gordo et al. and [b] "CNN Image Retrieval Learns from BoW: Unsupervised Fine-Tuning with Hard Examples" by Radenović et al., both ECCV'16 papers). These works have shown that training for retrieval can be achieved with siamese architectures and have demonstrated outstanding performance. As a result, many claims and findings of the paper are either outdated, questionable or just wrong. Here are some of the misleading claims: - "Features aggregated from these feature maps have been exploited for image retrieval tasks and achieved state-of-the-art performances in recent years." Until [a] (not cited), the state of the art was still largely dominated by methods based on sparse invariant features (see the last table in [a]). - "the proposed method [...] outperforms the state-of-the-art methods on four typical datasets" That is not true, for the same reasons as above, and also because the state of the art is now dominated by [a] and [b]. - "Also, in situations where a large number of training samples is not available, instance retrieval using unsupervised methods is still preferable and may be the only option." This is a questionable opinion. The method presented in "End-to-end Learning of Deep Visual Representations for Image Retrieval" by Gordo et al. outperforms the state of the art on the UKB dataset (3.84 without QE or DBA) whereas it was trained for landmark retrieval and not objects, i.e., in a different retrieval context. This demonstrates that in spite of insufficient training data, training is still possible and beneficial. - Finally, most findings are not even new or surprising (e.g., aggregating several regions in a multi-scale manner was already achieved by Tolias et al.). So the interest of the paper is limited overall. In addition, there are some problems in the experiments. For instance, the tuning experiments are only conducted on the Oxford dataset and using a single network (VGG-19), whereas it is not clear whether these conditions are representative of all datasets and all networks (it is well known that the Oxford dataset behaves very differently from the Holidays dataset, for instance). In addition, tuning is performed very aggressively, making it look like the authors are tuning on the test set (e.g., see Table 3). To conclude, the paper is one year too late with respect to recent developments in the state of the art. ### Review Rating 3: Clear rejection ### Review Confidence 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature<|im_end|> <|im_end|>
9p2CltauWEY
ICLR.cc/2021/Conference
2021
On Size Generalization in Graph Neural Networks
["Gilad Yehudai", "Ethan Fetaya", "Eli Meirom", "Gal Chechik", "Haggai Maron"]
Graph neural networks (GNNs) can process graphs of different sizes, but their capacity to generalize across sizes is still not well understood. Size generalization is key to numerous GNN applications, from solving combinatorial optimization problems to learning in molecular biology. In such problems, obtaining labels and training on large graphs can be prohibitively expensive, but training on smaller graphs is possible. This paper puts forward the size-generalization question and characterizes important aspects of that problem theoretically and empirically. We prove that even for very simple tasks, such as counting the number of nodes or edges in a graph, GNNs do not naturally generalize to graphs of larger size. Instead, their generalization performance is closely related to the distribution of local patterns of connectivity and features and how that distribution changes from small to large graphs. Specifically, we prove that for many tasks, there are weight assignments for GNNs that can perfectly solve the task on small graphs but fail on large graphs, if there is a discrepancy between their local patterns. We further demonstrate, on several tasks, that training GNNs on small graphs results in solutions which do not generalize to larger graphs. We then formalize size generalization as a domain-adaptation problem and describe two learning setups where size generalization can be improved: first, as a self-supervised learning (SSL) problem over the target domain of large graphs; second, as a semi-supervised learning problem when a few samples are available in the target domain. We demonstrate the efficacy of these solutions on a diverse set of benchmark graph datasets.
["graph neural networks", "gnn", "generalization", "Weisfeiler-Lehman"]
ABSTRACT
Graph neural networks (GNNs) can process graphs of different sizes, but their capacity to generalize across sizes is still not well understood. Size generalization is key to numerous GNN applications, from solving combinatorial optimization problems to learning in molecular biology. In such problems, obtaining labels and training on large graphs can be prohibitively expensive, but training on smaller graphs is possible.
This paper puts forward the size-generalization question and characterizes important aspects of that problem theoretically and empirically. We prove that even for very simple tasks, such as counting the number of nodes or edges in a graph, GNNs do not naturally generalize to graphs of larger size. Instead, their generalization performance is closely related to the distribution of local patterns of connectivity and features and how that distribution changes from small to large graphs. Specifically, we prove that for many tasks, there are weight assignments for GNNs that can perfectly solve the task on small graphs but fail on large graphs, if there is a discrepancy between their local patterns. We further demonstrate, on several tasks, that training GNNs on small graphs results in solutions which do not generalize to larger graphs. We then formalize size generalization as a domain-adaptation problem and describe two learning setups where size generalization can be improved: first, as a self-supervised learning (SSL) problem over the target domain of large graphs; second, as a semi-supervised learning problem when a few samples are available in the target domain. We demonstrate the efficacy of these solutions on a diverse set of benchmark graph datasets.
1 INTRODUCTION
Graphs are a flexible representation, widely used for representing diverse data and phenomena. Graph neural networks (GNNs) — deep models that operate over graphs — have emerged as a prominent learning model (Bruna et al., 2013; Kipf and Welling, 2016; Veličković et al., 2017). They are used in the natural sciences (Gilmer et al., 2017), in social network analysis (Fan et al., 2019), for solving difficult mathematical problems (Luz et al., 2020) and for approximating solutions to combinatorial optimization problems (Li et al., 2018).
In many domains, graph data vary significantly in size. This is the case in molecular biology, where molecules — represented as graphs with atoms as nodes — span from small compounds to proteins with many thousands of nodes. It is even more severe in social networks, which can reach billions of nodes. The success of GNNs on such data stems from the fact that the same GNN model can process input graphs regardless of their size. Indeed, it has been proposed that GNNs can generalize to graphs whose size is different from what they were trained on, but it is largely unknown for which problems such generalization occurs. Empirically, several papers report good generalization performance on specific tasks (Li et al., 2018; Luz et al., 2020). Other papers, like Veličković et al. (2019), show that size generalization can fail on several simple graph algorithms, and can be improved by using task-specific training procedures and specific architectures.
Given their flexibility to operate on variable-sized graphs, a fundamental question arises about generalization in GNNs: "When do GNNs trained on small graphs generalize to large graphs?" Aside from being an intriguing theoretical question, this problem has important practical implications. In many domains, it is hard to label large graphs.
For instance, in combinatorial optimization problems, labeling a large graph boils down to solving large and hard optimization problems. In other domains, it is often very hard for human raters to correctly label complex networks. One approach to this problem could have been to resize graphs to a homogeneous size. This is the strategy taken in computer vision, where it is well understood how to resize an image while keeping its content. Unfortunately, there are no effective resizing procedures for graphs. It would therefore be extremely valuable to develop techniques that can generalize from training on small graphs.
As we discuss below, a theoretical analysis of size generalization is very challenging because it depends on several different factors, including the task, the architecture, and the data. For tasks, we argue that it is important to distinguish two types of tasks, local and global. Local tasks can be solved by GNNs whose depth does not depend on the size of the input graph, for example, the task of finding a constant-size pattern. Global tasks require that the depth of the GNN grows with the size of the input graph, for example, calculating the diameter of a graph. While there are a few previous works that explore depth-dependent GNNs (e.g., Tang et al. (2020)), constant-depth GNNs are by far the most widely used GNN models today and are therefore the focus of this paper.
In this paper, we focus on GNNs with constant depth and study the ability of the most expressive message-passing neural networks (Xu et al., 2018; Morris et al., 2019) to generalize to unseen sizes. Our key observation is that generalization to graphs of different sizes is strongly related to the distribution of patterns around nodes in the graphs of interest. These patterns, dubbed d-patterns (where d is the radius of the local neighborhood), describe the local feature-connectivity structure around each node, as seen by message-passing neural networks, and are defined in Section 3.
We study the role of d-patterns both empirically and theoretically. First, we theoretically show that when there is a significant discrepancy between the d-pattern distributions, GNNs have multiple global minima for graphs of a specific size range, out of which only a subset of models can generalize well to larger graphs. We complement our theoretical analysis with an experimental study and show that GNNs tend to converge to non-generalizing global minima when d-patterns from the large-graph distribution are not well represented in the small-graph distribution. Furthermore, we demonstrate that the size generalization problem is accentuated in deeper GNNs.
Following these observations, in the final part of this paper we discuss two learning setups that help improve size generalization by formulating the learning problem as a domain adaptation problem: (1) training the GNNs on self-supervised tasks aimed at learning the d-pattern distribution of both the target (large graphs) and source (small graphs) domains; we also propose a novel SSL task that addresses overfitting of d-patterns; (2) a semi-supervised learning setup with a limited number of labeled examples from the target domain. The idea behind both setups is to promote convergence of GNNs to local/global minima with good size-generalization properties. We show that both setups are useful in a series of experiments on synthetic and real data.
To summarize, this paper makes the following contributions.
(1) We identify a size generalization problem when learning local tasks with GNNs and analyze it empirically and theoretically. (2) We link the size-generalization problem with the distribution of d-patterns and suggest approaching it as a domain adaptation problem. (3) We empirically show how several learning setups help improve size generalization.
2 RELATED WORK
Size generalization in set and graph learning. Several papers observed successful generalization across graph sizes, but the underlying reasons were not investigated (Li et al., 2018; Maron et al., 2018; Luz et al., 2020). More recently, Veličković et al. (2019) showed that when training GNNs to perform simple graph algorithms step by step, they generalize better to graphs of different sizes. Unfortunately, such training procedures cannot be easily applied to general tasks. Knyazev et al. (2019) studied the relationship between generalization and attention mechanisms. Tang et al. (2020) observed two issues that can harm generalization: (1) there are tasks for which a constant number of layers is not sufficient; (2) some graph learning tasks are homogeneous functions. They then suggest a new GNN architecture to deal with these issues. Our work is complementary to these works, as it explores another fundamental size generalization problem, focusing on constant-depth GNNs. For more details on the distinction between constant-depth and variable-depth tasks see Appendix A.
Several works also studied size generalization and expressivity when learning set-structured inputs (Zweig and Bruna, 2020; Bueno and Hylton, 2020). On the more practical side, Joshi et al. (2019) and Joshi et al. (2020) study the combinatorial traveling salesman problem and whether it is possible to generalize to larger sizes. Corso et al. (2020) study several multitask learning problems on graphs and evaluate how the performance changes as the size of the graphs changes.
Expressivity and generalization in graph neural networks. Xu et al. (2018) and Morris et al. (2019) established a fundamental connection between message-passing neural networks and the Weisfeiler-Leman (WL) graph-isomorphism test. We use similar arguments to show that GNNs have enough expressive power to solve a task on a set of small graphs and yet fail on it on a set of large graphs. Several works studied generalization bounds for certain classes of GNNs (Garg et al., 2020; Puny et al., 2020; Verma and Zhang, 2019), but did not discuss size generalization. Sinha et al. (2020) proposed a benchmark for assessing the logical generalization abilities of GNNs.
3 THE SIZE GENERALIZATION PROBLEM
We now present the main problem discussed in the paper, that is, what determines whether a GNN generalizes well to graphs of sizes not seen during training. We start with a simple motivating example showing the problem on single-layer GNNs. We then show that the question of size generalization actually depends on d-patterns, the local patterns of connectivity and features of the graphs, and not only on their actual size.
Setup. We are given two distributions over graphs, $P_1$ and $P_2$, that contain small and large graphs respectively, and a task that can be solved with 0 error for all graph sizes using a constant-depth GNN. We train a GNN on a training set $S$ sampled i.i.d. from $P_1$ and study its performance on $P_2$.
GNN model. We focus on the first-order GNN (1-GNN) architecture from Morris et al.
(2019), defined in the following way:

$h_v^{(t)} = \sigma\Big( W_2^{(t)} h_v^{(t-1)} + \sum_{u \in N(v)} W_1^{(t)} h_u^{(t-1)} + b^{(t)} \Big).$

Here, $h_v^{(t)}$ is the feature vector of node $v$ after $t$ layers, $W_1^{(t)}, W_2^{(t)} \in \mathbb{R}^{d_{t-1} \times d_t}$ and $b^{(t)} \in \mathbb{R}^{d_t}$ denote the parameters of the $t$-th layer of the GNN, and $\sigma$ is some non-linear activation (e.g., ReLU). It was shown in Morris et al. (2019) that GNNs composed of these layers have maximal expressive power with respect to all message-passing neural networks. In the experimental section we also experiment with the Graph Isomorphism Network (GIN) (Xu et al., 2018). For further details on GNNs see Appendix A. In this work we use the most expressive GNN variants, which use the "sum" aggregation function. Using "max" or "mean" reduces the expressive power of the network, making it not powerful enough to solve simple counting problems (e.g., counting edges or computing node degrees). On the other hand, these networks give rise to slightly different definitions of patterns and can generalize better in some cases, as shown in Veličković et al. (2019), yet they still suffer from size overfitting. Exploring these networks is beyond the scope of this work.
3.1 SIZE GENERALIZATION IN SINGLE-LAYER GNNS
We start our discussion of size generalization with a theoretical analysis of a simple setup. We consider a single-layer GNN and an easy task and show that: (1) the training objective has many different solutions, but only a small subset of these solutions generalizes to larger graphs; (2) simple regularization techniques cannot mitigate the problem. This subsection serves as a warm-up for the next subsections, which contain our main results.
Assume we train on a distribution of graphs with a fixed number of nodes $n$ and a fixed number of edges $m$. Our goal is to predict the number of edges in the graph using a 1-GNN with a single linear layer and an additive readout function; for simplicity we also consider the squared loss. The objective boils down to the following function for any graph $G$ in the training set:

$L(w_1, w_2, b; G) = \Big( \sum_{u \in V(G)} \big( w_1 x_u + \sum_{v \in N(u)} w_2 x_v + b \big) - y \Big)^2.$

Here, $G$ is an input graph, $V(G)$ are the nodes of $G$, $N(v)$ are the neighbors of node $v$, $w_1, w_2$ and $b$ are the trainable parameters, $y$ is the target ($m$ in this case) and $x_v$ is the node feature of node $v$. Further, assume that we have no additional information on the nodes, so we can just embed each node as a one-dimensional feature vector with a fixed value of 1. In this simple case, the trainable parameters are also one-dimensional. We note that the training objective can also be written in the form $L(w_1, w_2, b; G) = (n w_1 + 2m w_2 + n b - m)^2$, and that one can easily find its solution space, which is an affine subspace defined by $w_2 = \frac{m - n(w_1 + b)}{2m}$. In particular, the solutions with $b + w_1 = 0,\ w_2 = 1/2$ are the only ones which do not depend on the specific training-set graph size $n$, and generalize to graphs of any size. It can be readily seen that when training the model on graphs of fixed size (fixed $m, n$), gradient descent will have no reason to favor one solution over another and we will not be able to generalize. We also note that the generalizing solution is not always the least-norm solution (with respect to both the $L_1$ and $L_2$ norms), so simple regularization will not help here. On the other hand, it is easy to show that training on graphs with different numbers of edges will favor the generalizing solution.
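To make the layer and the edge-counting example above concrete, here is a minimal NumPy sketch (ours, not the paper's implementation). With node features fixed to 1, the additive readout equals $n w_1 + 2m w_2 + nb$, so the generalizing solution $w_2 = 1/2$, $w_1 + b = 0$ recovers the edge count for any graph size, which the code checks numerically.

```python
import numpy as np

def gnn_layer(A, H, W_self, W_neigh, b, sigma=lambda x: np.maximum(x, 0)):
    """One 1-GNN layer: h_v <- sigma(W_self h_v + sum_{u in N(v)} W_neigh h_u + b).
    A is the adjacency matrix, H the (n x d) node-feature matrix."""
    return sigma(H @ W_self.T + (A @ H) @ W_neigh.T + b)

identity = lambda x: x
rng = np.random.default_rng(0)
for n in (10, 200):                            # a small and a large random graph
    A = (rng.random((n, n)) < 0.3).astype(float)
    A = np.triu(A, 1); A = A + A.T             # symmetric, no self-loops
    m = int(A.sum() / 2)                       # true number of edges
    H = np.ones((n, 1))                        # constant node feature x_v = 1
    w1, w2, b = np.array([[1.0]]), np.array([[0.5]]), np.array([-1.0])
    out = gnn_layer(A, H, w1, w2, b, sigma=identity).sum()  # additive readout
    assert np.isclose(out, m)                  # w1 + b = 0, w2 = 1/2 generalizes
```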
As we will see next, the problem gets worse when considering GNNs with multiple non-linear layers, and this simple remedy will not help in that case: we can train deeper GNNs on a wide variety of sizes and the solution will still not generalize to other sizes.
3.2 d-PATTERNS
We wish to understand theoretically when a GNN that was trained on graphs with a small number of nodes can generalize to graphs with a large number of nodes. To answer that question, we first analyze what information is received by each node in the graph from its neighboring nodes after the graph is processed by a GNN with $T$ layers. It is easy to see that every node can receive information about its neighbors which are at most $T$ hops away. We also know that nodes do not have full information about their order-$T$ environment. For example, GNNs cannot determine if a triangle is present in the neighborhood of a given node (Chen et al., 2020). In order to characterize the exact information that can be found in each node after a $T$-layer GNN, we use the definition of the WL test, specifically its iteration structure, which has the same representational power as GNNs (see Xu et al. (2018); Morris et al. (2019)). For more details on the WL test see Appendix A.
Definition 3.1 (d-patterns). Let $C$ be a finite set of node features and $N \in \mathbb{N}$. For $d \geq 0$ we define the set of d-patterns $P_d$ on graphs with maximal degree $N$ and node features from $C$. The definition is recursive: for $d = 0$, $P_0 = C$. We define $P_d$ to be the set of all tuples $(a, b)$ where $a \in P_{d-1}$ and $b$ is a multiset of size at most $N$ consisting of elements from $P_{d-1}$.
Let $G = (V, E)$ be a graph with maximal degree $N$ and a node feature $c_v \in C$ for every node $v \in V$. We define the d-pattern of a node $v \in V$ for $d \geq 0$ recursively: for $d = 0$, its 0-pattern is $c_v$. For $d > 0$ we say that $v$ with $\ell$ distinct neighboring $(d-1)$-patterns has d-pattern $p = (p_v, \{(p_{i_1}, m_{p_{i_1}}), \ldots, (p_{i_\ell}, m_{p_{i_\ell}})\})$ iff node $v$ has $(d-1)$-pattern $p_v$ and for every $j \in \{1, \ldots, \ell\}$ the number of neighbors of $v$ with $(d-1)$-pattern $p_{i_j}$ is exactly $m_{p_{i_j}}$.
The d-pattern of a node is an encoding of the $(d-1)$-patterns of itself and of its neighbors. For example, assume a graph has a maximal degree of $N$ and all nodes start with the same node feature. The 1-pattern of each node is its degree. The 2-pattern of each node is, for each possible degree $i \in \{1, \ldots, N\}$, the number of its neighbors with degree $i$. In the same manner, the 3-pattern of a node is, for each possible 2-pattern, the number of its neighbors with this exact 2-pattern. The definition of d-patterns can naturally be extended to the case of unbounded degrees. We have the following theorem, which connects d-patterns with the expressive power of GNNs:
Theorem 3.2. Any function that can be represented by a d-layer GNN is constant on d-patterns.
In particular, the theorem shows that for any two graphs (of any size) and two nodes, one in each graph, if the nodes have the exact same d-pattern, then any d-layer GNN will output the same result for the two nodes. The full proof can be found in Appendix B, and follows directly from the analogy between the WL algorithm (see Appendix A) and d-patterns. Thm. 3.2 implies that d-patterns do not represent more expressive power than GNNs. In the next subsection, we prove that GNNs can exactly compute d-patterns, and show that this capacity is tightly related to size generalization. It is also easy to see, from the definition of d-patterns and the proof of Theorem 2 from Morris et al. (2019), that d-patterns exactly represent the expressive power of GNNs (with additive aggregation); thus this definition is a natural tool for studying properties of GNNs such as size generalization.
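A small sketch that computes hashable encodings of d-patterns by the WL-style refinement of Definition 3.1 (our illustration; pattern identity is represented by nested tuples rather than explicit indices, and node features are assumed to be mutually comparable so multisets can be sorted):

```python
from collections import Counter

def d_patterns(adj, features, d):
    """adj: dict node -> list of neighbors; features: dict node -> 0-pattern.
    Returns dict node -> its d-pattern, as a nested tuple."""
    patterns = dict(features)            # d = 0: the node features themselves
    for _ in range(d):
        refined = {}
        for v, nbrs in adj.items():
            # multiset of the neighbors' (d-1)-patterns, stored as
            # sorted (pattern, multiplicity) pairs to make it hashable
            counts = Counter(patterns[u] for u in nbrs)
            refined[v] = (patterns[v], tuple(sorted(counts.items())))
        patterns = refined
    return patterns

# On a path 0-1-2 with identical features, the 1-pattern encodes the degree:
# the two endpoints share a pattern while the middle node gets another one.
adj = {0: [1], 1: [0, 2], 2: [1]}
print(d_patterns(adj, {v: "a" for v in adj}, d=1))
```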
3.3 GNNS MAY OVERFIT d-PATTERNS
We can now connect the size generalization problem to the concept of d-patterns. We start with an example: consider a node prediction task in which an output is specified for each node in an input graph, and which is solvable by a d-layer GNN. To perfectly solve this task, the model should produce the correct output for the d-pattern of every node in the training set. Testing this GNN on a different set of graphs will succeed if the test set has graphs with similar d-patterns to those in the training set. Note that this requirement is not related to the size of the graphs but to the distribution of the d-patterns of the nodes in the test set.
In the following theorem we show rigorously that, given a set of d-patterns and an output for each such pattern, there is an assignment of weights to a GNN with $O(d)$ layers that perfectly fits the output for each pattern. We will then use this theorem to show that, under certain assumptions on the distribution of d-patterns of the large graphs, GNNs can perfectly solve a task on a set of small graphs and completely fail on a set of large graphs. In other words, we show that there are multiple global minima of the training objective that do not generalize to larger graphs.
Theorem 3.3. Let $C$ be a finite set of node features, let $P$ be a finite set of d-patterns on graphs with maximal degree $N \in \mathbb{N}$, and for each pattern $p \in P$ let $y_p \in [-1, 1]$ be some target label. Then there exists a 1-GNN $F$ with $d + 2$ layers, width bounded by $\max\{(N+1)^d |C|,\ 2\sqrt{|P|}\}$ and ReLU activation such that for every graph $G$ with nodes $v_1, \ldots, v_n$, corresponding d-patterns $p_1, \ldots, p_n \in P$ and node features from $C$, the output of $F$ on node $v_i$ is exactly $y_{p_i}$.
The full proof is in Appendix B. Note that the width of the required GNN from the theorem is not very large if $d$, the depth of the 1-GNN, is small. In practice, shallow GNNs are very commonly used and have proven empirically successful, while training deep GNNs has been shown to be hard due to problems like over-smoothing (Zhao and Akoglu, 2019).
Using the above theorem we can claim that there are weight assignments for GNNs that cannot "size-generalize"; that is, given a specific task, the GNN succeeds on the task for small graphs (up to some bound) and fails on larger graphs, as long as there is a notable discrepancy between their d-pattern distributions:
Corollary 3.4. Let $P_1$ and $P_2$ be distributions of small and large graphs, respectively, with finite support, let $P_1^{d\text{-pat}}$ be the distribution of d-patterns over small graphs, and similarly $P_2^{d\text{-pat}}$ for large graphs. For any node prediction task which is solvable by a 1-GNN with depth $d$ and any $\epsilon > 0$, there exists a 1-GNN with depth at most $d + 2$ that has 0-1 loss smaller than $\epsilon$ on $P_1$ and 0-1 loss $\delta(\epsilon)$ on $P_2$, where

$\delta(\epsilon) = \max_{A :\ P_1^{d\text{-pat}}(A) < \epsilon} P_2^{d\text{-pat}}(A).$   (1)

Here, $A$ is a set of d-patterns and $P(A)$ is the total probability mass of that set under $P$.
Intuitively, a large $\delta$ means that there exists a set of d-patterns that has low probability under the small-graph distribution and high probability under the large-graph distribution. Corollary 3.4 implies that the major factor in the success of GNNs in generalizing to larger graphs is not the graph size, but the distribution of d-patterns. Different distributions of d-patterns lead to a large $\delta$ and thus to bad generalization to larger graphs.
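For empirical (discrete) pattern distributions, $\delta(\epsilon)$ in Eq. (1) can be estimated directly. The maximization is a knapsack-type problem, so the greedy sketch below (ours, for illustration only) gives a lower bound rather than the exact value.

```python
def delta_lower_bound(p1, p2, eps):
    """Greedy lower bound on delta(eps) = max_{A: P1(A) < eps} P2(A),
    for discrete distributions given as dicts: pattern -> probability.
    Patterns unseen under P1 contribute their P2 mass for free; the rest
    are added greedily by P2/P1 ratio while P1(A) stays below eps."""
    support = set(p1) | set(p2)
    free = [q for q in support if p1.get(q, 0.0) == 0.0]
    rest = sorted((q for q in support if p1.get(q, 0.0) > 0.0),
                  key=lambda q: p2.get(q, 0.0) / p1[q], reverse=True)
    mass1 = 0.0
    mass2 = sum(p2.get(q, 0.0) for q in free)
    for q in rest:
        if mass1 + p1[q] < eps:
            mass1 += p1[q]
            mass2 += p2.get(q, 0.0)
    return mass2

# e.g. 1-patterns (degrees) of small vs. large graphs:
# delta_lower_bound({3: 0.9, 4: 0.1}, {6: 0.8, 3: 0.2}, eps=0.05) -> 0.8
```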
On the other hand, from Thm. 3.2 we immediately get that similar distributions of $d$-patterns imply that every GNN model that succeeds on small graphs will also succeed on large graphs, since GNNs are constant on $d$-patterns:

Corollary 3.5. In the setting of Corollary 3.4, assume additionally that every pattern with positive probability under $P_2^{d\text{-pat}}$ also has positive probability under $P_1^{d\text{-pat}}$. Then, for any node prediction task solvable by a depth-$d$ GNN, any 1-GNN that has 0 loss (w.r.t. the 0-1 loss) on $P_1$ will also have 0 loss on $P_2$.

Examples. Corollary 3.4 shows that even on simple tasks GNNs may fail; here are two simple examples. (i) Consider the task of calculating the node degree. By Corollary 3.4 there is a GNN that successfully outputs the degree of nodes with degree up to $N$ and fails on nodes with larger degrees. Note that this problem can easily be solved for any node degree with a 1-layer GNN. (ii) Consider some node regression task where the training set consists of graphs sampled i.i.d. from an Erdős–Rényi model $G(n, p)$ (graphs with $n$ nodes in which each edge exists independently with probability $p$), and the test set contains graphs sampled i.i.d. from $G(2n, p)$. In this case, a GNN will be trained on graphs with average degree $np$, while the test set contains graphs with average degree $2np$. This means that the $d$-patterns in the train and test sets are very different, and by Corollary 3.4 the GNN may overfit.

Graph prediction tasks. Our theoretical results discuss node prediction tasks. We note that they are also relevant for graph prediction tasks, where there is a single output for each input graph. The reason is that, in order to solve a graph prediction task, a GNN first calculates node features and then pools them into a single global graph feature. Our analysis shows that the first part of the GNN, which is responsible for calculating the node features, might not generalize to large graphs. As a result, the GNN will generate an uninformative global graph feature and will fail on the original graph prediction task. In the experimental sections, we show that the size generalization problem is indeed relevant for both node and graph prediction tasks. Here is a formal statement regarding graph prediction tasks; the full proof can be found in Appendix B.

Corollary 3.6. Let $P_1$ and $P_2$ be distributions of small and large graphs respectively with finite support. Let $P_1^{d\text{-pat}}$ be the distribution of $d$-patterns over small graphs and similarly $P_2^{d\text{-pat}}$ for large graphs, and assume that the supports of $P_1^{d\text{-pat}}$ and $P_2^{d\text{-pat}}$ are disjoint. For any graph prediction task solvable by a 1-GNN with depth $d$ and summation readout, there exists a 1-GNN with depth at most $d+3$ that perfectly solves the task on $P_1$ and fails on all graphs from $P_2$.

Relation to Morris et al. (2019); Xu et al. (2018). Theorem 3.3 and Corollary 3.4 are related to the expressivity results of Xu et al. (2018) and Morris et al. (2019), which show that GNNs can be as powerful as the WL test. Here, we show that this expressive power can have negative effects when there is a discrepancy between the training and test sets.
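Example (ii) is easy to verify numerically: the 1-pattern (degree) distributions of $G(n, p)$ and $G(2n, p)$ barely overlap already for moderate $n$. A small sketch of our own, assuming networkx is available (variable names are ours):

import networkx as nx
from collections import Counter

def degree_dist(graphs):
    """Empirical 1-pattern (degree) distribution over a list of graphs."""
    degs = Counter(d for g in graphs for _, d in g.degree())
    total = sum(degs.values())
    return {k: v / total for k, v in degs.items()}

n, p, trials = 50, 0.3, 100
small = [nx.gnp_random_graph(n, p) for _ in range(trials)]
large = [nx.gnp_random_graph(2 * n, p) for _ in range(trials)]
ds, dl = degree_dist(small), degree_dist(large)
# Total variation distance between the two 1-pattern distributions;
# close to 1 here, since degrees concentrate around np vs. 2np.
tv = 0.5 * sum(abs(ds.get(k, 0) - dl.get(k, 0)) for k in set(ds) | set(dl))
print(f"TV distance between degree distributions: {tv:.2f}")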
3.4 EMPIRICAL VALIDATION

In the previous subsection we showed that, for any node task and any two datasets of graphs of different sizes that differ significantly in their $d$-pattern distributions, there is a 1-GNN that successfully solves the task on one dataset but fails on the other. In this subsection, we show empirically that reaching these "overfitting" GNNs is actually very common. Specifically, the size-overfitting phenomenon is prevalent when the $d$-patterns of the large-graph distribution are not found in the small-graph distribution. We also show that GNNs can generalize to larger graphs if the distribution of $d$-patterns remains similar to the distribution of patterns in the small graphs.

To show this, we use a controlled regression task in a student-teacher setting. In this setting, we sample a "teacher" GNN with random weights, freeze the network, and label each graph in the dataset using the output of the "teacher" network. Our goal is to train a "student" network, which has the same architecture as the "teacher" network, to fit the labels of the teacher network. The advantages of this setting are two-fold: (1) A solution is guaranteed to exist: we know there is a weight assignment of the student network that perfectly solves the task for graphs of any size. (2) Generality: it includes all tasks solvable by constant-depth GNNs. We discuss more settings below.

Architecture and training protocol. We use 1-GNN as defined in Morris et al. (2019). The number of GNN layers is 1, 2, or 3; the width of the teacher network is 32, and of the student network 64, providing more expressive power to the student network. We obtained similar results when testing with a width of 32, the same as the teacher network. We use a summation readout function followed by a two-layer fully connected suffix. We use ADAM with learning rate $10^{-3}$. We performed a hyper-parameter search on the learning rate and weight decay, and use validation-based early stopping on the source domain (small graphs). The results are averaged over 10 random seeds. All runs used PyTorch Geometric (Fey and Lenssen, 2019) on an NVIDIA DGX-1.

Results. Fig. 1 compares the loss of GNNs as the distribution of $d$-patterns changes, for the task of teacher-student graph-level regression. The model was trained on graphs generated from the $G(n, p)$ model. We show the normalized $L_2$ loss on the test set, where the output is normalized by the average test-set (target) output. The left panel shows the test loss when training on $n \in [40, 50]$ and $p = 0.3$ and testing on $G(n, p)$ graphs with $n = 100$ and $p$ varying from 0.05 to 0.5. In this experiment, the expected node degree is $np$; hence the distribution of $d$-patterns is most similar to the one observed in the training set when $p = 0.15$. Indeed, this is the value of $p$ at which the test loss is minimized. The right panel is discussed in the caption. These results are consistent with Corollary 3.4: when the distributions of $d$-patterns are far apart the model does not generalize well, and it generalizes well when these distributions are similar.

[Figure 1: The effect of graph size and d-pattern distribution on generalization in G(n,p) graphs, for GNNs with one, two, and three layers (average loss, log scale). (left) The effect of the distribution of d-patterns: train on n drawn uniformly from [40, 50] and p = 0.3; test on n = 100 and varying p. (right) The effect of train-graph size: train on n drawn uniformly from [40, x] where x varies and p = 0.3; test on n = 150, p = 0.3.]
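The student-teacher data used here can be reproduced in a few lines of PyTorch Geometric. The sketch below is our own reading of the protocol; the dataset size and the use of GraphConv as the 1-GNN layer are our assumptions.

import torch
import networkx as nx
from torch_geometric.data import Data
from torch_geometric.nn import GraphConv, global_add_pool

def make_graph(n, p):
    """A G(n, p) graph with constant node features, as a PyG Data object."""
    g = nx.gnp_random_graph(n, p)
    edges = list(g.edges())
    src = [u for u, v in edges] + [v for u, v in edges]  # both directions
    dst = [v for u, v in edges] + [u for u, v in edges]
    return Data(x=torch.ones(n, 1),
                edge_index=torch.tensor([src, dst], dtype=torch.long))

class GNN(torch.nn.Module):
    """A small 1-GNN (Morris et al., 2019) with summation readout."""
    def __init__(self, width=32, layers=3):
        super().__init__()
        dims = [1] + [width] * layers
        self.convs = torch.nn.ModuleList(
            GraphConv(a, b) for a, b in zip(dims[:-1], dims[1:]))
        self.head = torch.nn.Linear(width, 1)

    def forward(self, data):
        h = data.x
        for conv in self.convs:
            h = torch.relu(conv(h, data.edge_index))
        batch = torch.zeros(h.size(0), dtype=torch.long)  # single graph
        return self.head(global_add_pool(h, batch))

teacher = GNN().eval()  # random, frozen teacher network
with torch.no_grad():
    train = [(g, teacher(g)) for g in
             (make_graph(torch.randint(40, 51, (1,)).item(), 0.3)
              for _ in range(256))]
# A wider student GNN of the same architecture is then fit to these labels;
# testing on G(100, p) for varying p reproduces the U-shaped loss of Fig. 1.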
To give further confirmation of the effect of the local distributions on the generalization capabilities of GNNs, we conducted the following two experiments. (1) We trained the teacher-student setup with a 3-layer GNN on graphs with sizes drawn uniformly from $n = 40, \dots, 50$, sampled from $G(n, 0.3)$, and tested on graphs sampled from $G(N, 0.3)$ with $N$ varying from 50 up to 150. It is evident that as the graph size in the test set increases, the model performs worse. (2) We repeated the test in (1), but this time we normalized $p$ on the test set so that $pN = 15$, which is the approximate ratio in the training set. Here we went further and tested on sizes up to $N = 250$. In this experiment the GNN successfully generalized to larger graphs, since the local distributions of the train and test sets are indeed very similar. For the results see Fig. 2.

[Figure 2: Constant training size and varying test size, with and without normalization (average loss, log scale, as a function of test graph size). Left: constant p, which leads to different d-patterns. Right: p is normalized to keep the d-pattern distributions similar.]

We also tested the tasks of finding the max clique in a graph and calculating the number of edges, the node prediction task of calculating the node degree, and the student-teacher task at the node level. In addition, we tested the popular GIN architecture (Xu et al., 2018) and show that the size generalization problem occurs there as well. We also tested ReLU, Tanh, and sigmoid activations. See additional experiments in Appendix C.

4 TOWARDS IMPROVING SIZE-GENERALIZATION

The results from the previous section show that the problem of size generalization is related not only to the size of the graph, in terms of the number of nodes or edges, but to the distribution of $d$-patterns induced by the distributions from which the graphs are sampled. Based on this observation, we now formulate the size generalization problem as a domain adaptation (DA) problem. We then build on techniques from domain adaptation and suggest two approaches to improve size generalization: (1) self-supervised learning on the target domain (large graphs), and (2) semi-supervised learning with a few labeled target samples. We consider the DA setting where we are given two distributions over graphs: a source distribution $D_S$ (say, of small graphs) and a target distribution $D_T$ (say, of large graphs). We consider two settings. First, the unlabeled DA setting, where we have access to labeled samples from the source $D_S$ but the target data from $D_T$ is unlabeled; our goal is to infer labels on a test dataset sampled from the target $D_T$. Second, a semi-supervised setup, where we also have access to a small number of labeled examples from the target $D_T$.

Size generalization with self-supervised learning. In self-supervised learning (SSL) for DA, a model is trained on unlabeled data to learn a pretext task, which is different from the main task at hand. If the pretext task is chosen wisely, the model learns useful representations (Doersch et al., 2015; Gidaris et al., 2018) that can help with the main task. Here, we train the pretext task on both the source and target domains, as was done for images and point clouds (Sun et al., 2019; Achituve et al., 2020). The idea is that the pretext task aligns the representations of the source and target domains, leading to better predictions on the main task for target graphs.
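To make the recipe concrete, one optimization step of the multi-task variant described next (procedure (1) below) might look as follows. This is our own sketch: the head names, the lambda weighting, the squared losses, and the ssl_y attribute holding the pretext targets are all assumptions, not the paper's exact losses (see Appendix E for those).

import torch
import torch.nn.functional as F

def mtl_step(model, main_head, ssl_head, src_batch, tgt_batch, opt, lam=1.0):
    """One multi-task step: supervised main loss on labeled source graphs,
    self-supervised pretext loss on both source and target graphs."""
    opt.zero_grad()
    z_src = model(src_batch)   # shared GNN feature extractor
    z_tgt = model(tgt_batch)
    main = F.mse_loss(main_head(z_src), src_batch.y)
    ssl = (F.mse_loss(ssl_head(z_src), src_batch.ssl_y) +
           F.mse_loss(ssl_head(z_tgt), tgt_batch.ssl_y))
    (main + lam * ssl).backward()
    opt.step()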
For a detailed review of the training procedures and the losses, see Appendix E. Given a pretext task, we consider two different training procedures. (1) Multi-task learning (MTL): parallel training of the main task on the source domain and of a pretext task on both the source and target domains (You et al., 2020). In this case, the architecture consists of a main GNN that acts as a feature extractor and two secondary networks (heads) that operate on the extracted features and predict the main task and the pretext task, respectively. (2) Pretraining (PT): in this procedure (Hu et al., 2019), the GNN feature extractor is trained to convergence on the pretext task on both the source and target examples. Then the GNN part is frozen, and only the head of the model is trained on labeled examples from the source.

Pattern-tree pretext task. We propose a novel pretext task motivated by the definition of $d$-patterns. We construct a tree that fully represents the $d$-pattern of each node (see, e.g., Xu et al. (2018)). We then calculate a descriptor of the tree: a vector containing the counts of nodes from each class in each layer of the tree. We treat this descriptor as the target label to be reconstructed by the SSL task; for an illustration see Figure 3.

[Figure 3: Left: a graph with node features represented by colors. Right: a tree that represents the d-patterns of the black node. The tree descriptor is the number of nodes from each class in each layer of the tree.]

Intuitively, in order to succeed at a task on graphs, the GNN needs to correctly represent the pattern trees of the nodes of the graphs. This means that, to generalize to the target domain $D_T$, the GNN must be forced to represent pattern trees from both the source and the target distributions. For more details about the construction of the pattern tree, see Appendix D. In short, each tree corresponds to a $d$-pattern in the following way: the $d$-pattern tree of a node can be seen as a multiset of the children of the root, where each child is in turn a multiset of its own children, and so on. The pattern tree is thus an alternative description of the $d$-pattern of a node, so a GNN that successfully represents a pattern tree also represents its corresponding $d$-pattern, connecting this SSL task to the theory of Sec. 3.

Semi-supervised setup. We also consider the case where a small number of labeled samples are available from the target domain. A natural approach is to train an SSL pretext task on samples from both the source and target domains, and to train the main task on all the labeled samples available. We tested this setup with 1, 5, or 10 labeled examples from the target domain.
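A minimal sketch of the pattern-tree descriptor is below. It assumes the standard unrolled tree in which the children of every tree node are all graph neighbors of the corresponding node, so the layer-k class counts reduce to counting length-k walks; if the construction in Appendix D differs (e.g., by excluding the parent when unrolling), the bookkeeping changes slightly. Function and variable names are ours.

import numpy as np

def pattern_tree_descriptor(A, classes, n_classes, depth):
    """SSL targets: for every node, the number of tree nodes of each class
    in each layer of its depth-`depth` pattern tree.

    A: (n, n) adjacency matrix; classes: (n,) integer node classes.
    Returns an (n, depth + 1, n_classes) array. Layer k of v's tree holds
    one copy of node u per length-k walk from v to u, so the class counts
    are obtained by repeated multiplication with A.
    """
    n = A.shape[0]
    onehot = np.eye(n_classes)[np.asarray(classes)]  # (n, n_classes)
    out = np.zeros((n, depth + 1, n_classes))
    walks = np.eye(n)                                # length-0 walks
    for k in range(depth + 1):
        out[:, k, :] = walks @ onehot
        walks = walks @ A
    return out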
4.1 EXPERIMENTS

Architecture and training protocol. The setup is the same as in Subsection 3.4 with the following changes. We use a three-layer GNN in all experiments. Multi-task learning is used with equal weight for the main and SSL tasks. In the semi-supervised setup, we used equal weight for the main task and the labeled examples from the target domain.

Baselines. We compare our new pretext task to the following baselines: (1) Vanilla: standard training on the source domain. (2) HomoGNN (Tang et al., 2020): a homogeneous GNN without the bias term, trained on the source domain. (3) Graph autoencoder (GAE) pretext task (Kipf and Welling, 2016). (4) Node masking (NM) pretext task from Hu et al. (2019), where at each training iteration we mask 10% of the node features and the goal is to reconstruct them; if the graph has no node features, the task is to predict the degree of the masked nodes. (5) Node metric learning (NML): we use metric learning to learn useful node representations, with a corruption function that, given a graph and a corruption parameter $p \in [0, 1]$, replaces $p|E|$ of the edges with random edges and can thus generate positive ($p = 0.1$) and negative ($p = 0.3$) examples for all nodes of the graph. We train with the triplet loss (Weinberger and Saul, 2009).

               DEEZER      IMDB-B      NCI1        NCI109      PROTEINS    TWITCH      DD          AVERAGE
Small graphs   56.5±0.8    63.2±3.3    75.5±1.6    78.4±1.4    75.4±3.1    69.7±0.2    71.1±4.4    70.0%
Vanilla        41.1±6.8    55.9±7.8    65.9±4.3    68.9±3.8    76.0±8.5    60.5±3.6    76.3±3.2    63.5%
HomoGNN        40.5±6.6    56.3±7.0    66.0±3.7    68.8±3.2    77.1±10.0   60.8±2.3    76.8±3.0    63.8%
NM MTL         51.6±8.5    55.6±6.8    49.9±7.8    61.7±5.7    78.8±8.4    49.5±2.8    67.4±5.4    59.2%
NM PT          50.1±7.5    54.9±6.7    51.7±6.6    55.8±5.0    78.2±8.2    48.4±4.0    60.3±15.9   57.1%
GAE MTL        49.4±11.0   55.5±6.0    51.2±9.9    57.6±9.4    79.5±11.7   62.5±5.1    67.8±10.0   60.5%
GAE PT         47.1±10.0   54.1±6.8    58.9±7.6    67.2±5.6    70.5±9.4    53.6±4.7    69.0±7.1    60.1%
NML MTL        46.4±9.5    54.4±7.0    52.3±6.3    56.2±6.5    78.7±6.8    57.4±4.1    64.7±11.9   58.6%
NML PT         48.4±10.7   53.8±6.1    54.6±6.2    56.1±8.1    76.3±8.0    54.9±4.7    61.4±15.1   57.9%
Pattern MTL    45.6±8.8    56.8±9.2    60.5±7.5    67.9±7.2    75.8±11.1   61.6±3.5    76.8±3.0    63.6%
Pattern PT     44.0±7.7    61.9±3.2    67.8±11.7   74.8±5.7    84.7±5.1    64.5±3.3    74.9±5.2    67.5%

Table 1: Test accuracy (mean ± std, %) of the compared methods on binary classification tasks. The Pattern task with pretraining achieves the highest accuracy in most tasks and has 4% higher average accuracy than the second-best method. The high variance is due to the domain shift between the source and target domains.

Datasets. We use datasets from Morris et al. (2020) and Rozemberczki et al. (2020) (Twitch egos and Deezer egos). We selected datasets that have a sufficient number of graphs (more than 1,000) and a non-trivial split into small and large graphs, as detailed in Appendix F.1. In total we used 7 datasets: 4 from molecular biology (NCI1, NCI109, D&D, Proteins) and 3 from social networks (Twitch ego nets, Deezer ego nets, IMDB-Binary). In all datasets, the 50% smallest graphs were assigned to the training set and the largest 10% of graphs to the test set. We further split off a random 10% of the small graphs as a validation set.

[Figure 4: Mean accuracy over all datasets in Tab. 1 for d-pattern pretraining (Pattern PT) versus no SSL (Vanilla), as a function of the number of labeled large-graph examples (0, 1, 5, 10): Pattern PT reaches 67.0, 67.5, 68.9, 70.5 and Vanilla reaches 63.6, 66.5, 68.7, 70.6.]

Results. Table 1 compares the effect of using the pattern-tree pretext task against the baselines described above. The "Small graphs" row presents vanilla results on a validation set of small graphs. The small-graph accuracy on 5 out of 7 datasets is larger by 7.3%-15.5% than on large graphs, indicating that the size-generalization problem is indeed prevalent in real datasets. Pretraining with the d-patterns pretext task outperforms the other baselines on 5 out of 7 datasets, with a 4% higher average accuracy across all datasets. HomoGNN slightly improves over vanilla, while the other pretext tasks do not improve average accuracy. Naturally, the accuracies here are much lower than the state of the art on these datasets because the domain shift makes the problem much harder. In Appendix F.2 we show the 1-pattern distribution discrepancy between large and small graphs in two real datasets: IMDB (large discrepancy) and D&D (small discrepancy). Correspondingly, the pattern-tree SSL task improved performance on the IMDB dataset while not improving performance on the D&D dataset. This gives further evidence that a discrepancy between the d-patterns leads to bad generalization, and that correctly representing the patterns of the test set can improve performance.
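The size-based split described in the Datasets paragraph above is straightforward to set up in PyTorch Geometric; below is a sketch under our own assumptions (the dataset root path and the split rounding are ours, not the paper's exact bookkeeping).

import torch
from torch_geometric.datasets import TUDataset

# Split a TU dataset by graph size: the 50% smallest graphs for training
# (with 10% of them held out for validation), the largest 10% for testing.
dataset = TUDataset(root="data", name="NCI1")
order = sorted(range(len(dataset)), key=lambda i: dataset[i].num_nodes)
n = len(order)
small, large = order[: n // 2], order[-n // 10:]
perm = torch.randperm(len(small)).tolist()
val_idx = [small[i] for i in perm[: len(small) // 10]]
train_idx = [small[i] for i in perm[len(small) // 10:]]
train, val, test = dataset[train_idx], dataset[val_idx], dataset[large]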
Figure 4 compares the performance of vanilla training versus pretraining with the pattern-tree pretext task in the semi-supervised setup. The accuracy increases monotonically with the number of labeled examples in both cases. Moreover, pretraining with the pretext task yields better results with 0, 1, or 5 labeled examples, and comparable results with 10 labeled examples. We additionally tested the synthetic tasks discussed in Sec. 3 and show that the pattern-tree pretext task helps in the student-teacher setting, while it does not solve the edge-count or degree-prediction tasks. On the other hand, adding even a single labeled sample from the target distribution significantly improves performance on the synthetic tasks we tested. For more details see Appendix F.

5 CONCLUSION AND DISCUSSION

This paper is a first step towards understanding the size generalization problem in graph neural networks. We showed that GNNs do not naturally generalize to larger graphs even on simple tasks, characterized how this failure depends on d-patterns, and suggested two approaches that can improve generalization. Our characterization of d-patterns is likely to have implications for other problems where generalization is harmed by distribution shifts, and offers a way to mitigate those problems.
lEa9viabW5
Novel and well-motivated investigation into GNN size generalization error
7: Good paper, accept
Summary: This paper investigates the generalizability of GNNs when trained on small graphs and tested on larger graphs, a common setting in graph learning. The paper argues that the ability of constant-depth GNNs to generalize depends not on the size difference, but on the difference in the distributions of nodes' neighborhood features, which the authors call "d-patterns." The paper shows theoretically that a GNN can be trained to achieve zero loss on small graphs yet fail to generalize to larger graphs, with the loss on those graphs depending on the difference in d-pattern distributions.

Strengths:
* Shows strong theoretical and empirical evidence of GNNs' inability to generalize to graphs larger than those seen in training when the distributions of d-patterns differ, showing that techniques like regularization will not help in these cases.
* Provides a first step toward mitigating the problem by using domain adaptation methods, in particular an intermediate task of identifying the d-patterns in each graph.

Weaknesses:
* The work only considers constant-depth GNNs and local tasks.
* The generalization error is shown only for the 0-1 loss, limiting the graph problems it applies to, namely node-level binary classification and functions thereof.

Conclusion: The failure of GNNs to generalize to larger graphs in many cases is a known phenomenon. This paper's investigation of the issue, and its explanation of at least one reason for it, is clear and well supported. Though a more minor contribution to the paper, the proposed mitigating techniques (DA, and in particular the pattern-tree labeling task) can serve as motivation for further investigation of models robust to these generalization errors. I recommend acceptance based on the clear motivation and development of the investigation presented in the work.

Update: Thank you to the authors for their response to reviewer comments. I acknowledge that I have read and reviewed their rebuttals.

Questions:
1. The claim is that the distribution shift in d-patterns is the driving factor in generalization error. The experiments in 3.4 support this, but one experiment similar to those shown in Figure 1 that would illustrate this further is training on sizes n=[40,x] and p=0.3 as in Figure 1 (right) and testing on n=150 and p=0.002*[40,x]. If the size difference between train and test is not the driving factor, but rather the d-patterns, should we expect such an experiment to yield a mostly flat loss curve? Would the loss on the test graphs be similar to the loss achieved on graphs from the original distribution (n=[40,x] with p=0.3)?
2. The focus of the work is on size generalization, and in particular the small-to-large graph domain shift. The claims appear to also hold for evaluating generalization on similarly sized graphs with differences in node attributes and/or connectivity patterns. Is this the case? Why or why not?

Minor comments and typos:
* In the definition of d-patterns, the integer $\ell$ is not defined. Is this the number of neighbors, or the number of possible d-patterns?
* Top of page 8: "does have node features" should be "does *not* have node features."
* Page 8, "Datasets" paragraph says "and the and 10% largest graphs." I assume this should read "and the largest 10% of graphs..."
* "The" should be capitalized in the last paragraph ("Results") at the beginning of the sentence "The accuracy monotonically increases..."
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title On Size Generalization in Graph Neural Networks ### Paper Abstract Graph neural networks (GNNs) can process graphs of different sizes but their capacity to generalize across sizes is still not well understood. Size generalization is key to numerous GNN applications, from solving combinatorial optimization problems to learning in molecular biology. In such problems, obtaining labels and training on large graphs can be prohibitively expensive, but training on smaller graphs is possible. This paper puts forward the size-generalization question and characterizes important aspects of that problem theoretically and empirically. We prove that even for very simple tasks, such as counting the number of nodes or edges in a graph, GNNs do not naturally generalize to graphs of larger size. Instead, their generalization performance is closely related to the distribution of local patterns of connectivity and features and how that distribution changes from small to large graphs. Specifically, we prove that for many tasks, there are weight assignments for GNNs that can perfectly solve the task on small graphs but fail on large graphs, if there is a discrepancy between their local patterns. We further demonstrate on several tasks, that training GNNs on small graphs results in solutions which do not generalize to larger graphs. We then formalize size generalization as a domain-adaption problem and describe two learning setups where size generalization can be improved. First, as a self-supervised learning problem (SSL) over the target domain of large graphs. Second as a semi-supervised learning problem when few samples are available in the target domain. We demonstrate the efficacy of these solutions on a diverse set of benchmark graph datasets. ### Paper Keywords ["graph neural networks", "gnn", "generalization", "Weisfeiler-Lehman"] ### Paper Content ABSTRACTGraph neural networks (GNNs) can process graphs of different sizes but theircapacity to generalize across sizes is still not well understood. Size generalizationis key to numerous GNN applications, from solving combinatorial optimizationproblems to learning in molecular biology. In such problems, obtaining labels andtraining on large graphs can be prohibitively expensive, but training on smallergraphs is possible.This paper puts forward the size-generalization question and characterizes impor-tant aspects of that problem theoretically and empirically. We prove that evenfor very simple tasks, such as counting the number of nodes or edges in a graph,GNNs do not naturally generalize to graphs of larger size. Instead, their gen-eralization performance is closely related to the distribution of local patterns ofconnectivity and features and how that distribution changes from small to largegraphs. Specifically, we prove that for many tasks, there are weight assignmentsfor GNNs that can perfectly solve the task on small graphs but fail on large graphs,if there is a discrepancy between their local patterns. We further demonstrate onseveral tasks, that training GNNs on small graphs results in solutions which do notgeneralize to larger graphs. We then formalize size generalization as a domain-adaption problem and describe two learning setups where size generalization canbe improved. First, as a self-supervised learning problem (SSL) over the targetdomain of large graphs. 
Second as a semi-supervised learning problem when fewsamples are available in the target domain. We demonstrate the efficacy of thesesolutions on a diverse set of benchmark graph datasets.1 I NTRODUCTIONGraphs are a flexible representation, widely used for representing diverse data and phenomena.Graph neural networks (GNNs) – Deep models that operate over graphs – have emerged as a promi-nent learning model (Bruna et al., 2013; Kipf and Welling, 2016; Veli ˇckovi ́c et al., 2017). They areused in natural sciences (Gilmer et al., 2017), social network analysis (Fan et al., 2019), for solvingdifficult mathematical problems (Luz et al., 2020) and for approximating solutions to combinatorialoptimization problems (Li et al., 2018).In many domains, graphs data vary significantly in size. This is the case for molecular biology,where molecules – represented as graphs over atoms as nodes – span from small compounds toproteins with many thousands of nodes. It is even more severe in social networks, which can reachbillions of nodes. The success of GNNs for such data stems from the fact that the same GNNmodel can process input graphs regardless of their size. Indeed, it has been proposed that GNNscan generalize to graphs whose size is different from what they were trained on , but it is largelyunknown in what problems such generalization occurs. Empirically, several papers report goodgeneralization performance on specific tasks (Li et al., 2018; Luz et al., 2020). Other papers, likeVeliˇckovi ́c et al. (2019), show that size generalization can fail on several simple graph algorithms,and can be improved by using task-specific training procedures and specific architectures.Given their flexibility to operate on variable-sized graphs, A fundamental question arises aboutgeneralization in GNNs: ”When do GNNs trained on small graphs generalize to large graphs?”Aside from being an intriguing theoretical question, this problem has important practical implica-tions. In many domains, it is hard to label large graphs. For instance, in combinatorial optimization1Under review as a conference paper at ICLR 2021problems, labeling a large graph boils down to solving large and hard optimization problems. Inother domains, it is often very hard for human raters to correctly label complex networks. Oneapproach to this problem could have been to resize graphs into a homogeneous size. This is thestrategy taken in computer vision, where it is well understood how to resize an image while keepingits content. Unfortunately, there are no effective resizing procedures for graphs. It would thereforebe extremely valuable to develop techniques that can generalize from training on small graphs.As we discuss below, a theoretical analysis of size generalization is very challenging because itdepends on several different factors, including the task, the architecture, and the data. For tasks,we argue that it is important to distinguish two types of tasks, local andglobal . Local tasks can besolved by GNNs whose depth does not depend on the size of the input graph. For example, the taskof finding a constant-size pattern. Global tasks require that the depth of the GNN grows with thesize of the input graph. For example, calculating the diameter of a graph. While there are a fewprevious works that explore depth-dependant GNNs (e.g., Tang et al. 
(2020), constant depth GNNsare by far the most widely used GNN models today and are therefore the focus of this paper.In this paper, we focus on GNNs with constant depth and study the ability of the most expressivemessage passing neural networks (Xu et al., 2018; Morris et al., 2019) to generalize to unseensizes. Our key observation is that generalization to graphs of different sizes is strongly related to thedistribution of patterns around nodes in the graphs of interest. These patterns, dubbed d-patterns(wheredis the radius of the local neighborhood), describe the local feature-connectivity structurearound each node, as seen by message-passing neural networks and are defined in Section 3.We study the role of d-patterns both empirically and theoretically. First, we theoretically showthat when there is a significant discrepancy between the d-pattern distributions, GNNs have multipleglobal minima for graphs of a specific size range, out of which only a subset of models can generalizewell to larger graphs. We complement our theoretical analysis with an experimental study andshow that GNNs tend to converge to non-generalizing global minima, when d-patterns from thelarge graph distribution are not well-represented in the small graph distribution. Furthermore wedemonstrate that the size generalization problem is accentuated in deeper GNNs.Following these observations, in the final part of this paper, we discuss two learning setups that helpimprove size-generalization by formulating the learning problem as a domain adaptation problem:(1) Training the GNNs on self-supervised tasks aimed at learning the d-pattern distribution of boththe target (large graphs) and source (small graphs) domains. We also propose a novel SSL task thataddresses over-fitting of d-patterns. (2) A semi-supervised learning setup with a limited number oflabeled examples from the target domain. The idea behind both setups is to promote convergenceof GNNs to local/global minima with good size generalization properties. We show that both setupsare useful in a series of experiments on synthetic and real data.To summarize, this paper makes the following contributions. (1) We identify a size generalizationproblem when learning local tasks with GNNs and analyze it empirically and theoretically. (2) Welink the size-generalization problem with the distribution of d-patterns and suggest to approach itas a domain adaptation problem (3) We empirically show how several learning setups help improvesize generalization.2 R ELATED WORKSize generalization in set and graph learning. Several papers observed successful generalizationacross graph sizes, but the underlying reasons were not investigated (Li et al., 2018; Maron et al.,2018; Luz et al., 2020). More recently, (Veli ˇckovi ́c et al., 2019) showed that when training GNNsto perform simple graph algorithms step by step they generalize better to graphs of different sizes.Unfortunately, such training procedures cannot be easily applied to general tasks. Knyazev et al.(2019) studied the relationship between generalization and attention mechanisms. Tang et al. (2020)observed two issues that can harm generalization: (1) There are tasks for which a constant number oflayers is not sufficient. (2) Some graph learning tasks are homogeneous functions. They then suggesta new GNN architecture to deal with these issues. Our work is complementary to these works as itexplores another fundamental size generalization problem, focusing on constant depth GNNs. 
Formore details on the distinction between constant depth and variable depth tasks see Appendix A.Several works also studied size generalization and expressivity when learning set-structured inputs(Zweig and Bruna, 2020; Bueno and Hylton, 2020). On the more practical side, Joshi et al. (2019),2Under review as a conference paper at ICLR 2021Joshi et al. (2020) study the combinatorial problem of traveling salesman and whether it is possibleto generalize to larger sizes. Corso et al. (2020) study several multitask learning problems on graphsand evaluate how the performance changes as the size of the graphs change.Expressivity and generalization in graph neural networks. (Xu et al., 2018; Morris et al., 2019)established a fundamental connection between message-passing neural networks and the Weisfeiler-Leman (WL) graph-isomorphism test. We use similar arguments to show that GNNs have enoughexpressive power to solve a task on a set of small graphs and to fail on it on a set of large graphs.Several works studied generalization bounds for certain classes of GNNs (Garg et al., 2020; Punyet al., 2020; Verma and Zhang, 2019), but did not discuss size-generalization. Sinha et al. (2020)proposed a benchmark for assessing the logical generalization abilities of GNNs.3 T HE SIZE GENERALIZATION PROBLEMWe now present the main problem discussed in the paper, that is, what determines if a GNN gener-alizes well to graphs of sizes not seen during training. We start with a simple motivating exampleshowing the problem on single layer GNNs. We then show that the question of size generalizationactually depends on d-patterns , the local patterns of connectivity and features of the graphs, and notonly on their actual size.Setup. We are given two distributions over graphs P1;P2that contain small and large graphs ac-cordingly, and a task that can be solved with 0 error for all graph sizes using a constant depth GNN.We train a GNN on a training set Ssampled i.i.d from P1and study its performance on P2.GNN model. We focus on the first order GNN (1-GNN) architecture from Morris et al. (2019)defined in the following way:h(t)v=0@W(t)2h(t1)v +Xu2N(v)W(t)1h(t1)u +b(t)1A:Here, h(t)vis the feature vector of node vaftertlayers,W(t)1;W(t)22Rdt1dt;b(t)2Rdtdenotesthe parameters of the t-th layer of the GNN, and is some non-linear activation (e.g ReLU). Itwas shown in Morris et al. (2019) that GNNs composed from these layers have maximal expressivepower with respect to all message-passing neural networks. In the experimental section we alsoexperiment with Graph Isomorphism Network (GIN) (Xu et al., 2018). For further details on GNNssee Appendix A. In this work we use the most expressive GNN variants that use the ”sum” aggre-gation function. Using a ”max” or ”mean” reduces the expressive power of the network, makingit not powerful enough to solve simple counting problems (e.g. counting edges or computing nodedegrees). On the other hand, these networks give rise to slightly different definitions of patterns andcan generalize better in some cases as shown in (Veli ˇckovi ́c et al., 2019), yet still suffer from sizeoverfit. Exploring these networks is beyond the scope of this work.3.1 S IZE GENERALIZATION IN SINGLE -LAYER GNN SWe start our discussion on size generalization with a theoretical analysis of a simple setup. 
Weconsider a single-layer GNN and an easy task and show that: (1) The training objective has manydifferent solutions, but only a small subset of these solutions generalizes to larger graphs (2) Simpleregularization techniques cannot mitigate the problem. This subsection serves as a warm-up for thenext subsections that contain our main results.Assume we train on a distribution of graphs with a fixed number of nodes nand a fixed num-ber of edges m. Our goal is to predict the number of edges in the graph using a 1-GNNwith a single linear layer and additive readout function, for simplicity also consider the squaredloss. The objective boils down to the following function for any graph Gin the training set::L(w1;w2;b;G) =Pu2V(G)w1xu+Pv2N(u)w2xv+by2. Here,Gis an inputgraph,V(G)are the nodes of G,N(v)are all the neighbors of node v,w1;w2andbare the train-able parameters, yis the target ( min this case) and xvis the node feature for node v. Further,assume that we have no additional information on the nodes, so we can just embed each node as a3Under review as a conference paper at ICLR 2021one-dimensional feature vector with a fixed value of 1. In this simple case, the trainable parametersare also one-dimensional. We note that the training objective can also be written in the follow-ing formL(w1;w2;b;G) = (nw1+ 2mw 2+nbm)2, and that one can easily find its solutionsspace, which is an affine subspace defined by w2=mn(w1+b)2m. In particular, the solutions withb+w1= 0; w 2= 1=2are the only ones which do not depend on the specific training set graph sizen, and generalize to graphs of any size. It can be readily seen that when training the model on graphsof fixed size (fixed m;n ), gradient descent will have no reason to favor one solution over anotherand we will not be able to generalize. We also note that the generalizing solution is not always theleast norm solution (with respect to both L1andL2norms) so simple regularization will not helphere. On the other hand, it is easy to show that training on graphs with different number of edgeswill favor the generalizing solution. As we will see next, the problem gets worse when consideringGNNs with multiple non-linear layers, and this simple solution will not help in this case: we cantrain deeper GNNs on a wide variety of sizes and the solution will not generalize to other sizes.3.2d-PATTERNSWe wish to understand theoretically when does a GNN which was trained on on graphs with a smallnumber of nodes can generalize to graphs with a large number of nodes. To answer that question,we first analyze what information is received by each node in the graph from its neighboring nodesafter a graph is processed by a GNN with Tlayers. It is easy to see that every node can receiveinformation about its neighbors which are at most Thops away. We also know that nodes do nothave full information about their order Tenvironment. For example, GNNs cannot determine if atriangle is present in a neighborhood of a given node Chen et al. (2020). In order to characterizethe exact information that can be found in each node after a Tlayer GNN, we use the definition ofthe WL test, specifically its iteration structure, which has the same representational power as GNNs(see Xu et al. (2018); Morris et al. (2019)), For more details on the WL test see Appendix A.Definition 3.1 (d-patterns) .LetCbe a finite set of node features and N2N. Ford0we definethe set ofd-patternsPdon graphs with maximal degree Nand node features from C. The definitionis recursive in the following way: For d= 0,P0=C. 
We definePdto be the set of all tuples (a;b)wherea2Pd1andbis in multisets of size at most Nconsisting of elements from Pd1.LetG= (V;E)be a graph with maximal degree Nand a node feature cv2Cfor every nodev2V. We define the d-pattern of a node v2Vford0recursively: For d= 0, its0-pattern iscv. Ford > 0we say that vwith`neighboring d1patterns has a d-patternp=(pv;f(pi1;mpi1);:::; (pi`;mpi`)g)iff nodevhas(d1)-patternpvand for every j2f1;:::;`gthe number of neighbors of vwith(d1)-patternpijis exactlympij.Thed-pattern of a node is an encoding of the (d1)-patterns of itself and of its neighbors. Forexample, assume a graph has a maximal degree of Nand all the nodes start with the same nodefeature. The 1-pattern of each node is its degree. The 2-pattern of each node is for each possibledegreei2f1;:::;Ngthe number of neighbors with degree i. In the same manner, the 3-patternof a node is for each possible 2-pattern, the number of its neighbors with this exact 2-pattern. Thedefinition of d-patterns can naturally be extended to the case of unbounded degrees. We have thefollowing theorem which connects the d-patterns with the expressive power of GNNs:Theorem 3.2. Any function that can be represented by a d-layer GNN is constant on d-patterns.In particular, the theorem shows that for any two graphs (of any size) and two nodes, one in eachgraph, if the nodes have the exact same d-pattern, then any d-layer GNN will output the same resultfor the two nodes. The full proof can be found in Appendix B, and follows directly from the analogybetween the WL algorithm (see Appendix A) and d-patterns. Thm. 3.2 implies that d-patterns don’trepresent more expressive power than GNN. In the next subsection, we prove that GNNs can exactlycomputed-patterns, and show that this capacity is tightly related to size generalization. It is alsoeasy to see from the definition of d-patterns and the proof of Theorem 2 from Morris et al. (2019)thatd-patterns exactly represent the expressive power of GNNs (with additive aggregation), thus thisdefinition is a natural tool to study the properties of GNNs, such as size generalization.3.3 GNN S MAY OVERFIT d-PATTERNSWe can now connect the size generalization problem to the concept of d-patterns. We start with anexample: consider a node prediction task in which an output is specified for each node in an input4Under review as a conference paper at ICLR 2021graph, and is solvable by a d-layer GNN. To perfectly solve this task, the model should produce thecorrect output for the d-pattern of all the nodes in the training set. Testing this GNN on a differentset of graphs will succeed if the test set has graphs with similar d-patterns to those in the trainingset. Note that this requirement is not related to the sizeof the graphs but to the distribution of thed-patterns of the nodes in the test set.In the following theorem we show rigorously, that given a set of d-patterns and output for each suchpattern, there is an assignment of weights to a GNN with O(d)layers that perfectly fits the outputfor each pattern. We will then use this theorem in order to show that, under certain assumptions onthe distribution of d-patterns of the large graphs, GNNs can perfectly solve a task on a set of smallgraphs, and completely fail on a set on large graphs. In other words, we show that there are multipleglobal minima for the training objective that do not generalize to larger graphs.Theorem 3.3. 
LetCbe a finite set of node features, Pbe a finite set of d-patterns on graphswith maximal degree N2N, and for each pattern p2Pletyp2[1;1]be some target label.Then there exists a 1-GNN Fwithd+ 2layers, width bounded by maxn(N+ 1)djCj;2pjPjoand ReLU activation such that for every graph Gwith nodesv1;:::;vn, corresponding d-patternsp1;:::;pnPand node features from C, the output of Fon nodeviis exactlyypi.The full proof is in Appendix B. Note that the width of the required GNN from the theorem is notvery large if dis small, where drepresents the depth of the 1-GNN. In practice, shallow GNNs arevery commonly used and are proven empirically successful, while training deep GNNs was shownto be hard due to many problems like over-smoothing (Zhao and Akoglu, 2019).Using the above theorem we can claim that there are assignments of weights to GNN that cannot”size-generalize”, that is, given a specific task, the GNN succeeds on the task for small graphs (upto some bound) and fails on larger graphs, as long as there is a notable discrepancy between theird-patterns distributions:Corollary 3.4. LetP1andP2be distributions of small and large graphs respectively with finitesupport, and let Pdpat1 be the distribution of d-patterns over small graphs and similarly Pdpat2for large graphs. For any node prediction task which is solvable by a 1-GNN with depth dand>0there exists a 1-GNN with depth at most d+ 2that has 0-1 loss smaller then onP1and 0-1 loss onP2, where() = maxA:Pdpat1(A)<Pdpat2 (A): (1)Here,Ais a set ofd-patterns and P(A)is the total probability mass for that set under P.Intuitively, large means that there exists a set of d-patterns that have a low probability for smallgraphs and high probability for large graphs. Corollary 3.4 implies that the major factor in thesuccess of GNN to generalize to larger graphs is not the graph size, but the distribution of the d-patterns. Different distributions of d-patterns lead to large and thus to bad generalization to largergraphs. On the other hand, from Thm. 3.2 we immediately get that similar distributions of d-patternsimply that every GNN model that succeeds on small graphs will also succeed on large graphs, sinceGNNs are constant on d-patterns:Corollary 3.5. In the setting of Corollary 3.4, also assume that all the patterns that have a positiveprobability in Pdpat2 also have a positive probability in Pdpat1 . Then, for any node prediction tasksolvable by a depth dGNN, any 1GNN that have 0loss (w.r.t the 01loss) onP1will also have0loss onP2.Examples. Corollary 3.4 shows that even for simple tasks, GNN may fail, here are two simpleexamples. (i) Consider the task of calculating the node degree. From Corollary 3.4 there is a GNNthat successfully output the degree of nodes with max degree up to Nand fails on nodes with largerdegrees. Note that this problem can easily be solved for any node degree with a 1-layer GNN. (ii)Consider some node regression task, when the training set consists of graphs sampled i.i.d from anErdos-Renyi graph G(n;p)1, and the test set contains graphs sampled i.i.d from G(2n;p). In thiscase, a GNN trained on the training set will be trained on graphs with an average degree np, whilethe test set contains graphs with an average degree 2np. This means that the d-patterns in the trainand test set are very different, and by Corollary 3.4 the GNN may overfit.1Graphs with nnodes such that each edge exists with probability p.5Under review as a conference paper at ICLR 2021Graph prediction tasks. 
Our theoretical results discuss node prediction tasks. We note that theyare also relevant for graph prediction tasks where there is a single output to each input graph. Thereason for that is that in order to solve graph prediction tasks, a GNN first calculates node featuresand then pools them into a single global graph feature. Our analysis shows that the first part of theGNN, which is responsible for calculating the node features, might not generalize to large graphs.As a result, the GNN will generate an uninformative global graph feature and the GNN will fail onthe original graph prediction task. In the experimental sections, we show that the size generalizationproblem is indeed relevant for both node and graph prediction tasks. Here is a formal statementregarding graph prediction tasks, the full proof can be found in Appendix B.Corollary 3.6. LetP1andP2be distributions of small and large graphs respectively with finitesupport. Let Pdpat1 be the distribution of d-patterns over small graphs and similarly Pdpat2 forlarge graphs, and assume that the supports of Pdpat1 andPdpat2 are disjoint. For any graphprediction task solvable by a 1-GNN with depth dand summation readout function, there exists a1-GNN with depth at most d+ 3that perfectly solves the task on P1and fails on all graphs from P2.Relation to Morris et al. (2019); Xu et al. (2018). We note that Theorem 3.3 and Corollary 3.4 aresomewhat related to the expressivity results in (Xu et al., 2018; Morris et al., 2019) that show thatGNNs can be as powerful as the WL test. Here, we show that the expressive power of GNNs cancause negative effects when there is a discrepancy between the training and test sets.3.4 E MPIRICAL VALIDATIONIn the previous subsection we have shown that for any node task, and any two datasets of graphswith different sizes that significantly differ in their d-patterns distributions, there is a 1-GNN thatsuccessfully solves the task on one dataset but fails on the second. In this subsection, we showempirically that reaching these ”overfitting” GNNs is actually very common. Specifically, the size-overfit phenomenon is prevalent when the d-patterns of in the large graph distribution are not foundin the small graph distribution. We also show that GNNs can generalize to larger graphs if thedistribution of d-patterns remains similar to the distribution of patterns in the small graphs.To show this, we use a controlled regression task in a student-teacher setting. In this setting, wesample a ”teacher” GNN with random weights, freeze the network, and label each graph in thedataset using the output of the ”teacher” network. Our goal is to train a ”student” network, whichhas the same architecture as the ”teacher” network, to fit the labels of the teacher network. Theadvantages of this setting are two-fold: (1) A solution is guaranteed to exist : We know that there is aweight assignment of the student network which perfectly solve the task for graphs of any size. (2)Generality : It includes all tasks solvable by constant depth GNNs. We discuss more settings below.Architecture and training protocol. We use 1-GNN as defined in (Morris et al., 2019). The numberof GNN layers in the network we use is either 1;2or3; the width of the teacher network is 32, andof the student network 64, providing more expressive power to the student network. We obtainedsimilar results when testing with a width of 32, same as the teacher network. We use a summationreadout function followed by a two-layer fully connected suffix. 
We use ADAM with learning rate103. We performed a hyper-parameters search on the learning rate and weight decay and usevalidation-based early stopping on the source domain (small graphs). The results are averaged over10 random seeds. All runs used Pytorch Geometric (Fey and Lenssen, 2019) on NVIDIA DGX-1.Results. Fig. 1 compares the loss of GNNs as the distribution of d-patterns changes, for the taskof teacher-student graph level regression. The model was trained on graphs generated using theG(n;p)model. We show the normalized L2loss computed on test, where output is normalized bythe average test-set (target) output. The left panel shows the test loss when training on n2[40;50]andp= 0:3and testing on G(n;p)graphs with n= 100 andpvarying from 0:05to0:5. In thisexperiment, the expected node degree is np, hence the distribution of d-patterns is most similar tothe one observed in the training set when p= 0:15. Indeed, this is the value of pwhere the testloss is minimized. The right panel is discussed in the caption. These results are consistent withCorollary 3.4, since when the distributions of d-patterns are far the model is not able to generalizewell, and it does generalize well when these distributions are similar.To give further confirmation to the effect of the local distributions on the generalization capabilitiesof GNN we conducted the following two experiments: (1) We tested on the teacher-student setupwith a 3-layer GNN on graphs of sizes uniformly from n= 40;:::; 50and sampled from G(n;0:3).6Under review as a conference paper at ICLR 20210.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50p of test G(n,p)32101234Average loss (log scale)one layertwo layersthree layers60 80 100 120 140Train graph size43210123Average loss (log scale)one layertwo layersthree layersFigure 1: The effect of graph size and d-pattern distribution on generalization in G(n,p) graphs.(left) The effect of distribution of d-patterns . Train onndrawn uniformly from [40;50]andp= 0:3test onn= 100 and varying p;(right) The effect of train-graph size. Train onndrawnuniformly from [40;x]wherexvaries andp= 0:3; test onn= 150 ,p= 0:3.60 80 100 120 140Test graph size432101234Average loss (log scale)50 75 100 125 150 175 200 225 250Test graph size432101234Average loss (log scale)Figure 2: Constant training size and varying test size with and without normalization. Left: constantpwhich leads to different d-patterns. Right :pis normalized to keep d-patterns distribution similar.We tested on graphs sampled from G(N;0:3)whereNvaries from 50up to 150. It is evident thatas the graph size in the test set increases, the model performs worse. (2) We did the same test as in(1), but this time we normalize pon the test set so that pN= 15 , which is the approximate ratiofor the training set. Here we even went further to train up to sizes N= 250 . In this experiment, theGNN successfully generalized to larger graphs, since the local distributions of the train and test setare indeed very similar. For the results see Fig. 2.We also tested on the tasks of finding the max clique in a graph, calculating the number of edges,and the node prediction tasks of calculating the node degree, and the student-teacher task at the nodelevel. In addition, we tested on the popular GIN architecture (Xu et al., 2018), and show that the sizegeneralization problem also occurs there. 
We also tested on ReLU, Tanh, and sigmoid activations.See additional experiments in Appendix C.4 T OWARDS IMPROVING SIZE -GENERALIZATIONThe results from the previous section show that the problem of size generalization is not only relatedto the size of the graph in terms of the number of nodes or edges but to the distribution of d-patternsinduced by the distributions from which the graphs are sampled. Based on this observation, we nowformulate the size generalization problem as a domain adaptation (DA) problem. We then build ontechniques from domain adaptation and suggest two approaches to improve size generalization. (1)Self-supervised learning on the target domain (large graphs) and (2) Semi-supervised learning witha few labeled target samples. We consider the DA setting where we are given two distributions overgraphs: a source distribution DS(say, for small graphs) and a target distribution DT(say, for largegraphs). We consider two settings. First, the unlabeled DA setting, where we have access to labeledsamples from the source DSbut the target data from DTis unlabeled. Our goal is to infer labels on7Under review as a conference paper at ICLR 2021a test dataset sampled from the target DT. Second, we consider a semi-supervised setup, where wealso have access to a small number of labeled examples from the target DT.Size generalization with Self-supervised learning. In Self-supervised learning (SSL) for DA,a model is trained on unlabeled data to learn a pretext task, which is different from the maintask at hand. If the pretext task is chosen wisely, the model learns useful representations (Doer-sch et al., 2015; Gidaris et al., 2018) that can help with the main task. Here, we train the pre-text task on both the source and target domains, as was done for images and point clouds (Sunet al., 2019; Achituve et al., 2020). The idea is that the pretext task aligns the representationsof the source and target domains leading to better predictions of the main task for target graphs.Figure 3: Left: a graph withnode features represented bycolors. Right: A tree that rep-resents thed-patterns for theblack node. The tree descrip-tor is the number of nodesfrom each class in each layerof the tree.fFor a detailed review of the training procedures and the losses seeAppendix E. Given a pretext task we consider two different train-ing procedures: (1) Multi-task learning (MTL) : parallel trainingof the main task on a source domain and a pretext task on both thesource of target domain (You et al., 2020). In this case, the architec-ture consists of a main GNN that acts as a feature extractor and twosecondary networks (heads) that operate on the extracted featuresand try to predict the main task and the pretext task. (2) Pretraining(PT) : in this procedure (Hu et al., 2019), the GNN feature extractoris trained until convergence on the pretext task on both the sourceand target examples. Then, the GNN part is frozen, and only thehead of the model is trained on labeled examples from the source.Pattern-tree pretext task. We propose a novel pretext task whichis motivated by the definition of d-patterns. We do that by con-structing a tree that fully represents the d-patterns of each node (seee.g., Xu et al. (2018)). We then calculate a descriptor of the tree, which is a vector containing countsof the number of nodes from each class in each layer of the tree. We treat this descriptor as the targetlabel to be reconstructed by the SSL task. For more details see Figure 3. 
Intuitively, in order to be successful at a task on graphs, the GNN needs to correctly represent the pattern trees of the nodes of the graphs. This means that to generalize to the target domain $D_T$, the GNN needs to be forced to represent pattern trees from both the source and the target distributions. For more details about the construction of the pattern tree see Appendix D. In short, each tree corresponds to a d-pattern in the following way: the d-pattern tree of a node can be seen as a multiset of the children of the root, each child in turn being a multiset of its children, and so on. The pattern tree is thus a different description of the d-pattern of a node. This means that a GNN that successfully represents a pattern tree also represents its corresponding d-pattern, connecting this SSL task to the theory from Sec. 3.

Semi-supervised setup. We also consider the case where a small number of labeled samples are available from the target domain. A natural approach is to train an SSL pretext task on samples from both the source and target domains, and to train the main task on all the labeled samples available. We tested this setup with 1, 5, or 10 labeled examples from the target domain.

4.1 EXPERIMENTS

Architecture and training protocol. The setup is the same as in Subsection 3.4 with the following changes. We use a three-layer GNN in all experiments. Multi-task learning is used with equal weights for the main and SSL tasks. In the semi-supervised setup, we used equal weights for the main task and the labeled examples from the target domain.

Baselines. We compare our new pretext task to the following baselines: (1) Vanilla: standard training on the source domain; (2) HomoGNN (Tang et al., 2020): a homogeneous GNN without the bias term, trained on the source domain; (3) Graph autoencoder (GAE) pretext task (Kipf and Welling, 2016); (4) Node masking (NM) pretext task from Hu et al. (2019), where at each training iteration we mask 10% of the node features and the goal is to reconstruct them; in case the graph does not have node features, the task is to predict the degree of the masked nodes. (5) Node metric learning (NML): we use metric learning to learn useful node representations. We use a corruption function that, given a graph and a corruption parameter $p \in [0,1]$, replaces $p|E|$ of the edges with random edges, and thus can generate positive ($p = 0.1$) and negative ($p = 0.3$) examples for all nodes of the graph. We train with the triplet loss (Weinberger and Saul, 2009).
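As an illustration of the multi-task procedure, here is a minimal PyTorch Geometric sketch of one training step: a shared GNN feature extractor with two heads, trained jointly on the main task (labeled source graphs) and the pattern-tree pretext task (both domains) with equal weights. The attribute names `batch.y` (graph labels) and `batch.descriptor` (per-node pattern-tree targets) are assumptions of the sketch.

```python
# Minimal sketch of one MTL step: main task on source, pretext on both domains.
import torch
import torch.nn.functional as F
from torch_geometric.nn import global_mean_pool

def mtl_step(encoder, main_head, ssl_head, opt, src_batch, tgt_batch):
    h_src = encoder(src_batch.x, src_batch.edge_index)        # node embeddings
    g_src = global_mean_pool(h_src, src_batch.batch)          # graph embeddings
    main_loss = F.binary_cross_entropy_with_logits(
        main_head(g_src).squeeze(-1), src_batch.y.float())

    ssl_loss = 0.0
    for b in (src_batch, tgt_batch):                          # both domains
        h = encoder(b.x, b.edge_index)
        ssl_loss = ssl_loss + F.mse_loss(ssl_head(h), b.descriptor)

    loss = main_loss + ssl_loss                               # equal task weights
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)
```

The pretraining (PT) variant would instead optimize only the pretext loss on both domains until convergence, then freeze the encoder and fit the main head on labeled source graphs.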
| Method | Deezer | IMDB-B | NCI1 | NCI109 | Proteins | Twitch | D&D | Average |
|---|---|---|---|---|---|---|---|---|
| Small graphs | 56.5±0.8 | 63.2±3.3 | 75.5±1.6 | 78.4±1.4 | 75.4±3.1 | 69.7±0.2 | 71.1±4.4 | 70.0% |
| Vanilla | 41.1±6.8 | 55.9±7.8 | 65.9±4.3 | 68.9±3.8 | 76.0±8.5 | 60.5±3.6 | 76.3±3.2 | 63.5% |
| HomoGNN | 40.5±6.6 | 56.3±7.0 | 66.0±3.7 | 68.8±3.2 | 77.1±10 | 60.8±2.3 | **76.8±3** | 63.8% |
| NM MTL | **51.6±8.5** | 55.6±6.8 | 49.9±7.8 | 61.7±5.7 | 78.8±8.4 | 49.5±2.8 | 67.4±5.4 | 59.2% |
| NM PT | 50.1±7.5 | 54.9±6.7 | 51.7±6.6 | 55.8±5.0 | 78.2±8.2 | 48.4±4.0 | 60.3±15.9 | 57.1% |
| GAE MTL | 49.4±11.0 | 55.5±6.0 | 51.2±9.9 | 57.6±9.4 | 79.5±11.7 | 62.5±5.1 | 67.8±10.0 | 60.5% |
| GAE PT | 47.1±10.0 | 54.1±6.8 | 58.9±7.6 | 67.2±5.6 | 70.5±9.4 | 53.6±4.7 | 69±7.1 | 60.1% |
| NML MTL | 46.4±9.5 | 54.4±7.0 | 52.3±6.3 | 56.2±6.5 | 78.7±6.8 | 57.4±4.1 | 64.7±11.9 | 58.6% |
| NML PT | 48.4±10.7 | 53.8±6.1 | 54.6±6.2 | 56.1±8.1 | 76.3±8.0 | 54.9±4.7 | 61.4±15.1 | 57.9% |
| Pattern MTL | 45.6±8.8 | 56.8±9.2 | 60.5±7.5 | 67.9±7.2 | 75.8±11.1 | 61.6±3.5 | **76.8±3** | 63.6% |
| Pattern PT | 44±7.7 | **61.9±3.2** | **67.8±11.7** | **74.8±5.7** | **84.7±5.1** | **64.5±3.3** | 74.9±5.2 | 67.5% |

Table 1: Test accuracy of compared methods in binary classification tasks; best result per dataset in bold. The Pattern task with pretraining achieves the highest accuracy in most tasks and has a 4% higher average accuracy than the second-best method. The high variance is due to the domain shift between the source and target domains.

Datasets. We use datasets from Morris et al. (2020) and Rozemberczki et al. (2020) (Twitch egos and Deezer egos). We selected datasets that have a sufficient number of graphs (more than 1,000) and a non-trivial split into small and large graphs, as detailed in Appendix F.1. In total we used 7 datasets: 4 from molecular biology (NCI1, NCI109, D&D, Proteins) and 3 social networks (Twitch ego nets, Deezer ego nets, IMDB-Binary). In all datasets, the 50% smallest graphs were assigned to the training set, and the largest 10% of graphs were assigned to the test set. We further split off a random 10% of the small graphs as a validation set.

[Figure 4: Mean accuracy over all datasets in Tab. 1 for d-pattern pretraining (Pattern PT) and no SSL (Vanilla), as a function of the number of labeled large-graph examples (0, 1, 5, 10). Annotated values: Pattern PT 67.03, 67.5, 68.9, 70.47; Vanilla 63.57, 66.47, 68.74, 70.62.]

Results. Table 1 compares the effect of using the pattern-tree pretext task to the baselines described above. The "Small graphs" row presents vanilla results on a validation set of small graphs. The small-graph accuracy on 5 out of 7 datasets is larger by 7.3%-15.5% than on large graphs, indicating that the size-generalization problem is indeed prevalent in real datasets. Pretraining with the d-patterns pretext task outperforms the other baselines on 5 out of 7 datasets, with a 4% higher average accuracy across all datasets. HomoGNN slightly improves over the vanilla baseline, while the other pretext tasks do not improve average accuracy. Naturally, the accuracy here is much lower than SOTA on these datasets, because the domain shift makes the problem much harder. In Appendix F.2 we show the 1-pattern distribution discrepancy between large and small graphs in two real datasets: IMDB (large discrepancy) and D&D (small discrepancy). Correspondingly, the pattern-tree SSL task improved performance on the IMDB dataset, while not improving performance on the D&D dataset. This gives further evidence that a discrepancy between the d-patterns leads to bad generalization, and that correctly representing the patterns of the test set can improve performance.

Figure 4 compares the performance of vanilla training versus pretraining with the pattern-tree pretext task in the semi-supervised setup. The accuracy increases monotonically with the number of labeled examples in both cases. Moreover, pretraining with the pretext task yields better results with 0, 1, or 5 labeled examples, and comparable results with 10 labeled examples. We additionally tested on the synthetic tasks discussed in Sec. 3, and show that the pattern-tree pretext task improves performance in the student-teacher setting, while it does not solve the edge-count or degree-prediction tasks. On the other hand, adding even a single labeled sample from the target distribution significantly improves performance on the synthetic tasks we tested. For more details see Sec. F.

5 CONCLUSION AND DISCUSSION

This paper is a first step towards understanding the size-generalization problem in graph neural networks. We showed that GNNs do not naturally generalize to larger graphs even on simple tasks, characterized how this failure depends on d-patterns, and suggested two approaches that can improve generalization.
Our characterization of d-patterns is likely to have implications for other problems where generalization is harmed by distribution shifts, and offers a way to mitigate those problems.
* Top of page 8: “does have node features” should be “does *not* have node features”.
* Page 8, “Datasets” paragraph says “and the and 10% largest graphs.” I assume this should read “and the largest 10% of graphs…”
* “The” should be capitalized in the last paragraph (“Results”) at the beginning of the sentence “The accuracy monotonically increases…”
kBVJ2NtiY-
ICLR.cc/2021/Conference
2021
Learning What To Do by Simulating the Past
["David Lindner", "Rohin Shah", "Pieter Abbeel", "Anca Dragan"]
Since reward functions are hard to specify, recent work has focused on learning policies from human feedback. However, such approaches are impeded by the expense of acquiring such feedback. Recent work proposed that agents have access to a source of information that is effectively free: in any environment that humans have acted in, the state will already be optimized for human preferences, and thus an agent can extract information about what humans want from the state. Such learning is possible in principle, but requires simulating all possible past trajectories that could have led to the observed state. This is feasible in gridworlds, but how do we scale it to complex tasks? In this work, we show that by combining a learned feature encoder with learned inverse models, we can enable agents to simulate human actions backwards in time to infer what they must have done. The resulting algorithm is able to reproduce a specific skill in MuJoCo environments given a single state sampled from the optimal policy for that skill.
["imitation learning", "reward learning", "reinforcement learning"]
ABSTRACT

Since reward functions are hard to specify, recent work has focused on learning policies from human feedback. However, such approaches are impeded by the expense of acquiring such feedback. Recent work proposed that agents have access to a source of information that is effectively free: in any environment that humans have acted in, the state will already be optimized for human preferences, and thus an agent can extract information about what humans want from the state (Shah et al., 2019). Such learning is possible in principle, but requires simulating all possible past trajectories that could have led to the observed state. This is feasible in gridworlds, but how do we scale it to complex tasks? In this work, we show that by combining a learned feature encoder with learned inverse models, we can enable agents to simulate human actions backwards in time to infer what they must have done. The resulting algorithm is able to reproduce a specific skill in MuJoCo environments given a single state sampled from the optimal policy for that skill.

1 INTRODUCTION

As deep learning has become popular, many parts of AI systems that were previously designed by hand have been replaced with learned components. Neural architecture search has automated architecture design (Zoph & Le, 2017; Elsken et al., 2019), population-based training has automated hyperparameter tuning (Jaderberg et al., 2017), and self-supervised learning has led to impressive results in language modeling (Devlin et al., 2019; Radford et al., 2019; Clark et al., 2020) and reduced the need for labels in image classification (Oord et al., 2018; He et al., 2020; Chen et al., 2020). However, in reinforcement learning, one component continues to be designed by humans: the task specification. Handcoded reward functions are notoriously difficult to specify (Clark & Amodei, 2016; Krakovna, 2018), and learning from demonstrations (Ng et al., 2000; Fu et al., 2018) or preferences (Wirth et al., 2017; Christiano et al., 2017) requires a lot of human input. Is there a way that we can automate even the specification of what must be done?

It turns out that we can learn part of what the user wants simply by looking at the state of the environment: after all, the user will already have optimized the state towards their own preferences (Shah et al., 2019). For example, when a robot is deployed in a room containing an intact vase, it can reason that if its user wanted the vase to be broken, it would already have been broken; thus she probably wants the vase to remain intact.

However, we must ensure that the agent distinguishes between aspects of the state that the user couldn't control and aspects that the user deliberately designed. This requires us to simulate what the user must have done to lead to the observed state: anything that the user put effort into in the past is probably something the agent should do as well. As illustrated in Figure 1, if we observe a Cheetah balancing on its front leg, we can infer how it must have launched itself into that position. Unfortunately, it is unclear how to simulate these past trajectories that lead to the observed state. So far, this has only been done in gridworlds, where all possible trajectories can be considered using dynamic programming (Shah et al., 2019).

Our key insight is that we can sample such trajectories by starting at the observed state and simulating backwards in time.
To enable this, we derive a gradient that is amenable to estimation through backwards simulation, and learn an inverse policy and inverse dynamics model using supervised learning to perform the backwards simulation. Then, the only remaining challenge is finding a reward representation that can be meaningfully updated from a single state observation. To that end, rather than defining the reward directly on the raw input space, we represent it as a linear combination of features learned through self-supervised representation learning. Putting these components together, we propose the Deep Reward Learning by Simulating the Past (Deep RLSP) algorithm.

*Work done at the Center for Human-Compatible AI, UC Berkeley.

[Figure 1: Suppose we observe a Cheetah balancing on its front leg (left). Given a simulator for the environment, Deep RLSP is able to infer how the cheetah must have acted to end up in this position. It can then imitate these actions in order to recreate this skill. Note that the state contains joint velocities in addition to positions, which makes the task more tractable than this picture might suggest.]

We evaluate Deep RLSP on MuJoCo environments and show that it can recover fairly good performance on the task reward given access to a small number of states sampled from a policy optimized for that reward. We also use Deep RLSP to imitate skills generated using a skill discovery algorithm (Sharma et al., 2020), in some cases given just a single state sampled from the policy for that skill.

Information from the environment state cannot completely replace reward supervision. For example, it would be hard to infer how clean Bob would ideally want his room to be, if the room is currently messy because Bob is too busy to clean it. Nonetheless, we are optimistic that information from the environment state can be used to significantly reduce the burden of human supervision required to train useful, capable agents.

2 METHOD

In this section, we describe how Deep RLSP can learn a reward function for high-dimensional environments given access only to a simulator and the observed state $s_0$.

Notation. A finite-horizon Markov Decision Process (MDP) $\mathcal{M} = \langle \mathcal{S}, \mathcal{A}, \mathcal{T}, r, \mathcal{P}, T \rangle$ contains a set of states $\mathcal{S}$ and a set of actions $\mathcal{A}$. The transition function $\mathcal{T} : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0,1]$ determines the distribution over next states given a state and an action, and $\mathcal{P}$ is a prior distribution over initial states. The reward function $r : \mathcal{S} \to \mathbb{R}$ determines the agent's objective. $T \in \mathbb{Z}^+$ is a finite planning horizon. A policy $\pi : \mathcal{S} \times \mathcal{A} \to [0,1]$ specifies how to choose actions given a state. Given an initial state distribution, a policy and the transition function, we can sample a trajectory $\tau$ by sampling the first state from $\mathcal{P}$, every subsequent action from $\pi$, and every subsequent state from $\mathcal{T}$. We denote the probability distribution over trajectories as $\langle \mathcal{P}, \pi, \mathcal{T} \rangle$ and write $\tau \sim \langle \mathcal{P}, \pi, \mathcal{T} \rangle$ for the sampling step. We will sometimes write a single state $s$ instead of a distribution $\mathcal{P}$ if the initial state is deterministic. The goal of reinforcement learning (RL) is to find a policy $\pi$ that maximizes the expected cumulative reward $\mathbb{E}_{\tau \sim \langle \mathcal{P}, \pi, \mathcal{T} \rangle}\left[\sum_{t=1}^{T} r(s_t)\right]$.

We use $\phi : \mathcal{S} \to \mathbb{R}^n$ to denote a feature function (whether handcoded or learned) that produces a feature vector of length $n$ for every state.
The reward function $r$ is linear over $\phi$ if it can be expressed in the form $r(s) = \theta^\top \phi(s)$ for some $\theta \in \mathbb{R}^n$. We assume that some past trajectory $\tau_{-T:0} = s_{-T} a_{-T} \dots a_{-1} s_0$ produced the observed state $s_0$.

2.1 IDEALIZED ALGORITHM

We first explain what we would ideally do, if we had a handcoded feature function $\phi$ and an enumerable (small) state space $\mathcal{S}$ that affords dynamic programming. This is a recap of Reward Learning by Simulating the Past (RLSP; Shah et al., 2019).

We assume the human follows a Boltzmann-rational policy $\pi_t(a \mid s, \theta) \propto \exp(Q_t(s, a; \theta))$, where the $Q$ values are computed using soft value iteration. Marginalizing over past trajectories yields a distribution over the observed state $p(s_0 \mid \theta) = \sum_{s_{-T} \dots a_{-1}} p(\tau = s_{-T} a_{-T} \dots a_{-1} s_0 \mid \theta)$. We compute the maximum likelihood estimate, $\mathrm{argmax}_\theta \ln p(s_0 \mid \theta)$, via gradient ascent, by expressing the gradient of the observed state as a weighted combination of gradients of consistent trajectories (Shah et al., 2019, Appendix B):

$$\nabla_\theta \ln p(s_0 \mid \theta) = \mathbb{E}_{\tau_{-T:-1} \sim p(\tau_{-T:-1} \mid s_0, \theta)}\left[\nabla_\theta \ln p(\tau \mid \theta)\right] \quad (1)$$

$\nabla_\theta \ln p(\tau \mid \theta)$ is a gradient for inverse reinforcement learning. Since we assume a Boltzmann-rational human, this is the gradient for Maximum Causal Entropy Inverse Reinforcement Learning (MCEIRL; Ziebart et al., 2010). However, we still need to compute an expectation over all trajectories that end in $s_0$, which is in general intractable. Shah et al. (2019) use dynamic programming to compute this gradient in tabular settings.

2.2 GRADIENT AS BACKWARDS-FORWARDS CONSISTENCY

Approximating the expectation. For higher-dimensional environments, we must approximate the expectation over past trajectories $p(\tau_{-T:-1} \mid s_0, \theta)$. We would like to sample from the distribution, but it is not clear how to sample the past conditioned on the present. Our key idea is that just as we can sample the future by rolling out forwards in time, we should be able to sample the past by rolling out backwards in time. Note that by the Markov property we have:

$$p(\tau_{-T:-1} \mid s_0, \theta) = \prod_{t=-T}^{-1} p(s_t \mid a_t, s_{t+1}, \dots, s_0, \theta)\, p(a_t \mid s_{t+1}, a_{t+1}, \dots, s_0, \theta) = \prod_{t=-T}^{-1} p(s_t \mid a_t, s_{t+1}, \theta)\, p(a_t \mid s_{t+1}, \theta)$$

Thus, given the inverse policy $\pi^{-1}_t(a_t \mid s_{t+1}, \theta)$, the inverse dynamics $\mathcal{T}^{-1}_t(s_t \mid a_t, s_{t+1}, \theta)$, and the observed state $s_0$, we can sample a past trajectory $\tau_{-T:-1} \sim p(\tau_{-T:-1} \mid s_0, \theta)$ by iteratively applying $\pi^{-1}$ and $\mathcal{T}^{-1}$, starting from $s_0$. Analogous to forward trajectories, we express the sampling as $\tau_{-T:-1} \sim \langle s_0, \pi^{-1}, \mathcal{T}^{-1} \rangle$. Thus, we can write the gradient in Equation 1 as $\mathbb{E}_{\tau_{-T:-1} \sim \langle s_0, \pi^{-1}, \mathcal{T}^{-1} \rangle}\left[\nabla_\theta \ln p(\tau \mid \theta)\right]$.

Learning $\pi$, $\pi^{-1}$ and $\mathcal{T}^{-1}$. In order to learn $\pi^{-1}$, we must first know $\pi$. We assumed that the human was Boltzmann-rational, which corresponds to the maximum entropy reinforcement learning objective (Levine, 2018). We use the Soft Actor-Critic algorithm (SAC; Haarnoja et al., 2018) to estimate the policy $\pi(a \mid s, \theta)$, since it explicitly optimizes the maximum entropy RL objective. Given the forward policy $\pi(a \mid s, \theta)$ and simulator $\mathcal{T}$, we can construct a dataset of sampled forward trajectories, and learn the inverse policy $\pi^{-1}$ and the inverse dynamics $\mathcal{T}^{-1}$ using supervised learning. Given these, we can then sample $\tau_{-T:-1}$, allowing us to approximate the expectation in the gradient. In general, both $\pi^{-1}$ and $\mathcal{T}^{-1}$ could be stochastic and time-dependent.
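To illustrate this supervised step, here is a minimal PyTorch sketch that fits deterministic point estimates of both inverse models by mean-squared error on transitions $(s_t, a_t, s_{t+1})$ gathered from forward rollouts. The architectures and the deterministic treatment are assumptions of the sketch; as noted above, the true models could be stochastic and time-dependent.

```python
# Minimal sketch of fitting the inverse policy and inverse dynamics.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fit_inverse_models(transitions, state_dim, action_dim, steps=1000):
    """transitions: (S, A, S_next) tensors collected from rollouts of pi."""
    inv_policy = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(),
                               nn.Linear(256, action_dim))
    inv_dyn = nn.Sequential(nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
                            nn.Linear(256, state_dim))
    opt = torch.optim.Adam(list(inv_policy.parameters()) +
                           list(inv_dyn.parameters()), lr=1e-3)
    s, a, s_next = transitions
    for _ in range(steps):
        # pi^{-1}: predict a_t from s_{t+1}; T^{-1}: predict s_t from (a_t, s_{t+1}).
        loss = (F.mse_loss(inv_policy(s_next), a) +
                F.mse_loss(inv_dyn(torch.cat([s_next, a], dim=-1)), s))
        opt.zero_grad(); loss.backward(); opt.step()
    return inv_policy, inv_dyn
```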
Estimating the gradient for a trajectory. We now turn to the term within the expectation, which is the inverse reinforcement learning gradient given a demonstration trajectory $\tau = s_{-T} a_{-T} \dots s_0$. Assuming that the user is Boltzmann-rational, this is the MCEIRL gradient (Ziebart et al., 2010), which can be written as (Shah et al., 2019, Appendix A):

$$\nabla_\theta \ln p(\tau \mid \theta) = \left(\sum_{t=-T}^{0} \phi(s_t)\right) - \mathcal{F}_{-T}(s_{-T}) + \sum_{t=-T}^{-1}\left(\mathbb{E}_{s'_{t+1} \sim \mathcal{T}(\cdot \mid s_t, a_t)}\left[\mathcal{F}_{t+1}(s'_{t+1})\right] - \mathcal{F}_{t+1}(s_{t+1})\right) \quad (2)$$

$\mathcal{F}$ is the expected feature count under $\pi$, that is, $\mathcal{F}_t(s_t) \triangleq \mathbb{E}_{\tau_{t:0} \sim \langle s_t, \pi, \mathcal{T} \rangle}\left[\sum_{t'=t}^{0} \phi(s_{t'})\right]$.

The first term computes the feature counts of the demonstrated trajectory $\tau$, while the second term computes the feature counts obtained by the policy for the current reward function (starting from the initial state $s_{-T}$). Since $r(s) = \theta^\top \phi(s)$, these terms increase the reward of features present in the demonstration and decrease the reward of features under the current policy. Thus, the gradient incentivizes consistency between the demonstration and rollouts from the learned policy.

The last term is essentially a correction for the observed dynamics: if we see that $s_t, a_t$ led to $s_{t+1}$, it corrects for the fact that we "could have" seen some other state $s'_{t+1}$. Since this correction is zero in expectation (and expensive to compute), we drop it for our estimator.

Gradient estimator. After dropping the last term in Equation 2, expanding the definition of $\mathcal{F}$, and substituting into Equation 1, our final gradient estimator is:

$$\nabla_\theta \ln p(s_0 \mid \theta) = \mathbb{E}_{\tau_{-T:-1} \sim \langle s_0, \pi^{-1}, \mathcal{T}^{-1} \rangle}\left[\left(\sum_{t=-T}^{0} \phi(s_t)\right) - \mathbb{E}_{\tau' \sim \langle s_{-T}, \pi, \mathcal{T} \rangle}\left[\sum_{t=-T}^{0} \phi(s'_t)\right]\right] \quad (3)$$

Thus, given $s_0$, $\theta$, $\pi$, $\mathcal{T}$, $\pi^{-1}$, and $\mathcal{T}^{-1}$, computing the gradient consists of three steps:

1. Simulate backwards from $s_0$, and compute the feature counts of the resulting trajectories.
2. Simulate forwards from the $s_{-T}$ of these trajectories, and compute their feature counts.
3. Take the difference between these two quantities.

This again incentivizes consistency, this time between the backwards and forwards trajectories: the gradient leads to movement towards "what the human must have done" and away from "what the human would do if they had this reward". The gradient becomes zero when they are identical.

It may seem like the backwards and forwards trajectories should always be consistent with each other, since $\pi^{-1}$ and $\mathcal{T}^{-1}$ are inverses of $\pi$ and $\mathcal{T}$. The key difference is that $s_0$ imposes constraints on the backwards trajectories, but not on the forward trajectories. For example, suppose we observe an $s_0$ in which a vase is unbroken, and our current hypothesis $\theta$ is that the user wants to break the vase. When we simulate backwards, our trajectory will contain an unbroken vase, but when we simulate forwards from $s_{-T}$, $\pi$ will break the vase. The gradient would then reduce the reward for a broken vase and increase the reward for an unbroken vase.
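To make the three steps concrete, here is a minimal NumPy sketch of the estimator; it is one reading of Equation 3, not the released implementation. The callables `phi`, `policy`, `env_step`, `inv_policy`, and `inv_dynamics` stand in for the learned feature function, the learned models, and the simulator.

```python
# Minimal sketch of the backwards-forwards consistency gradient (Equation 3).
import numpy as np

def deep_rlsp_grad(s0, horizon, phi, policy, env_step, inv_policy, inv_dynamics,
                   n_samples=8):
    grad = np.zeros_like(phi(s0))
    for _ in range(n_samples):
        # 1. Backwards rollout from s0 under the learned inverse models.
        s, feats_back = s0, phi(s0)
        for _ in range(horizon):
            a = inv_policy(s)              # a_t ~ pi^{-1}(. | s_{t+1}, theta)
            s = inv_dynamics(s, a)         # s_t ~ T^{-1}(. | a_t, s_{t+1}, theta)
            feats_back = feats_back + phi(s)
        # 2. Forwards rollout from the hypothesized s_{-T} under pi and T.
        s_fwd, feats_fwd = s, phi(s)
        for _ in range(horizon):
            s_fwd = env_step(s_fwd, policy(s_fwd))
            feats_fwd = feats_fwd + phi(s_fwd)
        # 3. Consistency: what must have happened minus what theta predicts.
        grad += (feats_back - feats_fwd) / n_samples
    return grad                            # ascend: theta <- theta + alpha * grad
```

For multiple observed states, the gradients of the individual states are averaged, as described in Sec. 2.4.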
2.3 LEARNING A LATENT MDP

Our gradient still relies on a feature function $\phi$, with the reward parameterized as $r(s) = \theta^\top \phi(s)$. A natural way to remove this assumption would be to instead allow $\theta$ to parameterize a neural network, which can then learn whatever features are relevant to the reward from the RLSP gradient. However, this approach will not work, because the information contained in the RLSP gradient is insufficient to identify the appropriate features to construct: after all, it is derived from a single state. If we were to learn a single unified reward using the same gradient, the resulting reward would likely be degenerate: for example, it may simply identify the observed state, that is, $R(s) = \mathbb{1}[s = s_0]$.

Thus, we continue to assume that the reward is linear in features, and instead learn the feature function using self-supervised learning (Oord et al., 2018; He et al., 2020). In our experiments, we use a variational autoencoder (VAE; Kingma & Welling, 2014) to learn the feature function. The VAE encodes the states into a latent feature representation, which we can use to learn a reward function if the environment is fully observable, i.e., the states contain all relevant information.

For partially observable environments, recurrent state space models (RSSMs; Karl et al., 2017; Doerr et al., 2018; Buesing et al., 2018; Kurutach et al., 2018; Hafner et al., 2019; 2020) could be used instead. These methods aim to learn a latent MDP by computing the states using a recurrent model over the observations, thus allowing the states to encode the history. For such a model, we can imagine that the underlying POMDP has been converted into a latent MDP whose feature function $\phi$ is the identity. We can then compute gradients directly in this latent MDP.

2.4 DEEP RLSP

Putting these components together gives us the Deep RLSP algorithm (Algorithm 1). We first learn a feature function $\phi$ using self-supervised learning, and then train an inverse dynamics model $\mathcal{T}^{-1}$, all using a dataset of environment interactions (such as random rollouts). Then, we update $\theta$ using Equation 3, and continually train $\pi$ and $\pi^{-1}$ alongside $\theta$ to keep them up to date. The full algorithm also adds a few bells and whistles that we describe next.

Algorithm 1: The Deep RLSP algorithm. The initial dataset of environment interactions $D$ can be constructed in many different ways: random rollouts, human play data, curiosity-driven exploration, etc. The specific method will determine the quality of the learned features.

    procedure DeepRLSP({s_0}, 𝒯)
        D ← dataset of environment interactions
        Initialize φ_e, φ_d, π, π⁻¹, 𝒯⁻¹, θ randomly
        φ_e, φ_d ← SelfSupervisedLearning(D)                 ▷ Train encoder and decoder for latent MDP
        Initialize experience replay E with the data in D
        𝒯⁻¹ ← SupervisedLearning(D)                          ▷ Train inverse dynamics
        T ← 1                                                ▷ Start horizon at 1
        for i in [1..num_epochs] do
            π ← SAC(θ)                                       ▷ Train policy
            π⁻¹ ← SupervisedLearning(φ_e, E)                 ▷ Train inverse policy
            θ ← θ + ComputeGrad({s_0}, π, 𝒯, π⁻¹, 𝒯⁻¹, T, φ_e)   ▷ Update θ
            if gradient magnitudes are sufficiently low then
                T ← T + 1                                    ▷ Advance horizon
        return θ, φ_e

    procedure ComputeGrad({s_0}, π, 𝒯, π⁻¹, 𝒯⁻¹, T, φ_e)
        {τ_backward} ← Rollout({s_0}, π⁻¹, 𝒯⁻¹, T)           ▷ Simulate backwards from s_0
        φ_backward ← AverageFeatureCounts(φ_e, {τ_backward}) ▷ Backward feature counts
        {s_{-T}} ← FinalStates({τ_backward})
        {τ_forward} ← Rollout({s_{-T}}, π, 𝒯, T)             ▷ Simulate forwards from s_{-T}
        φ_forward ← AverageFeatureCounts(φ_e, {τ_forward})   ▷ Forward feature counts
        Relabel {τ_backward} and {τ_forward} and add them to E
        return φ_backward − φ_forward
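Since the text specifies only that a VAE provides the feature function, the following minimal PyTorch sketch shows one such state encoder; the architecture and latent dimension are assumptions of the sketch. The latent mean serves as $\phi(s)$.

```python
# Minimal sketch of a VAE feature function over raw environment states.
import torch
import torch.nn as nn

class StateVAE(nn.Module):
    def __init__(self, state_dim, feat_dim=30):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 2 * feat_dim))  # mean, logvar
        self.dec = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, state_dim))

    def phi(self, s):                      # feature function used by the reward
        return self.enc(s).chunk(2, dim=-1)[0]

    def loss(self, s):
        mu, logvar = self.enc(s).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()    # reparameterize
        recon = ((self.dec(z) - s) ** 2).sum(-1)                # Gaussian recon
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(-1)
        return (recon + kl).mean()
```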
Initial state distribution $\mathcal{P}$. The attentive reader may wonder why our gradient appears to be independent of $\mathcal{P}$. This is actually not the case: while $\pi$ and $\mathcal{T}$ are independent of $\mathcal{P}$, $\pi^{-1}$ and $\mathcal{T}^{-1}$ do depend on it. For example, if we observe Alice exiting the San Francisco airport, the corresponding $\pi^{-1}$ should hypothesize different flights if she started from New York than if she started from Tokyo. However, in order to actually produce such explanations, we must train $\pi^{-1}$ and $\mathcal{T}^{-1}$ solely on trajectories of length $T$ starting from $s_{-T} \sim \mathcal{P}$. We instead train $\pi^{-1}$ and $\mathcal{T}^{-1}$ on a variety of trajectory data, which loses the useful information in $\mathcal{P}$ but leads to several benefits. First, we can train the models on exactly the distributions that they will be used on, allowing us to avoid failures due to distribution shift. Second, the horizon $T$ is no longer critical: previously, $T$ encoded the separation in time between $s_{-T}$ and $s_0$, and as a result misspecification of $T$ could cause bad results. Since we now only have information about $s_0$, it does not matter much what we set $T$ to, and as a result we can use it to set a curriculum (discussed next). Finally, this allows Deep RLSP to be used in domains where an initial state distribution is not available.

Note that we are no longer able to use information about $\mathcal{P}$ through $\pi^{-1}$ and $\mathcal{T}^{-1}$. However, having information about $\mathcal{P}$ might be crucial in some applications to prevent Deep RLSP from converging to a degenerate solution with $s_{-T} = s_0$ and a policy $\pi$ that does nothing. While we did not find this to be a problem in our experiments, we discuss a heuristic to incorporate information about $s_{-T}$ into Deep RLSP in Appendix C.

Curriculum. Since the horizon $T$ is no longer crucial, we can use it to provide a curriculum. We initially calculate gradients with low values of $T$, to prevent compounding errors in our learned models and to make it easier to enforce backwards-forwards consistency, and then slowly grow $T$, making the problem harder. In practice, we found this crucial for performance: intuitively, it is much easier to make short backwards and forwards trajectories consistent than long ones; the latter would likely have much higher variance.

Multiple input states. If we get multiple independent $s_0$ as input, we average their gradients.

Experience replay. We maintain an experience replay buffer $E$ that persists across policy training steps. We initialize $E$ with the same set of environment interactions that the feature function and inverse dynamics model are trained on. When computing the gradient, we collect all backward and forward trajectories and add them to $E$. To avoid compounding errors from the inverse dynamics model, we relabel all transitions using a simulator of the environment: whenever we would add a transition $(s, a, s')$ to $E$, we initialize the simulator at $s$ and execute $a$ to obtain $\tilde{s}$, and add the transition $(s, a, \tilde{s})$ to $E$ instead.
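To illustrate this relabeling step, here is a minimal sketch. It assumes a classic Gym MuJoCo environment and that states are represented as the concatenation of `qpos` and `qvel`; Gym's default observations generally differ from this, so the split below is an assumption of the sketch rather than the paper's implementation.

```python
# Minimal sketch of simulator-based relabeling: replace next states predicted
# by the learned inverse dynamics with ground-truth next states from MuJoCo.
import gym

def relabel(env, transitions):
    """transitions: iterable of (s, a, s_pred) with s_pred from learned models."""
    env.reset()
    nq = env.unwrapped.model.nq             # number of position coordinates
    relabeled = []
    for s, a, _ in transitions:
        # Assumption: s = concat(qpos, qvel); adapt for observation-based states.
        env.unwrapped.set_state(s[:nq], s[nq:])
        obs, _, _, _ = env.step(a)          # classic (pre-v26) Gym step API
        relabeled.append((s, a, obs))
    return relabeled
```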
3 EXPERIMENTS

3.1 SETUP

To demonstrate that Deep RLSP can be scaled to complex, continuous, high-dimensional environments, we use the MuJoCo physics simulator (Todorov et al., 2012). We consider the Inverted Pendulum, Half-Cheetah and Hopper environments implemented in OpenAI Gym (Brockman et al., 2016). The hyperparameters of our experiments are described in detail in Appendix B. We provide code to replicate our experiments at https://github.com/HumanCompatibleAI/deep-rlsp.

Baselines. To our knowledge, this is the first work to train policies using a single state as input. Due to the lack of alternatives, we compare against GAIL (Ho & Ermon, 2016) using the implementation from the imitation library (Wang et al., 2020). For each state we provide to Deep RLSP, we provide a transition $(s, a, s')$ to GAIL.

Ablations. In Section 2.2, we derived a gradient for Deep RLSP that enforces consistency between the backwards and forwards trajectories. However, we could also ignore the temporal information altogether. If an optimal policy led to the observed state $s_0$, then it is probably a good bet that $s_0$ is high reward, and that the agent should try to keep the state similar to $s_0$. Thus, we can simply set $\theta = \phi(s_0) / \lVert \phi(s_0) \rVert$, and not deal with $\pi^{-1}$ and $\mathcal{T}^{-1}$ at all.

How should we handle multiple states $s_0^1, \dots, s_0^N$? Given that these are all sampled i.i.d. from rollouts of an optimal policy, a natural choice is to simply average the feature vectors of all of the states, which we call AverageFeatures. Alternatively, we could view each of the observed states as a potential waypoint of the optimal policy, and reward the agent for being near any one of them. We implement this Waypoints method as $R(s) = \max_i \frac{\phi(s_0^i)^\top}{\lVert \phi(s_0^i) \rVert} \phi(s)$. Note that both of these ablations still require us to learn the feature function $\phi$.

Feature learning dataset. By default, we use random rollouts to generate the initial dataset that is used to train the features $\phi$ and the inverse model $\mathcal{T}^{-1}$. (This is $D$ in Algorithm 1.) However, in the inverted pendulum environment, the pendulum falls very quickly in random rollouts, and $\mathcal{T}^{-1}$ never learns what a balanced pendulum looks like. So, for this environment only, we combine random rollouts with rollouts from an expert policy that balances the pendulum.

3.2 GRIDWORLD ENVIRONMENTS

As a first check, we consider the gridworld environments in Shah et al. (2019). In these stylized gridworlds, self-supervised learning should not be expected to learn the necessary features. For example, in the room-with-vase environment, the two door features are just particular locations, with no distinguishing features that would allow self-supervised learning to identify these locations as important. So, we run Algorithm 1 without the feature learning and instead use the pre-defined feature function of the environments. With this setup we are able to use Deep RLSP to recover the desired behavior from a single state in all environments in which the exact RLSP algorithm is able to recover it. However, AverageFeatures fails on several of the environments. Since only one state is provided, Waypoints is equivalent to AverageFeatures. It is not clear how to apply GAIL to these environments, and so we do not compare to it. Further details on all of the environments and results can be found in Appendix A.

| Environment | SAC | # states | Deep RLSP | AverageFeatures | Waypoints | GAIL |
|---|---|---|---|---|---|---|
| Inverted Pendulum | 1000 | 1 | 303 (299) | 6 (2) | N/A | 1000 (0) |
| | | 10 | 335 (333) | 3 (1) | 4 (1) | 1000 (0) |
| | | 50 | 339 (331) | 6 (4) | 3.7 (0.3) | 1000 (0) |
| Cheetah (forward) | 13236 | 1 | 4591 (2073) | 6466 (3343) | N/A | -288 (55) |
| | | 10 | 6917 (421) | 6245 (2352) | -10 (23) | -296 (172) |
| | | 50 | 6078 (589) | 4504 (2970) | -126 (38) | -54 (295) |
| Cheetah (backward) | 13361 | 1 | 5730 (2733) | 12443 (645) | N/A | -335 (46) |
| | | 10 | 7917 (249) | 12829 (651) | -80 (388) | -283 (45) |
| | | 50 | 7588 (171) | 11616 (178) | -509 (87) | 2113 (1015) |
| Hopper (terminate) | 3274 | 1 | 68 (8) | 99 (45) | N/A | 991 (9) |
| | | 10 | 47 (21) | 159 (126) | 58 (7) | 813 (200) |
| | | 50 | 72 (1) | 65 (36) | 14 (4) | 501 (227) |
| Hopper (penalty) | 3363 | 1 | 1850 (634) | 2537 (363) | N/A | 990 (9) |
| | | 10 | 2998 (62) | 3103 (64) | 709 (133) | 784 (229) |
| | | 50 | 1667 (737) | 2078 (581) | 1612 (785) | 508 (259) |

Table 1: Average returns achieved by the policies learned through various methods, for different numbers of input states. The states are sampled from a policy trained using SAC on the true reward function; the return of that policy is given as a comparison. Besides the SAC policy return, all values are averaged over 3 seeds and the standard error is given in parentheses. We do not report Waypoints on 1 state as it is identical to AverageFeatures on 1 state.
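For concreteness, the AverageFeatures and Waypoints ablations described in Sec. 3.1 admit a direct implementation; the following NumPy sketch is one natural reading of that description (whether AverageFeatures normalizes before or after averaging is not specified, so averaging first is an assumption).

```python
# Minimal sketch of the two ablation reward constructions.
import numpy as np

def average_features_theta(phi, observed_states):
    # For a single state this reduces to phi(s0) / ||phi(s0)||.
    f = np.mean([phi(s) for s in observed_states], axis=0)
    return f / np.linalg.norm(f)            # reward is then theta . phi(s)

def waypoints_reward(phi, observed_states, s):
    # Reward for being close, in feature space, to any one observed state.
    thetas = [phi(s0) / np.linalg.norm(phi(s0)) for s0 in observed_states]
    return max(float(th @ phi(s)) for th in thetas)
```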
3.3 SOLVING THE ENVIRONMENTS WITHOUT ACCESS TO THE REWARD FUNCTION

First we look at the typical target behavior in each environment: balancing the inverted pendulum, and making the half-cheetah and the hopper move forwards. Additionally, we consider the goal of making the cheetah run backwards (that is, the negative of its usual reward function). We aim to use Deep RLSP to learn these behaviors without having access to the reward function.

We train a policy using Soft Actor-Critic (SAC; Haarnoja et al., 2018) to optimize for the true reward function, and sample either 1, 10 or 50 states from rollouts of this policy to use as input. We then use Deep RLSP to infer a reward and policy. Ideally we would evaluate this learned policy rather than reoptimizing the learned reward, since learned reward models can often be gamed (Stiennon et al., 2020), but it would be too computationally expensive to run the required number of SAC steps during each policy learning step. As a result, we run SAC for many more iterations on the inferred reward function, and evaluate the resulting policy on the true reward function (which Deep RLSP does not have access to).

Results are shown in Table 1. In Hopper, we noticed that videos of the policies learned by Deep RLSP looked okay, but the quantitative evaluation said otherwise. It turns out that the policies learned by Deep RLSP do jump, as we might want, but they often fall down, terminating the episode; in contrast, GAIL policies stand still or fall over slowly, leading to later termination and explaining their better quantitative performance. We wanted to also evaluate the policies without this termination bias, and so we evaluate the same policies in an environment that does not terminate the episode, but provides a negative reward instead; in this evaluation both Deep RLSP and AverageFeatures perform much better. We also provide videos of the learned policies at https://sites.google.com/view/deep-rlsp, which show that the policies learned by Deep RLSP do exhibit hopping behavior (though with a strong tendency to fall forward).

GAIL is only able to learn a truly good policy for the (very simple) inverted pendulum, even though it gets states and actions as input. Deep RLSP, on the other hand, achieves reasonable behavior (though clearly not expert behavior) in all of the environments, using only states as input. Surprisingly, the AverageFeatures method also performs quite well, even beating the full algorithm on some tasks, though failing quite badly on Pendulum. It seems that the task of running forward or backward is very well specified by a single state, since it can be inferred even without any information about the dynamics (except that which is encoded in the features learned from the initial dataset).

[Figure 2: We sample a few states from a policy performing a specific skill to provide as input. Here, Deep RLSP learns to balance the cheetah on the front leg from a single state. We provide videos of the original skills and learned policies at https://sites.google.com/view/deep-rlsp.]

3.4 LEARNING SKILLS FROM A SINGLE STATE

We investigate to what extent Deep RLSP can learn other skills for which the reward is not clear. Evaluation on these tasks is much harder, because there is no ground-truth reward. Therefore we evaluate qualitatively how similar the policies learned by Deep RLSP are to the original skill. We also attempted to quantify similarity by checking how quickly a discriminator could learn to distinguish between the learned policy and the original skill, but unfortunately this metric was not conclusive (results are reported in Appendix D.1). Unlike the previous case, we do not reoptimize the learned reward and only look at the policies learned by Deep RLSP.

We consider skills learned by running Dynamics-Aware Unsupervised Discovery of Skills (DADS; Sharma et al., 2020).
Since we are not interested in navigation, we remove the "x-y prior" used to get directional skills in DADS. We run DADS on the half-cheetah environment and select all skills that are not some form of running. This resulted in two skills: one in which the cheetah moves forward making big leaps ("jumping") and one in which it slowly moves forward on one leg ("balancing"). As before, we roll out these policies and sample individual states from the trajectories to provide as input for Deep RLSP. We then evaluate the policy learned by Deep RLSP. Since the best evaluation here is to simply watch what the learned policy does, we provide videos of the learned policies at https://sites.google.com/view/deep-rlsp. We also provide visualizations in Appendix D.2.

The first thing to notice is that, relative to the ablations, only Deep RLSP comes close to imitating the skill: none of the other policies resemble the original skills at all. While AverageFeatures could perform well on simple tasks such as running, the full algorithm is crucial for imitating more complex behavior. Between Deep RLSP and GAIL the comparison is less clear. Deep RLSP can learn the balancing skill fairly well from a single state, which we visualize in Figure 2 (though we emphasize that the videos are much clearer). Like the original skill, the learned policy balances on one leg and slowly moves forward by jumping, though with slightly more erratic behavior. However, the learned policy sometimes drops back to its feet or falls over on its back. We suspect this is an artifact of the short horizon ($T \leq 10$) used for simulating the past in our algorithm. A small horizon is necessary to avoid compounding errors in the learned inverse dynamics model, but can cause the resulting behavior to be more unstable on timescales greater than $T$. We see similar behavior when given 10 or 50 states.

GAIL leads to a good policy given a single transition, where the cheetah balances on its front leg and head (rather than just the front leg), but does not move forward very much. However, with 10 or 50 transitions, the policies learned by GAIL do not look at all like balancing.

The jumping behavior, however, is harder to learn, especially from a single state. We speculate that here a single state is less informative than the balancing state. In the balancing state, the low joint velocities tell us that the cheetah is not performing a flip, suggesting that we had optimized for this specific balancing state. On the other hand, with the jumping behavior, we only get a single state of the cheetah in the air with high velocity, which is likely not sufficient to determine what the jump looked like exactly. In line with this hypothesis, with 1 state Deep RLSP learns to hop erratically, with 10 states it executes slightly bigger jumps, and with 50 states it matches the original skill relatively closely.

The GAIL policies for jumping are also reasonable, though in a different way that makes them hard to compare. Using 1 or 10 transitions, the policy does not move very much, staying in contact with the ground most of the time. However, at 50 transitions, it performs noticeable forward hops, slightly smoother than the policy learned by Deep RLSP.

4 RELATED WORK

Learning from human feedback. Many algorithms aim to learn good policies from human demonstrations, including ones in imitation learning (Ho & Ermon, 2016) and inverse reinforcement learning (IRL; Ng et al., 2000; Abbeel & Ng, 2004; Fu et al., 2018).
Useful policies can also be learned from other types of feedback, such as preferences (Christiano et al., 2017), corrections (Bajcsy et al., 2017), instructions (Bahdanau et al., 2019), or combinations of feedback modalities (Ibarz et al., 2018). While these methods require expensive human feedback, Deep RLSP instead simulates the trajectories that must have happened. This is reflected in the algorithm: in Equation 1, the inner gradient corresponds to an inverse reinforcement learning problem. While we used the MCEIRL formulation (Ziebart et al., 2010), other IRL algorithms could be used instead (Fu et al., 2018).

Learning from observations. For many tasks, we have demonstrations without action labels, e.g., YouTube videos. Learning from Observations (LfO; Torabi et al., 2019; Gandhi et al., 2019) aims to recover a policy from such demonstrations. Similarly to LfO, we do not have access to action labels, but our setting is further restricted to observing only a small number of states.

5 LIMITATIONS AND FUTURE WORK

Summary. Learning useful policies with neural networks requires significant human effort, whether it is done by writing down a reward function by hand, or by learning from explicit human feedback such as preferences or demonstrations. We showed that it is possible to reduce this burden by extracting "free" information present in the current state of the environment. This enables us to imitate policies in MuJoCo environments with access to just a few states sampled from those policies. We hope that Deep RLSP will help us train agents that are better aligned with human preferences.

Learned models. The Deep RLSP gradient depends on having access to a good model of $\pi$, $\mathcal{T}$, $\pi^{-1}$, and $\mathcal{T}^{-1}$. In practice, it was quite hard to train sufficiently good versions of the inverse models. This could be a significant barrier to practical implementations of Deep RLSP. It can also be taken as a sign for optimism: self-supervised representation learning through deep learning is fairly recent and is advancing rapidly; such advances will likely translate directly into improvements in Deep RLSP.

Computational cost. Imitation learning with full demonstrations can already be quite computationally expensive. Deep RLSP learns several distinct neural network models, and then simulates potential demonstrations, and finally imitates them. Unsurprisingly, this leads to increased computational cost.

Safe RL. Shah et al. (2019) discuss how the exact RLSP algorithm can be used to avoid negative side-effects in RL by combining preferences learned from the initial state with a reward function. While we focused on learning hard-to-specify behavior, Deep RLSP can also be used to learn to avoid negative side-effects, which is crucial for safely deploying RL systems in the real world (Amodei et al., 2016).

Multiagent settings. In any realistic environment, there is not just a single "user" who is influencing the environment: many people act simultaneously, and the state is a result of joint optimization by all of them. However, our model assumes that the environment state resulted from optimization by a single agent, which will not take into account the fact that each agent will have constraints imposed upon them by other agents. We will likely require new algorithms for such a setting.

ACKNOWLEDGMENTS

This work was partially supported by Open Philanthropy, AFOSR, ONR YIP, NSF CAREER, NSF NRI, and Microsoft Swiss JRC.
We thank researchers at the Center for Human-Compatible AI and the InterACT lab for helpful discussion and feedback.
qYadRr6LzR0
Cool idea but a bit limited in a number of ways. I did not find the empirical results fully convincing.
5: Marginally below acceptance threshold
This paper introduces an algorithm, called deep reward learning by simulating the past (deep RLSP), that seeks to infer a reward function by looking at states in demonstration data. An example of this described in the paper is an environment with a vase: if demonstration data shows an intact vase in the presence of an embodied agent, then breaking the vase is unlikely to be the intended behavior; otherwise the vase would already be broken in the demo. To achieve this, the paper assumes a Boltzmann distribution on the demonstration policy and a reward function that is linear in some pre-trained state features. The paper then derives a gradient of the log probability of a demonstration state. The gradient estimator involves simulating a possible past from a demonstration state (using a learned inverse policy and inverse transition function) and then simulating forward from the possible past (using the policy and a simulator). The gradient is then the difference between feature counts from the backward and forward simulations.

The paper is generally clearly written and works on a crucial problem in reinforcement learning, namely how to specify a human preference without resorting to tedious reward engineering. Novel, scalable approaches to this problem would certainly be of interest to the ICLR community. The primary technical contribution of the paper is the derivation of the gradient estimator, which is correct. I find the idea of the paper very interesting, and the results showing meaningful behavior emerge from a single demonstration are quite nice.

However, I think the paper is limited in a number of ways:
- It requires access to a pretrained state representation.
- It requires access to a simulator of the environment, which requires being able to reset the environment to arbitrary states. This seems quite limiting for real-world applications. Worryingly, appendix D states that learning a dynamics model was attempted by the authors but failed to yield good results.
- I think the choice of evaluation environments is a little odd and simplistic. I think environments more aligned with the eventual application areas for a method such as Deep RLSP would make the paper much more compelling. Given the motivation of the paper, I think perhaps manipulation environments where a robot arm interacts with multiple objects could be an interesting choice.
- From the empirical results, it is not clear that Deep RLSP works substantially better than the simple average features baseline.

Overall I think the paper has the potential to be a good paper but could still be substantially improved, and I'm leaning towards rejection.

Minor comments and questions for the authors:
- I'm curious how you choose the gradient magnitude threshold? Does Deep RLSP fail without the curriculum? Could you provide an ablation that shows the effect of using a curriculum?
- I would also be interested in an ablation of the cosine-similarity weighting heuristic.
- I think the phrase "recent work" in the abstract could use a reference.
- I'm a bit confused by the description of the environment suite by Shah et al. in appendix A, in particular the different rewards. Could you clarify and expand the description a bit?
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Learning What To Do by Simulating the Past ### Paper Abstract Since reward functions are hard to specify, recent work has focused on learning policies from human feedback. However, such approaches are impeded by the expense of acquiring such feedback. Recent work proposed that agents have access to a source of information that is effectively free: in any environment that humans have acted in, the state will already be optimized for human preferences, and thus an agent can extract information about what humans want from the state. Such learning is possible in principle, but requires simulating all possible past trajectories that could have led to the observed state. This is feasible in gridworlds, but how do we scale it to complex tasks? In this work, we show that by combining a learned feature encoder with learned inverse models, we can enable agents to simulate human actions backwards in time to infer what they must have done. The resulting algorithm is able to reproduce a specific skill in MuJoCo environments given a single state sampled from the optimal policy for that skill. ### Paper Keywords ["imitation learning", "reward learning", "reinforcement learning"] ### Paper Content ABSTRACTSince reward functions are hard to specify, recent work has focused on learningpolicies from human feedback. However, such approaches are impeded by theexpense of acquiring such feedback. Recent work proposed that agents have accessto a source of information that is effectively free: in any environment that humanshave acted in, the state will already be optimized for human preferences, and thusan agent can extract information about what humans want from the state (Shahet al., 2019). Such learning is possible in principle, but requires simulating allpossible past trajectories that could have led to the observed state. This is feasiblein gridworlds, but how do we scale it to complex tasks? In this work, we showthat by combining a learned feature encoder with learned inverse models, we canenable agents to simulate human actions backwards in time to infer what they musthave done. The resulting algorithm is able to reproduce a specific skill in MuJoCoenvironments given a single state sampled from the optimal policy for that skill.1 I NTRODUCTIONAs deep learning has become popular, many parts of AI systems that were previously designedby hand have been replaced with learned components. Neural architecture search has automatedarchitecture design (Zoph & Le, 2017; Elsken et al., 2019), population-based training has automatedhyperparameter tuning (Jaderberg et al., 2017), and self-supervised learning has led to impressiveresults in language modeling (Devlin et al., 2019; Radford et al., 2019; Clark et al., 2020) andreduced the need for labels in image classification (Oord et al., 2018; He et al., 2020; Chen et al.,2020). However, in reinforcement learning, one component continues to be designed by humans:the task specification. Handcoded reward functions are notoriously difficult to specify (Clark &Amodei, 2016; Krakovna, 2018), and learning from demonstrations (Ng et al., 2000; Fu et al., 2018)or preferences (Wirth et al., 2017; Christiano et al., 2017) requires a lot of human input. 
Is there away that we can automate even the specification of what must be done?It turns out that we can learn part of what the user wants simply by looking at the state of theenvironment : after all, the user will already have optimized the state towards their own preferences(Shah et al., 2019). For example, when a robot is deployed in a room containing an intact vase, itcan reason that if its user wanted the vase to be broken, it would already have been broken; thus sheprobably wants the vase to remain intact.However, we must ensure that the agent distinguishes between aspects of the state that the usercouldn’t control from aspects that the user deliberately designed . This requires us to simulate whatthe user must have done to lead to the observed state: anything that the user put effort into in thepast is probably something the agent should do as well. As illustrated in Figure 1, if we observe aCheetah balancing on its front leg, we can infer how it must have launched itself into that position.Unfortunately, it is unclear how to simulate these past trajectories that lead to the observed state. Sofar, this has only been done in gridworlds, where all possible trajectories can be considered usingdynamic programming (Shah et al., 2019).Our key insight is that we can sample such trajectories by starting at the observed state and simulatingbackwards in time . To enable this, we derive a gradient that is amenable to estimation throughbackwards simulation, and learn an inverse policy and inverse dynamics model using supervisedWork done at the Center for Human-Compatible AI, UC Berkeley.1Published as a conference paper at ICLR 2021Figure 1: Suppose we observe a Cheetah balancing on its front leg (left). Given a simulator for theenvironment, Deep RLSP is able to infer how the cheetah must have acted to end up in this position. Itcan then imitate these actions in order to recreate this skill. Note that the state contains joint velocitiesin addition to positions, which makes the task more tractable than this picture might suggest.learning to perform the backwards simulation. Then, the only remaining challenge is finding a rewardrepresentation that can be meaningfully updated from a single state observation. To that end, ratherthan defining the reward directly on the raw input space, we represent it as a linear combination offeatures learned through self-supervised representation learning. Putting these components together,we propose the Deep Reward Learning by Simulating the Past (Deep RLSP) algorithm.We evaluate Deep RLSP on MuJoCo environments and show that it can recover fairly good perfor-mance on the task reward given access to a small number of states sampled from a policy optimizedfor that reward. We also use Deep RLSP to imitate skills generated using a skill discovery algorithm(Sharma et al., 2020), in some cases given just a single state sampled from the policy for that skill.Information from the environment state cannot completely replace reward supervision. For example,it would be hard to infer how clean Bob would ideally want his room to be, if the room is currentlymessy because Bob is too busy to clean it. Nonetheless, we are optimistic that information from theenvironment state can be used to significantly reduce the burden of human supervision required totrain useful, capable agents.2 M ETHODIn this section, we describe how Deep RLSP can learn a reward function for high dimensionalenvironments given access only to a simulator and the observed state s0.Notation. 
A finite-horizon Markov Decision Process (MDP) M=hS;A;T;r;P;Ticontains aset of statesSand a set of actions A. The transition function T:SAS7! [0;1]determinesthe distribution over next states given a state and an action, and Pis a prior distribution over initialstates. The reward function r:S7!Rdetermines the agent’s objective. T2Z+is a finite planninghorizon. A policy:SA7![0;1]specifies how to choose actions given a state. Given an initialstate distribution, a policy and the transition function, we can sample a trajectoryby sampling thefirst state fromP, every subsequent action from , and every subsequent state from T. We denote theprobability distribution over trajectories as hP;;Tiand writehP;;Tifor the sampling step.We will sometimes write a single state sinstead of a distribution Pif the initial state is deterministic.The goal of reinforcement learning (RL) is to find a policy that maximizes the expected cumulativereward EhP;;TihPTt=1r(st)i.We use:S!Rnto denote a feature function (whether handcoded or learned) that produces afeature vector of length nfor every state. The reward function ris linear over if it can be expressedin the formr(s) =T(s)for some2Rn.We assume that some past trajectoryT:0=sTaT:::a1s0produced the observed state s0.2.1 I DEALIZED ALGORITHMWe first explain what we would ideally do, if we had a handcoded a feature function and anenumerable (small) state space Sthat affords dynamic programming. This is a recap of RewardLearning by Simulating the Past (RLSP; Shah et al., 2019).2Published as a conference paper at ICLR 2021We assume the human follows a Boltzmann-rational policy t(ajs;)/exp(Qt(s;a;)), wheretheQvalues are computed using soft value iteration. Marginalizing over past trajectories, yields adistribution over the observed state p(s0j) =PsT:::a1p(=sTaT:::a1s0j). We com-pute the maximum likelihood estimate, argmaxlnp(s0j), via gradient ascent, by expressing thegradient of the observed state as a weighted combination of gradients of consistent trajectories (Shahet al., 2019, Appendix B):rlnp(s0j) = ET:1p(T:1js0;)[rlnp(j)] (1)rlnp(j)is a gradient for inverse reinforcement learning. Since we assume a Boltzmann-rationalhuman, this is the gradient for Maximum Causal Entropy Inverse Reinforcement Learning (MCEIRL;Ziebart et al., 2010). However, we still need to compute an expectation over all trajectories that endins0, which is in general intractable. Shah et al. (2019) use dynamic programming to compute thisgradient in tabular settings.2.2 G RADIENT AS BACKWARDS -FORWARDS CONSISTENCYApproximating the expectation. For higher-dimensional environments, we must approximate theexpectation over past trajectories p(T:1js0;). We would like to sample from the distribution,but it is not clear how to sample the past conditioned on the present. Our key idea is that just as wecan sample the future by rolling out forwards in time, we should be able to sample the past by rollingout backwards in time . Note that by the Markov property we have:p(T:1js0;) =1Yt=Tp(stjat;st+1;:::s 0;)p(atjst+1;at+1;:::s 0;)=1Yt=Tp(stjat;st+1;)p(atjst+1;)Thus, given the inverse policy 1t(atjst+1;), the inverse dynamics T1t(stjat;st+1;),and the observed state s0, we can sample a past trajectory T:1p(T:1js0;)by iter-atively applying 1andT1, starting from s0. Analogous to forward trajectories, we expressthe sampling as T:1 hs0;1;T1i. Thus, we can write the gradient in Equation 1 asET:1hs0;1;T1i[rlnp(j)].Learning,1andT1.In order to learn 1, we must first know . 
Learning $\pi$, $\pi^{-1}$ and $\mathcal{T}^{-1}$. In order to learn $\pi^{-1}$, we must first know $\pi$. We assumed that the human was Boltzmann-rational, which corresponds to the maximum entropy reinforcement learning objective (Levine, 2018). We use the Soft Actor-Critic algorithm (SAC; Haarnoja et al., 2018) to estimate the policy $\pi(a \mid s, \theta)$, since it explicitly optimizes the maximum entropy RL objective. Given the forward policy $\pi(a \mid s, \theta)$ and simulator $\mathcal{T}$, we can construct a dataset of sampled forward trajectories, and learn the inverse policy $\pi^{-1}$ and the inverse dynamics $\mathcal{T}^{-1}$ using supervised learning. Given these, we can then sample $\tau_{-T:-1}$, allowing us to approximate the expectation in the gradient. In general, both $\pi^{-1}$ and $\mathcal{T}^{-1}$ could be stochastic and time-dependent.

Estimating the gradient for a trajectory. We now turn to the term within the expectation, which is the inverse reinforcement learning gradient given a demonstration trajectory $\tau = s_{-T} a_{-T} \dots s_0$. Assuming that the user is Boltzmann-rational, this is the MCEIRL gradient (Ziebart et al., 2010), which can be written as (Shah et al., 2019, Appendix A):

$\nabla_\theta \ln p(\tau \mid \theta) = \left(\sum_{t=-T}^{0} \phi(s_t)\right) - \mathcal{F}_{-T}(s_{-T}) + \sum_{t=-T}^{-1} \left( \mathbb{E}_{s'_{t+1} \sim \mathcal{T}(\cdot \mid s_t, a_t)}\left[\mathcal{F}_{t+1}(s'_{t+1})\right] - \mathcal{F}_{t+1}(s_{t+1}) \right)$   (2)

$\mathcal{F}$ is the expected feature count under $\pi$, that is, $\mathcal{F}_t(s_t) \triangleq \mathbb{E}_{\tau_{t:0} \sim \langle s_t, \pi, \mathcal{T} \rangle}\left[\sum_{t'=t}^{0} \phi(s_{t'})\right]$.

The first term computes the feature counts of the demonstrated trajectory $\tau$, while the second term computes the feature counts obtained by the policy for the current reward function $\theta$ (starting from the initial state $s_{-T}$). Since $r(s) = \theta^\top \phi(s)$, these terms increase the reward of features present in the demonstration and decrease the reward of features under the current policy. Thus, the gradient incentivizes consistency between the demonstration and rollouts from the learned policy.

The last term is essentially a correction for the observed dynamics: if we see that $s_t, a_t$ led to $s_{t+1}$, it corrects for the fact that we "could have" seen some other state $s'_{t+1}$. Since this correction is zero in expectation (and expensive to compute), we drop it for our estimator.

Gradient estimator. After dropping the last term in Equation 2, expanding the definition of $\mathcal{F}$, and substituting into Equation 1, our final gradient estimator is:

$\nabla_\theta \ln p(s_0 \mid \theta) = \mathbb{E}_{\tau_{-T:-1} \sim \langle s_0, \pi^{-1}, \mathcal{T}^{-1} \rangle}\left[ \left(\sum_{t=-T}^{0} \phi(s_t)\right) - \mathbb{E}_{\tau' \sim \langle s_{-T}, \pi, \mathcal{T} \rangle}\left[ \sum_{t=-T}^{0} \phi(s'_t) \right] \right]$   (3)

Thus, given $s_0$, $\theta$, $\pi$, $\mathcal{T}$, $\pi^{-1}$, and $\mathcal{T}^{-1}$, computing the gradient consists of three steps (see the sketch below):
1. Simulate backwards from $s_0$, and compute the feature counts of the resulting trajectories.
2. Simulate forwards from $s_{-T}$ of these trajectories, and compute their feature counts.
3. Take the difference between these two quantities.

This again incentivizes consistency, this time between the backwards and forwards trajectories: the gradient leads to movement towards "what the human must have done" and away from "what the human would do if they had this reward". The gradient becomes zero when they are identical.

It may seem like the backwards and forwards trajectories should always be consistent with each other, since $\pi^{-1}$ and $\mathcal{T}^{-1}$ are inverses of $\pi$ and $\mathcal{T}$. The key difference is that $s_0$ imposes constraints on the backwards trajectories, but not on the forward trajectories. For example, suppose we observe $s_0$ in which a vase is unbroken, and our current hypothesis $\theta$ is that the user wants to break the vase. When we simulate backwards, our trajectory will contain an unbroken vase, but when we simulate forwards from $s_{-T}$, $\pi$ will break the vase. The gradient would then reduce the reward for a broken vase and increase the reward for an unbroken vase.
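The three steps above translate directly into code. Below is a minimal sketch of a Monte-Carlo estimate of Equation 3, reusing `sample_backward_trajectory` from earlier; `phi` is the feature function, and `policy` and `env_step` stand in for $\pi$ and the simulator $\mathcal{T}$. All names are illustrative, not the authors' implementation.

```python
import numpy as np

def rlsp_gradient(s0, phi, policy, env_step,
                  inverse_policy, inverse_dynamics, horizon, n_samples=4):
    """Monte-Carlo estimate of Equation 3: backward feature counts
    minus forward feature counts, averaged over sampled trajectories."""
    grad = np.zeros_like(phi(s0))
    for _ in range(n_samples):
        # Step 1: simulate backwards from s0 and count features.
        states, _ = sample_backward_trajectory(
            s0, inverse_policy, inverse_dynamics, horizon)
        backward_counts = sum(phi(s) for s in states)
        # Step 2: simulate forwards from s_{-T} under the current policy pi.
        s = states[0]
        forward_counts = phi(s).copy()
        for _ in range(horizon):
            s = env_step(s, policy(s))
            forward_counts += phi(s)
        # Step 3: the gradient is the difference of the two feature counts.
        grad += backward_counts - forward_counts
    return grad / n_samples
```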
2.3 LEARNING A LATENT MDP

Our gradient still relies on a feature function $\phi$, with the reward parameterized as $r(s) = \theta^\top \phi(s)$. A natural way to remove this assumption would be to instead allow $\theta$ to parameterize a neural network, which can then learn whatever features are relevant to the reward from the RLSP gradient.

However, this approach will not work because the information contained in the RLSP gradient is insufficient to identify the appropriate features to construct: after all, it is derived from a single state. If we were to learn a single unified reward using the same gradient, the resulting reward would likely be degenerate: for example, it may simply identify the observed state, that is, $R(s) = \mathbb{1}[s = s_0]$.

Thus, we continue to assume that the reward is linear in features, and instead learn the feature function using self-supervised learning (Oord et al., 2018; He et al., 2020). In our experiments, we use a variational autoencoder (VAE; Kingma & Welling, 2014) to learn the feature function. The VAE encodes the states into a latent feature representation, which we can use to learn a reward function if the environment is fully observable, i.e., the states contain all relevant information.

For partially observable environments, recurrent state space models (RSSMs; Karl et al., 2017; Doerr et al., 2018; Buesing et al., 2018; Kurutach et al., 2018; Hafner et al., 2019; 2020) could be used instead. These methods aim to learn a latent MDP, computing the states using a recurrent model over the observations and thus allowing the states to encode the history. For such a model, we can imagine that the underlying POMDP has been converted into a latent MDP whose feature function $\phi$ is the identity. We can then compute gradients directly in this latent MDP.

2.4 DEEP RLSP

Putting these components together gives us the Deep RLSP algorithm (Algorithm 1). We first learn a feature function $\phi$ using self-supervised learning, and then train an inverse dynamics model $\mathcal{T}^{-1}$, all using a dataset of environment interactions (such as random rollouts). Then, we update $\theta$ using Equation 3, and continually train $\pi$ and $\pi^{-1}$ alongside $\theta$ to keep them up to date. The full algorithm also adds a few bells and whistles that we describe next.

Algorithm 1: The DEEP RLSP algorithm. The initial dataset of environment interactions $\mathcal{D}$ can be constructed in many different ways: random rollouts, human play data, curiosity-driven exploration, etc. The specific method will determine the quality of the learned features.

procedure DEEPRLSP($\{s_0\}$, $\mathcal{T}$)
    $\mathcal{D} \leftarrow$ dataset of environment interactions
    Initialize $\phi_e, \phi_d, \pi, \pi^{-1}, \mathcal{T}^{-1}, \theta$ randomly.
    $\phi_e, \phi_d \leftarrow$ SelfSupervisedLearning($\mathcal{D}$)    // Train encoder and decoder for latent MDP
    Initialize experience replay $\mathcal{E}$ with data in $\mathcal{D}$.
    $\mathcal{T}^{-1} \leftarrow$ SupervisedLearning($\mathcal{D}$)    // Train inverse dynamics
    $T \leftarrow 1$    // Start horizon at 1
    for $i$ in $[1 \dots \text{num\_epochs}]$ do
        $\pi \leftarrow$ SAC($\theta$)    // Train policy
        $\pi^{-1} \leftarrow$ SupervisedLearning($\phi_e$, $\mathcal{E}$)    // Train inverse policy
        $\theta \leftarrow \theta + \alpha \cdot$ COMPUTEGRAD($\{s_0\}$, $\pi$, $\mathcal{T}$, $\pi^{-1}$, $\mathcal{T}^{-1}$, $T$, $\phi_e$)    // Update $\theta$
        if gradient magnitudes are sufficiently low then
            $T \leftarrow T + 1$    // Advance horizon
    return $\theta$, $\phi_e$

procedure COMPUTEGRAD($\{s_0\}$, $\pi$, $\mathcal{T}$, $\pi^{-1}$, $\mathcal{T}^{-1}$, $T$, $\phi_e$)
    $\{\tau_{\text{backward}}\} \leftarrow$ Rollout($\{s_0\}$, $\pi^{-1}$, $\mathcal{T}^{-1}$, $T$)    // Simulate backwards from $s_0$
    $\phi_{\text{backward}} \leftarrow$ AverageFeatureCounts($\phi_e$, $\{\tau_{\text{backward}}\}$)    // Backward feature counts
    $\{s_{-T}\} \leftarrow$ FinalStates($\{\tau_{\text{backward}}\}$)
    $\{\tau_{\text{forward}}\} \leftarrow$ Rollout($\{s_{-T}\}$, $\pi$, $\mathcal{T}$, $T$)    // Simulate forwards from $s_{-T}$
    $\phi_{\text{forward}} \leftarrow$ AverageFeatureCounts($\phi_e$, $\{\tau_{\text{forward}}\}$)    // Forward feature counts
    Relabel $\{\tau_{\text{backward}}\}$, $\{\tau_{\text{forward}}\}$ and add them to $\mathcal{E}$.
    return $\phi_{\text{backward}} - \phi_{\text{forward}}$
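For readers who prefer code, the outer loop of Algorithm 1 might look roughly like the following skeleton. The heavy lifting (SAC training, supervised learning of the inverse policy, and the Equation-3 gradient) is passed in as callables mirroring the pseudocode above; this is a sketch under those assumptions, not the authors' actual API.

```python
import numpy as np

def deep_rlsp(observed_states, phi, train_policy, fit_inverse_policy,
              compute_grad, num_epochs=100, lr=0.01, grad_tol=1e-3):
    """Skeleton of the outer loop of Algorithm 1: alternate policy training,
    inverse-policy training, and reward updates, growing the horizon T
    as a curriculum once gradients become small."""
    theta = np.zeros_like(phi(observed_states[0]))
    horizon = 1  # curriculum: start with a short horizon
    for _ in range(num_epochs):
        policy = train_policy(theta)                 # pi <- SAC(theta)
        inverse_policy = fit_inverse_policy(policy)  # pi^{-1} from rollouts of pi
        # Average Equation-3 gradients over all observed states.
        grad = np.mean([compute_grad(s0, policy, inverse_policy, horizon)
                        for s0 in observed_states], axis=0)
        theta += lr * grad
        if np.linalg.norm(grad) < grad_tol:
            horizon += 1  # advance the curriculum
    return theta
```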
Initial state distribution $\mathcal{P}$. The attentive reader may wonder why our gradient appears to be independent of $\mathcal{P}$. This is actually not the case: while $\pi$ and $\mathcal{T}$ are independent of $\mathcal{P}$, $\pi^{-1}$ and $\mathcal{T}^{-1}$ do depend on it. For example, if we observe Alice exiting the San Francisco airport, the corresponding $\pi^{-1}$ should hypothesize different flights if she started from New York than if she started from Tokyo.

However, in order to actually produce such explanations, we must train $\pi^{-1}$ and $\mathcal{T}^{-1}$ solely on trajectories of length $T$ starting from $s_{-T} \sim \mathcal{P}$. We instead train $\pi^{-1}$ and $\mathcal{T}^{-1}$ on a variety of trajectory data, which loses the useful information in $\mathcal{P}$, but leads to several benefits. First, we can train the models on exactly the distributions that they will be used on, allowing us to avoid failures due to distribution shift. Second, the horizon $T$ is no longer critical: previously, $T$ encoded the separation in time between $s_{-T}$ and $s_0$, and as a result misspecification of $T$ could cause bad results. Since we now only have information about $s_0$, it doesn't matter much what we set $T$ to, and as a result we can use it to set a curriculum (discussed next). Finally, this allows Deep RLSP to be used in domains where an initial state distribution is not available.

Note that we are no longer able to use information about $\mathcal{P}$ through $\pi^{-1}$ and $\mathcal{T}^{-1}$. However, having information about $\mathcal{P}$ might be crucial in some applications to prevent Deep RLSP from converging to a degenerate solution with $s_{-T} = s_0$ and a policy $\pi$ that does nothing. While we did not find this to be a problem in our experiments, we discuss a heuristic to incorporate information about $s_{-T}$ into Deep RLSP in Appendix C.

Curriculum. Since the horizon $T$ is no longer crucial, we can use it to provide a curriculum. We initially calculate gradients with low values of $T$, to prevent compounding errors in our learned models and to make it easier to enforce backwards-forwards consistency, and then slowly grow $T$, making the problem harder. In practice, we found this crucial for performance: intuitively, it is much easier to make short backwards and forwards trajectories consistent than longer ones; the latter would likely have much higher variance.

Multiple input states. If we get multiple independent $s_0$ as input, we average their gradients.

Experience replay. We maintain an experience replay buffer $\mathcal{E}$ that persists across policy training steps. We initialize $\mathcal{E}$ with the same set of environment interactions that the feature function and inverse dynamics model are trained on. When computing the gradient, we collect all backward and forward trajectories and add them to $\mathcal{E}$. To avoid compounding errors from the inverse dynamics model, we relabel all transitions using a simulator of the environment: whenever we would add a transition $(s, a, s')$ to $\mathcal{E}$, we initialize the simulator at $s$ and execute $a$ to obtain $\tilde{s}$, and add the transition $(s, a, \tilde{s})$ to $\mathcal{E}$ instead. A short sketch of this relabeling step follows.
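A minimal sketch of the relabeling step, assuming simulator hooks `sim_reset_to(state)` and `sim_step(action)` that can reset the simulator to an arbitrary state and step the dynamics. These hook names are our own, chosen for illustration.

```python
def relabel_and_store(replay_buffer, transitions, sim_reset_to, sim_step):
    """Replace model-predicted next states with ground-truth simulator
    outcomes before storing transitions in the replay buffer E."""
    for s, a, _predicted_next in transitions:
        sim_reset_to(s)          # initialize the simulator at s
        s_tilde = sim_step(a)    # execute a to obtain the true next state
        replay_buffer.append((s, a, s_tilde))
```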
3 EXPERIMENTS

3.1 SETUP

To demonstrate that Deep RLSP can be scaled to complex, continuous, high-dimensional environments, we use the MuJoCo physics simulator (Todorov et al., 2012). We consider the Inverted Pendulum, Half-Cheetah and Hopper environments implemented in OpenAI Gym (Brockman et al., 2016). The hyperparameters of our experiments are described in detail in Appendix B. We provide code to replicate our experiments at https://github.com/HumanCompatibleAI/deep-rlsp.

Baselines. To our knowledge, this is the first work to train policies using a single state as input. Due to lack of alternatives, we compare against GAIL (Ho & Ermon, 2016) using the implementation from the imitation library (Wang et al., 2020). For each state we provide to Deep RLSP, we provide a transition $(s, a, s')$ to GAIL.

Ablations. In Section 2.2, we derived a gradient for Deep RLSP that enforces consistency between the backwards and forwards trajectories. However, we could also ignore the temporal information altogether. If an optimal policy led to the observed state $s_0$, then it is probably a good bet that $s_0$ is high reward, and that the agent should try to keep the state similar to $s_0$. Thus, we can simply set $\theta = \frac{\phi(s_0)}{\|\phi(s_0)\|}$, and not deal with $\pi^{-1}$ and $\mathcal{T}^{-1}$ at all.

How should we handle multiple states $s_0^1, \dots, s_0^N$? Given that these are all sampled i.i.d. from rollouts of an optimal policy, a natural choice is to simply average the feature vectors of all of the states, which we call AverageFeatures. Alternatively, we could view each of the observed states as a potential waypoint of the optimal policy, and reward an agent for being near any one of them. We implement this Waypoints method as $R(s) = \max_i \frac{\phi(s_0^i)^\top}{\|\phi(s_0^i)\|} \phi(s)$. Note that both of these ablations still require us to learn the feature function $\phi$. (A code sketch of both ablation rewards is given at the end of this subsection.)

Feature learning dataset. By default, we use random rollouts to generate the initial dataset that is used to train the features $\phi$ and the inverse model $\mathcal{T}^{-1}$. (This is $\mathcal{D}$ in Algorithm 1.) However, in the inverted pendulum environment, the pendulum falls very quickly in random rollouts, and $\mathcal{T}^{-1}$ never learns what a balanced pendulum looks like. So, for this environment only, we combine random rollouts with rollouts from an expert policy that balances the pendulum.
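As promised above, here is a short sketch of the two ablation rewards; `phi` is the learned feature function and `observed_states` holds the inputs $s_0^1, \dots, s_0^N$. We normalize each feature vector before averaging, which is one reasonable reading of the text; the names are ours.

```python
import numpy as np

def average_features_reward(phi, observed_states):
    """AverageFeatures: theta is the mean of the normalized feature vectors."""
    feats = [phi(s) / np.linalg.norm(phi(s)) for s in observed_states]
    theta = np.mean(feats, axis=0)
    return lambda s: float(theta @ phi(s))

def waypoints_reward(phi, observed_states):
    """Waypoints: reward the agent for being near any one observed state."""
    feats = [phi(s) / np.linalg.norm(phi(s)) for s in observed_states]
    return lambda s: max(float(f @ phi(s)) for f in feats)
```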
3.2 GRIDWORLD ENVIRONMENTS

As a first check, we consider the gridworld environments in Shah et al. (2019). In these stylized gridworlds, self-supervised learning should not be expected to learn the necessary features. For example, in the room with vase environment, the two door features are just particular locations, with no distinguishing features that would allow self-supervised learning to identify these locations as important. So, we run Algorithm 1 without the feature learning and instead use the pre-defined feature function of the environments. With this setup we are able to use Deep RLSP to recover the desired behavior from a single state in all environments in which the exact RLSP algorithm is able to recover it. However, AverageFeatures fails on several of the environments. Since only one state is provided, Waypoints is equivalent to AverageFeatures. It is not clear how to apply GAIL to these environments, and so we do not compare to it. Further details on all of the environments and results can be found in Appendix A.

Table 1: Average returns achieved by the policies learned through various methods, for different numbers of input states. The states are sampled from a policy trained using SAC on the true reward function; the return of that policy is given as a comparison. Besides the SAC policy return, all values are averaged over 3 seeds and the standard error is given in parentheses. We don't report Waypoints on 1 state as it is identical to AverageFeatures on 1 state.

Environment        | SAC return | # states | Deep RLSP   | AverageFeatures | Waypoints  | GAIL
Inverted Pendulum  | 1000       | 1        | 303 (299)   | 6 (2)           | N/A        | 1000 (0)
                   |            | 10       | 335 (333)   | 3 (1)           | 4 (1)      | 1000 (0)
                   |            | 50       | 339 (331)   | 6 (4)           | 3.7 (0.3)  | 1000 (0)
Cheetah (forward)  | 13236      | 1        | 4591 (2073) | 6466 (3343)     | N/A        | -288 (55)
                   |            | 10       | 6917 (421)  | 6245 (2352)     | -10 (23)   | -296 (172)
                   |            | 50       | 6078 (589)  | 4504 (2970)     | -126 (38)  | -54 (295)
Cheetah (backward) | 13361      | 1        | 5730 (2733) | 12443 (645)     | N/A        | -335 (46)
                   |            | 10       | 7917 (249)  | 12829 (651)     | -80 (388)  | -283 (45)
                   |            | 50       | 7588 (171)  | 11616 (178)     | -509 (87)  | 2113 (1015)
Hopper (terminate) | 3274       | 1        | 68 (8)      | 99 (45)         | N/A        | 991 (9)
                   |            | 10       | 47 (21)     | 159 (126)       | 58 (7)     | 813 (200)
                   |            | 50       | 72 (1)      | 65 (36)         | 14 (4)     | 501 (227)
Hopper (penalty)   | 3363       | 1        | 1850 (634)  | 2537 (363)      | N/A        | 990 (9)
                   |            | 10       | 2998 (62)   | 3103 (64)       | 709 (133)  | 784 (229)
                   |            | 50       | 1667 (737)  | 2078 (581)      | 1612 (785) | 508 (259)

3.3 SOLVING THE ENVIRONMENTS WITHOUT ACCESS TO THE REWARD FUNCTION

First we look at the typical target behavior in each environment: balancing the inverted pendulum, and making the half-cheetah and the hopper move forwards. Additionally we consider the goal of making the cheetah run backwards (that is, the negative of its usual reward function). We aim to use Deep RLSP to learn these behaviors without having access to the reward function.

We train a policy using soft actor-critic (SAC; Haarnoja et al., 2018) to optimize for the true reward function, and sample either 1, 10 or 50 states from rollouts of this policy to use as input. We then use Deep RLSP to infer a reward and policy. Ideally we would evaluate this learned policy rather than reoptimizing the learned reward, since learned reward models can often be gamed (Stiennon et al., 2020), but it would be too computationally expensive to run the required number of SAC steps during each policy learning step. As a result, we run SAC for many more iterations on the inferred reward function, and evaluate the resulting policy on the true reward function (which Deep RLSP does not have access to).

Results are shown in Table 1. In Hopper, we noticed that videos of the policies learned by Deep RLSP looked okay, but the quantitative evaluation said otherwise. It turns out that the policies learned by Deep RLSP do jump, as we might want, but they often fall down, terminating the episode; in contrast, GAIL policies stand still or fall over slowly, leading to later termination and explaining their better quantitative performance. We wanted to also evaluate the policies without this termination bias, so we evaluate the same policies in an environment that does not terminate the episode, but provides a negative reward instead; in this evaluation both Deep RLSP and AverageFeatures perform much better. We also provide videos of the learned policies at https://sites.google.com/view/deep-rlsp, which show that the policies learned by Deep RLSP do exhibit hopping behavior (though with a strong tendency to fall forward).

GAIL is only able to learn a truly good policy for the (very simple) inverted pendulum, even though it gets states and actions as input. Deep RLSP on the other hand achieves reasonable behavior (though clearly not expert behavior) in all of the environments, using only states as input. Surprisingly, the AverageFeatures method also performs quite well, even beating the full algorithm on some tasks, though failing quite badly on Pendulum. It seems that the task of running forward or backward is very well specified by a single state, since it can be inferred even without any information about the dynamics (except that which is encoded in the features learned from the initial dataset).

Figure 2: We sample a few states from a policy performing a specific skill to provide as input. Here, Deep RLSP learns to balance the cheetah on the front leg from a single state. We provide videos of the original skills and learned policies at: https://sites.google.com/view/deep-rlsp.

3.4 LEARNING SKILLS FROM A SINGLE STATE

We investigate to what extent Deep RLSP can learn other skills where the reward is not clear. Evaluation on these tasks is much harder, because there is no ground truth reward.
Therefore we evaluate qualitatively how similar the policies learned by Deep RLSP are to the original skill. We also attempted to quantify similarity by checking how quickly a discriminator could learn to distinguish between the learned policy and the original skill, but unfortunately this metric was not conclusive (results are reported in Appendix D.1). Unlike the previous case, we do not reoptimize the learned reward and only look at the policies learned by Deep RLSP.

We consider skills learned by running Dynamics-Aware Unsupervised Discovery of Skills (DADS; Sharma et al., 2020). Since we are not interested in navigation, we remove the "x-y prior" used to get directional skills in DADS. We run DADS on the half-cheetah environment and select all skills that are not some form of running. This resulted in two skills: one in which the cheetah is moving forward making big leaps ("jumping") and one in which it is slowly moving forward on one leg ("balancing"). As before, we roll out these policies and sample individual states from the trajectories to provide as input for Deep RLSP. We then evaluate the policy learned by Deep RLSP. Since the best evaluation here is to simply watch what the learned policy does, we provide videos of the learned policies at https://sites.google.com/view/deep-rlsp. We also provide visualizations in Appendix D.2.

The first thing to notice is that, relative to the ablations, only Deep RLSP is close to imitating the skill. None of the other policies resemble the original skills at all. While AverageFeatures could perform well on simple tasks such as running, the full algorithm is crucial to imitate more complex behavior.

Between Deep RLSP and GAIL the comparison is less clear. Deep RLSP can learn the balancing skill fairly well from a single state, which we visualize in Figure 2 (though we emphasize that the videos are much clearer). Like the original skill, the learned policy balances on one leg and slowly moves forward by jumping, though with slightly more erratic behavior. However, the learned policy sometimes drops back to its feet or falls over on its back. We suspect this is an artifact of the short horizon ($T \leq 10$) used for simulating the past in our algorithm. A small horizon is necessary to avoid compounding errors in the learned inverse dynamics model, but can cause the resulting behavior to be more unstable on timescales greater than $T$. We see similar behavior when given 10 or 50 states.

GAIL leads to a good policy given a single transition, where the cheetah balances on its front leg and head (rather than just the front leg), but does not move forward very much. However, with 10 or 50 transitions, the policies learned by GAIL do not look at all like balancing.

However, the jumping behavior is harder to learn, especially from a single state. We speculate that here a single state is less informative than the balancing state. In the balancing state, the low joint velocities tell us that the cheetah is not performing a flip, suggesting that we had optimized for this specific balancing state. On the other hand, with the jumping behavior, we only get a single state of the cheetah in the air with high velocity, which is likely not sufficient to determine what the jump looked like exactly.
In line with this hypothesis, at 1 state Deep RLSP learns to erratically hop, at 10 states it executes slightly bigger jumps, and at 50 states it matches the original skill relatively closely. The GAIL policies for jumping are also reasonable, though in a different way that makes them hard to compare. Using 1 or 10 transitions, the policy doesn't move very much, staying in contact with the ground most of the time. However, at 50 transitions, it performs noticeable forward hops, slightly smoother than the policy learned by Deep RLSP.

4 RELATED WORK

Learning from human feedback. Many algorithms aim to learn good policies from human demonstrations, including ones in imitation learning (Ho & Ermon, 2016) and inverse reinforcement learning (IRL; Ng et al., 2000; Abbeel & Ng, 2004; Fu et al., 2018). Useful policies can also be learned from other types of feedback, such as preferences (Christiano et al., 2017), corrections (Bajcsy et al., 2017), instructions (Bahdanau et al., 2019), or combinations of feedback modalities (Ibarz et al., 2018).

While these methods require expensive human feedback, Deep RLSP instead simulates the trajectories that must have happened. This is reflected in the algorithm: in Equation 1, the inner gradient corresponds to an inverse reinforcement learning problem. While we used the MCEIRL formulation (Ziebart et al., 2010), other IRL algorithms could be used instead (Fu et al., 2018).

Learning from observations. For many tasks, we have demonstrations without action labels, e.g., YouTube videos. Learning from Observations (LfO; Torabi et al., 2019; Gandhi et al., 2019) aims to recover a policy from such demonstrations. Similarly to LfO, we do not have access to action labels, but our setting is further restricted to observing only a small number of states.

5 LIMITATIONS AND FUTURE WORK

Summary. Learning useful policies with neural networks requires significant human effort, whether it is done by writing down a reward function by hand, or by learning from explicit human feedback such as preferences or demonstrations. We showed that it is possible to reduce this burden by extracting "free" information present in the current state of the environment. This enables us to imitate policies in MuJoCo environments with access to just a few states sampled from those policies. We hope that Deep RLSP will help us train agents that are better aligned with human preferences.

Learned models. The Deep RLSP gradient depends on having access to a good model of $\pi$, $\mathcal{T}$, $\pi^{-1}$, and $\mathcal{T}^{-1}$. In practice, it was quite hard to train sufficiently good versions of the inverse models. This could be a significant barrier to practical implementations of Deep RLSP. It can also be taken as a sign for optimism: self-supervised representation learning through deep learning is fairly recent and is advancing rapidly; such advances will likely translate directly into improvements in Deep RLSP.

Computational cost. Imitation learning with full demonstrations can already be quite computationally expensive. Deep RLSP learns several distinct neural network models, then simulates potential demonstrations, and finally imitates them. Unsurprisingly, this leads to increased computational cost.

Safe RL. Shah et al.
(2019) discuss how the exact RLSP algorithm can be used to avoid negative side-effects in RL by combining preferences learned from the initial state with a reward function. While we focused on learning hard-to-specify behavior, Deep RLSP can also be used to learn to avoid negative side-effects, which is crucial for safely deploying RL systems in the real world (Amodei et al., 2016).

Multiagent settings. In any realistic environment, there is not just a single "user" who is influencing the environment: many people act simultaneously, and the state is a result of joint optimization by all of them. However, our model assumes that the environment state resulted from optimization by a single agent, which will not take into account the fact that each agent will have constraints imposed upon them by other agents. We will likely require new algorithms for such a setting.

ACKNOWLEDGMENTS

This work was partially supported by Open Philanthropy, AFOSR, ONR YIP, NSF CAREER, NSF NRI, and Microsoft Swiss JRC. We thank researchers at the Center for Human-Compatible AI and the InterACT lab for helpful discussion and feedback.<|im_end|> <|im_start|>assistant ### Review Title Cool idea but a bit limited in a number of ways. I did not find the empirical results fully convincing. ### Review Text This paper introduces an algorithm, called deep reward learning by simulating the past (deep RLSP), that seeks to infer a reward function by looking at states in demonstration data. An example of this described in the paper is an environment with a vase: if demonstration data shows an intact vase in the presence of an embodied agent, then breaking the vase is unlikely to be the intended behavior; otherwise the vase would already be broken in the demo. To achieve this, the paper assumes a Boltzmann distribution on the demonstration policy and a reward function that is linear in some pre-trained state features. The paper then derives a gradient of the log probability of a demonstration state. The gradient estimator involves simulating a possible past from a demonstration state (using a learned inverse policy and inverse transition function) and then simulating forward from the possible past (using the policy and a simulator). The gradient is then the difference between feature counts from the backward and forward simulations. The paper is generally clearly written and works on a crucial problem in reinforcement learning, namely how to specify a human preference without resorting to tedious reward engineering. Novel, scalable approaches to this problem would certainly be of interest to the ICLR community. The primary technical contribution of the paper is the derivation of the gradient estimator, which is correct. I find the idea of the paper very interesting, and the results showing meaningful behavior emerge from a single demonstration are quite nice. However, I think the paper is limited in a number of ways: - It requires access to a pretrained state representation. - It requires access to a simulator of the environment, which requires being able to reset the environment to arbitrary states. This seems quite limiting for real-world applications. Worryingly, Appendix D states that learning a dynamics model was attempted by the authors but failed to yield good results. - I think the choice of evaluation environments is a little odd and simplistic. I think environments more aligned with the eventual application areas for a method such as Deep RLSP would make the paper much more compelling.
Given the motivation of the paper, I think perhaps manipulation environments where a robot arm interacts with multiple objects could be an interesting choice. - From the empirical results, it is not clear that Deep RLSP works substantially better than the simple average-features baseline. Overall I think the paper has the potential to be a good paper but could still be substantially improved, and I'm leaning towards rejection. Minor comments and questions for the authors: - I'm curious how you choose the gradient magnitude threshold. Does Deep RLSP fail without the curriculum? Could you provide an ablation that shows the effect of using a curriculum? - I would also be interested in an ablation of the cosine-similarity weighting heuristic. - I think the phrase "recent work" in the abstract could use a reference. - I'm a bit confused by the description of the environment suite by Shah et al. in Appendix A, in particular the different rewards. Could you clarify and expand the description a bit? ### Review Rating 5: Marginally below acceptance threshold ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
Hyewf3AqYX
ICLR.cc/2019/Conference
2019
A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks
["Jinghui Chen", "Jinfeng Yi", "Quanquan Gu"]
Depending on how much information an adversary can access, adversarial attacks can be classified as white-box attacks and black-box attacks. In both cases, optimization-based attack algorithms can achieve relatively low distortions and high attack success rates. However, they usually suffer from poor time and query complexities, thereby limiting their practical usefulness. In this work, we focus on the problem of developing efficient and effective optimization-based adversarial attack algorithms. In particular, we propose a novel adversarial attack framework for both white-box and black-box settings based on the non-convex Frank-Wolfe algorithm. We show in theory that the proposed attack algorithms are efficient with an $O(1/\sqrt{T})$ convergence rate. The empirical results of attacking the Inception V3 model and the ResNet V2 model on the ImageNet dataset also verify the efficiency and effectiveness of the proposed algorithms. More specifically, our proposed algorithms attain the highest attack success rate in both white-box and black-box attacks among all baselines, and are more time and query efficient than the state-of-the-art.
["efficient", "framework", "effective adversarial attacks", "attack", "attack algorithms", "algorithms", "much information", "adversary", "access", "adversarial attacks"]
ABSTRACT

Depending on how much information an adversary can access, adversarial attacks can be classified as white-box attacks and black-box attacks. In both cases, optimization-based attack algorithms can achieve relatively low distortions and high attack success rates. However, they usually suffer from poor time and query complexities, thereby limiting their practical usefulness. In this work, we focus on the problem of developing efficient and effective optimization-based adversarial attack algorithms. In particular, we propose a novel adversarial attack framework for both white-box and black-box settings based on the non-convex Frank-Wolfe algorithm. We show in theory that the proposed attack algorithms are efficient with an $O(1/\sqrt{T})$ convergence rate. The empirical results of attacking the Inception V3 model and the ResNet V2 model on the ImageNet dataset also verify the efficiency and effectiveness of the proposed algorithms. More specifically, our proposed algorithms attain the highest attack success rate in both white-box and black-box attacks among all baselines, and are more time and query efficient than the state-of-the-art.

1 INTRODUCTION

Deep Neural Networks (DNNs) have made many breakthroughs in different areas of artificial intelligence such as image classification (Krizhevsky et al., 2012; He et al., 2016a), object detection (Ren et al., 2015; Girshick, 2015), and speech recognition (Mohamed et al., 2012; Bahdanau et al., 2016). However, recent studies show that deep neural networks can be vulnerable to adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2015) – a tiny perturbation on an image that is almost invisible to human eyes could mislead a well-trained image classifier towards misclassification. Soon later this was proved to be not a coincidence: similar phenomena have been observed in other problems such as speech recognition (Carlini et al., 2016), visual QA (Xu et al., 2017), image captioning (Chen et al., 2017a), machine translation (Cheng et al., 2018), reinforcement learning (Pattanaik et al., 2018), and even on systems that operate in the physical world (Kurakin et al., 2016).

Depending on how much information an adversary can access, adversarial attacks can be classified into two classes: white-box attack (Szegedy et al., 2013; Goodfellow et al., 2015) and black-box attack (Papernot et al., 2016a; Chen et al., 2017c). In the white-box setting, the adversary has full access to the target model, while in the black-box setting, the adversary can only access the input and output of the target model but not its internal configurations. Among the approaches proposed for white-box and black-box attacks, optimization-based methods (Carlini & Wagner, 2017; Chen et al., 2017b;c; Ilyas et al., 2018) are most effective: they usually achieve relatively low distortions and high attack success rates. However, these methods are far from efficient. In the white-box setting, they need to solve constrained optimization problems (Carlini & Wagner, 2017), and are usually significantly slower than the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015) or Iterative FGSM (I-FGM) (Kurakin et al., 2016). Applying those methods to one or two examples is fine, yet in the case of attacking hundreds of thousands of examples, e.g., in adversarial training (Kurakin et al., 2016; Madry et al., 2018), this is far from satisfactory.

In the black-box setting, the situation becomes even more severe since these methods need to make gradient estimations (Chen et al., 2017c).
Therefore, a large number of queries are needed for them to perform a successful attack, especially when the data dimension is large. For example, attacking a $299 \times 299 \times 3$ ImageNet image may take them hundreds of thousands of queries. This significantly limits their practical usefulness since they can be easily defeated by limiting the number of queries that an adversary can make to the target model.

In this study, we aim to examine the following question:

Can we improve the efficiency of the optimization-based attack algorithms? In other words, can we use less time and fewer queries to conduct adversarial attacks?

In this work, we provide an affirmative answer to this question by proposing an efficient Frank-Wolfe optimization framework for both white-box and black-box attacks. In summary, we make the following main contributions:

- We propose a novel Frank-Wolfe based adversarial attack framework. The white-box attack algorithm is an iterative first-order method which admits the fast gradient sign method (FGSM) as the one-step special case. The corresponding black-box attack algorithm adopts zeroth-order optimization with two sensing vector options (either from the Euclidean unit sphere or from the standard Gaussian distribution) provided.

- We show that the proposed white-box and black-box attack algorithms enjoy an $O(1/\sqrt{T})$ convergence rate. We also show that the query complexity of the proposed black-box attack algorithm is linear in the data dimension $d$.

- Our empirical results on attacking the Inception V3 model with the ImageNet dataset show that (i) the proposed white-box attack algorithm is more efficient than all the baseline white-box algorithms evaluated here, and (ii) the proposed black-box attack algorithm is highly efficient and is also the only algorithm that achieves a 100% attack success rate.

2 RELATED WORK

There is a large body of work on adversarial attacks. In this section, we review the most relevant work in both white-box and black-box attack settings, as well as non-convex Frank-Wolfe optimization.

White-box Attacks: Szegedy et al. (2013) proposed to use the box-constrained L-BFGS algorithm for conducting white-box attacks. Goodfellow et al. (2015) proposed the Fast Gradient Sign Method (FGSM) based on linearization of the network as a simple alternative to L-BFGS. Kurakin et al. (2016) proposed to iteratively perform the one-step FGSM (Goodfellow et al., 2015) algorithm and clip the adversarial point back to the distortion limit after every iteration. It is called the Basic Iterative Method (BIM) or I-FGM in the literature. Madry et al. (2018) showed that for the $L_\infty$ norm case, BIM/I-FGM is equivalent to Projected Gradient Descent (PGD), which is a standard tool for constrained optimization. Papernot et al. (2016b) proposed JSMA to greedily attack the most significant pixel based on the Jacobian-based saliency map. Moosavi-Dezfooli et al. (2016) proposed attack methods by projecting the data to the closest separating hyperplane. Carlini & Wagner (2017) introduced the so-called CW attack by proposing multiple new loss functions for generating adversarial examples. Chen et al.
(2017b) followed CW's framework and used an Elastic Net term as the distortion penalty.

Black-box Attacks: One popular family of black-box attacks (Hu & Tan, 2017; Papernot et al., 2016a; 2017) is based on the transferability of adversarial examples (Liu et al., 2018; Bhagoji et al., 2017), where an adversarial example generated for one DNN may be reused to attack other neural networks. This allows the adversary to construct a substitute model that mimics the targeted DNN, and then attack the constructed substitute model using white-box attack methods. However, this type of attack algorithm usually suffers from large distortions and relatively low success rates (Chen et al., 2017c). To address this issue, Chen et al. (2017c) proposed the Zeroth-Order Optimization (ZOO) algorithm, which extends the CW attack to the black-box setting and uses a zeroth-order optimization approach to conduct the attack. Although ZOO achieves much higher attack success rates than the substitute model-based black-box attacks, it suffers from a poor query complexity since its naive implementation requires estimating the gradients of all the coordinates (pixels) of the image. To improve its query complexity, several approaches have been proposed. For example, Tu et al. (2018) introduce an adaptive random gradient estimation algorithm and a well-trained Autoencoder to speed up the attack process. Ilyas et al. (2018) and Liu et al. (2018) improved ZOO's query complexity by using Natural Evolutionary Strategies (NES) (Wierstra et al., 2014; Salimans et al., 2017) and active learning, respectively.

Non-convex Frank-Wolfe Algorithms: The Frank-Wolfe algorithm (Frank & Wolfe, 1956), also known as the conditional gradient method, is an iterative optimization method for constrained optimization problems. Jaggi (2013) revisited the Frank-Wolfe algorithm and provided a stronger and more general convergence analysis in the convex setting. Yu et al. (2017) proved the first convergence rate for a Frank-Wolfe type algorithm in the non-convex setting. Lacoste-Julien (2016) provided a convergence guarantee for the Frank-Wolfe algorithm in the non-convex setting with adaptive step sizes. Reddi et al. (2016) further studied the convergence rate of the non-convex stochastic Frank-Wolfe algorithm in the finite-sum optimization setting. Very recently, Staib & Jegelka (2017) proposed to use Frank-Wolfe for distributionally robust training (Sinha et al., 2018). Balasubramanian & Ghadimi (2018) proved the convergence rate for a zeroth-order nonconvex Frank-Wolfe algorithm using a one-sided finite difference gradient estimator with standard Gaussian sensing vectors.

3 METHODOLOGY

3.1 NOTATIONS

Throughout the paper, scalars are denoted by lower case letters, vectors by lower case bold face letters, and sets by calligraphic upper case letters. For a vector $\mathbf{x} \in \mathbb{R}^d$, we denote the $L_p$ norm of $\mathbf{x}$ by $\|\mathbf{x}\|_p = (\sum_{i=1}^{d} |x_i|^p)^{1/p}$. Specially, for $p = \infty$, the $L_\infty$ norm of $\mathbf{x}$ is $\|\mathbf{x}\|_\infty = \max_{i=1}^{d} |x_i|$. We denote $P_{\mathcal{X}}(\mathbf{x})$ as the projection operation of projecting vector $\mathbf{x}$ onto the set $\mathcal{X}$.

3.2 PROBLEM FORMULATION

According to the attack purposes, attacks can be divided into two categories: untargeted attack and targeted attack. In particular, an untargeted attack aims to turn the prediction into any incorrect label, while the targeted attack, which is considerably harder, requires misleading the classifier to a specific target class. In this work, we follow the literature (Carlini & Wagner, 2017; Ilyas et al., 2018) and focus on the strictly harder targeted attack setting.
It is worth noting that our proposed algorithm can be extended to untargeted attacks straightforwardly.

Let us define $f(\cdot)$ as the classification loss function of the targeted DNN. For targeted attacks, we aim to learn an adversarial example $\mathbf{x}$ that is close enough to the original input $\mathbf{x}_{ori}$ and can be misclassified to the target class $y_{tar}$. The corresponding optimization problem¹ is defined as:

$\min_{\mathbf{x}} f(\mathbf{x}, y_{tar})$ subject to $\|\mathbf{x} - \mathbf{x}_{ori}\|_p \leq \epsilon$.   (3.1)

Evidently, the constraint set $\mathcal{X} := \{\mathbf{x} \mid \|\mathbf{x} - \mathbf{x}_{ori}\|_p \leq \epsilon\}$ is a bounded convex set when $p \geq 1$. Normally, $p = 2$ and $p = \infty$ are used to measure the distortion $\|\mathbf{x} - \mathbf{x}_{ori}\|_p$, resulting in the $L_2$ attack model and the $L_\infty$ attack model respectively. In this work, we study both attack models. In the sequel, since we mainly focus on the targeted attack case, we use $f(\mathbf{x})$ to denote $f(\mathbf{x}, y_{tar})$ for simplicity.

¹Note that there is usually an additional constraint on the input variable $\mathbf{x}$, e.g., $\mathbf{x} \in [0, 1]^n$ for normalized image inputs.

3.3 FRANK-WOLFE WHITE-BOX ATTACKS

The Frank-Wolfe algorithm (Frank & Wolfe, 1956), also known as conditional gradient descent, is a popular optimization tool for constrained optimization. Different from PGD, which first performs gradient descent followed by a projection step at each iteration, the Frank-Wolfe algorithm calls a Linear Minimization Oracle (LMO) over the constraint set $\mathcal{X}$ at each iteration, i.e.,

LMO $\in \arg\min_{\mathbf{v} \in \mathcal{X}} \langle \mathbf{v}, \nabla f(\mathbf{x}_t) \rangle$.

The LMO can be seen as the minimization of the first-order Taylor expansion of $f(\cdot)$ at point $\mathbf{x}_t$:

$\min_{\mathbf{v} \in \mathcal{X}} f(\mathbf{x}_t) + \langle \mathbf{v} - \mathbf{x}_t, \nabla f(\mathbf{x}_t) \rangle$.

By calling the LMO, Frank-Wolfe solves the linear problem in $\mathcal{X}$ and then performs a weighted average with the previous iterate to obtain the final update formula.

We present our proposed Frank-Wolfe white-box attack algorithm in Algorithm 1, which is built upon the original Frank-Wolfe algorithm. The key difference between Algorithm 1 and the standard Frank-Wolfe algorithm is in Line 4, where the LMO is called over a slightly relaxed constraint set $\mathcal{X}_\lambda := \{\mathbf{x} \mid \|\mathbf{x} - \mathbf{x}_{ori}\|_p \leq \lambda\epsilon\}$ with $\lambda \geq 1$, instead of the original constraint set $\mathcal{X}$. When $\lambda = 1$, the set $\mathcal{X}_\lambda$ reduces to $\mathcal{X}$, and Algorithm 1 reduces to standard Frank-Wolfe. We argue that this modification makes our algorithm more general, and gives rise to better attack results.

Algorithm 1: Frank-Wolfe White-box Attack Algorithm
1: input: number of iterations $T$, step sizes $\{\gamma_t\}$, $\lambda \geq 1$, original image $\mathbf{x}_{ori}$;
2: $\mathbf{x}_0 = \mathbf{x}_{ori}$
3: for $t = 0, \dots, T-1$ do
4:   $\mathbf{v}_t = \arg\min_{\mathbf{v} \in \mathcal{X}_\lambda} \langle \mathbf{v}, \nabla f(\mathbf{x}_t) \rangle$    // LMO
5:   $\mathbf{d}_t = \mathbf{v}_t - \mathbf{x}_t$
6:   $\mathbf{x}_{t+1} = \mathbf{x}_t + \gamma_t \mathbf{d}_t$
7:   if $\lambda > 1$ then
8:     $\mathbf{x}_{t+1} = P_{\mathcal{X}}(\mathbf{x}_{t+1})$
9:   end if
10: end for
11: output: $\mathbf{x}_T$

The LMO solution itself can be expensive to obtain in general. Fortunately, applying Frank-Wolfe to solve (3.1) actually gives us a closed-form LMO solution. We provide the solutions of the LMO (Line 4 in Algorithm 1) for the $L_2$ norm and $L_\infty$ norm cases respectively:

$\mathbf{v}_t = -\lambda\epsilon \cdot \frac{\nabla f(\mathbf{x}_t)}{\|\nabla f(\mathbf{x}_t)\|_2} + \mathbf{x}_{ori}$,    ($L_2$ norm)
$\mathbf{v}_t = -\lambda\epsilon \cdot \mathrm{sign}(\nabla f(\mathbf{x}_t)) + \mathbf{x}_{ori}$.    ($L_\infty$ norm)

The derivation can be found in the supplemental materials.

Note that when $T = 1$, $\lambda = 1$, substituting the above LMO solutions into Algorithm 1 yields the final update $\mathbf{x}_1 = \mathbf{x}_0 - \gamma_t \epsilon \cdot \mathrm{sign}(\nabla f(\mathbf{x}_0))$, which reduces to FGSM² when $\gamma_t = 1$. A similar derivation also applies to the $L_2$ norm case. Therefore, just like PGD, our proposed Frank-Wolfe white-box attack also includes FGSM (FGM) as a one-step special instance.

²The extra clipping operation in FGSM is to project onto the additional box constraint for the image classification task. We will also need this clipping operation at the end of each iteration for specific tasks such as image classification.
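As an illustration, the following NumPy sketch implements one run of Algorithm 1 for the $L_\infty$ case. Here `grad_f` stands in for the white-box gradient $\nabla f(\mathbf{x}_t)$ (e.g., obtained by backpropagation), and the names and defaults are ours rather than the authors'.

```python
import numpy as np

def fw_whitebox_linf(x_ori, grad_f, epsilon, T, gamma=0.05, lam=1.0):
    """Frank-Wolfe L_inf white-box attack (a sketch of Algorithm 1).
    Uses the closed-form LMO v_t = -lam*eps*sign(grad) + x_ori."""
    x = x_ori.copy()
    for _ in range(T):
        g = grad_f(x)
        v = -lam * epsilon * np.sign(g) + x_ori        # closed-form LMO
        x = x + gamma * (v - x)                        # FW averaging step
        if lam > 1.0:                                  # project back if relaxed
            x = np.clip(x, x_ori - epsilon, x_ori + epsilon)
        x = np.clip(x, 0.0, 1.0)                       # image box constraint
    return x
```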
3.4 FRANK-WOLFE BLACK-BOX ATTACKS

Next we consider the black-box setting, where we cannot perform back-propagation to calculate the gradient of the loss function anymore. Instead, we can only query the DNN system's outputs with specific inputs. To clarify, here the output refers to the logit layer's output (confidence scores for classification), not the final prediction label. The label-only setting is doable under our framework, but will incur extra difficulty such as designing new loss functions. For simplicity, here we consider the confidence score output.

We propose a zeroth-order Frank-Wolfe based algorithm to solve this problem. Algorithm 2 shows our proposed Frank-Wolfe black-box attack algorithm. The key difference between our proposed black-box attack and white-box attack is one extra gradient estimation step, which is presented in Line 4 of Algorithm 2. Also note that for the final output, we provide two options. While Option II is the common choice in practice, Option I is also provided for the ease of theoretical analysis.

As in many other zeroth-order optimization algorithms (Shamir, 2017; Flaxman et al., 2005), Algorithm 3 uses symmetric finite differences to estimate the gradient and therefore gets rid of the dependence on back-propagation present in the white-box setting. Different from Chen et al. (2017c), here we do not utilize the natural basis as our sensing vectors; instead, we provide two options: one is to use vectors uniformly sampled from the Euclidean unit sphere, and the other is to use vectors sampled from the standard multivariate Gaussian distribution. This greatly improves the gradient estimation efficiency compared to sensing with the natural basis, since the latter can only estimate one coordinate of the gradient vector per query. In practice, both options give us competitive experimental results. It is worth noting that the NES method (Wierstra et al., 2014) with antithetic sampling (Salimans et al., 2017) used in Ilyas et al. (2018) yields a similar formula to our Option II in Algorithm 3.

Algorithm 2: Frank-Wolfe Black-box Attack Algorithm
1: input: number of iterations $T$, step sizes $\{\gamma_t\}$, $\lambda \geq 1$, original image $\mathbf{x}_{ori}$, target label $y_{tar}$;
2: $\mathbf{x}_0 = \mathbf{x}_{ori}$
3: for $t = 0, \dots, T-1$ do
4:   $\mathbf{q}_t$ = ZERO_ORD_GRAD_EST($\mathbf{x}_t$)    // Algorithm 3
5:   $\mathbf{v}_t = \arg\min_{\mathbf{v} \in \mathcal{X}_\lambda} \langle \mathbf{v}, \mathbf{q}_t \rangle$
6:   $\mathbf{d}_t = \mathbf{v}_t - \mathbf{x}_t$
7:   $\mathbf{x}_{t+1} = \mathbf{x}_t + \gamma_t \mathbf{d}_t$
8:   if $\lambda > 1$ then
9:     $\mathbf{x}_{t+1} = P_{\mathcal{X}}(\mathbf{x}_{t+1})$
10:  end if
11: end for
12: Option I: $\mathbf{x}_a$ is uniformly randomly chosen from $\{\mathbf{x}_t\}_{t=1}^{T}$
13: Option II: $\mathbf{x}_a = \mathbf{x}_T$
14: output: $\mathbf{x}_a$

Algorithm 3: Zeroth-Order Gradient Estimation (ZERO_ORD_GRAD_EST)
1: parameters: number of gradient estimation samples $b$, sampling parameter $\delta_t$;
2: $\mathbf{q} = \mathbf{0}$
3: for $i = 1, \dots, b$ do
4:   Option I: Sample $\mathbf{u}_i$ uniformly from the Euclidean unit sphere with $\|\mathbf{u}_i\|_2 = 1$:
       $\mathbf{q} = \mathbf{q} + \frac{d}{2\delta_t b}\big(f(\mathbf{x}_t + \delta_t \mathbf{u}_i) - f(\mathbf{x}_t - \delta_t \mathbf{u}_i)\big)\mathbf{u}_i$
5:   Option II: Sample $\mathbf{u}_i$ from the standard Gaussian distribution $N(\mathbf{0}, \mathbf{I})$:
       $\mathbf{q} = \mathbf{q} + \frac{1}{2\delta_t b}\big(f(\mathbf{x}_t + \delta_t \mathbf{u}_i) - f(\mathbf{x}_t - \delta_t \mathbf{u}_i)\big)\mathbf{u}_i$
6: end for
7: output: $\mathbf{q}$
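A minimal NumPy sketch of Algorithm 3 (Option II, Gaussian sensing vectors) follows. Here `loss` is the query-only black-box loss $f(\cdot)$, each call uses $2b$ queries, and the function name and defaults are our own illustration.

```python
import numpy as np

def zero_order_grad_est(x, loss, b=25, delta=0.01):
    """Estimate grad f(x) from 2*b queries using symmetric finite
    differences with standard Gaussian sensing vectors (Option II)."""
    q = np.zeros_like(x)
    for _ in range(b):
        u = np.random.randn(*x.shape)                 # u ~ N(0, I)
        diff = loss(x + delta * u) - loss(x - delta * u)
        q += diff / (2.0 * delta * b) * u
    return q
```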
4 MAIN THEORY

In this section, we establish the convergence guarantees for our proposed Frank-Wolfe adversarial attack algorithms described in Section 3. First, we introduce the convergence criterion for our Frank-Wolfe adversarial attack framework.

4.1 CONVERGENCE CRITERION

The loss functions of common DNN models are generally nonconvex. In addition, (3.1) is a constrained optimization problem. For such general nonconvex constrained optimization, we typically adopt the Frank-Wolfe gap as the convergence criterion (since the gradient norm of $f$ is no longer a proper criterion for constrained optimization problems):

$g(\mathbf{x}_t) = \max_{\mathbf{x} \in \mathcal{X}} \langle \mathbf{x} - \mathbf{x}_t, -\nabla f(\mathbf{x}_t) \rangle$.

Note that for the Frank-Wolfe gap, we always have $g(\mathbf{x}_t) \geq 0$, and $\mathbf{x}_t$ is a stationary point for the constrained optimization problem if and only if $g(\mathbf{x}_t) = 0$. Also, the Frank-Wolfe gap is affine invariant and does not tie to any specific choice of norm, which makes it a perfect convergence criterion for Frank-Wolfe based algorithms.

4.2 CONVERGENCE GUARANTEE FOR FRANK-WOLFE WHITE-BOX ATTACK

Before we provide the convergence guarantee of the Frank-Wolfe white-box attack (Algorithm 1), we introduce the following assumptions, which are essential to the convergence analysis.

Assumption 4.1. The function $f(\cdot)$ is $L$-smooth with respect to $\mathbf{x}$, i.e., for any $\mathbf{x}, \mathbf{x}'$, it holds that

$f(\mathbf{x}') \leq f(\mathbf{x}) + \nabla f(\mathbf{x})^\top (\mathbf{x}' - \mathbf{x}) + \frac{L}{2}\|\mathbf{x}' - \mathbf{x}\|_2^2$.

Assumption 4.1 is a standard assumption in nonconvex optimization, and is also adopted in other Frank-Wolfe literature such as Lacoste-Julien (2016); Reddi et al. (2016). Note that even though the smoothness assumption does not hold for general DNN models, a recent study (Santurkar et al., 2018) shows that batch normalization, which is used in many modern DNNs such as the Inception V3 model, actually makes the optimization landscape significantly smoother³. This justifies the validity of Assumption 4.1.

Assumption 4.2. The set $\mathcal{X}$ is bounded with diameter $D$, i.e., $\|\mathbf{x} - \mathbf{x}'\|_2 \leq D$ for all $\mathbf{x}, \mathbf{x}' \in \mathcal{X}$.

Assumption 4.2 implies that the input space is bounded. For common tasks such as image classification, given the fact that images have a bounded pixel range and $\epsilon$ is a small constant, this assumption trivially holds.

Now we present the theorem, which characterizes the convergence rate of our proposed Frank-Wolfe white-box adversarial attack algorithm presented in Algorithm 1.

Theorem 4.3. Under Assumptions 4.1 and 4.2, let $\gamma_t = \gamma = \sqrt{2(f(\mathbf{x}_0) - f(\mathbf{x}^*))/(LD^2 T)}$, and denote $\tilde{g}_T = \min_{1 \leq k \leq T} g(\mathbf{x}_k)$, where $\{\mathbf{x}_k\}_{k=1}^T$ are the iterates in Algorithm 1 with $\lambda = 1$. We have:

$\tilde{g}_T \leq \sqrt{\frac{2LD^2\big(f(\mathbf{x}_0) - f(\mathbf{x}^*)\big)}{T}}$,

where $\mathbf{x}^*$ is the optimal solution to (3.1).

Remark 4.4. Theorem 4.3 suggests that our proposed Frank-Wolfe white-box attack algorithm achieves an $O(1/\sqrt{T})$ rate of convergence. Note that a similar result has been proved in Lacoste-Julien (2016) under a different choice of step size.

4.3 CONVERGENCE GUARANTEE FOR FRANK-WOLFE BLACK-BOX ATTACK

Next we analyze the convergence of our proposed Frank-Wolfe black-box adversarial attack algorithm presented in Algorithm 2.

In order to prove the convergence of our proposed Frank-Wolfe black-box attack algorithm, we need the following additional assumption that $\|\nabla f(\mathbf{0})\|_2$ is bounded.

Assumption 4.5. The gradient of $f(\cdot)$ at the zero point satisfies $\max_y \|\nabla f(\mathbf{0}, y)\|_2 \leq C_g$.

Following the analysis in Shamir (2017), let $f_\delta(\mathbf{x}) = \mathbb{E}_{\mathbf{u}}[f(\mathbf{x} + \delta\mathbf{u})]$, which is the smoothed version of $f(\mathbf{x})$. This smoothed function value plays a central role in our theoretical analysis, since it bridges the finite difference gradient approximation with the actual gradient. The following lemma shows this relationship.

Lemma 4.6. For the gradient estimator $\mathbf{q}_t$ in Algorithm 3, its expectation and variance satisfy

$\mathbb{E}[\mathbf{q}_t] = \nabla f_{\delta_t}(\mathbf{x}_t)$,
$\mathbb{E}\|\mathbf{q}_t - \mathbb{E}[\mathbf{q}_t]\|_2^2 \leq \frac{1}{b}\Big(2d(C_g + LD)^2 + \frac{1}{2}\delta_t^2 L^2 d^2\Big)$.

Now we present the theorem, which characterizes the convergence rate of Algorithm 2.

Theorem 4.7. Under Assumptions 4.1, 4.2 and 4.5, let $\gamma_t = \gamma = \sqrt{2(f(\mathbf{x}_0) - f(\mathbf{x}^*))/(LD^2 T)}$, $b = Td$ and $\delta_t = \sqrt{2/(Td^2)}$, and suppose we use Option I in Algorithm 2 and Option II in Algorithm 3. Then the output $\mathbf{x}_a$ from Algorithm 2 with $\lambda = 1$ satisfies:

$\mathbb{E}[g(\mathbf{x}_a)] \leq \frac{D\sqrt{2}}{\sqrt{T}}\Big(\sqrt{L\big(f(\mathbf{x}_0) - f(\mathbf{x}^*)\big)} + 2(L + C_g + LD)\Big)$,

where $\mathbf{x}^*$ is the optimal solution to (3.1).

Remark 4.8. Theorem 4.7 suggests that Algorithm 2 also enjoys an $O(1/\sqrt{T})$ rate of convergence. In terms of query complexity, the total number of queries needed is $Tb = T^2 d$, which is linear in the data dimension $d$.
In fact, in the experiments, we observed that this number can be substantially smaller than $d$, e.g., $b = 25$, which is much lower than the theorem suggests. Note that although we only prove the result for Option I in Algorithm 3, it can be readily extended to Option II (the Gaussian sensing vector case).

³The original argument in Santurkar et al. (2018) refers to the smoothness with respect to each layer's parameters. Note that the first layer's parameters are in the mirror position (in terms of backpropagation) to the network inputs. Therefore, the argument in Santurkar et al. (2018) can also be applied here with respect to the network inputs.

5 EXPERIMENTS

In this section, we present the experimental results for our proposed Frank-Wolfe attack framework against other state-of-the-art adversarial attack algorithms in both white-box and black-box settings. All of our experiments are conducted on Amazon AWS p3.2xlarge servers, which come with an Intel Xeon E5 CPU and one NVIDIA Tesla V100 GPU (16GB RAM). All experiments are implemented on the TensorFlow platform version 1.10.0 with Python 3.6.4.

5.1 EVALUATION SETUP AND METRICS

We test the attack effectiveness of all algorithms by evaluating on a pre-trained Inception V3 model (Szegedy et al., 2016) and a ResNet V2 50 model (He et al., 2016b) that are trained on the ImageNet dataset (Deng et al., 2009). The pre-trained Inception V3 model is reported to have a 78.0% top-1 accuracy and a 93.9% top-5 accuracy. The pre-trained ResNet V2 model is reported to have a 75.6% top-1 and a 92.8% top-5 accuracy. We randomly choose 500 images from the ImageNet validation set that are verified to be correctly classified by the pre-trained model, and also randomly choose a target class for each image. Each image has a dimension of $299 \times 299 \times 3$, and we test all attack algorithms on the same randomly chosen data samples and target labels.

We test both $L_2$ norm based and $L_\infty$ norm based attacks. In the white-box setting, we perform binary search / grid search for the best distortion parameter ($\epsilon$ in our formulation and $c$ in CW's regularized formulation). In the black-box setting, for the $L_2$ norm based attack we set $\epsilon = 5$, and for the $L_\infty$ based attack we set $\epsilon = 0.05$. For white-box attacks, we restrict each method to a maximum of 1,000 iterations per attack. For black-box attacks, we set a maximum query limit of 500,000 per attack per image for each method.

For all algorithms, we stop the algorithm when a successful attack is found. For our proposed black-box attack, we use Option II in Algorithm 2 and test both options in Algorithm 3. We set the number of gradient estimation samples to $b = 25$ for Algorithm 2. A more detailed description of the parameter settings can be found in the supplemental materials.

We evaluate the final performance through the attack success rate, where success is defined as making the classifier output the exact target class label (not any incorrect label). We also measure average attack time per image, average distortion (only on successfully attacked samples) and average number of queries needed (only for black-box attacks) per image. For a fair time comparison, even though some of the algorithms including ours can be written in batch form (attacking multiple images at one time), all algorithms are set to attack one image at a time.

Due to the page limit, we leave all experimental results on the ResNet V2 model in the supplemental materials.

5.2 BASELINE METHODS

We compare the proposed algorithms with several state-of-the-art baseline algorithms.
Specifically, we compare the proposed white-box attack algorithm with⁴ (i) PGD (Madry et al., 2018) (which is essentially I-FGM (Kurakin et al., 2016)), (ii) the CW attack (Carlini & Wagner, 2017) and (iii) the EAD attack (Chen et al., 2017b). We compare the proposed black-box attack algorithm with (i) the ZOO attack (Chen et al., 2017c) and (ii) the NES-PGD attack (Ilyas et al., 2018).

⁴We did not compare with FGM (FGSM) (Goodfellow et al., 2015) since it basically has zero success rate for targeted attacks on the Inception V3 or ResNet V2 models.

5.3 WHITE-BOX ATTACK EXPERIMENTS

In this subsection, we present the white-box attack experiments on the Inception V3 model. Tables 1 and 2 present our experimental results for $L_2$ norm and $L_\infty$ norm based white-box attacks respectively. As we can observe from the tables, the attack success rate is 100% for every method. Among the other baselines in the $L_2$ norm case, the CW method achieves the smallest average distortion, yet it comes with an expensive time cost. The EAD method has neither a time advantage nor a distortion advantage in this experiment, probably due to its different motivation in attacking. PGD has moderate average distortion, yet it also takes quite some time to finish the attack. On the other hand, our proposed algorithm achieves the shortest attack time with moderate distortion. It significantly reduces the time complexity needed for attacking data with large dimensionality. For the $L_\infty$ norm case, the CW method takes significantly longer and does not perform very well on average distortion either.

Table 1: Comparison of $L_2$ norm based white-box attacks on the Inception V3 model with $\epsilon = 5$. We report attack success rate, average time and average distortion.

METHODS  | SUCCESS RATE (%) | AVERAGE TIME (s) | AVERAGE DISTORTION
PGD      | 100.0            | 143.2            | 0.74
CW       | 100.0            | 169.9            | 0.57
EAD      | 100.0            | 167.8            | 1.09
FW-White | 100.0            | 50.6             | 0.85

Table 2: Comparison of $L_\infty$ norm based white-box attacks on the Inception V3 model with $\epsilon = 0.05$. We report attack success rate, average time and average distortion.

METHODS  | SUCCESS RATE (%) | AVERAGE TIME (s) | AVERAGE DISTORTION
PGD      | 100.0            | 39.1             | 0.0027
CW       | 100.0            | 745.2            | 0.0071
FW-White | 100.0            | 13.7             | 0.0034

This is largely because the original CW attack was designed for the $L_2$ norm; applying it to the $L_\infty$ norm attack requires a special design, which sacrifices its runtime performance. Again, our proposed white-box attack algorithm achieves the shortest average attack time and a moderate average distortion.

In Figure 1, we also examine the effect of $\lambda$ in our proposed Frank-Wolfe white-box attack algorithm. We plot the objective loss function value of attacking one example against the number of iterations for both the $L_2$ and $L_\infty$ based white-box attacks on the Inception V3 model. From the plot, we can see that larger $\lambda$ indeed leads to faster convergence.

Figure 1: Loss against the number of iterations for PGD and FW algorithms in both $L_2$ norm and $L_\infty$ norm based white-box attacks on the Inception V3 model. (a) $L_2$ norm based attack (PGD; FW with $\lambda = 1, 5, 20$); (b) $L_\infty$ norm based attack (PGD; FW with $\lambda = 1, 3, 5$).

5.4 BLACK-BOX ATTACK EXPERIMENTS

In this subsection, we present the black-box attack experiments on the Inception V3 model. For black-box attacks, attack success rate, time and number of queries needed are more meaningful evaluation metrics than distortion distances.
5.4 BLACK-BOX ATTACK EXPERIMENTS

In this subsection, we present the black-box attack experiments on the Inception V3 model. For black-box attacks, the attack success rate, time, and number of queries needed are more meaningful evaluation metrics than distortion distances. Therefore, we omit all the grid search / binary search steps used in the white-box setting, since extra time / queries would be needed to find parameters that obtain better distortion distances.

Tables 3 and 4 present our experimental results for $L_2$ norm and $L_\infty$ norm based black-box attacks, respectively. For the ZOO method, note that it only has an $L_2$ norm version; it follows CW's framework and thus uses a different loss function and problem formulation (it cannot exactly control the adversarial example to be within the distortion limit, so we manage to keep the average distortion around $\epsilon$ for ZOO, while the other methods have average distortions very close to $\epsilon$). Furthermore, we can observe that ZOO is quite slow in this task. Attacking a single image can take up to 2 hours for ZOO, and it is only able to achieve a 74.8% success rate (compared with the 88.9% success rate in the original paper; we believe the main reason is that the query limit here is only half of that in the original paper). The NES-PGD method, while greatly improving on ZOO's performance, still cannot achieve a 100% success rate in either attack model and takes relatively more time and queries. In sharp contrast, our proposed Frank-Wolfe black-box attacks (both Option I and Option II) achieve the highest success rate in both $L_2$ norm and $L_\infty$ norm based black-box attacks and further largely improve the attack efficiency.

Table 3: Comparison of $L_2$ norm based black-box attacks on the Inception V3 model with $\epsilon = 5$. We report attack success rate, average time, and average number of queries needed per image. Opt I and Opt II refer to the two options in Algorithm 2.

METHODS             SUCCESS RATE (%)   AVERAGE TIME (s)   AVERAGE QUERIES
ZOO                 74.8               5692.6             296867.0
NES-PGD             96.7               133.0              58921.8
FW-Black (Opt I)    100.0              102.9              45994.5
FW-Black (Opt II)   100.0              100.9              45156.0

Table 4: Comparison of $L_\infty$ norm based black-box attacks on the Inception V3 model with $\epsilon = 0.05$. We report attack success rate, average time, and average number of queries needed per image. Opt I and Opt II refer to the two options in Algorithm 2.

METHODS             SUCCESS RATE (%)   AVERAGE TIME (s)   AVERAGE QUERIES
NES-PGD             98.0               76.9               34062.2
FW-Black (Opt I)    100.0              50.4               22313.2
FW-Black (Opt II)   100.0              50.6               22424.1

Figure 2: Attack success rate against the number of queries for different black-box attack algorithms in both the $L_2$ norm and $L_\infty$ norm cases on the Inception V3 model. (Panels: (a) $L_2$ norm based attack, with curves for ZOO, NES-PGD, FW (Opt I), and FW (Opt II); (b) $L_\infty$ norm based attack, with curves for NES-PGD, FW (Opt I), and FW (Opt II).)

Figure 2 illustrates the attack success rate against the number of queries for the different algorithms in both $L_2$ norm and $L_\infty$ norm based black-box attacks on the Inception V3 model.
As we can see from the plot, our proposed Frank-Wolfe black-box attack algorithm (both options) achieves the highest attack success rate and the best efficiency (fewest queries needed to reach the same success rate), especially in the $L_2$ norm case.

6 CONCLUSIONS

In this work, we propose a Frank-Wolfe framework for efficient and effective adversarial attacks. Our proposed white-box and black-box attack algorithms enjoy an $O(1/\sqrt{T})$ rate of convergence, and the query complexity of the proposed black-box attack algorithm is linear in the data dimension $d$. Finally, our empirical study on attacking the Inception V3 model with the ImageNet dataset yields a 100% attack success rate for our proposed algorithms, even in the black-box attack setting.
SyllU9Iq37
A method to produce adversarial attacks using a Frank-Wolfe-inspired method
5: Marginally below acceptance threshold
This paper provides a method to produce adversarial attacks using a Frank-Wolfe-inspired method. I have some concerns about the motivation of this method:
- What are the motivations for using Frank-Wolfe? Usually this algorithm is used when the constraints are too complicated to admit a tractable projection (which is not the case for the $L_2$ and $L_\infty$ balls) or when one wants sparse iterates, which does not seem to be the case here.
- Consequently, why did you not compare with the simple projected gradient method? (BIM) is not equivalent to the projected gradient method, since the direction chosen is the sign of the gradient and not the gradient itself (the first iteration is actually equivalent because we start at the center of the box, but afterwards the two methods are no longer equivalent).
- There is no motivation for the use of $\lambda > 1$, either practical or theoretical, since the results are only proven for $\lambda = 1$, whereas the experiments are done with $\lambda$ = 5, 20 or 30.
- What is the difference between the result of Theorem 4.3 and the result from (Lacoste-Julien 2016)?

Depending on the answers to these questions, I am planning to move my grade up or down.

In the experiments there are no details on how you set the hyperparameters of CW and EAD. They use a penalized formulation instead of a constrained one; consequently, the regularization hyperparameters have to be set differently.

The only new result seems to be Theorem 4.7, which is a natural extension of Theorem 4.3 to zeroth-order methods.

Comments:
- Throughout the paper there is a $y$ which is not defined. I guess it is the $y_{tar}$ fixed in the problem formulation of Sec 3.2. I don't see why there is a need to work with an arbitrary $y$. If this is true, then Assumption 4.5 does not make sense, since $y = y_{tar}$ (we just need to note $\|\nabla f(O, y_{tar})\| = C_g$), and some notation could be simplified by setting, for instance, $f(x, y_{tar}) = f(x)$.
- In Theorem 4.7, an expectation on $g(x_a)$ is missing.

Minor comments:
- Sec 3.1: theta_i -> x_i
- Sec 3.3: the argmin is a set, so it should be LMO $\in$ argmin.

===== After rebuttal ======
The authors answered some of my questions, but I still think it is a borderline submission.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
5zErZzsW2U1
ICLR.cc/2021/Conference
2021
Category Disentangled Context: Turning Category-irrelevant Features Into Treasures
["Keke Tang", "Guodong Wei", "Jie Zhu", "Yuexin Ma", "Runnan Chen", "Zhaoquan Gu", "Wenping Wang"]
Deep neural networks have achieved great success in computer vision, thanks to their ability in extracting category-relevant semantic features. On the contrary, irrelevant features (e.g., background and confusing parts) are usually considered to be harmful. In this paper, we bring a new perspective on the potential benefits brought by irrelevant features: they could act as references to help identify relevant ones. Therefore, (1) we formulate a novel Category Disentangled Context (CDC) and develop an adversarial deep network to encode it; (2) we investigate utilizing the CDC to improve image classification with the attention mechanism as a bridge. Extensive comparisons on four benchmarks with various backbone networks demonstrate that the CDC could bring remarkable improvements consistently, validating the usefulness of irrelevant features.
["context", "irrelevant features", "cdc", "features", "category", "treasures category", "great success", "computer vision", "thanks"]
ABSTRACT

Deep neural networks have achieved great success in computer vision, thanks to their ability to extract category-relevant semantic features. On the contrary, irrelevant features (e.g., background and confusing parts) are usually considered to be harmful. In this paper, we bring a new perspective on the potential benefits of irrelevant features: they could act as references to help identify relevant ones. Therefore, (1) we formulate a novel Category Disentangled Context (CDC) and develop an adversarial deep network to encode it; (2) we investigate utilizing the CDC to improve image classification with the attention mechanism as a bridge. Extensive comparisons on four benchmarks with various backbone networks demonstrate that the CDC brings remarkable improvements consistently, validating the usefulness of irrelevant features.

1 INTRODUCTION

With the emergence of deep neural networks, their performance on most vision tasks is surpassing the human level. It has reached a consensus that the success of deep networks comes from their powerful ability to extract high-level semantic features (Simonyan et al., 2014).

To gain more insight into their internal behavior, researchers explain it from the perspective of attention. Attention is a physiological mechanism describing the phenomenon that the human perception system can focus on the object of interest (OOI) while suppressing the background. In the last few years, more and more evidence has shown that deep networks inherently have the ability to locate the OOI (e.g., the category-relevant regions) even without requiring any explicit supervision (Zhou et al., 2015; 2016), therefore enabling them to encode category-relevant features.

To obtain more category-relevant features, researchers design various powerful networks, as stronger networks usually have better attention (Zagoruyko & Komodakis, 2017). However, as the networks become deeper (He et al., 2015a) or wider (Zagoruyko & Komodakis, 2016), the overhead increases accordingly. Other researchers instead control the networks' attention by formulating explicit attention modules, directly refining the networks to encode more category-relevant features. In contrast to such internal guidance, Zagoruyko & Komodakis (2017) attempt to improve the performance of student networks with external guidance by mimicking the attention maps of more powerful teacher networks, inspired by knowledge distillation. However, since in both cases the encoded relevant features in the attention maps have a large overlap with those in the backbone networks, the room for improvement is somewhat limited. This brings us to the main topic of this paper: could we solve the problem in the opposite way? More specifically, can we adopt irrelevant features as references and forbid backbone networks from encoding them? If so, can backbone networks encode more relevant features with the guidance of pre-extracted category-irrelevant features?

To study these questions, one first needs to specify a proper context that contains only irrelevant features. To that end, we propose to extract a novel Category Disentangled Context (CDC), which is expected to encode all the information of the dataset except that which is category-relevant. In this case, the CDC is "complementary" to the category-relevant features. Therefore, the CDC could act as a good reference to help identify the relevant features.
We encode the CDC by designing a novel conditional auto-encoder to capture the underlying property of the whole dataset, with adversarial training for category disentangling (Mathieu et al., 2016). To demonstrate the potential benefits that irrelevant features could bring, we investigate utilizing the CDC to improve the task of image classification. Specifically, we adopt the attention mechanism as a bridge, by inferring the attention map from the CDC and then multiplying it with the backbone networks to refine their attention (e.g., on the OOI). With the CDC as a reference, backbone networks could purify their encoded features by eliminating irrelevant information that is contained in the CDC (see the suppressed focus marked with black rectangles in the top row of Fig. 1) and explore encoding more relevant features that are not in the CDC (see the added focus marked with red rectangles in the bottom row of Fig. 1), and thus improve performance. To the best of our knowledge, this is the first work that utilizes category-irrelevant features to improve a vision task.

Figure 1: (a) Input images; (b) ResNet-50 originally focuses on some background (black rectangles) and misses part of the targets (red rectangles); (c) by utilizing the attention maps M inferred from the CDCs, which indicate category-relevant/irrelevant regions to be encouraged/suppressed, (d) ResNet-50 can be guided to correctly identify the objects of interest in both images. Note: we visualize the focus of the networks following (Zagoruyko & Komodakis, 2017).

To summarize, the contributions of this work are as follows:
• We introduce a novel CDC that captures the underlying property of the whole dataset except category-relevant information, and we develop an adversarial network to obtain it.
• We demonstrate utilizing the CDC to improve image classification in a novel attention manner.
• We validate the effectiveness of the CDC by extensive evaluations with various backbone networks on four public datasets.

2 CATEGORY DISENTANGLED CONTEXT

Our aim is to design a model that (1) captures the underlying property of the whole dataset and (2) does not contain any category-relevant information. We hypothesize that the information encoded in this model is complementary to category-relevant features. Therefore, it could act as a good reference to explore more category-relevant features.

We define the latent features $T \in \mathbb{R}^{C \times H \times W}$ encoded in the above model as the "Category Disentangled Context", where $T$ is an intermediate 3D tensor derived from an input image $x$. Fig. 2 shows the structure of our CDC Extraction Network, which is adapted from (Lample et al., 2017) with the following key components (a minimal code sketch of the data flow follows below).

Conditional Auto-encoder. The general architecture of our CDC Extraction Network is a conditional auto-encoder. Compared with (Lample et al., 2017), (1) we take the latent features encoded in a later layer, since we require $T$ at a larger spatial resolution, while directly keeping the original latent features at a high resolution would hinder the auto-encoder from learning a good embedding (Hinton & Salakhutdinov, 2006); (2) we add a skip-connection, since it has been validated in previous work (Ronneberger et al., 2015) that skip-connections in the auto-encoder ease training.

Figure 2: The CDC Extraction Network: given an image-category pair ($x$, $y$) as input, the conditional auto-encoder ($E_1$ and $E_2$ are encoders, $G_1$ and $G_2$ are decoders) outputs a reconstructed image $x'$; $D$ is the discriminator for category disentangling, and $T$ is the CDC. Note that only the networks within the green box are executed for generating $T$ in the evaluation stage.
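To make the architecture above concrete, here is a minimal PyTorch sketch of the conditional path in Fig. 2 ($E_1$/$E_2$ as encoders, $G_1$/$G_2$ as decoders, $T$ as the CDC, a one-hot condition fed to $G_2$, and a skip-connection). The channel widths, kernel sizes, and exact placement of the skip-connection are illustrative assumptions on our part; the paper does not pin them down in this excerpt.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CDCExtractionNet(nn.Module):
    """Minimal skeleton of the conditional auto-encoder in Fig. 2.
    Channel widths, kernel sizes, and skip placement are assumptions."""

    def __init__(self, n_classes, ch=64):
        super().__init__()
        self.e1 = nn.Conv2d(3, ch, 4, stride=2, padding=1)                # encoder E1
        self.e2 = nn.Conv2d(ch, 2 * ch, 4, stride=2, padding=1)           # encoder E2
        self.g1 = nn.ConvTranspose2d(2 * ch, ch, 4, stride=2, padding=1)  # decoder G1 -> T
        # G2 consumes T concatenated with the broadcast one-hot condition g(y).
        self.g2 = nn.ConvTranspose2d(ch + n_classes, 3, 4, stride=2, padding=1)  # decoder G2 -> x'

    def forward(self, x, y_onehot):
        # y_onehot: float one-hot condition of shape (B, n_classes)
        h1 = F.relu(self.e1(x))
        h2 = F.relu(self.e2(h1))
        t = F.relu(self.g1(h2) + h1)   # assumed skip-connection from E1 to G1; t is the CDC
        cond = y_onehot[:, :, None, None].expand(-1, -1, t.size(2), t.size(3))
        x_rec = torch.sigmoid(self.g2(torch.cat([t, cond], dim=1)))
        return x_rec, t
```

In training, the discriminator $D$ would consume $t$, and a reconstruction loss would be computed between `x_rec` and $x$.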
Category Disentangling Branch. To remove category-relevant features, we add a category disentangling branch by extending (Lample et al., 2017), which was originally designed for two-attribute disentangling via attribute flipping and thus cannot be directly used for our problem. Specifically, we iteratively train the discriminator $D$ to classify the correct category based on $T$ using the cross-entropy loss, and then update the auto-encoder to output a new $T$ that fools $D$, which is achieved by minimizing the predicted confidence of the correct category:

$\mathcal{L}_{fool}(x, y) = \mathrm{Softmax}(V(x))[y], \quad (1)$

where $V(x)$ is the output of the last fully connected layer of the discriminator for image $x$, and $y$ is the corresponding category. We normalize $V(x)$ using the Softmax function such that each item indicates the predicted probability of one category. We set the weight of $\mathcal{L}_{fool}$ to 0.0001.

Repulsive Loss. To guarantee that the conditional vector actually takes effect, we add a repulsive loss following (Yu et al., 2018; Wang et al., 2019b) to enforce that the discrepancy between images generated with the same $T$ but different conditional vectors is large enough:

$\mathcal{L}_{repul}(T) = \max(\epsilon - \|G_2(T, g(y)) - G_2(T, \mathbf{1} - g(y))\|, 0), \quad (2)$

where $g(y)$ is a function that represents $y$ as a one-hot vector, $\epsilon$ (e.g., 0.01) is a margin that guarantees reasonable changes, and the weight of $\mathcal{L}_{repul}$ is 0.001. (A sketch of both losses follows below.)
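The following is a minimal PyTorch sketch of the two losses. The batch-mean reduction and the helper names are our own assumptions; in training, $D$ is alternately fitted with a standard cross-entropy loss on $(T, y)$, while the auto-encoder minimizes its reconstruction loss plus $10^{-4}\,\mathcal{L}_{fool} + 10^{-3}\,\mathcal{L}_{repul}$, using the weights stated above.

```python
import torch
import torch.nn.functional as F

def fool_loss(t, y, discriminator):
    """L_fool (Eq. 1): the predicted confidence of the true category, which the
    auto-encoder minimizes so that the CDC carries no category information."""
    probs = F.softmax(discriminator(t), dim=1)        # Softmax(V(x))
    return probs[torch.arange(t.size(0)), y].mean()   # Softmax(V(x))[y]

def repulsive_loss(x_true_cond, x_flipped_cond, margin=0.01):
    """L_repul (Eq. 2): hinge loss on the distance between images decoded from
    the same CDC with the true one-hot condition g(y) vs. the flipped 1 - g(y)."""
    diff = (x_true_cond - x_flipped_cond).flatten(1).norm(dim=1)
    return F.relu(margin - diff).mean()
```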
For VGG, we choose thefeature maps outputted by the convolutional layer that is just before a maximum pooling layer andhave the same resolution as the CDC to apply our framework (see Fig. 3 for demonstration). In thefollowing, we will describe the key components.Infer Attention Maps. Instead of computing the attention maps from 3D tensors directly (Wanget al., 2017), we conduct two pooling operations to reduce the computational complexity as in (Wooet al., 2018). Specifically, given the CDC TTTextracted from x, we conduct an average pooling3Under review as a conference paper at ICLR 2021Figure 4: Qualitative results for demonstrating the function of each component in our framework:(a) input images; (b) inferred attention maps Ms from the CDCs; (c) the focused areas by thesecond residual group of ResNet-50; (d) the focused areas after applying channel-aware suppres-sion/amplification; (e) the focused areas after applying RU; (f) the focused areas by the last residualgroup; (g) the focused areas by the last residual group of baseline ResNet-50.and a maximum pooling along the channel axes, generating TTTsavgandTTTsmax. Then we infer theattention map M(TTT)2R1HW(will be abbreviated as M) by forwarding [TTTsavg;TTTsmax]to theconvolutional network L1(see Fig. 3):M(T) =L1([AvgPool (TTT);MaxPool (TTT)]) =L1([TTTsavg;TTTsmax]); (3)where [;]denotes concatenation.Channel Compatibility Module. SupposeFFF(before activation) is immediate feature maps out-putted by the classification network that have the same spatial resolution as M. SinceMmay notbe compatible with all the channels of FFF, therefore instead of applying Mto all the channels of FFFequally, we choose only a part of channels to apply Mvia computing a channel compatibility mea-sure between TTTand each channel of FFF. Specifically, we compute the compatibility similar as (Jetleyet al., 2018) by first projecting [TTTsavg;TTTsmax]andFFFinto the same high-dimensional feature space viausing the embedding networks L2andL3respectively, and then conducting dot product between theembedded features Me2R1HWandFFFe2RCHWafter squeezing the spatial dimensions,and finally normalizing them with a sigmoid function:Ci=Sigmoid (L2([TTTsavg;TTTsmax]);L3(FFF)i) =Sigmoid (hMe;FFFeii); (4)whereh;idenotes dot product after squeezing the spatial dimensions, and subscript iindicates thei-th channel. After that, channel-aware suppression could be realized by applying an element-wisemultiplication between each channel of FFFand the attention map M, weighted with the channelcompatibility C, resulting in an irrelevant-suppressed feature FFF0:FFF0i=FFFiMCi; (5)wheredenotes conducting expansion and then element-wise multiplication.Residual Unit. After suppression, the activation of task-irrelevant neurons will be decreased (e.g.,from 0.1 to 0.001), but leave activated (see the black rectangle in Fig. 4). Similarly, the activationof some task-relevant neurons will be relatively enlarged, but since the positive signal is weak, theiractivations are still very low (e.g., from 0.0001 to 0.001, see the red rectangle in Fig. 4). To resolvethese issues, we intentionally add a residual unit R, which is a skip connection as in ResNets (Heet al., 2015a), to adaptively learn to adjust FFF0withFFFas input (e.g., adjust an irrelevant neuron withvalue 0.001 to -0.001 and thus is inactivated by ReLU or adjust a relevant neuron with value 0.001 to0.1). Therefore, the final feature maps are ReLU (R(FFF)+FFF0), with suppressed neurons inactivated.Multi-layer Extension. 
Multi-layer Extension. Our framework can be applied to intermediate feature maps outputted by multiple layers with moderate additional computational cost. Specifically, for the feature maps outputted by another layer that have the same spatial resolution as $F$, $M^e$ and $M$ can be directly reused, while for feature maps with a different resolution, we can down/up-sample $M^e$ and $M$ to make them consistent with the resolution of the target feature maps, as sketched below.
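A minimal sketch of this resampling step; the bilinear interpolation mode is an assumption, as the text above only says down/up-sample:

```python
import torch.nn.functional as F

def resample_attention(m, me, target_hw):
    """Reuse the attention map M and the embedded CDC M^e at another layer by
    resampling both (shape (B, 1, H, W)) to that layer's spatial resolution."""
    m_r = F.interpolate(m, size=target_hw, mode='bilinear', align_corners=False)
    me_r = F.interpolate(me, size=target_hw, mode='bilinear', align_corners=False)
    return m_r, me_r
```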
We investigate the benefits brought by the context encoded with each of the above four modeling methods. For DM we choose WRN-16-10, while for GM and CGM the networks are simply adapted from our CDC Extraction Network by removing the conditional vector and/or the category disentangling branch. Comparison results on the CIFAR-10 and CIFAR-100 datasets are reported in Tab. 1. Both VGG13 and VGG16 obtain a certain amount of improvement by "attending" the context computed by generative models. In addition, the final results of attending the context in GM and CGM are very similar (e.g., 94.39% and 94.45% on CIFAR-10), since both of their models are relevant to the category, whether explicitly modeled or not. Our approach, which adopts the CDC, performs the best, validating that the CDC brings more guidance. Interestingly, the performance drops heavily when "attending" the context of a discriminative model, although the approach of (Zagoruyko & Komodakis, 2017), which enforces the two attention maps to be exactly the same, does work (see Tab. 3). The reason is probably that the information of classification boundaries alone is not suitable for the trainable attention mechanism.

Residual Unit (RU). To demonstrate the importance of RU, we train another VGG13/16 with the CDC-based framework but without RU. The results in Tab. 2 show that both networks obtain only a slight improvement without the help of RU, and the performance is far behind that of the full framework, validating that the attention maps inferred from the CDC can hardly be utilized without RU. In addition, we also investigate whether the improvement is brought by RU alone, by applying RU to the baseline networks without the attention mechanism. The limited improvements reported in Tab. 2 validate that RU does not work without the attention framework. Please refer to Fig. 4(d,e) and Fig. 6(d,e) in the Appendix for the qualitative effects brought by RU.

Table 2: Ablation studies on the CIFAR-10/100 datasets (Top-1 Acc, %). Ours-N applies the CDC to one layer with 16×16 resolution, Ours-H applies it to one layer with 8×8 resolution, and Ours applies it to both layers.

Architecture              CIFAR-10  CIFAR-100 | Architecture              CIFAR-10  CIFAR-100
VGG13                     94.18     74.72     | VGG16                     93.85     73.78
VGG13 + Ours w/o RU       94.24     74.98     | VGG16 + Ours w/o RU       94.04     74.21
VGG13 + Ours w/o CC       94.60     75.62     | VGG16 + Ours w/o CC       94.28     74.54
VGG13 + Ours w/o RL       94.41     75.37     | VGG16 + Ours w/o RL       94.19     74.36
VGG13 + Ours-N            94.40     74.48     | VGG16 + Ours-N            94.23     74.73
VGG13 + Ours-H            94.06     75.20     | VGG16 + Ours-H            93.95     74.11
VGG13 + Ours              94.71     75.81     | VGG16 + Ours              94.33     75.24
VGG13 + Ours after ReLU   94.51     75.33     | VGG16 + Ours after ReLU   94.12     74.71
VGG13 + RU                94.22     75.03     | VGG16 + RU                94.06     73.84

Table 3: Comparison results on CIFAR-10/100 (Top-1 Acc, %). The architecture in the bracket after AT denotes the corresponding teacher network.

Architecture                  CIFAR-10  CIFAR-100
VGG13                         94.18     74.72
VGG13 + AT (WRN-16-10)        94.43     75.58
VGG13 + Ours                  94.71     75.81
VGG16                         93.85     73.78
VGG16 + AT (WRN-16-10)        94.23     74.76
VGG16 + Ours                  94.33     75.24
WRN-16-10                     94.58     77.72
WRN-16-10 + AT (WRN-28-10)    94.86     78.77
WRN-16-10 + Ours              95.23     79.06
WRN-16-10 + SE                95.81     80.38
WRN-16-10 + SE + Ours         96.02     80.86
WRN-16-10 + CBAM              95.09     79.51
WRN-16-10 + CBAM + Ours       95.60     80.38
WRN-28-10                     95.18     79.15
Repulsive Loss (RL). We demonstrate the importance of RL by training another CDC Extraction Network without RL and then applying the resulting CDC to VGG13/16. The results in Tab. 2 show that the framework using the CDC extracted without the repulsive loss can still improve classification performance, but is consistently worse than the full version, validating the importance of the repulsive loss.

Channel Compatibility. To investigate the benefits brought by channel compatibility (CC), we compare the results with and without CC. The results in Tab. 2 show that, without channel compatibility, the models with our framework still outperform the baseline networks but are worse than their full versions, validating the usefulness of channel compatibility.

Multi-layer. We investigate the usefulness of applying the framework to multiple layers. The results in Tab. 2 show that applying the CDC only to the layer with a 16×16 (8×8) output is worse than applying it to both layers, validating the usefulness of the multi-layer extension. We have also tried applying the CDC to more layers, but the improvement is very limited compared to the increased overhead.
Although the accuracies of Squeeze-and-Excitation Networks (SE) (Hu et al.,2018) and Convolutional Block Attention Module (CBAM) (Woo et al., 2018) are slightly higherthan ours, their methods could be further improved (e.g., 0.5%/0.9% for CBAM) by combining ourframework, indicating that our framework is complementary to traditional attention-based methods.6Under review as a conference paper at ICLR 2021Acc(%)Architecture Top-1 Top-5ResNet-32 37.31 62.97ResNet-32 + AT (ResNet-56) 37.20 62.74ResNet-32 + Ours 40.37 66.06ResNet-32 + SE 37.41 63.02ResNet-32 + SE + Ours 40.11 65.82ResNet-32 + CBAM 37.82 63.35ResNet-32 + CBAM + Ours 40.22 65.88Acc(%)Architecture Top-1 Top-5ResNet-56 42.35 68.01ResNet-56 + AT (ResNet-110) 42.48 68.18ResNet-56 + Ours 44.43 70.18ResNet-56 + SE 42.53 68.29ResNet-56 + SE + Ours 45.08 70.57ResNet-56 + CBAM 43.18 68.92ResNet-56 + CBAM + Ours 44.55 70.06Acc(%)Architecture Top-1 Top-5ResNet-110 49.08 74.35ResNet-110 + AT (ResNet-164) 49.03 74.11ResNet-110 + Ours 51.36 75.93ResNet-110 + SE 49.18 74.32ResNet-110 + SE + Ours 50.29 75.17ResNet-110 + CBAM 49.43 74.55ResNet-110 + CBAM + Ours 50.91 75.72Table 4: Classification results on ImageNet32 32. Note that the architecture described in thebracket after AT denotes the corresponding teacher network.Figure 5: Row 1: input images; Row 2: the inferred attention maps Ms from the CDCs; Row 3:the focused areas by baseline ResNet-50; Row 4: the focused areas by ResNet-50 after applying theCDC-based framework.4.2 I MAGE NET3232 E XPERIMENTSTo demonstrate the superiority of our method, we perform extensive comparisons with the state-of-the-art methods on ImageNet32 32 (Chrabaszcz et al., 2017). Tab. 4 shows that all three ResNetsobtain substantial improvements (e.g., 2 3% on top-1 accuracy) after applying the CDC-basedframework, demonstrating that our approach could generalize well on the large-scale dataset withmore categories. In contrast, CBAM- and SE-based approaches improve the baseline networksslightly. Whereas, for the AT-based methods, we could not see any improvement, which probablydue to AT-based methods require attention maps to be with high resolutions. In addition, by combin-ing our framework with CBAM or SE, we could see another improvement, validating that utilizingthe CDC in an attention manner is complementary to traditional attention-based methods.Complexity. To make our framework scalable, it must provide an effective trade-off betweenmodel complexity and performance. Therefore, we use PARAMs (the number of parameters) andFLOPs (floating-point operations per second) to measure the complexity of the framework. Withoutconsidering the network for extracting the CDC, the PARAMs of our framework is 0.034M, whilethat of ResNet-32, ResNet-56, and ResNet-110 is 0.53M, 0.92M and 1.79M. For the FLOPS, ourframework is 3.61M while that of ResNet-32, ResNet-56, and ResNet-110 is 68.19M, 125.51M and254.46M. We do not report the complexity of the CDC Extraction Network, since it is pretrained andcould be reused unlimitedly for different models on the same dataset. Indeed, the CDC ExtractionNetwork requires 0.35M PARAMs and 7.21M FLOPs, which are acceptable, especially for its lowFLOPs. Overall, our CDC-based framework is scalable.4.3 F ULL IMAGE NETEXPERIMENTSWe conduct experiments on the full ImageNet (Deng et al., 2009) to validate our framework couldhandle high-resolution images. The results on the validation set reported in Tab. 
4.3 FULL IMAGENET EXPERIMENTS

We conduct experiments on the full ImageNet (Deng et al., 2009) to validate that our framework can handle high-resolution images. The results on the validation set, reported in Tab. 5, show that with the CDC-based framework, all backbone networks obtain large margins of improvement (e.g., a 2.4% top-1 accuracy increase for ResNet-50). Besides, our framework with all four ResNets as backbones outperforms the corresponding SE- and CBAM-based methods. Furthermore, we would like to point out that our approach with ResNet-50 performs better than ResNet-101. Overall, our framework performs well on large-scale datasets at high resolutions.

Table 5: Classification results on the full ImageNet dataset (Acc, %).

Architecture        Top-1  Top-5
ResNet-18           70.33  89.38
ResNet-18 + SE      70.48  89.60
ResNet-18 + CBAM    70.64  89.83
ResNet-18 + Ours    71.38  90.12
ResNet-34           73.21  91.32
ResNet-34 + SE      73.76  91.56
ResNet-34 + CBAM    73.89  91.73
ResNet-34 + Ours    74.97  92.11
ResNet-50           75.49  92.43
ResNet-50 + SE      76.38  92.89
ResNet-50 + CBAM    77.31  93.44
ResNet-50 + Ours    77.85  93.71
ResNet-101          76.52  93.04
ResNet-101 + SE     77.28  93.31
ResNet-101 + CBAM   78.39  94.25
ResNet-101 + Ours   78.63  94.42

Visualization. To show intuitively how the CDC bootstraps the backbone networks, we visualize the areas focused on by the last residual group of the original ResNet-50 and those after utilizing the CDC, following (Zagoruyko & Komodakis, 2017), together with the attention maps M inferred from the CDCs, in Fig. 5. ResNet-50 originally fails to locate the objects of interest, and all these locations are correctly identified with the guidance of M, validating the usefulness of the CDC. For more visualizations, please refer to Fig. 6 in the Appendix.

5 RELATED WORK

Background Modeling. Most studies in the computer vision community focus on modeling objects in the foreground, leaving background modeling (Bewley & Upcroft, 2017) less investigated. Only very recently have some pioneering works (Zhu et al., 2017; Xiao et al., 2020) demonstrated that the background contains many useful hints for improving image recognition, since it usually has a strong semantic correlation with the foreground objects. In contrast, our work models the "semantic" background that is not relevant to the foreground objects at all.

Visual Attention is a basic concept in psychology (Bundesen, 1990). In computer vision, current attention-based methods can be classified into two categories. Post-hoc attention methods analyze the attention mechanism mostly to reason about visual classification (Simonyan et al., 2014; Cao et al., 2015; Zhang et al., 2016; Zhou et al., 2016; Selvaraju et al., 2017). Beyond analysis, Zagoruyko & Komodakis (2017) defined gradient-based and activation-based attention maps, and improved student networks by mimicking the attention maps of a more powerful teacher network. Trainable attention methods instead incorporate attention extraction and task learning into an end-to-end architecture and are mostly applied to query-based tasks: by projecting the query instance into the same high-dimensional space as the target, relevant spatial regions of the query are highlighted to guide the desired inference (Jetley et al., 2018; Anderson et al., 2018; Chen et al., 2017; Bahdanau et al., 2014). In contrast to approaches adopting query-based attention, self-attention based methods attempt to learn the attention maps by themselves (Hu et al., 2018; Wang et al., 2017; Woo et al., 2018; Wang et al., 2019a; Bello et al., 2019; Parmar et al., 2019; Zhao et al., 2020).
Our approach contains both post-hoc and trainable attention modules. The extraction of the CDC can be considered a post-hoc attention method. The work most similar to ours is (Zagoruyko & Komodakis, 2017), which also adopts the "context" of a teacher network to guide the student network. The differences are two-fold. Firstly, they apply the context as a "hard" constraint, enforcing the two attention maps to be exactly the same, so that some valuable information in the student network is forced to be discarded; we instead apply it as soft guidance and can thus take advantage of both networks. Secondly, unlike their approach, which obtains the context from a deep discriminative model, the CDC is encoded in a category disentangled conditional generative model and can thus bring more guidance. "Attending" the CDC, in turn, is a trainable attention method. Compared with (Hu et al., 2018; Woo et al., 2018), which encourage feature activation in category-relevant regions, our mechanism works in the opposite way. Besides, we adopt the CDC as external knowledge for inferring the attention maps, rather than relying entirely on the backbone networks themselves. Last but not least, since our key idea is to utilize category-irrelevant features, it is complementary to traditional attention-based methods.

6 CONCLUSION

We have presented a novel Category Disentangled Context (CDC), a kind of category-irrelevant feature that captures the underlying property of the whole dataset. We demonstrate its usefulness by using it as a reference to guide image classification networks, with the attention mechanism as a bridge. Extensive experimental results validate that the CDC brings substantial improvements to various backbone networks and is superior to state-of-the-art methods. In the future, we plan to apply the CDC to more complex vision applications, e.g., generating better region proposals and making more accurate predictions on them in object detection. Overall, we believe these findings will help advance research on irrelevant features and the understanding of convolutional neural networks in general.
sz0uV8GCxxd
Paper review from AnonReviewer4
4: Ok but not good enough - rejection
Summary: The paper proposes an attention method for image classification that exploits category-irrelevant information. Specifically, the authors first pre-train a network to extract an attention map as a category-irrelevant cue. The paper then introduces a few designs (channel-wise feature operations and a residual unit) to obtain a sharper attention map. Finally, the refined attention map is multiplied with the feature maps to perform image classification. Experiments are conducted on the CIFAR and ImageNet benchmark datasets.

Pros: The idea of using category-irrelevant features or attention could be interesting for various computer vision tasks. Experiments show good results on ImageNet compared to other attention-based baselines.

Cons: The novelty and technical contributions of the proposed method are limited. For example, the CDC network for extracting category-irrelevant cues is mostly based on prior work (Lample et al., 2017) plus standard adversarial learning with a discriminator. In addition, the Channel Compatibility Module used to refine the attention map is similar to that in Jetley et al., 2018. Although the experimental results seem promising, the proposed framework reads more like an engineering pipeline. It is not clear whether the category-irrelevant information can be properly disentangled from the relevant information through the CDC network with a discriminator that tries to predict the category label: for example, to predict the correct object category, the model need not see the entire object, and it may even need the context information to guide the network. The authors should discuss further the motivation for using category-irrelevant cues to help image classification, and whether the CDC network can meet this need (e.g., the visualizations in Figures 4 and 5 of the paper do not show a clear improvement in the attention maps). On the CIFAR datasets (Table 3), the proposed method does not perform well compared to the SE and CBAM baselines. Since CIFAR images contain the object at the center, which is an easier case for localizing objects with an attention mechanism, the authors should discuss further why the proposed attention-based method does not work well there. Overall, the paper presents a framework that achieves good performance on image classification, but the main contributions are not clear, and there is a lack of novelty and of explanation of how the category-irrelevant cues can be successfully extracted to improve model training.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
B9UN92zQnRF
ML_Reproducibility_Challenge/2021/Fall
2021
Learn to Resolve Conversational Dependency: A Consistency Training Framework for Conversational Question Answering
["Anonymous"]
Scope of Reproducibility: In order to evaluate the accuracy of the claims made in the paper and the reusability of its code, we used the code released on GitHub and followed the instructions provided there. Due to severe hardware limitations, it was not possible to train the model and re-run the full pipeline on Google Colab. Methodology: Contrary to the hardware described in the article, Google Colab was used, with a GPU, 13 GB of RAM, and 80 GB of disk. The model evaluation on Google Colab with these resources takes approximately 18 minutes and 30 seconds, consuming 49 GB of disk and 4 GB of GPU memory. Results: In the evaluation performed with the paper's proposed RoBERTa model, an F1 score of 67.73891723166825 was obtained, which closely matches the accuracy reported in the paper itself. What was easy: The hardware requirements and the initial setup of the experiment were fully described in Section B (Hyperparameters) of the paper, which was very helpful for re-running the code. A description of all usable datasets was provided in Section 4.1 (Datasets). The documentation published on GitHub by the authors was comprehensive and practical, covering installation requirements, hardware, datasets, and how the model is trained and evaluated. What was difficult: The authors used a 24 GB GPU (RTX TITAN). Running under such conditions is not possible with the free resources provided by Google Colab. Because of this limitation, we tried changing the batch size, set to 12 by default in the article, to 2, but we still ran out of RAM on Colab. Note that when reducing the batch size we also changed the number of epochs, but problems remained. Communication with original authors: Given the comprehensive documentation provided on GitHub and in the article itself, there was no need to interact with the authors. Ways to contact the authors, including email, their GitHub accounts, and their ResearchGate accounts, were nevertheless available.
["authors", "google colab", "model", "article", "gpu", "learn", "conversational dependency", "consistency training framework", "conversational question", "process"]
Reproducibility report of "Learn to Resolve Conversational Dependency: A Consistency Training Framework for Conversational Question Answering"
Anonymous Author(s)

Reproducibility Summary

Scope of Reproducibility: To evaluate the accuracy of the claims made in the paper and the reusability of its code, we used the code released on GitHub and followed the instructions provided there. Due to severe hardware limitations, it was not possible to retrain the model and re-run the code using Google Colab.

Methodology: Contrary to the hardware described in the article, Google Colab was used, with a GPU, 13GB of RAM, and an 80GB disk. The model evaluation process on Google Colab with these specifications took approximately 18 minutes and 30 seconds, and the disk space and GPU memory consumed were 49GB and 4GB, respectively.

Results: In the evaluation performed with the paper's proposed RoBERTa model, an F1 score of 67.74 was obtained, which closely matches the score reported in the paper itself.

What was easy: The hardware requirements and the initial setup of the experiment were fully described in the paper in Section B (Hyperparameters), which was very helpful in re-executing the code. A description of all usable datasets was also provided in Section 4.1 (Datasets). The documentation published on GitHub by the authors was almost comprehensive and practical, including installation requirements, hardware, datasets, how the model is trained, and how the model is evaluated.

What was difficult: The authors used a 24GB GPU (RTX TITAN). Execution under such conditions is not possible with the free resources provided by Google Colab. Because of this limitation, we tried to change the batch size, which was set to 12 by default in the article, to 2, but Colab still ran out of RAM. It should be noted that along with reducing the batch size we also changed the number of epochs, but the problems persisted.

Communication with original authors: Thanks to the comprehensive documentation provided in the GitHub repository as well as in the text of the article, there was no need to interact with the authors. Of course, the GitHub account and the authors' ResearchGate accounts offered ways to contact them, including email.

Figure 1: Proposed framework.

1 Introduction
Conversational Question Answering (CQA) is the design of an intelligent conversation system that can not only engage in an interactive conversation at the human level, but also go beyond it and answer questions on a variety of topics; this is one of the prominent goals in the field of artificial intelligence today. Conversational AI is an integral part of natural user interfaces. The main idea of CQA is to ask the machine to answer a question based on a provided passage. Two approaches to solving these problems have been proposed: the end-to-end approach and the pipeline approach. This article uses the pipeline approach. In this paper, the pipeline method is used to train a question-answering model that is able to answer conversational questions. RoBERTa was trained using the ExCorD framework.
This model was trained once and, according to the authors, can be used to evaluate common conversational Q&A models. To assess the feasibility and potential of re-implementing the code, and to examine the method presented in the article, the maximum free hardware resources of Google Colab were used.

2 Scope of reproducibility
In this section, the claims made by the authors of the article are examined. In the pipeline method, if a T5 model is used to rewrite the original, ambiguous questions into a set of self-contained questions, questions similar to the original human-rewritten questions can be generated. The proposed method works better than the two existing approaches, pipeline and end-to-end. The F1 score obtained during the code re-implementation process is evidence that the proposed model performed better than the other methods. The RoBERTa model used in this paper performs better than the BERT and BERT+HAE methods in conversational Q&A. This has been investigated on both the QuAC and CANARD datasets, and the results show that the F1 score on the QuAC dataset is higher than on the CANARD dataset. An important point is that if the questions are answered by humans instead of the model, a high F1 score is still not achieved; hence the HEQ evaluation criterion is used, which measures the performance of the model relative to human responses to a question.

3 Methodology
To evaluate the accuracy of the claims made in the paper and the reusability of its code, we used the code released on GitHub and followed the instructions provided there. Due to severe hardware limitations, it was not possible to retrain the model and re-run the code using Google Colab.

3.1 Model descriptions
For the question rewriting (QR) component, a trained T5 model is used to generate the word sequence. This model receives a question together with a set of sentences as history, and in return produces a self-contained question. The CANARD dataset has been used to train and evaluate this model. For the QA component, the RoBERTa model is used. The paper shows that its accuracy is much higher than that of other models proposed for this task, such as BERT and its fine-tuned version for question answering, i.e., BERT+HAE. BERT is a textual word representation model that has been pre-trained on large corpora. Although BERT is not designed for CQA, it works well on CQA datasets. It receives the text, the current question, and the previous conversation history as input. BERT+HAE is a BERT-based QA model with a CQA-specific module: a history answer embedding (HAE) is added to BERT's word embeddings. Using HAE, the answer information from previous questions is encoded. RoBERTa is an enhanced BERT that uses improved pre-training techniques on larger corpora to obtain strongly optimized weights. In their experiments, the authors found that RoBERTa performs well in CQA and achieves performance comparable to the previous SOTA model, HAM (Qu et al., 2019b), on QuAC. Therefore, they used the RoBERTa model as their base model because of its simplicity and effectiveness.
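For illustration, here is a minimal sketch of the question-rewriting step described in Section 3.1, using the transformers library that the report says was used. The checkpoint path and the history/question separator are assumptions made for the sketch; the actual fine-tuned T5 checkpoint and input format are those distributed with the authors' repository.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Hypothetical path: stands in for the CANARD-fine-tuned QR checkpoint
# released by the authors; not a real model identifier.
MODEL_PATH = "path/to/canard-finetuned-t5"

tokenizer = T5Tokenizer.from_pretrained(MODEL_PATH)
model = T5ForConditionalGeneration.from_pretrained(MODEL_PATH)

def rewrite_question(history, question, max_len=64):
    """Turn a context-dependent question into a self-contained one.

    `history` is a list of previous turns; the ' ||| ' separator is an
    assumption, not necessarily what the original training data used.
    """
    source = " ||| ".join(history + [question])
    inputs = tokenizer(source, return_tensors="pt",
                       truncation=True, max_length=512)
    output_ids = model.generate(**inputs, max_length=max_len, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

history = ["Where was Marie Curie born?", "She was born in Warsaw."]
print(rewrite_question(history, "When did she move to Paris?"))
```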
3.2 Datasets
1- QuAC (Choi et al., 2018) contains 100,000 QA pairs from information-seeking conversations, where a student asks questions based on a topic with provided background information, and a teacher provides answers in the form of text spans from Wikipedia documents. For validation, they used the subset of the original QuAC training set whose questions have matching rewrites in the CANARD set. The remaining data is used for training.
2- CANARD (Elgohary et al., 2019) consists of 31K, 3K, and 5K QA pairs for the training, validation, and test sets, respectively. Questions in CANARD are created by rewriting a subset of the original questions in QuAC. They used the training and development sets to train and validate QR models, and the test set to evaluate QA models.
The QuAC dataset can be downloaded from the link provided on GitHub. This dataset contains three separate files: train.json, valid.json, and dev.json. The train.json file contains the human-rewritten questions, which serve essentially the same purpose as the output of the QR rewriting model. If the model is retrained, the valid.json file is used to determine the optimal combination of hyperparameters and for model validation.

3.3 Hyperparameters
This section describes the hyperparameters that were used. To obtain the QR and QA models, the transformers library and the T5 and RoBERTa models were used. The hardware used was the free public Google Colab resources, with 12GB of memory. For the QA model, the AdamW optimizer with a 3e-5 learning rate is used. The maximum length of the input sequence is 512, and the maximum length of the answer is 128. A batch size of 12 is used to train RoBERTa; to reduce memory consumption we lowered this value to one, but the problem was still not solved, and the lack of memory prevented retraining RoBERTa. Since there are a total of three loss functions, the effect of each on the final loss function has to be set with coefficients. For this purpose, using the values mentioned in the article, we set the coefficient of the QA loss term to 0.5 and the coefficient of the consistency loss term to 0.6.

3.4 Experimental setup and code
After installing the requirements and the relevant packages and downloading the dataset, the RoBERTa model trained by the authors was used, downloaded and unzipped from the provided address. Using the F1 evaluation criterion, and setting the number of threads to 20 and the batch size to 100, the evaluation was performed on the dev.json file.

3.5 Computational requirements
Contrary to the hardware described in the article, Google Colab was used, with a GPU, 13GB of RAM, and an 80GB disk. The model evaluation process on Google Colab with these specifications took approximately 18 minutes and 30 seconds, and the disk space and GPU memory consumed were 49GB and 4GB, respectively.

4 Results
The F1 score obtained during the code re-implementation process is evidence that the proposed model performed better than the other methods.

4.1 Results reproducing original paper
In the evaluation performed with the paper's proposed RoBERTa model, an F1 score of 67.74 was obtained, which closely matches the score reported in the paper itself. The original article also uses the HEQ-Q and HEQ-D criteria, which aim to evaluate the performance of the model relative to humans; here, due to the time-consuming nature of these evaluations, we were not able to carry them out, because for a fairly large number of questions the answers must be produced manually and judged under a higher level of supervision than the model's answers.

4.2 Results beyond original paper
In the article, for an accurate comparison between the previous methods (end-to-end and pipeline), all three ready-made models, BERT, BERT+HAE, and RoBERTa, were used. It was shown that the RoBERTa model gives better results for all three methods. Also, for each of these models, the proposed method was more accurate than the other methods; the F1 improvement was 1.2 on the QuAC dataset and 2.3 on the CANARD dataset.

5 Discussion
Thanks to the comprehensive documentation provided, the article's code was re-executed and evaluated in the Colab environment, and results similar to those presented in the article were obtained. Due to hardware and GPU limitations, it was not possible to retrain the model on the other datasets mentioned in the article. Additional attempts and experiments were made in this regard, including changing the batch size and the number of epochs, but to no avail.

5.1 What was easy
The hardware requirements and the initial setup of the experiment were fully described in the article in Section B (Hyperparameters), which was very helpful in re-executing the code. A description of all usable datasets was also provided in Section 4.1 (Datasets). For the implementation, we followed the documentation on GitHub, and only the QuAC dataset was used. The documentation published on GitHub by the authors was almost comprehensive and practical. It covers installation requirements, hardware, the datasets and their download addresses, how the model is trained (with all parameters), how the model is evaluated (with all parameters), as well as the evaluation criterion reported in the article (F1).

5.2 What was difficult
There were many hardware limitations on retraining. The authors used a 24GB GPU (RTX TITAN). Execution under such conditions is not possible with the free resources provided by Google Colab. Because of this limitation, we tried to change the batch size, which was set to 12 by default in the article, to 2, but Colab still ran out of RAM. It should be noted that along with reducing the batch size we also changed the number of epochs, but the problems persisted, because at the very beginning, before processing starts, at least 24GB of RAM is required.

5.3 Communication with original authors
Thanks to the comprehensive documentation provided in the GitHub repository as well as in the text of the article, there was no need to interact with the authors. Of course, the GitHub account and the authors' ResearchGate accounts offered ways to contact them, including email.
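As a reading aid, the following is a minimal sketch of how the three loss terms mentioned in Section 3.3 could be combined with the quoted coefficients (0.5 for the QA term, 0.6 for the consistency term). The report does not spell out the exact form of the terms, so the KL-divergence consistency term below is an assumption rather than the authors' implementation.

```python
import torch.nn.functional as F

ALPHA_QA = 0.5      # coefficient of the QA loss term (Section 3.3)
LAMBDA_CONS = 0.6   # coefficient of the consistency loss term (Section 3.3)

def total_loss(loss_orig, loss_rewritten, start_logits_orig, start_logits_rw):
    """Combine the three loss terms.

    The consistency term is sketched here as a KL divergence between the
    answer-start distributions for the original and rewritten questions;
    this concrete form is an assumption.
    """
    consistency = F.kl_div(
        F.log_softmax(start_logits_orig, dim=-1),
        F.softmax(start_logits_rw, dim=-1),
        reduction="batchmean",
    )
    return loss_orig + ALPHA_QA * loss_rewritten + LAMBDA_CONS * consistency
```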
r8QlGfvc8ec
Short but detailed evaluation
7: Good paper, accept
The authors did what was expected. They reproduced the results given their hardware and matched the reported numbers. They also tried different parameters and obtained similar results, which shows robustness. There was one case they weren't able to reproduce.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Learn to Resolve Conversational Dependency: A Consistency Training Framework for Conversational Question Answering ### Paper Abstract Scope of Reproducibility: To evaluate the accuracy of the claims made in the paper and the reusability of its code, we used the code released on GitHub and followed the instructions provided there. Due to severe hardware limitations, it was not possible to retrain the model and re-run the code using Google Colab. Methodology: Contrary to the hardware described in the article, Google Colab was used, with a GPU, 13GB of RAM, and an 80GB disk. The model evaluation process on Google Colab with these specifications took approximately 18 minutes and 30 seconds, and the disk space and GPU memory consumed were 49GB and 4GB, respectively. Results: In the evaluation performed with the paper's proposed RoBERTa model, an F1 score of 67.74 was obtained, which closely matches the score reported in the paper itself. What was easy: The hardware requirements and the initial setup of the experiment were fully described in the paper in Section B (Hyperparameters), which was very helpful in re-executing the code. A description of all usable datasets was also provided in Section 4.1 (Datasets). The documentation published on GitHub by the authors was almost comprehensive and practical, including installation requirements, hardware, datasets, how the model is trained, and how the model is evaluated. What was difficult: The authors used a 24GB GPU (RTX TITAN). Execution under such conditions is not possible with the free resources provided by Google Colab. Because of this limitation, we tried to change the batch size, which was set to 12 by default in the article, to 2, but Colab still ran out of RAM. It should be noted that along with reducing the batch size we also changed the number of epochs, but the problems persisted. Communication with original authors: Thanks to the comprehensive documentation provided in the GitHub repository as well as in the text of the article, there was no need to interact with the authors. Of course, the GitHub account and the authors' ResearchGate accounts offered ways to contact them, including email. ### Paper Keywords ["authors", "google colab", "model", "article", "gpu", "learn", "conversational dependency", "consistency training framework", "conversational question", "process"] ### Paper Content Reproducibility report of "Learn to Resolve Conversational Dependency: A Consistency Training Framework for Conversational Question Answering". Anonymous Author(s). Reproducibility Summary. Scope of Reproducibility: To evaluate the accuracy of the claims made in the paper and the reusability of its code, we used the code released on GitHub and followed the instructions provided there. Due to severe hardware limitations, it was not possible to retrain the model and re-run the code using Google Colab. Methodology: Contrary to the hardware described in the article, Google Colab was used, with a GPU, 13GB of RAM, and an 80GB disk. The model evaluation process on Google Colab with these specifications took approximately 18 minutes and 30 seconds, and the disk space and GPU memory consumed were 49GB and 4GB, respectively. Results: In the evaluation performed with the paper's proposed RoBERTa model, an F1 score of 67.74 was obtained, which closely matches the score reported in the paper itself. What was easy: The hardware requirements and the initial setup of the experiment were fully described in the paper in Section B (Hyperparameters), which was very helpful in re-executing the code. A description of all usable datasets was also provided in Section 4.1 (Datasets). The documentation published on GitHub by the authors was almost comprehensive and practical, including installation requirements, hardware, datasets, how the model is trained, and how the model is evaluated. What was difficult: The authors used a 24GB GPU (RTX TITAN). Execution under such conditions is not possible with the free resources provided by Google Colab. Because of this limitation, we tried to change the batch size, which was set to 12 by default in the article, to 2, but Colab still ran out of RAM. It should be noted that along with reducing the batch size we also changed the number of epochs, but the problems persisted. Communication with original authors: Thanks to the comprehensive documentation provided in the GitHub repository as well as in the text of the article, there was no need to interact with the authors. Of course, the GitHub account and the authors' ResearchGate accounts offered ways to contact them, including email. Figure 1: Proposed framework. 1 Introduction. Conversational Question Answering (CQA) is the design of an intelligent conversation system that can not only engage in an interactive conversation at the human level, but also go beyond it and answer questions on a variety of topics; this is one of the prominent goals in the field of artificial intelligence today. Conversational AI is an integral part of natural user interfaces. The main idea of CQA is to ask the machine to answer a question based on a provided passage. Two approaches to solving these problems have been proposed: the end-to-end approach and the pipeline approach. This article uses the pipeline approach. In this paper, the pipeline method is used to train a question-answering model that is able to answer conversational questions. RoBERTa was trained using the ExCorD framework. This model was trained once and, according to the authors, can be used to evaluate common conversational Q&A models. To assess the feasibility and potential of re-implementing the code, and to examine the method presented in the article, the maximum free hardware resources of Google Colab were used. 2 Scope of reproducibility. In this section, the claims made by the authors of the article are examined. In the pipeline method, if a T5 model is used to rewrite the original, ambiguous questions into a set of self-contained questions, questions similar to the original human-rewritten questions can be generated. The proposed method works better than the two existing approaches, pipeline and end-to-end. The F1 score obtained during the code re-implementation process is evidence that the proposed model performed better than the other methods. The RoBERTa model used in this paper performs better than the BERT and BERT+HAE methods in conversational Q&A. This has been investigated on both the QuAC and CANARD datasets, and the results show that the F1 score on the QuAC dataset is higher than on the CANARD dataset. An important point is that if the questions are answered by humans instead of the model, a high F1 score is still not achieved; hence the HEQ evaluation criterion is used, which measures the performance of the model relative to human responses to a question. 3 Methodology. To evaluate the accuracy of the claims made in the paper and the reusability of its code, we used the code released on GitHub and followed the instructions provided there. Due to severe hardware limitations, it was not possible to retrain the model and re-run the code using Google Colab. 3.1 Model descriptions. For the question rewriting (QR) component, a trained T5 model is used to generate the word sequence. This model receives a question together with a set of sentences as history, and in return produces a self-contained question. The CANARD dataset has been used to train and evaluate this model. For the QA component, the RoBERTa model is used. The paper shows that its accuracy is much higher than that of other models proposed for this task, such as BERT and its fine-tuned version for question answering, i.e., BERT+HAE. BERT is a textual word representation model that has been pre-trained on large corpora. Although BERT is not designed for CQA, it works well on CQA datasets. It receives the text, the current question, and the previous conversation history as input. BERT+HAE is a BERT-based QA model with a CQA-specific module: a history answer embedding (HAE) is added to BERT's word embeddings. Using HAE, the answer information from previous questions is encoded. RoBERTa is an enhanced BERT that uses improved pre-training techniques on larger corpora to obtain strongly optimized weights. In their experiments, the authors found that RoBERTa performs well in CQA and achieves performance comparable to the previous SOTA model, HAM (Qu et al., 2019b), on QuAC. Therefore, they used the RoBERTa model as their base model because of its simplicity and effectiveness. 3.2 Datasets. 1- QuAC (Choi et al., 2018) contains 100,000 QA pairs from information-seeking conversations, where a student asks questions based on a topic with provided background information, and a teacher provides answers in the form of text spans from Wikipedia documents. For validation, they used the subset of the original QuAC training set whose questions have matching rewrites in the CANARD set. The remaining data is used for training. 2- CANARD (Elgohary et al., 2019) consists of 31K, 3K, and 5K QA pairs for the training, validation, and test sets, respectively. Questions in CANARD are created by rewriting a subset of the original questions in QuAC. They used the training and development sets to train and validate QR models, and the test set to evaluate QA models. The QuAC dataset can be downloaded from the link provided on GitHub. This dataset contains three separate files: train.json, valid.json, and dev.json. The train.json file contains the human-rewritten questions, which serve essentially the same purpose as the output of the QR rewriting model. If the model is retrained, the valid.json file is used to determine the optimal combination of hyperparameters and for model validation. 3.3 Hyperparameters. This section describes the hyperparameters that were used. To obtain the QR and QA models, the transformers library and the T5 and RoBERTa models were used. The hardware used was the free public Google Colab resources, with 12GB of memory. For the QA model, the AdamW optimizer with a 3e-5 learning rate is used. The maximum length of the input sequence is 512, and the maximum length of the answer is 128. A batch size of 12 is used to train RoBERTa; to reduce memory consumption we lowered this value to one, but the problem was still not solved, and the lack of memory prevented retraining RoBERTa. Since there are a total of three loss functions, the effect of each on the final loss function has to be set with coefficients. For this purpose, using the values mentioned in the article, we set the coefficient of the QA loss term to 0.5 and the coefficient of the consistency loss term to 0.6. 3.4 Experimental setup and code. After installing the requirements and the relevant packages and downloading the dataset, the RoBERTa model trained by the authors was used, downloaded and unzipped from the provided address. Using the F1 evaluation criterion, and setting the number of threads to 20 and the batch size to 100, the evaluation was performed on the dev.json file. 3.5 Computational requirements. Contrary to the hardware described in the article, Google Colab was used, with a GPU, 13GB of RAM, and an 80GB disk. The model evaluation process on Google Colab with these specifications took approximately 18 minutes and 30 seconds, and the disk space and GPU memory consumed were 49GB and 4GB, respectively. 4 Results. The F1 score obtained during the code re-implementation process is evidence that the proposed model performed better than the other methods. 4.1 Results reproducing original paper. In the evaluation performed with the paper's proposed RoBERTa model, an F1 score of 67.74 was obtained, which closely matches the score reported in the paper itself. The original article also uses the HEQ-Q and HEQ-D criteria, which aim to evaluate the performance of the model relative to humans; here, due to the time-consuming nature of these evaluations, we were not able to carry them out, because for a fairly large number of questions the answers must be produced manually and judged under a higher level of supervision than the model's answers. 4.2 Results beyond original paper. In the article, for an accurate comparison between the previous methods (end-to-end and pipeline), all three ready-made models, BERT, BERT+HAE, and RoBERTa, were used. It was shown that the RoBERTa model gives better results for all three methods. Also, for each of these models, the proposed method was more accurate than the other methods; the F1 improvement was 1.2 on the QuAC dataset and 2.3 on the CANARD dataset. 5 Discussion. Thanks to the comprehensive documentation provided, the article's code was re-executed and evaluated in the Colab environment, and results similar to those presented in the article were obtained. Due to hardware and GPU limitations, it was not possible to retrain the model on the other datasets mentioned in the article. Additional attempts and experiments were made in this regard, including changing the batch size and the number of epochs, but to no avail. 5.1 What was easy. The hardware requirements and the initial setup of the experiment were fully described in the article in Section B (Hyperparameters), which was very helpful in re-executing the code. A description of all usable datasets was also provided in Section 4.1 (Datasets). For the implementation, we followed the documentation on GitHub, and only the QuAC dataset was used. The documentation published on GitHub by the authors was almost comprehensive and practical. It covers installation requirements, hardware, the datasets and their download addresses, how the model is trained (with all parameters), how the model is evaluated (with all parameters), as well as the evaluation criterion reported in the article (F1). 5.2 What was difficult. There were many hardware limitations on retraining. The authors used a 24GB GPU (RTX TITAN). Execution under such conditions is not possible with the free resources provided by Google Colab. Because of this limitation, we tried to change the batch size, which was set to 12 by default in the article, to 2, but Colab still ran out of RAM. It should be noted that along with reducing the batch size we also changed the number of epochs, but the problems persisted, because at the very beginning, before processing starts, at least 24GB of RAM is required. 5.3 Communication with original authors. Thanks to the comprehensive documentation provided in the GitHub repository as well as in the text of the article, there was no need to interact with the authors. Of course, the GitHub account and the authors' ResearchGate accounts offered ways to contact them, including email.<|im_end|> <|im_start|>assistant ### Review Title Short but detailed evaluation ### Review Text The authors did what was expected. They reproduced the results given their hardware and matched the reported numbers. They also tried different parameters and obtained similar results, which shows robustness. There was one case they weren't able to reproduce. ### Review Rating 7: Good paper, accept ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
CXQpLhjrTwJ
NeurIPS.cc/2021/Workshop/SVRHM
2021
Signal Strength and Noise Drive Feature Preference in CNN Image Classifiers
["Max Wolff", "Stuart Wolff"]
Feature preference in Convolutional Neural Network (CNN) image classifiers is integral to their decision making process, and while the topic has been well studied, it is still not understood at a fundamental level. We test a range of task relevant feature attributes (including shape, texture, and color) with varying degrees of signal and noise in highly controlled CNN image classification experiments using synthetic datasets to determine feature preferences. We find that CNNs will prefer features with stronger signal strength and lower noise irrespective of whether the feature is texture, shape, or color. This provides guidance for a predictive model for task relevant feature preferences, demonstrates pathways for bias in machine models that can be avoided with careful controls on experimental setup, and suggests that comparisons between how humans and machines prefer task relevant features in vision classification tasks should be revisited.
["shape", "texture", "color", "signal strength", "cnn image classifiers", "preference", "convolutional neural network", "cnn"]
Signal Strength and Noise Drive Feature Preference in CNN Image Classifiers
Max Wolff (Wesleyan University, mswolff@wesleyan.edu) and Stuart Wolff (s.wolff1621@gmail.com); equal contribution.
3rd Workshop on Shared Visual Representations in Human and Machine Intelligence (SVRHM 2021) of the Neural Information Processing Systems (NeurIPS) conference, Virtual.

Abstract: Feature preference in Convolutional Neural Network (CNN) image classifiers is integral to their decision making process, and while the topic has been well studied, it is still not understood at a fundamental level. We test a range of task relevant feature attributes (including shape, texture, and color) with varying degrees of signal and noise in highly controlled CNN image classification experiments using synthetic datasets to determine feature preferences. We find that CNNs will prefer features with stronger signal strength and lower noise irrespective of whether the feature is texture, shape, or color. This provides guidance for a predictive model for task relevant feature preferences, demonstrates pathways for bias in machine models that can be avoided with careful controls on experimental setup, and suggests that comparisons between how humans and machines prefer task relevant features in vision classification tasks should be revisited.

1 Introduction
Deep neural networks (DNNs) can be used for a wide range of tasks, yet we do not yet have a fundamental understanding of how DNNs actually perform many of these tasks. In this paper we focus on image classification and seek to explain why image classifiers select certain features of the input space over others. Feature preference in CNNs has been explored in prior research, and the results often suggest that machines classify images very differently from humans. Adversarial example research has shown that CNN classifiers can be easily fooled by small and imperceptible (at least to humans) manipulations of inputs. Ilyas et al. (2019) suggest that CNN classifiers key off of widespread non-robust, brittle features that are present in the dataset but imperceptible to humans, leading to a misalignment with human expectation. Jacobsen et al. (2019) suggest that one type of adversarial vulnerability is a result of narrow learning, caused by an overreliance on a few highly predictive features in the models' decisions, rendering the models excessively invariant. They suggest this is a result of cross-entropy maximizing the bound on the mutual information between the labels and the features. However, they do not offer an explanation for why models lock in on some highly predictive features and ignore others. Hendrycks & Dietterich (2019) developed a benchmark to test the robustness of CNNs to perturbations and corruptions. Rusak et al. (2020) pointed out that the human visual system is generally robust to a wide range of image noise, but that machine models degrade strongly under various types of unseen corruptions.

Bias in machine models is a very important topic that is being actively investigated. Geirhos et al. (2019) designed a set of cue conflict experiments to compare how machines and humans classify ImageNet objects. When ImageNet-trained CNNs were fed images with conflicting shape and texture features, the results showed that while humans preferred to classify these cue conflicts according to shape, the machines preferred to classify the images according to texture, which was described as texture bias. Subsequent research by Hermann et al. (2019) showed that by augmenting the training data of an ImageNet CNN, they were able to increase shape bias, and concluded that texture bias is not an inductive bias of the underlying model. Another machine bias, simplicity bias, was identified by Shah et al. (2020), who described it as the tendency for neural network classifiers trained with gradient descent to form the "simple" decision boundary before a more robust "complex" one. They suggest that simplicity bias can limit the robustness and generalization of machine models. Geirhos et al. (2020) outline various types of DNN biases and unintended failure examples, which they describe as shortcut learning. This occurs when solutions are found to tasks that are not the result of learning the intended human-like solution. Hermann & Lampinen (2020) used synthetic image datasets and found that the "easiness" and "predictivity" (how often the feature predicts a class) of a feature were positively correlated with a CNN's preference for that feature.

Most of the previous work on feature preference in CNNs has shown that machine models will prefer "easy" or "simple" features over more "complex" or "difficult" features, and that this can lead to errors, biases, and misalignment between machine and human vision. In this work, we present a foundation for what actually makes task relevant features "harder" or "easier" for CNNs to identify (and ultimately use) in classification tasks.

Figure 1: Example features (contained in a 64x64 box) used in the pairs matrix experiments: (1) blue circle (2) red circle (3) square (4) triangle (5) plus (6) green circle (7) yellow circle (8) banded (9) blocky (10) wavy.
Figure 2: Top row: cue conflict examples from Pairs Matrix 1. Bottom row: cue conflict examples from Pairs Matrix 2.

Contributions
• CNNs will prefer task relevant features that are represented with larger signal (a larger number of pixels) over task relevant features that are represented with smaller signal (a smaller number of pixels).
• CNNs will prefer task relevant features that are represented with lower noise. We identify several feature attributes that increase noise and therefore lower preference, including deviation, overlap, and predictivity.
• CNNs show no strong preference between color, shape, or texture (feature equivalency) when signal and noise are carefully controlled.

2 Methods
To understand what features will be preferred by a CNN image classifier in a controlled environment, we start with ten basic, synthetic features, which can be seen in Figure 1. There are 3 different "shape" features, 3 different "texture" features, and 4 different "color" features. From these ten features, we create 45 different classes, each representing a different combination of two of the ten features in the same image, against a black background. The 45 combinations are then separated into nine different "pairs matrix datasets" with five classes each, where each feature appears in each dataset exactly once. We then train a ResNet-18 (He et al., 2015) on each set using a modified version of the torchvision ImageNet training script.

Figure 3: We designed a pairs matrix cue conflict classification experiment to test feature preference. In one of the pairs matrix experiments, a CNN was trained to classify the five classes of feature combinations in the top row. If the CNN chose "Class 1" rather than "Class 2" in the image shown in the bottom row, then the "plus" feature is preferred to the "blue circle" feature. As seen in Section 3, sufficiently increasing the size of the "blue circle" feature can shift the CNN's preference towards the "blue circle" feature and away from the "plus" feature.
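The paper does not say how the 45 pairs are grouped into the nine datasets, only that each of the ten features appears exactly once in each dataset. One construction that satisfies this constraint is a round-robin 1-factorization of the complete graph on the ten features; the sketch below is therefore an illustration of the stated property, not the authors' code.

```python
def pairs_matrix_datasets(features):
    """Partition all C(10, 2) = 45 feature pairs into 9 datasets of 5
    classes each, with every feature appearing exactly once per dataset
    (round-robin scheduling, i.e. a 1-factorization of K10)."""
    feats = list(features)
    n = len(feats)                    # must be even; here n = 10
    fixed, rest = feats[0], feats[1:]
    datasets = []
    for r in range(n - 1):            # 9 rounds -> 9 pairs-matrix datasets
        order = rest[r:] + rest[:r]   # rotate the non-fixed features
        pairs = [(fixed, order[0])]
        pairs += [(order[1 + i], order[-1 - i]) for i in range((n - 2) // 2)]
        datasets.append(pairs)
    return datasets

ten = ["blue circle", "red circle", "square", "triangle", "plus",
       "green circle", "yellow circle", "banded", "blocky", "wavy"]
nine = pairs_matrix_datasets(ten)     # 9 datasets x 5 two-feature classes
```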
After training, the classifiers were tested on cue conflict images. Specifically, we measure the trained classifiers' responses to examples of all of the 45 combinations of features, regardless of whether the classifier was trained on those combinations (see Figure 3). Then, we recorded the number of times a feature's class was predicted by the classifier, and divided it by the total number of times the feature appeared in the cue conflict test set. The results were then aggregated across all classifiers and datasets to generate a feature preference ranking. The more a feature's class was predicted on cue conflict images by a classifier, the more that feature was generally preferred. By manipulating the qualities of the original ten features, we were able to quantitatively measure the effect that these manipulations had on how much a feature was preferred by a classifier.

We render 300 images per class for training, 100 images per class for validation, 100 images per class for testing, and 100 images per feature combination during cue conflict testing. We create one dataset per set, and average preference results across five training runs. All models are trained for 90 epochs with SGD using learning rate 0.1, which gets decayed by 0.1 at epochs 30 and 60, and with weight decay 0.0001. Images are normalized by ImageNet per-channel means and standard deviations. Features in training images are placed within a 192x192 box, padded with 32 pixels on each side, and the 256x256 result is randomly cropped into a 224x224 image. This procedure is used for the experiments detailed in Section 3.
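The preference measure just described reduces to a short computation. This sketch assumes the cue-conflict predictions have already been collected; the data-structure names are illustrative.

```python
from collections import defaultdict

def feature_preferences(predictions, cue_conflicts, class_of_feature):
    """Per-feature preference: how often a feature's class was predicted,
    divided by how often the feature appeared in the cue conflict set.

    predictions[i]   : class predicted by the trained CNN for image i
    cue_conflicts[i] : the two features rendered in image i
    class_of_feature : maps a feature to its class in the current dataset
    """
    wins, appearances = defaultdict(int), defaultdict(int)
    for pred, (feat_a, feat_b) in zip(predictions, cue_conflicts):
        for feat in (feat_a, feat_b):
            appearances[feat] += 1
            if pred == class_of_feature[feat]:
                wins[feat] += 1
    return {f: wins[f] / appearances[f] for f in appearances}
```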
3 Factors That Influence Feature Preference
In this section we present the results from our pairs matrix experiments and explore the factors that either increase or decrease feature preference.

Pixel Count. We find that when variables are controlled, there is a high correlation between the number of pixels used to represent a feature and that feature's preference. Specifically, we construct a pairs matrix that contains three elementary shapes (triangle, square, and plus), four different colors contained within a circle (red, green, blue, and yellow), and three different textures (blocky, wavy, and banded). We vary the pixel count for each of the ten features and test for preference. Within these ten features, we create three feature groups, where each group contains features with approximately the same number of pixels: one group contains one shape, color, and texture with a large number of pixels; one group contains one shape, color, and texture with a small number of pixels; and one group contains one shape, color, and texture with a medium number of pixels. We have one spare color that is inserted into the medium pixel group. We observe a strong correlation between the number of pixels that define a feature and the average preference given to that feature, which can be seen in Figure 4. Large and small feature groups are reversed in Figure 4 (a) and (b), but the correlation between the number of pixels and preference remains. Moreover, this correlation holds across various feature types, including shapes, colors, and textures. This shows that for CNN classification the number of pixels that represent a task relevant feature defines signal strength, which in turn drives feature preference. Importantly, when signal strength is normalized, the CNN shows feature equivalency; no preference for color, shape, or texture features was observed.

Figure 4: Feature preference is linearly correlated with pixel count (R=0.93 for Pairs Matrix 1 in (a); R=0.92 for Pairs Matrix 2 in (b)). Feature preference is averaged across five runs of a pairs matrix experiment. The dataset included four colors, three shapes, and three textures. In (a), the color, shape, and texture with the largest number of pixels showed the highest preference, and the features with the smallest number of pixels were least preferred. In (b), the features from (a) that had the largest number of pixels were swapped with the features that had the smallest number of pixels (the middle group was left unchanged), resulting in a reversal in feature preference. This shows that the ResNet-18 does not prefer any feature type (color, shape, or texture), implying feature equivalency.

Deviation. Deviation in task relevant features is a common characteristic of popular classification datasets such as MNIST and ImageNet, and increasing deviation will make a classification task more difficult. For example, a handwritten seven might exist in two variations, with a horizontal line through the center and without; a handwritten digit classification model would have to learn both. Thus, it follows that adding deviation to a task relevant feature will likely make the feature less preferred by a CNN, as it increases the difficulty for a model to capture that feature. Once we established controls for signal strength above, we moved to the second set of experiments, which focused on quantifying the effects of different types of deviation on task relevant feature preference. For these experiments, we conducted pairs matrix experiments similar to those used for signal strength, but we added deviation to the hue of a color circle during training. As displayed in Figure 5, the amount of hue deviation added to a color during training is linearly correlated with the preference of that feature.

Figure 5: Feature preference is linearly correlated with color deviation (R=-0.93). The hue of the blue circle feature dataset in the pairs matrix experiment is modified with U(-σ, σ) during both training and testing. The pixel count was held constant during this experiment. Results show that increasing the deviation of a feature will decrease its preference.
While working towards the predictive modelwe also found a pathway to ascribe biases to machine models that might be an artifact of someexperimental setups. In particular, if Signal and Noise are not carefully controlled for, it is possible tofind or mask feature preferences based on the test set that is used without needing to make changes toa trained model or dataset. For example, in our synthetic dataset, we can easily show preference forcolors, shapes, or texture features by simply making the desired feature be defined by more pixels inthe test set. We can also shift the feature preference by adding or removing deviation, overlap, orpredictivity.We also consider the impact that these experiments might have on the comparisons that have beenmade between machine and human vision. When tasks and datasets are carefully controlled for Signaland Noise, we expect that feature preferences of machines and humans move closer in alignment,but this must be tested experimentally, and should be explored in future work. Future work shouldalso test for the extensibility of these results across other tasks (including unsupervised objectives),datasets, data augmentation strategies, and architectures. The results may also inform DNN learningtheory.ReferencesNicholas Baker, Hongjing Lu, Gennady Erlikhman, and Philip J. Kellman. Deep convolutionalnetworks do not classify based on global object shape. PLOS Computational Biology , 14(12):1–43,12 2018. doi: 10.1371/journal.pcbi.1006613.Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, andWieland Brendel. Imagenet-trained CNNs are biased towards texture; increasing shape biasimproves accuracy and robustness. In International Conference on Learning Representations ,2019.Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, MatthiasBethge, and Felix A. Wichmann. Shortcut learning in deep neural networks. In Nature MachineIntelligence , 2020.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for imagerecognition. arXiv preprint arXiv:1512.03385 , 2015.Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to commoncorruptions and perturbations. In International Conference on Learning Representations , 2019.Katherine L. Hermann and Andrew K. Lampinen. What shapes feature representations? Exploringdatasets, architectures, and training. arXiv e-prints , art. arXiv:2006.12433, June 2020.Katherine L. Hermann, Ting Chen, and Simon Kornblith. The Origins and Prevalence of TextureBias in Convolutional Neural Networks. arXiv e-prints , art. arXiv:1911.09071, November 2019.Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and AleksanderMadry. Adversarial examples are not bugs, they are features. In Advances in Neural InformationProcessing Systems , volume 32, pp. 125–136, 2019.Joern-Henrik Jacobsen, Jens Behrmann, Richard Zemel, and Matthias Bethge. Excessive invariancecauses adversarial vulnerability. In International Conference on Learning Representations , 2019.6E. Rusak, L. Schott, R. Zimmermann, J. Bitterwolf, O. Bringmann, M. Bethge, and W. Brendel.A simple way to make neural networks robust against diverse image corruptions. In EuropeanConference on Computer Vision (ECCV) , 2020.Harshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, and Praneeth Netrapalli. ThePitfalls of Simplicity Bias in Neural Networks. arXiv e-prints , art. 
arXiv:2006.07710, June 2020.A Revisiting Texture Bias(a) (b) (c) (d)Figure 8: Modifications applied to cue conflict stimuli, from lowest to highest shape preference. (a)no modification (b) exterior mask then landscape (c) exterior mask (d) exterior mask then resize(50%).MODIFICATION SHAPE BIAS TEXTURE BIAS ACC.21.4 78.6 65.8FULL SHAPES 23.2 76.8 65.7LANDSCAPE 55.2 44.8 58.8MASK 66.3 33.7 66.3FULL SHAPES (M) 72.8 27.2 66.5RESIZE (50%) 87.7 12.3 57.1RESIZE (25%) 89.1 10.9 42.7Table 1: Effect of each modification to the cue conflict stimuli of Geirhos et al. (2019) in order ofincreasing shape bias. Acc. refers to the percentage of stimuli that were classified according to eithershape type or texture type. Full shapes (M) refers to full shape features that had an exterior maskapplied to them.A.1 MethodsTo measure the texture bias of ImageNet-trained CNNs, we follow the procedure outlined by Geirhoset al. (2019). We use the style transfer shape-texture cue conflict and silhouette stimuli open-sourcedby the authors. We keep the texture bias measurement procedure exactly the same, and modify onlythe test images. Details and results of these experiments are contained in Section A.2. All texturebias measurements were recorded using a torchvision ResNet-50.A.2 ResultsGeirhos et al. (2019) showed that ImageNet-trained CNNs will classify an object according to itstexture rather than shape in what they described as texture bias. The result is intriguing and importantsince it suggests that ImageNet trained models seem to classify objects quite differently from humans.When we reviewed the cue conflict experiment of Geirhos et al. (2019) the test images appeared tocontain a disproportionate amount of texture signal compared to shape signal which could potentiallyskew the results towards texture bias. The reason for this is 1) during the style-transfer process, atexture will get mapped over the entire image, while the shape remains fixed in a portion of the image70.0 0.2 0.4 0.6 0.8 1.0Interpolation0.40.50.60.70.8Texture PreferenceR=0.99Exterior InterpolationFigure 9: ImageNet-trained ResNet-50 fea-ture cue conflict texture preference is linearlycorrelated with the signal strength of the cue-conflict background. In this experiment, thebackground of each cue conflict image wasmasked to white, and then interpolated to-wards the original cue-conflict background.Higher values of interpolation indicate highersimilarity to the original cue-conflict back-ground. As the background becomes moresimilar to the texture cue it increases the over-all signal of the texture feature and results inincreased texture bias.0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0Fraction of Image Size0.100.150.200.250.300.35Texture PreferenceR=0.96ResizingFigure 10: ImageNet-trained ResNet-50 tex-ture preference is linearly correlated withmasked cue conflict feature size. As the fea-ture size increases the texture signal (surfacearea atR2) to shape signal ( R) ratio increaseswhich results in increasing texture bias.resulting in a large pixel count imbalance between texture and shape (in favor of texture) and 2)the stye-transfer process often distorts shape information. We revisit the experiment to see if ourhypothesis on signal and deviation can be used to gain increased control over the cue conflict test setand then study whether the conclusion on texture bias still holds. In the experiments presented in thissection, we modify the texture-shape cue conflict images used in Geirhos et al. 
(2019) so that thetexture and shape signals (number of pixels) in each feature are varied in a controlled manner relativeto each other and the cue conflict preference is measured.MaskingTo mitigate the effect of the texture-shape signal imbalance in the cue conflict experiment, we maskthe background around the shape in each image. This eliminates texture signal outside of the object,and increases shape signal by increasing the contrast between the background and silhouette of theshape.We find that after performing this masking operation, ImageNet-trained CNNs show a preferencefor shape (66% shape bias, 34% texture bias) following the testing process that Geirhos et al. (2019)described. While Geirhos et al. (2019) and Baker et al. (2018) conducted a similar experiment thatstill resulted in a texture bias, there was a key difference that we believe led to a different result fromour experiment. In experiments performed in Geirhos et al. (2019) and Baker et al. (2018), a texturewas mapped onto the silhouette of a shape whereas in our experiment the texture was mapped ontothe object using style-transfer. Silhouettes do not contain all the shape signal, as there is some shapeinformation contained within an object that gets removed in a silhouette representation. In contrast,the style-transfer process preserves these important shape signals, so we believe that our experimentscreated a more accurate comparison between shape and texture preference in ImageNet-trainedCNNs.ResizingSince a texture gets mapped over the entire area of an object while shape information is containedin edges, we hypothesized that decreasing the size of the masked version of the texture-shape cueconflict test image would further increase shape signal relative to texture signal since the texturesignal will decrease proportional to dimension squared whereas shape will drop linearly as objectsize decreases. Indeed, there is a strong negative correlation between object size and CNN texturepreference (Figure 10).8Figure 11: Examples from the Binary shiftMNIST dataset.Using Only Full ShapesIn the cue conflict experiments of Geirhos et al. (2019), some of the shape features in the test setimages have incomplete shapes (e.g. the tip of a knife may be missing) whereas texture features aregenerally complete in all test images. This should further reduce the amount of shape informationcontained in a test image, and thus result in lower shape preference. To test this effect, we removed(manually selected) cue conflict test images with incomplete shapes from the cue conflict test set,and re-measured texture-shape preference. Interestingly, we saw no significant change in texturepreference using the original style-transfer images with incomplete shapes removed, but we didobserve an increase (8.5%) in shape preference after exterior masking. This further illustrates theimpact of background texture signal in the original style-transfer cue conflict test images.DiscussionInformed by the results outlined in Section 3, we demonstrated increased control over test imagesin the texture-shape cue conflict experiments proposed by Geirhos et al. (2019). In Hermann et al.(2019), the authors were able to shift the CNN from texture bias to shape bias by augmenting thetraining data of the ImageNet model but we were able to shift from texture bias to shape bias strictly bychanging the test images while using exactly the same ImageNet-trained model. 
We believe follow-up experiments with full control over signal, deviation, overlap, and predictivity in the test images across all classes are needed to accurately quantify the level of texture bias in ImageNet-trained CNNs.

B Revisiting Excessive Invariance

Figure 11: Examples from the Binary shiftMNIST dataset.

B.1 Methods

The procedure for this experiment largely follows experiments done in Jacobsen et al. (2019). We construct three different test sets, and one training set. Of the test sets, one is the unmodified MNIST test set, one contains MNIST digits with location-based class-conditional pixels, and the last contains only location-based class-conditional pixels. The training set is constructed by placing the location-based class-conditional pixel next to MNIST digits.

To extract segments of the training set with a given amount of deviation, we trained a ResNet-20 classifier on the original MNIST dataset, and created a new training set containing images closest, within a given percentage, to the per-class mean in latent space.

We trained models using a ResNet-20 optimized using SGD with learning rate 0.1 (decayed by 0.1 at epochs 30 and 40) for 50 epochs, and with weight decay 0.0001. A random 55,000 images from the total 60,000 MNIST pool of training images was used for training, and the rest was left for validation. Our model achieved 99.55% clean accuracy on unmodified MNIST. After training, a model is tested on all three test sets. We average all results across five training runs.

B.2 Results

Jacobsen et al. (2019) demonstrated in their Binary shiftMNIST experiment that when a single location-based, class-conditional pixel is added to an MNIST dataset during training but removed at test time, classification accuracy drops from 100% to 13%, just slightly above random guessing. Clearly the model had developed a strong preference for the pixel feature over the MNIST digit feature when both were equally predictive.

Figure 12: In Binary shiftMNIST experiment 1, the classification accuracy for a test set with just the MNIST digits increases as the deviation of the MNIST digits in the Binary shiftMNIST training set decreases (R=-0.99). The training set includes a location-based, class-conditional pixel for each class and the MNIST digits segmented by deviation.

Figure 13: In Binary shiftMNIST experiment 2, classification accuracy for a test set with just the location-based pixel decreases as the deviation of the MNIST digits in the Binary shiftMNIST training set decreases (R=0.99). The training set includes a location-based, class-conditional pixel for each class and the MNIST digits segmented by deviation.

Figure 14: In Binary shiftMNIST experiment 3, classification accuracy for a test set with both the location-based pixel and the MNIST digits remains relatively constant as the deviation of the MNIST digits in the Binary shiftMNIST training set decreases. The training set includes a location-based, class-conditional pixel for each class and the MNIST training digits segmented by deviation.

Given our results above on feature preference, we wanted to see if we could increase the preference for the MNIST features in the Binary shiftMNIST experiment by decreasing the deviation of the MNIST features.
We first separated the MNIST dataset into class segments based on the deviation from the per-class mean in the latent space of a ResNet-20 MNIST classifier. The deviation segments ranged from 1% (all class instances were within 1% of the mean in latent space and then replicated), through 5%, 10%, 25%, and 50%, to 100% (the full MNIST dataset).

We then trained a ResNet-20 on each of these modified MNIST datasets, augmented with a single location-based pixel for each class as in the Binary shiftMNIST experiment. Results from this experiment, Binary shiftMNIST experiment 1, can be seen in Figure 12. We then tested this same model by removing the MNIST digits but leaving the location-based pixels in Binary shiftMNIST experiment 2 (results shown in Figure 13). In Binary shiftMNIST experiment 3, we tested the same model with both the location-based pixels and MNIST digits present in the test images (results shown in Figure 14).

B.3 Discussion

When the full MNIST dataset was trained with the location-based pixel features in Binary shiftMNIST experiment 1, we were able to reproduce the result from Jacobsen et al. (2019), where the model was unable to accurately classify the MNIST features when the location-based pixel features were removed from the test set. However, as we decreased the deviation in the MNIST training digits, we were able to progressively increase the classification accuracy for the MNIST features, suggesting that the model began including the MNIST features in its feature representations. When the MNIST training features were within 5% of the mean, the model was able to classify the MNIST test set with a relatively high accuracy. In Binary shiftMNIST experiment 2, we found that the classification accuracy decreased as the deviation in the MNIST training digits decreased, showing that the model became invariant to the location-based pixels once the deviation in the MNIST digits was sufficiently low.

When the model was tested with both location-based pixels and MNIST digits, we found that it was able to classify at a high accuracy for all the deviation segments. We believe that this shows that the model was learning 1) just the location-based pixels when the MNIST digits had high deviation, 2) just the MNIST digits when they had low deviation, because of the much larger signal provided by the MNIST features, and 3) a subset of both the location-based pixels and MNIST digits in an entangled representation for the mid-level deviation segments (i.e. not able to accurately classify either feature separately). This shows that the model we developed in Section 3 holds predictive potential for other results in machine learning.
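To make the two dataset operations above concrete, a minimal PyTorch-style sketch follows. This is our reconstruction rather than the authors' released code: the exact pixel-location scheme, the use of Euclidean distance in latent space, and all function names are assumptions.

import torch

def add_class_pixel(images, labels):
    # Binary shiftMNIST-style augmentation (reconstruction): turn on one pixel
    # whose location depends only on the class label. Here class c gets the
    # pixel at column 2*c of the top row; the real placement is an assumption.
    out = images.clone()  # images: [N, 1, 28, 28], values in [0, 1]
    for i, y in enumerate(labels):
        out[i, 0, 0, 2 * int(y)] = 1.0
    return out

def low_deviation_subset(latents, labels, keep_frac):
    # Keep, per class, the keep_frac fraction of examples whose latent
    # features (from a pretrained ResNet-20) lie closest to the class mean.
    kept = []
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        dist = (latents[idx] - latents[idx].mean(dim=0)).norm(dim=1)
        k = max(1, int(keep_frac * len(idx)))
        kept.append(idx[dist.argsort()[:k]])
    return torch.cat(kept)

For the smallest segments (e.g. 1%), the kept subset would then be replicated back up to the original training-set size, as described above.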
WKGVrUf7gxb
Interesting research question, but the research and paper structure are unclear
4: Ok but not good enough - rejection
The study presented in this paper aims to better understand which features of images play a more significant role in determining a CNN's final classification decision. The research stems from a line of research from Geirhos and colleagues, who showed that CNNs tend to have a bias towards texture, whereas humans tend to have a bias in favour of an object's silhouette when having to decide between contrasting cues. Geirhos and colleagues test this bias by typically training a CNN on an image dataset and then presenting chimera test images composed of the texture of one object and the silhouette of another object. Then they measured whether the image is classified based on the silhouette or the texture. In the present paper, the authors aim to test the relative importance of a series of characteristics in a synthetic dataset: colour, shape, and texture. The authors report that when controlling for signal-to-noise ratio, none of the above features was preferred over the others when the CNN had to decide using contrasting cues.

The idea of better understanding the inner workings of a CNN and which features of an image CNNs are representing is a significant and impactful goal, and the finding that no feature among colour, texture, and shape is preferred when the signal-to-noise ratio is controlled is of interest. However, I have some queries about the methods and the paper structure:

* The authors state that their aim is to test colour, shape, and texture and, in the methods, they explain how they constructed the synthetic dataset. However, they never present the results for these features. In the results section, the results from other tests and experiments are presented.

* A ResNet-18 is trained on what, from the description in the methods, seems to be their standard dataset composed of synthetic images of different shape, colour, and texture. The network is then tested on a conflict task in which two conflicting images are presented together. However, in the experiments presented in the results, different features have been manipulated. Specifically, the authors present experiments in which they manipulated pixel number, deviation, overlap, and predictivity of each feature. The results reported show that when a feature encompassed more pixels, when the signal-to-noise ratio was higher, and when a feature was more uniquely related to the object (less deviation, less overlap, and higher predictivity), that feature was preferred when the CNN was forced to choose between contrasting cues. How were these properties expressed in the training dataset? Was the network re-trained for each experiment?

* The authors in the discussion propose a model to account for the features presented in the results section, but I did not understand how the model was built and whether it was tested. And if it was, how was it tested and what were the results?

* In the discussion, the authors claim that they would expect feature preference between machines and humans to move closer in alignment if the signal-to-noise ratio was carefully controlled. I am not sure I understand this argument, especially considering that humans are able to successfully solve classification tasks even in bad visual conditions.

* In the appendix, two more experiments are proposed, and they seem somewhat unrelated to the ones proposed in the main paper. In the first, the authors present an alternative to the experiment of Geirhos and colleagues. Specifically, they propose to use a different style transfer function, but it's not clear what they wanted to maximize.
First, it seems that the new style function makes it so that the number of pixels that form the shape is similar to the number of pixels that form the texture. It is claimed that the new style transfer function preserves shape information contained within the object, which gets lost when focusing on the silhouette.

* In the last experiment presented in the appendix, a ResNet-20 was trained on MNIST and some modifications of it, with the aim of investigating the effect of location-based pixels and deviation.

* It is not clear why a different network was used for each part of the paper (ResNet-18, ResNet-50, and ResNet-20 for the main paper, appendix experiment 1, and appendix experiment 2 respectively).

Overall, though I praise the clear effort that the authors put into this work, I feel the presentation of the methods and the results could be clearer. My advice would be to focus on one question (either colour/texture/shape or predictivity/overlap/deviation/pixel count) and write a coherent text that links the experiments in the main text together with the ones presented in the appendix, possibly explaining when necessary what changes in the training methods. This would help the reader to understand your aim and follow it throughout the text and the results.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
IKQYmATn893
ijcai.org/IJCAI/2021/Workshop/NSNLI
2021
Exploring Multi-hop Reasoning Process in NLU from the View of Bayesian Probability
["Yitian Li", "Jidong Tian", "Hao HE", "Yaohui Jin"]
Emerging pre-trained language models (PTLMs), such as BERT and RoBERTa, have already achieved great success on many natural language understanding (NLU) tasks, spurring widespread interest in their potential in scientific and social areas, along with criticism of the ambiguity of their reasoning, especially in multi-hop cases. Concretely, many studies have pointed out that these models lack true understanding of the reasoning process. In this work, we focus on the multi-hop reasoning processes of PTLMs and perform an analysis on a logical reasoning dataset, Soft Reasoner. We first extend the dataset by constructing the implicit intermediate results during multi-hop reasoning in a semi-automatic way. Surprisingly, when testing on the extended dataset, PTLMs can even predict the correct conclusion when they cannot judge the corresponding intermediate results. To further analyze this phenomenon, we compare PTLMs' reasoning processes with Bayesian inference processes that simulate humans' reasoning procedure. Results show that if a model is more in line with the Bayesian process, it tends to have better generalization ability. Our Bayesian process method can thus be used to evaluate the generalization ability of models.
["Reasoning", "Bayesian probability"]
Exploring Multi-hop Reasoning Process in NLU from the View of Bayesian Probability

Yitian Li (1,2), Jidong Tian (1,2), Hao He (1,2) and Yaohui Jin (1,2)*
(1) MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University
(2) State Key Lab of Advanced Optical Communication System and Network, Shanghai Jiao Tong University
{yitian li, frank92, hehao, jinyh}@sjtu.edu.cn
* Corresponding Authors

Abstract

Emerging pre-trained language models (PTLMs), such as BERT and RoBERTa, have already achieved great success on many natural language understanding (NLU) tasks, spurring widespread interest in their potential in scientific and social areas, along with criticism of the ambiguity of their reasoning, especially in multi-hop cases. Concretely, many studies have pointed out that these models lack true understanding of the reasoning process. In this work, we focus on the multi-hop reasoning processes of PTLMs and perform an analysis on a logical reasoning dataset, Soft Reasoner. We first extend the dataset by constructing the implicit intermediate results during multi-hop reasoning in a semi-automatic way. Surprisingly, when testing on the extended dataset, PTLMs can even predict the correct conclusion when they cannot judge the corresponding intermediate results. To further analyze this phenomenon, we compare PTLMs' reasoning processes with Bayesian inference processes that simulate humans' reasoning procedure. Results show that if a model is more in line with the Bayesian process, it tends to have better generalization ability. Our Bayesian process method can thus be used to evaluate the generalization ability of models.

1 Introduction

Natural language understanding (NLU) is one of the critical problems in NLP, which aims to empower machines to understand language generated by humans [Hossain et al., 2020]. Many NLU systems, especially Transformer-based [Vaswani et al., 2017] language models (BERT [Devlin et al., 2019], RoBERTa [Liu et al., 2019]), seem to be successful on many NLU-related tasks, such as question answering (QA) and natural language inference (NLI) [Sinha et al., 2019]. However, it is still inconclusive whether these models can truly understand natural language and make reasonable decisions or not [Niven and Kao, 2019]. On the one hand, some evidence shows that PTLMs lack sufficient reasoning ability in complex reasoning tasks [Bhagavatula et al., 2020; Liu et al., 2020]. On the other hand, many studies point out that correct decisions may come from spurious statistical correlations rather than true reasoning abilities, resulting in poor generalization and robustness [Jiang and Bansal, 2019; Kaushik and Lipton, 2018; Gururangan et al., 2018]. Therefore, there has been an increasing need to understand how these PTLMs work [Misra et al., 2020].

In this work, we explore how PTLMs work in NLU by introducing a novel simulation of the reasoning process. In reality, some classic studies have extracted attention that implicitly reflects the reasoning process [Serrano and Smith, 2019; Abnar and Zuidema, 2020], while others directly provide explicit evidence or proof to evaluate models [Gontier et al., 2020]. Furthermore, a recent kind of method takes advantage of counterfactual instances to perform analysis or attack models, which is another way to understand the limitations of NLU models [Kurakin et al., 2017; Jin et al., 2020].
These studies provide significant views on how to use the reasoning process to understand PTLMs' reasoning ability, but they hardly take into account the probabilistic reasoning process of neural models.

Based on the above analysis, there are two key points in describing the reasoning process of PTLMs. Firstly, according to the intuition of the human reasoning process, if a model truly understands the reasoning process, it is also supposed to make correct predictions of the intermediate results. For example, given the premises (Bob likes adorable animals. Luna is a cat. A cat is an animal. The cat is adorable.) and the hypothesis to be judged (Bob likes Luna.), humans can easily conclude the intermediate results (Luna is an animal. Luna is adorable.). Secondly, we build an analytical method that can measure the probability distributions of the whole reasoning process rather than only provide deterministic reasoning results.

Therefore, we concentrate on the reasoning process of multi-hop reasoning in NLU, and our proposed analytical method introduces explicit intermediate results to describe the reasoning process. Differently, we then introduce the Bayesian network to describe the probabilistic reasoning process of humans based on the intermediate results, and compare PTLMs' reasoning processes with such a network. As the previous work of Wang et al. [Wang et al., 2019] mentions, a neural model inferring through the correct reasoning process has better generalization ability on zero-shot evaluations. Our experimental results on a logical benchmark dataset, Soft Reasoner, further support this view from the probabilistic perspective, which means that a model better conforming to the Bayesian network is easier to generalize.

Our main contributions are summarized as:

• We propose a novel analytical method to evaluate how PTLMs perform on multi-hop reasoning problems in NLU. This method takes advantage of explicit intermediate results to construct a Bayesian network that helps model a human-like, probabilistic reasoning process.

• Experiments on the Soft Reasoner dataset provide evidence that if a model is more in line with the Bayesian process, it seems to make a more human-like reasoning process, making it easier to generalize.

2 Related Work

2.1 Benchmarking Natural Language Reasoning

Research on evaluating how PTLMs perform reasoning tasks has emerged quickly [McCoy et al., 2019]. Most of these studies are based on three different focuses of the reasoning process: attention, proof, and counterfactual samples. Attention-based methods [Serrano and Smith, 2019; Abnar and Zuidema, 2020] take advantage of the attention mechanism to provide posterior validation to explain the reasoning process. Proof-based methods [Gontier et al., 2020] provide a priori reasoning paths to evaluate models. Attack-based methods [Kurakin et al., 2017; Jin et al., 2020] construct counterfactual scenarios to explore models' limitations. These studies have brought detailed views on PTLMs. For example, Niven and Kao [Niven and Kao, 2019] found that BERT tends to exploit the presence of cue words, such as "not", to predict. There are also studies [Gururangan et al., 2018; Poliak et al., 2018] that exposed how biases in datasets influence PTLMs. Other findings [Glockner et al., 2018; Carmona et al., 2018] also revealed that some NLI models might use fallible heuristics to make decisions.
However, these studies are not adequate for analyzing the reasoning process of PTLMs, as they neglect the probabilistic characteristics of neural models.

2.2 Probabilistic Logical Reasoning

Considering reasoning in NLU, it is reasonable to incorporate logical methods into neural models [Lage et al., 2018]. Although traditional methods used hard logic rules [Qu and Tang, 2019], preferable methods for logical reasoning in NLU are probabilistic logical methods that better match the probabilistic characteristics of neural models, such as the Markov logic network (MLN) [Richardson and Domingos, 2006; Singla and Domingos, 2005] and DistMult [Yang et al., 2015]. Recently, Manhaeve et al. [Manhaeve et al., 2018] proposed a probabilistic logic programming method that can fit neural models perfectly. Qu and Tang [Qu and Tang, 2019] proposed a probabilistic framework, pLogicNet, that can use logic rules to handle uncertainty in a principled way based on Markov logic networks. Although these methods focus more on integrating probabilistic logical reasoning and neural networks, their effectiveness also supports the view that understanding the reasoning process of neural models requires considering their probabilistic characteristics.

[Figure 1: Comparison of the human reasoning process and the LMs' reasoning process. The yellow circles, including F, A, B, C, D, are perceived from the premises (Ψ) directly, and their corresponding lowercase letters indicate their hidden space, while the red-bordered circles (ψ1, ψ2) are the intermediate results in the Bayesian network. The blue circle is the hypothesis (ψ).]

3 Methodology

3.1 Probabilistic Reasoning Process

In this paper, we introduce the directed acyclic graph (DAG) to model the human reasoning process, as it can describe logical dependence among different propositions similarly to humans [Chen et al., 2020; Talvitie and Koivisto, 2019]. However, neural networks always reason through a fully-connected bidirectional graph encoded in the neurons. Comparisons are shown in Figure 1. To measure the similarity between the right graph and the left one in Figure 1, we are required to generate the intermediate results ψ1 and ψ2, and to evaluate the connections in the right graph according to the paths defined in the DAG.

Besides, PTLMs' reasoning process is a probabilistic reasoning process, especially in NLU [Petroni et al., 2019]. When given a context x and a proposition z to be judged, we use the form [CLS] x [SEP] z [SEP] as the input of the PTLM ([CLS] and [SEP] are BERT-style special tokens marking the start of the input and the sentence boundaries). The PTLM then outputs the probability P(z | x) calculated by the following equation, where h_theta is a scoring function, or negative energy function, represented by a neural network with parameters theta:

$P(z \mid x) = \mathrm{softmax}(h_\theta(z, x)) = \frac{\exp(h_\theta(z, x))}{\sum_{z'} \exp(h_\theta(z', x))}$   (1)
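As a concrete illustration, the sketch below shows one way the proposition probability of Equation 1 could be obtained in practice. This is a minimal sketch under assumptions rather than the authors' released code: the checkpoint name is a placeholder, and it presumes a PTLM that has already been fine-tuned with a two-way (true/false) classification head, as described in Section 4.

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Placeholder checkpoint; assumes a fine-tuned true/false head (Section 4).
    MODEL_NAME = "roberta-large"
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
    model.eval()

    def proposition_probability(context: str, proposition: str) -> float:
        """P(z | x) of Equation 1: a softmax over the scores h_theta(z, x)."""
        # Sentence-pair encoding realizes the [CLS] x [SEP] z [SEP] input form
        # ([CLS]/[SEP] are BERT's markers; RoBERTa's <s>/</s> play the same role).
        inputs = tokenizer(context, proposition, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits      # unnormalized scores h_theta
        probs = torch.softmax(logits, dim=-1)    # the softmax of Equation 1
        return probs[0, 1].item()                # label index 1 assumed to mean "true"

The same call can score a premise, an intermediate result, or the hypothesis against the spliced context, which is how both the initial probabilities and the direct PTLM probabilities of Section 3.2 would be collected.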
Different from neural networks, human-like reasoning belongs to the deterministic type based on the DAG. This reasoning process should be probabilized to match the reasoning process of LMs. A Bayesian network builds the relationships among probabilities on top of a DAG. Therefore, it is suitable to probabilize the human reasoning process by converting the DAG into a probability graph, making it possible to parse LMs. According to the topological structure of the probability graph, the conditional probability distributions (CPDs) of a set of random variables (x1, x2, x3, ..., xn) can be investigated. Therefore, we can use the Bayesian network based on the DAG to measure LMs.

3.2 Probabilistic Probability Analysis

Based on intermediate results, we can perform the analysis by comparing prior probabilities from the Bayesian network and from PTLMs. The PTLM's probability (P) can be computed by the PTLM directly:

$P = P(\psi \mid \Psi)$   (2)

where ψ ∈ {ψ1, ψ2, ψ}; ψ1 and ψ2 are the intermediate results, ψ is the hypothesis, and Ψ is the context splicing all premises.

To calculate the Bayesian probabilities of the intermediate results, we should first define the initial probabilities. For the example in Figure 1, we can take advantage of PTLMs to calculate the conditional probabilities of A, B, C, D, F, and their negation propositions, conditional on the context Ψ. These probabilities are used as the initial probabilities, which can be represented by P(ψ_i | Ψ) and P(¬ψ_i | Ψ), where ψ_i ∈ {a, b, c, d, f}, with a ∈ {A, ¬A} and b, c, d, and f defined similarly. We regard the process of reasoning out ψ_i as a perceptual process, because ψ_i can be judged directly from the context Ψ without the reasoning process. As the whole reasoning process should be based on this perceptual process, it is reasonable to take these probabilities as the bases of the Bayesian probabilities.

Based on the initial probabilities, we consider the probability of the intermediate result ψ1 as a factorized representation of the distribution, computed from the product of the probabilities of the correct premises (A and B), which are localized probabilities. Similarly, the conditional probabilities of each hop can be calculated iteratively. We define P* to represent the value calculated from the probability distribution:

$P^*(\psi_1 \mid A, \Psi) = \sum_{b \in \{B, \neg B\}} P(\psi_1 \mid A, b, \Psi)\, P(b \mid A, \Psi)$   (3)

From the probabilistic DAG, the different premises A and B are independent conditional on Ψ, i.e., (A ⊥ B | Ψ). Therefore, Equation 3 can be rewritten as Equation 4:

$P^*(\psi_1 \mid A, \Psi) = \sum_{b \in \{B, \neg B\}} P(\psi_1 \mid A, b, \Psi)\, P(b \mid \Psi)$   (4)

The other premise B is handled analogously to A, so the Bayesian probability of the intermediate result ψ1 can be calculated. The calculation can be further simplified by the independence condition (ψ1 ⊥ Ψ | a, b), as shown in Equation 5:

$P^*(\psi_1 \mid \Psi) = \sum_{a \in \{A, \neg A\}} P(\psi_1 \mid a, \Psi)\, P(a \mid \Psi) = \sum_{a, b} P(\psi_1 \mid a, b, \Psi)\, P(b \mid \Psi)\, P(a \mid \Psi) = \sum_{a, b} P(\psi_1 \mid a, b)\, P(b \mid \Psi)\, P(a \mid \Psi)$   (5)

[Figure 2: An example of a multi-hop instance, including premises (Ψ), hypothesis (ψ), label, reasoning path, intermediate results (ψ1, ψ2), and some negation examples of premises:
(Input Premises)
F: If someone sees the rabbit then they like the rabbit.
A: The bear needs the tiger.
B: If someone needs the tiger then the tiger sees the cat.
C: If the tiger sees the cat then the cat chases the bear.
D: If someone chases the bear then they need the lion.
Hypothesis (Node-3): The cat needs the lion.
Answer: True
Reasoning Path: A + B -> C -> D
(Intermediate results)
ψ1 (Node-1): The tiger sees the cat.
ψ2 (Node-2): The cat chases the bear.
(Negation Examples)
¬A: The bear does not need the tiger.
¬B: If someone needs the tiger then the tiger does not see the cat. (CWA)
¬ψ1: The tiger does not see the cat.
¬ψ2: The cat does not chase the bear.]

Based on the same calculations, all prior probabilities of the intermediate results and the final hypothesis (P*) can be obtained from the theoretical Bayesian network and from PTLMs.
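To make the factorization concrete, the following sketch computes the Bayesian prior P*(ψ1 | Ψ) of Equation 5 from the initial premise probabilities and then chains it to the next hop. It assumes the deterministic CPD implied by the text, P(ψ1 | a, b) = 1 exactly when both premises hold and 0 otherwise, under which the double sum collapses to a product; the numeric values are made up for illustration, whereas in the paper they would come from the PTLM.

    def hop_prior(p_left: float, p_right: float) -> float:
        """P*(psi | Psi) of Equation 5 under a deterministic CPD:
        P(psi | a, b) = 1 iff both parents hold, 0 otherwise."""
        total = 0.0
        for a_holds, p_a in ((True, p_left), (False, 1.0 - p_left)):        # a in {A, not A}
            for b_holds, p_b in ((True, p_right), (False, 1.0 - p_right)):  # b in {B, not B}
                cpd = 1.0 if (a_holds and b_holds) else 0.0                 # P(psi | a, b)
                total += cpd * p_a * p_b
        return total

    # Illustrative initial probabilities P(A|Psi), P(B|Psi), P(C|Psi):
    p_a, p_b, p_c = 0.9, 0.8, 0.95
    p_star_1 = hop_prior(p_a, p_b)        # P*(psi_1 | Psi) = 0.72
    p_star_2 = hop_prior(p_star_1, p_c)   # the next hop, computed iteratively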
We utilize the Kullback-Leibler (KL) divergence to measure the difference between the two distributions, which is used to analyze the PTLMs' reasoning ability. The calculation follows Equation 6, where x ∈ {ψ1 | Ψ, ψ2 | Ψ, ψ | Ψ}:

$\mathrm{KL}(P^* \,\|\, P) = \sum_{i=1}^{N} P^*(x_i) \log \frac{P^*(x_i)}{P(x_i)}$   (6)

where P* represents the probability obtained from the Bayesian calculation and P is the direct PTLM inference result.
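Since each proposition is a true/false judgment, one plausible per-proposition reading of Equation 6 treats P* and P as binary distributions and sums the divergence over the two outcomes. The epsilon guard and the example values below are illustrative assumptions (the values are of the kind reported in the case study of Section 4.3); the paper does not spell out how per-proposition divergences are aggregated into KL-1, KL-2, and KL-3, so averaging over the test set per node is only one reasonable choice.

    import math

    def kl_binary(p_star: float, p: float, eps: float = 1e-12) -> float:
        """KL(P* || P) of Equation 6 over the two outcomes of one proposition."""
        kl = 0.0
        for q, r in ((p_star, p), (1.0 - p_star, 1.0 - p)):
            kl += q * math.log((q + eps) / (r + eps))
        return kl

    # A Bayesian prior of 0.19 against a near-certain model output of 1.00
    # yields a large divergence, flagging a non-Bayesian reasoning step.
    print(round(kl_binary(0.19, 1.00), 2))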
4 Experiment

4.1 Experimental Settings

Dataset. We perform the analysis on the Soft Reasoner dataset proposed by Clark et al. [Clark et al., 2020] and modify its multi-hop sub-sets by introducing the intermediate results as nodes, together with their negated propositions under the closed-world assumption (CWA), as shown in Figure 2. Following this construction method, we built the intermediate results (such as ψ1 and ψ2) of multi-hop reasoning in a semi-automated way.

Pre-trained Language Models. We select two fundamental PTLMs (BERT-large and RoBERTa-large), whose hyper-parameters are kept the same to make fair comparisons. We trained each model more than three times; the training hyper-parameters are shown in Table 1. The target of the PTLMs is to predict true/false for each hypothesis (or intermediate result) conditional on the premises.

Parameter     BERT    RoBERTa
Emb. Dim.     1024    1024
Max Length    256     256
LR            5e-5    5e-5
LR2           5e-6    5e-6
L2            1e-7    1e-7
LR Decay      1.0     1.0
Epochs        30      30
Early Stop    4       4
Optimizer     ADAM    ADAM

Table 1: Hyper-parameters for all models. Emb. Dim. is the dimension of the embeddings. LR is the learning rate on the linear layer, while LR2 is the learning rate on the PTLM. L2 is the L2 regularization weight. ES means early stop.

4.2 Result and Analysis

We first fine-tune the PTLMs on the 2-hop training set and then evaluate them on the modified 3-hop dataset. Based on the Bayesian network, the KL divergences of the two intermediate results (KL-1 and KL-2) and of the hypothesis (KL-3) are calculated. In this setting, KL-1 and KL-2 are in-domain metrics, because the maximum reasoning hop of the intermediate results is exactly 2, while KL-3 is an out-of-domain metric. Next, we evaluate BERT and RoBERTa on an in-domain test set (2-hop) and three out-of-domain test sets (3-hop, 5-hop, and zero-shot) from the original Soft Reasoner dataset. These out-of-domain evaluations can, to an extent, characterize the generalization ability of the trained model. These evaluations take accuracy as the metric. Results are shown in Table 2.

Domain   Metric          BERT    RoBERTa
In       2-hop (%)       98.2    99.2
         KL-1            5.74    0.16
         KL-2            7.70    0.28
Out      3-hop (%)       83.5    91.2
         5-hop (%)       56.7    79.3
         zero-shot (%)   85.7    93.1
         KL-3            10.11   1.51

Table 2: Results of the Bayesian analysis of BERT and RoBERTa. KL means the KL divergence between the Bayesian probability and the PTLM's probability. The other metrics are accuracies on the corresponding test sets. KL-1 and KL-2 are the KL scores over the 1-hop and 2-hop examples, respectively.

From Table 2, there is no significant difference in the accuracy of the models' judgments on the final result between BERT and RoBERTa evaluated on the in-domain set. However, RoBERTa has significantly lower in-domain KL metrics (0.16 for KL-1 and 0.28 for KL-2) than BERT (whose KL metrics are 5.74 and 7.70, respectively), which means that RoBERTa's reasoning process is more in line with the Bayesian reasoning process. This result is evidence that even if a model can make correct predictions, its prediction process does not necessarily conform to the human reasoning process.

Considering the out-of-domain evaluations related to generalization, RoBERTa performs surprisingly better than BERT on all three test sets, which means that RoBERTa has better generalization ability in both more-hop reasoning (3-hop and 5-hop) and unseen (zero-shot) scenarios. Note that these results are consistent with the in-domain KL metrics, which is evidence that smaller KL metrics reflect better generalization to more complex scenarios. This conclusion conforms to the discovery of Wang et al. [Wang et al., 2019].

In general, the experimental results support the intuition that if a model can perform probabilistic reasoning like humans, it will have better generalization ability. We can conclude that RoBERTa is more powerful than BERT at understanding logical rules and applying them to reason, which conforms to the work of Talmor et al. [Talmor et al., 2020]. In this sense, our analytical method provides a practical way to make an initial comparison of the generalization abilities of different neural models through KL metrics, even without out-of-domain evaluation datasets.

4.3 Case Study

We perform a case study of the case in Figure 2. Its Bayesian probabilities and the PTLMs' probabilities are displayed in Table 3.

Propositions    BERT                 RoBERTa
                Bayesian   Model     Bayesian   Model
Node-1 (ψ1)     0.19       1.00      0.98       1.00
Node-2 (ψ2)     0.13       0.54      0.96       1.00
Node-3 (ψ)      0.00       1.00      0.86       1.00

Table 3: Case study comparing Bayesian probabilities and PTLMs' probabilities.

From Table 3, RoBERTa makes the correct prediction with a probability of 1.00, roughly consistent with the Bayesian probability of 0.86. Although BERT can make the correct prediction of the intermediate results and the final hypothesis ψ with a probability of 1.00, its Bayesian reasoning process gives the opposite conclusion with a probability of 0.00. Considering only accuracy, although such a reasoning process of BERT does not conform to the human reasoning process, it is still regarded as successful reasoning. The KL metric, however, considers the difference between BERT's probabilities and the Bayesian probabilities, allowing it to reflect such a spurious condition. Therefore, the KL metric can describe the generalization ability of PTLMs even if no out-of-domain evaluation is performed, whereas the in-domain accuracy cannot.

5 Conclusion

Although pre-trained language models (PTLMs), such as BERT and RoBERTa, have achieved great success on many NLU tasks, it is still challenging to understand their true reasoning ability in multi-hop reasoning scenarios. In this work, we propose a novel probabilistic analytical method to explore PTLMs' reasoning ability based on the constructed reasoning process (intermediate results). Specifically, we simulate the reasoning process as a Bayesian network that represents a human-like reasoning process. Experiments on the logical reasoning dataset Soft Reasoner provide a new view that human-like neural models (those fitting the Bayesian network) have a better ability to generalize. The work also suggests adding the Bayesian probability process to neural-network analysis in the future.

References

[Abnar and Zuidema, 2020] Samira Abnar and Willem H. Zuidema. Quantifying attention flow in transformers. In ACL, 2020.
[Bhagavatula et al., 2020] Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Wen-tau Yih, and Yejin Choi. Abductive commonsense reasoning. In ICLR, 2020.
[Carmona et al., 2018] Vicente Iván Sánchez Carmona, Jeff Mitchell, and Sebastian Riedel. Behavior analysis of NLI models: Uncovering the influence of three factors on robustness. 2018.
[Chen et al., 2020] Wenqing Chen, Jidong Tian, Liqiang Xiao, Hao He, and Yaohui Jin. Exploring logically dependent multi-task learning with causal inference. In EMNLP, 2020.
[Clark et al., 2020] Peter Clark, Oyvind Tafjord, and Kyle Richardson. Transformers as soft reasoners over language. In Christian Bessiere, editor, IJCAI, 2020.
[Devlin et al., 2019] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, 2019.
[Glockner et al., 2018] Max Glockner, Vered Shwartz, and Yoav Goldberg. Breaking NLI systems with sentences that require simple lexical inferences. In Iryna Gurevych and Yusuke Miyao, editors, ACL, 2018.
[Gontier et al., 2020] Nicolas Gontier, Koustuv Sinha, Siva Reddy, and Christopher Pal. Measuring systematic generalization in neural proof generation with transformers. In NeurIPS, 2020.
[Gururangan et al., 2018] Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. Annotation artifacts in natural language inference data. In NAACL-HLT, 2018.
[Hossain et al., 2020] Md Mosharaf Hossain, Venelin Kovatchev, Pranoy Dutta, Tiffany Kao, Elizabeth Wei, and Eduardo Blanco. An analysis of natural language inference benchmarks through the lens of negation. In EMNLP, 2020.
[Jiang and Bansal, 2019] Yichen Jiang and Mohit Bansal. Avoiding reasoning shortcuts: Adversarial evaluation, training, and model development for multi-hop QA. In Anna Korhonen, David R. Traum, and Lluís Màrquez, editors, ACL, 2019.
[Jin et al., 2020] Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment. In AAAI, 2020.
[Kaushik and Lipton, 2018] Divyansh Kaushik and Zachary C. Lipton. How much reading does reading comprehension require? A critical investigation of popular benchmarks. In EMNLP, 2018.
[Kurakin et al., 2017] Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. Adversarial examples in the physical world. In ICLR, 2017.
[Lage et al., 2018] Isaac Lage, Andrew Slavin Ross, Samuel J. Gershman, Been Kim, and Finale Doshi-Velez. Human-in-the-loop interpretability prior. In NeurIPS, 2018.
[Liu et al., 2019] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, 2019.
[Liu et al., 2020] Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang. LogiQA: A challenge dataset for machine reading comprehension with logical reasoning. In IJCAI, pages 3622-3628, 2020.
[Manhaeve et al., 2018] Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, and Luc De Raedt. DeepProbLog: Neural probabilistic logic programming. In NeurIPS, 2018.
[McCoy et al., 2019] Tom McCoy, Ellie Pavlick, and Tal Linzen. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In ACL, 2019.
[Misra et al., 2020] Kanishka Misra, Allyson Ettinger, and Julia Rayz. Exploring BERT's sensitivity to lexical cues using tests from semantic priming. In EMNLP, 2020.
[Niven and Kao, 2019] Timothy Niven and Hung-Yu Kao. Probing neural network comprehension of natural language arguments. In ACL, 2019.
[Petroni et al., 2019] Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander H. Miller. Language models as knowledge bases? In EMNLP-IJCNLP, 2019.
[Poliak et al., 2018] Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. Hypothesis only baselines in natural language inference. In NAACL-HLT, 2018.
[Qu and Tang, 2019] Meng Qu and Jian Tang. Probabilistic logic neural networks for reasoning. In NeurIPS, 2019.
[Richardson and Domingos, 2006] Matthew Richardson and Pedro M. Domingos. Markov logic networks. Machine Learning, 2006.
[Serrano and Smith, 2019] Sofia Serrano and Noah A. Smith. Is attention interpretable? In ACL, 2019.
[Singla and Domingos, 2005] Parag Singla and Pedro M. Domingos. Discriminative training of Markov logic networks. In AAAI, 2005.
[Sinha et al., 2019] Koustuv Sinha, Shagun Sodhani, Jin Dong, Joelle Pineau, and William L. Hamilton. CLUTRR: A diagnostic benchmark for inductive reasoning from text. In EMNLP-IJCNLP, 2019.
[Talmor et al., 2020] Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. oLMpics - on what language model pre-training captures. Transactions of the Association for Computational Linguistics, 8, 2020.
[Talvitie and Koivisto, 2019] Topi Talvitie and Mikko Koivisto. Counting and sampling Markov equivalent directed acyclic graphs. In AAAI, 2019.
[Vaswani et al., 2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017.
[Wang et al., 2019] Haoyu Wang, Mo Yu, Xiaoxiao Guo, Rajarshi Das, Wenhan Xiong, and Tian Gao. Do multi-hop readers dream of reasoning chains? ACL, 2019.
[Yang et al., 2015] Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. In ICLR, 2015.
WASpMuR-AZp
Review
6: Marginally above acceptance threshold
This paper analyses whether the reasoning capabilities of a pretrained language model can be compared to a Bayesian network for a probabilistic reasoning process. An interesting approach of the paper is to test this via intermediate results in the k-hop reasoning environment, where the model's probability of predicting an intermediate result 'agrees' with a probabilistic reasoning model that 'simulates'/aligns with human reasoning. The results section shows that the RoBERTa model performs better than the BERT model, i.e. it aligns better with a probabilistic reasoning model.

This work seems like an interesting approach to analysing the reasoning capabilities of language models and might spark further interest in this area.

I found the following two things unclear:
- "the analytics method should be capable of measuring the probability distributions of the whole reasoning process rather than only provide deterministic reasoning results" - I understood this as saying the language models should be able to measure the probability distributions of reasoning, which is not the case given that their probabilistic approach to language modeling is not trained specifically for reasoning. The reasoning elements these models learn seem to be more of a side-effect than the goal, in the case where they are trained as language models. In this paper they are trained on a specific reasoning dataset, so that differs. I would suggest clarifying this sentence.
- "Neural networks make reasoning through a fully-connected bidirectional graph" - On a high level this seems like a plausible connection, but the letter choice f, a, b, c, d and F, A, B, C, D seems to imply a connection there which is difficult to establish, given that language models learn premises token-by-token or word-by-word, and not premise-by-premise. Please clarify this and make the connection more explicit (an example would be great).

Smaller issues:
- the title is cut off
- "due to the probabilistic characteristics of PTLMs [Manhaeve et al. 2018]"; the work of Manhaeve et al. does not consider language models, so it seems like an inappropriate citation to support the probabilistic nature of PTLMs
- Figure 1 mentions red-bordered circles, but there are no red-bordered circles. Also, premises \psi are mentioned, but \psi is nowhere to be found in the figure
- what are [CLS] and [SEP]? Please define
- close world -> closed-world
- trained each model three times - what does this mean? Trained with a different random seed? Why three?
3: The reviewer is fairly confident that the evaluation is correct
D5lK-IW_xS
MIDL.io/2020/Conference
2020
Model Averaging and Augmented Inference for Stable Echocardiography Segmentation using 2D ConvNets
["Joshua V. Stough"]
The automatic segmentation of heart substructures in 2D echocardiography images is a goal common to both clinicians and researchers. Convolutional neural networks (CNNs) have recently shown the best average performance. However, on the rare occasions that a trained CNN fails, it can fail spectacularly. To mitigate these errors, in this work we develop and validate two easily implementable schemes for regularizing performance in 2D CNNs: model averaging and augmented inference. Model averaging involves training multiple instances of a CNN with data augmentation over a sampled training set. Augmented inference involves accumulating network output over augmentations of the test image. Using the recently released CAMUS echocardiography dataset, we show significant incremental improvement in outlier performance over the baseline model. These encouraging results must still be validated against independent clinical data.
["Convolutional Neural Networks", "Echocardiography", "Segmentation", "Data Augmentation"]
Medical Imaging with Deep Learning - Under Review 2020 Short Paper - MIDL 2020 submission

Model Averaging and Augmented Inference for Stable Echocardiography Segmentation using 2D ConvNets

Author(s) names withheld, email(s) withheld
Address withheld
Editors: Under Review for MIDL 2020

Abstract

The automatic segmentation of heart substructures in 2D echocardiography images is a goal common to both clinicians and researchers. Convolutional neural networks (CNNs) have recently shown the best average performance. However, on the rare occasions that a trained CNN fails, it can fail spectacularly. To mitigate these errors, in this work we develop and validate two easily implementable schemes for regularizing performance in 2D CNNs: model averaging and augmented inference. Model averaging involves training multiple instances of a CNN with data augmentation over a sampled training set. Augmented inference involves accumulating network output over augmentations of the test image. Using the recently released CAMUS echocardiography dataset, we show significant incremental improvement in outlier performance over the baseline model. These encouraging results must still be validated against independent clinical data.

Keywords: Convolutional Neural Networks, Echocardiography, Segmentation, Data Augmentation

1. Introduction

Echocardiography is a ubiquitous imaging modality for diagnosing and managing patients with cardiovascular disease (Virnig et al., 2014), a major cause of morbidity and mortality globally. Derived from the apical two- and four-chamber views (AP2/AP4) of an echo study, the left ventricular (LV) ejection fraction (EF) is the most common clinical index for measuring cardiac function. The time-consuming nature of the required manual delineations, and their high degree of inter-observer variability (Wood et al., 2014), has motivated the development of automatic techniques (Zhang et al., 2018).

Among the many automatic segmentation methods that have been proposed in echo over decades (Noble and Boukerroui, 2006), convolutional neural networks (CNNs) have recently shown the most promise. In order to catalyze further development in this field, Leclerc et al. (2019) recently published the large annotated CAMUS dataset, providing expert manual annotations of hundreds of individual echo frames, needed for the supervised training of such models. The authors also tested numerous deep learning and prior segmentation techniques, reporting that deep CNNs produced the best results.

However, as pixel-wise classifiers without shape or topological constraints, typical CNNs can suffer from catastrophic failures, particularly in poor quality images or those with artifacts. While rare, these failures make CNNs not yet trustworthy for large-scale precision medicine applications using clinical data. To address such outliers, Oktay et al. (2017) proposed an anatomically constrained CNN in 3D echo, where the training is regularized by an additional loss based on a compact encoding of the ground-truth labeled images. However, Leclerc et al. (2019) could not reproduce those benefits on the 2D CAMUS dataset.

[Figure 1: Dice distribution for each structure (Left Ventricle, LV Epicardium, Left Atrium), by view (AP4, AP2) and phase (ED, ES). The left side of each pair represents a single trained model; the right, the 8-fold model average.]

Additionally, C. Qin et al.
(2018) have integrated CNN-based segmentation with motion estimation in 3D cardiac magnetic resonance.

In this work we appropriate bootstrapping concepts to develop and validate two relatively practicable techniques for mitigating these outlier errors in 2D CNNs. The first is model averaging, in which a test image is segmented by multiple instances of a CNN trained with data augmentation over a sampled training set. The second technique is augmented inference, in which model output is accumulated over multiple augmentations of the test image. We use these techniques on the CAMUS dataset and show significant incremental improvement in outlier performance over the baseline model.

2. Methods

In this section we briefly describe our CNN model, data augmentation, evaluation, and experimental setup. Our model architecture is based on the popular U-net CNN (Ronneberger et al., 2015). With 13M parameters, the model uses convolutional down- and up-sampling, additive skip connections, and group normalization (Wu and He, 2018) for improved stability.

To help regularize the output, we train all models with data augmentation reflecting the variability observed in the CAMUS set and in echocardiography studies generally. The augmentations are performed on the fly and include random intensity windowing, rotation about the transducer, and additive Gaussian noise.

To evaluate performance, we report Dice overlap on the segmented 2D echo frames. For S_auto and S_ref representing the areas enclosed by the respective object contours, Dice overlap measures the intersection area divided by the average area, $D(S_{auto}, S_{ref}) = 2\,|S_{auto} \cap S_{ref}| \,/\, (|S_{auto}| + |S_{ref}|)$. Dice is a highly validated measure in 2D.

The publicly released CAMUS dataset consists of 450 patients, two (AP2/AP4) views per patient, and two annotated (diastolic/systolic, ED/ES) phases per view, totalling 1800 echo frames and corresponding label masks (background, LV endocardium LV_endo, LV epicardium LV_epi, and the left atrium LA). Additional information for each patient includes age, sex, and reported ED/ES LV volumes and EF, along with the observed image quality for each view.

We initially generated ten patient folds, stratified on both EF range (<=45%, >=55%, else) and reported AP2 image quality (good, medium, poor), as suggested (Leclerc et al., 2019). We then excluded two folds for a test set totalling 90 patients (20%). We then performed 8-fold cross-validation training on the remaining patient folds: in each iteration, the CNN is trained on seven folds while being validated against another for parameter optimization. Each view is trained separately, resulting in eight model instances per view that can generalize to the test patients.

3. Results

To evaluate model averaging, we compare the 8-fold accumulated inference to a baseline model of an arbitrarily chosen single fold. The box plots of Figure 1 clearly show that model averaging improves median performance and tightens the interquartile range across all structures, views, and phases, with a similar number of outliers ([-3, +2] out of 90). To evaluate augmented inference, we consider an outlier of the baseline model in Figure 2. We accumulate the model inferences over 200 augmentations of the echo frame, as inference is relatively inexpensive. The recorded rotational augmentations are inverted before accumulation. As a result of augmented inference, Dice scores are dramatically improved over single inference for all labels (LV_endo from 0.69 to 0.83, LV_epi from 0.80 to 0.95, LA from 0.36 to 0.70).
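A minimal sketch of the two evaluation ingredients described above, Dice overlap and augmented inference with rotation inversion. The names model, random_rotation, and invert_rotation are placeholders for the trained U-net and the on-the-fly augmenter, not actual released code; model averaging is analogous, summing the per-pixel softmax maps of the eight fold models before the argmax.

    import numpy as np

    def dice(auto: np.ndarray, ref: np.ndarray) -> float:
        """Dice overlap of two binary masks: intersection over the mean area."""
        inter = np.logical_and(auto, ref).sum()
        return 2.0 * inter / (auto.sum() + ref.sum())

    def augmented_inference(model, image: np.ndarray, n_aug: int = 200) -> np.ndarray:
        """Accumulate per-pixel class probabilities over augmented copies of a
        test frame, inverting each recorded rotation before accumulation."""
        acc = None
        for _ in range(n_aug):
            aug, params = random_rotation(image)      # placeholder augmenter
            probs = model(aug)                        # (classes, H, W) softmax map
            probs = invert_rotation(probs, params)    # map back to the image frame
            acc = probs if acc is None else acc + probs
        return acc.argmax(axis=0)                     # accumulated label map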
4. Conclusions
Model averaging and augmented inference are relatively practicable methods that can significantly mitigate catastrophic errors in 2D CNNs. Model averaging significantly reduces interquartile ranges, while augmented inference may dramatically improve segmentations of outlier test images, such as those with imaging artifacts. Future work revolves around incorporating video information and generalizing to other clinical datasets.

[Figure 2: Augmented inference on a test case. Center frame: baseline model performance on the test image (LV_endo yellow, LV_epi magenta, LA blue, ground truth red). Right frame: accumulated performance of augmented inferences of the same model.]

References
C. Qin, W. Bai, J. Schlemper, S.E. Petersen, S.K. Piechnik, S. Neubauer, and D. Rueckert. Joint learning of motion estimation and segmentation for cardiac MR image sequences. In Proc. Int. Conf. on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2018. https://doi.org/10.1007/978-3-030-00934-2_53

Sarah Leclerc, Erik Smistad, João Pedrosa, Andreas Østvik, Frederic Cervenansky, Florian Espinosa, Torvald Espeland, Erik Andreas Rye Berg, Pierre-Marc Jodoin, Thomas Grenier, Carole Lartizien, Jan D'hooge, Lasse Lovstakken, and Olivier Bernard. Deep learning for segmentation using an open large-scale dataset in 2D echocardiography. IEEE Trans Med Imaging, 2019. https://doi.org/10.1109/TMI.2019.2900516; https://www.creatis.insa-lyon.fr/Challenge/camus/index.html

J.A. Noble and D. Boukerroui. Ultrasound image segmentation: a survey. IEEE Trans Med Imaging, 25:987–1010, 2006. https://www.ncbi.nlm.nih.gov/pubmed/16894993

Ozan Oktay, Enzo Ferrante, Konstantinos Kamnitsas, Mattias Heinrich, Wenjia Bai, Jose Caballero, Stuart Cook, Antonio de Marvao, Timothy Dawes, Declan O'Regan, Bernhard Kainz, Ben Glocker, and Daniel Rueckert. Anatomically constrained neural networks (ACNNs): Application to cardiac image enhancement and segmentation. IEEE Trans Med Imaging, 37:384–395, 2017. https://doi.org/10.1109/TMI.2017.2743464

Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. Pages 234–241, 2015. https://doi.org/10.1007/978-3-319-24574-4_28

Beth A. Virnig, Nathan D. Shippee, Brian O'Donnell, Jessica Zeglin, and Shriram Parashuram. Trends in the use of echocardiography, 2007 to 2011, 2014. https://www.ncbi.nlm.nih.gov/books/NBK208663/

Peter W. Wood, Jonathan B. Choy, Navin C. Nanda, and Harald Becher. Left ventricular ejection fraction and volumes: It depends on the imaging method. Echocardiography, 31(1):87–100, 2014. https://doi.org/10.1111/echo.12331

Yuxin Wu and Kaiming He. Group normalization. 2018. https://doi.org/10.1007/978-3-030-01261-8_1

Jeffrey Zhang, Sravani Gajjala, Pulkit Agrawal, Geoffrey H. Tison, Laura A. Hallock, Lauren Beussink-Nelson, Mats H. Lassen, Eugene Fan, Mandar A. Aras, ChaRandle Jordan, Kirsten E. Fleischmann, Michelle Melisko, Atif Qasim, Sanjiv J. Shah, Ruzena Bajcsy, and Rahul C. Deo. Fully automated echocardiogram interpretation in clinical practice. Circulation, 136(16):1623–1635, 2018. https://doi.org/10.1161/CIRCULATIONAHA.118.034338
b_hjgIKX6U
Interesting paper but rather incremental contribution with very brief experimental evaluation
2: Weak reject
The paper proposes to improve the results of echocardiography image segmentation using model averaging and augmented inference. These ideas are not particularly novel, but have proven to be valuable in multiple recent studies. In particular, the authors claim that averaging the predictions from multiple models improves performance and avoids the spectacular failures that a single model's predictions may sometimes exhibit. Additionally, data augmentation at test time also improves the results, making them more stable. The authors have trained and evaluated their method on data from the CAMUS dataset. This dataset is pretty large, and the data variability observed there is sufficient to evaluate the generalisation capabilities of the method proposed by the authors.

Unfortunately, I find that the evaluation is not complete. First of all, the authors only compare a randomly picked model from their 8-fold cross-validation strategy with the average of the 8 folds. It would be interesting to see how a single model performs compared to an average of 2, 3, 4, ..., 8 models. More importantly, it would be very good to see how an average of different architectures would work. Additionally, the authors seem to state that test-time augmentation has been done on only one example, which is the one used for qualitative analysis and reported in the figure. It would have been really great to see a formal comparison of the performance with and without test-time augmentation for the whole test set. Importantly, the box-plot visualisation of the results leaves too much to the imagination of the readers. It would have been much better to include a table with results. Through a table, it would have been possible to show results for more experiments, even though some visibility on outliers might be lost (compared to box plots).

I have no doubt that the technique proposed in the paper is valuable. Given the length constraints of short papers, I also understand the fact that the experimental evaluation is compact. I still think it could have been better, via a table showing different angles on the advantages brought by the proposed technique.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
BJesDsA9t7
ICLR.cc/2019/Conference
2019
Better Accuracy with Quantified Privacy: Representations Learned via Reconstructive Adversarial Network
["Sicong Liu", "Anshumali Shrivastava", "Junzhao Du", "Lin Zhong"]
The remarkable success of machine learning, especially deep learning, has produced a variety of cloud-based services for mobile users. Such services require an end user to send data to the service provider, which presents a serious challenge to end-user privacy. To address this concern, prior works either add noise to the data or send features extracted from the raw data. They struggle to balance utility and privacy because added noise reduces utility and raw data can be reconstructed from extracted features. This work represents a methodical departure from prior works: we balance between a measure of privacy and another of utility by leveraging adversarial learning to find a sweeter tradeoff. We design an encoder that optimizes against the reconstruction error (a measure of privacy), computed adversarially by a Decoder, and for the inference accuracy (a measure of utility), computed by a Classifier. The result is RAN, a novel deep model with a new training algorithm that automatically extracts features for classification that are both private and useful. It turns out that adversarially forcing the extracted features to convey only the information required by classification leads to an implicit regularization that yields better classification accuracy than the original model, which completely ignores privacy. Thus, we achieve better privacy with better utility, a surprising possibility in machine learning! We conducted extensive experiments on five popular datasets over four training schemes, and demonstrate the superiority of RAN compared with existing alternatives.
["end-user privacy", "utility", "feature learning", "adversarial training"]
ABSTRACT
The remarkable success of machine learning, especially deep learning, has produced a variety of cloud-based services for mobile users. Such services require an end user to send data to the service provider, which presents a serious challenge to end-user privacy. To address this concern, prior works either add noise to the data or send features extracted from the raw data. They struggle to balance utility and privacy because added noise reduces utility and raw data can be reconstructed from extracted features.

This work represents a methodical departure from prior works: we balance between a measure of privacy and another of utility by leveraging adversarial learning to find a sweeter tradeoff. We design an encoder that optimizes against the reconstruction error (a measure of privacy), computed adversarially by a Decoder, and for the inference accuracy (a measure of utility), computed by a Classifier. The result is RAN, a novel deep model with a new training algorithm that automatically extracts features for classification that are both private and useful.

It turns out that adversarially forcing the extracted features to convey only the information required by classification leads to an implicit regularization that yields better classification accuracy than the original model, which completely ignores privacy. Thus, we achieve better privacy with better utility, a surprising possibility in machine learning! We conducted extensive experiments on five popular datasets over four training schemes, and demonstrate the superiority of RAN compared with existing alternatives.

1 INTRODUCTION
Today's most robust and accurate models are boosted by deep learning techniques, which benefit a lot of mobile intelligent services, such as speech-based assistants (e.g., Siri) and face-recognition-enabled phone unlock (e.g., FaceID). However, the uncontrolled submission of raw sound, image, and human activity data from mobile users to service providers has well-known privacy risks (Abadi et al., 2016), for example, underlying correlation detection, re-identification, and other malicious mining (Dwork et al., 2017; Bhatia et al., 2016). Rather than pinning hopes on service providers to anonymise data for privacy preservation, we propose to encode each piece of raw data on the end-user side and only send the encoded data to the service provider. The encoded data must be both private and useful. Privacy can be quantified by the risk of sensitive raw data disclosure given the encoded data. For classification services, utility can be quantified by the inference accuracy achieved by the service provider using a discriminative model.

Existing solutions addressing the privacy concern struggle to balance between the above two seemingly conflicting objectives: privacy vs. utility. An obvious and widely practiced solution to the above problem is to transform the raw data into features and upload the features only, like Google Now (GoogleNow, 2018); Google Cloud Machine Learning Engine also provides an API to preprocess the raw data into engineered features before uploading (GoogleCloud, 2018). This solution not only alleviates the privacy concern but also reduces the mobile data usage. However, it does not provide any quantifiable privacy guarantee. It is well known that we can reconstruct the raw data from the features (Mahendran & Vedaldi, 2015). As a result, Ossia et al.
(2017) further apply dimensionality reduction and add noise to the features before sending them to the service provider, which unfortunately results in inference accuracy degradation.

Unlike previous work, we aim to systematically derive deep features for a sweeter tradeoff between privacy and utility using deep neural networks, by leveraging adversarial training. Our key idea is to judiciously combine generative learning, for maximizing reconstruction error, and discriminative learning, for minimizing discriminative error. Specifically, we present the Reconstructive Adversarial Network (RAN), an end-to-end deep model with a new training algorithm. RAN controls two types of descent gradients, i.e., reconstruction error and discriminative error, in the back-propagation process to guide the training of a feature extractor, or Encoder.

Defining the exact adversarial attacker and finding the right measurement for privacy is an open problem in itself (Mendes & Vilela, 2017). In this paper, we quantify privacy using an intuitive metric, i.e., the difficulty of reconstructing raw data via a generative model, or the reconstruction error. In this case, the adversarial attacker is defined as a data reconstructor. Therefore, as shown in Figure 2, a RAN consists of three parts: a feature extractor (Encoder), a utility discriminator (Classifier), and an adversarial reconstructor (Decoder). The output of the Encoder feeds into the input of the Classifier and the Decoder. We envision that the Encoder runs on mobile devices and processes raw data into features. The Classifier runs on an untrusted platform, e.g., the cloud. A malicious party can seek to reconstruct the raw data from the features using the Decoder. There is no theoretic guarantee on end-to-end training of the collaborated discriminative and generative models. Therefore, we present a novel algorithm to train the RAN via an adversarial process, i.e., first training the Encoder with a Classifier to improve the intermediate features' utility for discriminative tasks, and then confronting the Encoder with an adversary Decoder to enhance the features' privacy. All three parts, Encoder, Classifier and Decoder, are iteratively trained using gradient descent. From the manifold perspective, the two separate flows across RAN's Encoder, Decoder and Classifier, i.e., the descent gradients of the discrimination error and the reconstruction error back-propagated from the ends of the Classifier and the Decoder, guide the model parameter updates, which can iteratively derive the privacy-specific and utility-imposed feature manifold.

Using the MNIST (LeCun, 1998), CIFAR-10 (Krizhevsky et al., 2014), ImageNet (Deng et al., 2009), UbiSound (Sicong et al., 2017) and Har (UCI, 2017) benchmark datasets, we show RAN is effective in training an Encoder for end users to generate deep features that are both private and useful. Surprisingly, we observe that features adversarially learned to remove redundant information, for privacy, even surpass the accuracy of the original model. Removing redundant information enhances the generalization. See §3 and §A for more details. This better generalization is an auspicious illustration that in practice, with machine learning, we can gain both utility and privacy at the same time.

In the rest of this paper, we elaborate RAN's design in §2 and evaluate the performance of RAN in §3. We next review the related work in §4 and conclude this work in §5.
We finally present the theoretic interpretation of RAN in Appendix §A.

2 DESIGN OF RAN
This section first formulates the privacy-preserving problem, and then elaborates on RAN's design.

2.1 PROBLEM DEFINITION OF MOBILE DATA PRIVACY PRESERVING
Many services exist today to analyze data from end users. In this work, we do not trust service providers for the privacy of data: they could be malicious or subject to malicious exploits. For example, as shown in Figure 1, an end user takes a picture of a product and sends it to a cloud-based service to find a place to purchase it, which is indeed a service Amazon provides. A lot of sensitive information could accidentally come with the picture, such as personal information and user location in the background.

Our key insight is that most services actually do not need the raw data. Therefore, the mobile user can encode raw data into features through a multi-layer Encoder (E) on the client side and only deliver the features to the service provider. Such features should ideally have the following two properties. Utility: contain enough essential information of the raw data so that they are useful for the intended service, e.g., yielding high accuracy for object recognition. Privacy: it is hard to recover the original information of the raw data from the perturbed features through a reverse deep model (Zhang et al., 2016).

[Figure 1: Framework of mobile data privacy preserving. Mobile users leverage the learned Encoder to generate deep features from the raw data (i.e., a "tea bag" picture) before submitting it. The service provider uses the learned Classifier on the received deep features to recognize the object in the picture and recommend a seller.]

2.2 UTILITY AND PRIVACY METRICS
In this work, we focus on classification services. Therefore, utility is quantified as the inference accuracy of a discriminative model employed by the service provider. Since defining the exact adversarial attacker and finding the right measurement for privacy is an open problem in itself (Mendes & Vilela, 2017), this paper quantifies privacy by an intuitive metric, i.e., the reconstruction error in a reversed deep model, X, employed by a malicious party. The reconstruction error measures the risk of original data disclosure. Since the Encoder is distributed to mobile users, we assume it is available to both service providers and potential attackers. That is, both the service provider and the malicious party can train their models using raw data and the corresponding Encoder output. As such, we can restate the desirable properties for the Encoder output as:

Utility:  max_E prob(Y'_i = Y_i),  i ∈ T
Privacy:  max_E min_X |I_i - I'_i|^2,  i ∈ T    (1)

where prob(Y'_i = Y_i) denotes the correct inference probability, i.e., accuracy, of the classification service on the testing data T; Y'_i and Y_i are the inferred class and the true label, respectively; and |I_i - I'_i|^2 is the Euclidean distance, i.e., reconstruction error, between the raw data I_i and the mimic data I'_i reconstructed by a malicious party from the Encoder output.

The first objective (Utility) is well understood for discriminative learning. It can be achieved via a standard optimization process, i.e., minimizing the cross entropy between the predicted label Y'_i and the ground truth Y_i in a supervised manner (Kruse et al., 2013). The inner part of the second objective, min_X |I_i - I'_i|^2, is also well understood for generative learning.
On the other hand, the outer part, max_E |I_i - I'_i|^2, is the opposite, i.e., maximizing the reconstruction error. Therefore, the Encoder and the reverse deep model employed by the malicious party (X) are adversarial to each other in their optimization objectives.

Achieving the above two objectives at the same time is challenging, since utility, i.e., maximized accuracy, and privacy, i.e., maximized reconstruction error, are conflicting objectives for the feature extractor, i.e., the Encoder. When improving Utility, the Encoder must extract features that represent the relevant essence of the data; when improving Privacy, the Encoder can discard the utility-relevant essence of the data. If not done properly, Encoder output optimized for Utility leads to effective data reconstruction by a reverse model and therefore poor Privacy (Rifai et al., 2011).

2.3 ARCHITECTURE OF RAN
To tackle the above challenges, we present RAN to train a feature extractor, i.e., an Encoder, with good trade-offs between privacy and utility. As shown in Figure 2, RAN employs two additional neural network modules, a Decoder (D) and a Classifier (C), to train the Encoder (E).

[Figure 2: Architecture of the reconstructive adversarial network (RAN). Raw data I is mapped by the Encoder to features E(I); the Decoder produces the reconstruction I' = D(E(I)) and the Classifier produces the inference result Y' = C(E(I)), with the associated losses O_d, O_g, O_a.]

The Classifier simulates the intended classification service; when RAN is trained by the service provider, the Classifier can be the same discriminative model eventually used. The Decoder simulates a malicious attacker that attempts to reconstruct the raw data from the Encoder output. All three modules are trained end to end to establish the Encoder (E) for end users to extract deep features E(I) from raw data I. The training is an iterative process that will be elaborated in §2.4. Below we first introduce RAN's neural network architecture, along with some empirically gained design insights.

The Encoder (E) consists of an input layer, multiple convolutional layers, pooling layers, and batch-normalization layers. We note that the judicious usage of pooling layers and batch-normalization layers contributes to the deep features' utility and privacy. The batch-normalization layer helps the features' utility because it normalizes the activations to avoid their being too high or too low and thus has a regularization effect (Ioffe & Szegedy, 2015). It contributes to the features' privacy as well, since it is hard for a Decoder to recover detailed information from normalized features. The max-pooling layer is helpful for enhancing the features' privacy, because no un-pooling technique can recover fine details from size-reduced features by shifting small parts to precisely arrange them into a larger meaningful structure (Milletari et al., 2016).

The Decoder (D) is a usual Encoder turned upside down, composed of multiple un-pooling layers (Mahendran & Vedaldi, 2015) and deconvolutional layers (Zeiler et al., 2010). We note that the use of the Decoder in training the Encoder is to simulate a malicious party. After obtaining a (binary) version of the Encoder, a malicious party is free to explore any neural architecture to reconstruct the raw data. In this paper, we choose a worst-case Decoder, i.e., an exactly layer-to-layer reversed architecture that mirrors the Encoder. That is, we assume a powerful adversarial Decoder that knows the Encoder's operations and connections in training. We also note that the architecture and training algorithm of RAN can easily incorporate other architectures as the Decoder.
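As a rough illustration only, since the exact layer counts and widths are not specified here, an Encoder built from the ingredients above and a layer-wise mirrored Decoder might look like the following PyTorch sketch, where un-pooling is approximated by transposed convolutions because the pooling switches are not shared with the adversary.

```python
import torch.nn as nn

class Encoder(nn.Module):
    """Conv + batch-norm + max-pool blocks; batch-norm flattens activation
    scales and max-pooling discards fine spatial detail, both of which make
    faithful reconstruction by an adversary harder."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Layer-to-layer mirror of the Encoder; without the pooling indices it
    can only guess where un-pooled activations belong."""
    def __init__(self, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, out_ch, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)
```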
The Classifier (C) is a multi-layer perceptron (MLP) that processes the deep features and outputs inference results through several fully-connected layers (Kruse et al., 2013). As we noted for the Decoder above, a service provider can explore any neural architecture for its discriminative model, given the Encoder. The reason we choose this specific architecture to train the Encoder is that some of the most successful CNN architectures, e.g., VGG and AlexNet, can be viewed as the Encoder plus the Classifier of our choice.

2.4 TRAINING ALGORITHM OF RAN
Our goal with RAN is to train an Encoder that produces output that is both useful, i.e., leading to high inference accuracy when used for classification tasks, and private, i.e., leading to high reconstruction error when reverse-engineered by an attacker. As we noted in §2.1, these two objectives can be competing when taken naively. The key idea of RAN's training algorithm is to train the Encoder along with the Classifier and the Decoder, which simulate the service provider and a malicious attacker, respectively. Given a training dataset T of m pairs of I, the raw data, and Y, the true label, we train a RAN through an iterative process with three stages:

Algorithm 1: Mini-batch stochastic training of the reconstructive adversarial network (RAN)
Input: Dataset T
Output: RAN's weights {θ_e, θ_d, θ_c}
1   Initialize θ_e, θ_d, θ_c;
2   for n epochs do
3       Sample a mini-batch I of m samples from T;
4       for k steps do
5           Update θ_e and θ_c by gradient descent with learning rate l1: minimize O_d;
6           Update θ_d by gradient descent with learning rate l2: minimize O_g;
7       end
8       Update θ_e and θ_c by gradient descent with learning rate l3: minimize O_a;
9   end
*Note: n and k are two important hyper-parameters.

1. Discriminative training maximizes the accuracy of the Classifier; mathematically, it minimizes the cross entropy H between the predicted class C(E(I_i)) and the true label Y_i:

   O_d = Σ_{i=1}^{m} H(Y_i, C(E(I_i)))    (2)

2. Generative training minimizes the reconstruction error of the Decoder:

   O_g = Σ_{i=1}^{m} |I_i - D(E(I_i))|^2    (3)

3. Adversarial training finds a tradeoff point between utility and privacy:

   O_a = Σ_{i=1}^{m} [ λ H(Y_i, C(E(I_i))) - (1 - λ) |I_i - D(E(I_i))|^2 ]    (4)

   It is essentially a Lagrangian function of the objectives of the first two stages; λ is the Lagrange multiplier that can be used to balance between utility and privacy.

Algorithm 1 summarizes the three-stage training algorithm. We leverage mini-batch techniques to balance training robustness and efficiency (line 3) (Li et al., 2014). Within each epoch, we first perform the discriminative and generative stages (lines 5, 6) to initialize model weights, and then perform the adversarial stage (line 8) to seek a balance between utility and privacy. We note that k in line 4 is a hyper-parameter of the first two stages. Running k such steps followed by a single iteration of the third stage aims to synchronize the convergence speeds of the three training stages, borrowing existing techniques from generative adversarial networks (Goodfellow et al., 2014). Our implementation uses an overall optimized value, k = 3, found by comparing several discrete options. We use the Adam optimizer (Kingma & Ba, 2014) with an adaptive learning rate for all three stages (lines 5, 6 and 8).
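A minimal PyTorch sketch of Algorithm 1 follows, assuming `encoder`, `decoder`, and `classifier` modules such as the ones above and a standard `DataLoader` yielding (x, y) mini-batches. The interleaving of k warm-up steps with one adversarial step is approximated with a step counter; the exact schedule, learning rates, and λ are the paper's hyper-parameters and are not reproduced here.

```python
import torch
import torch.nn.functional as F

def train_ran(encoder, decoder, classifier, loader, epochs, k=3, lam=0.5):
    # Separate optimizers: (Encoder + Classifier) vs. the adversarial Decoder.
    opt_ec = torch.optim.Adam(list(encoder.parameters()) +
                              list(classifier.parameters()))
    opt_d = torch.optim.Adam(decoder.parameters())
    for _ in range(epochs):
        for step, (x, y) in enumerate(loader):
            if step % (k + 1) < k:
                # Stage 1 (discriminative): minimize O_d over theta_e, theta_c.
                loss_d = F.cross_entropy(classifier(encoder(x)), y)
                opt_ec.zero_grad(); loss_d.backward(); opt_ec.step()
                # Stage 2 (generative): minimize O_g over theta_d only,
                # detaching so the Encoder is not updated here.
                loss_g = F.mse_loss(decoder(encoder(x).detach()), x)
                opt_d.zero_grad(); loss_g.backward(); opt_d.step()
            else:
                # Stage 3 (adversarial): minimize the Lagrangian O_a over
                # theta_e, theta_c; the minus sign pushes reconstruction
                # error UP while keeping classification loss down.
                z = encoder(x)
                loss_a = (lam * F.cross_entropy(classifier(z), y)
                          - (1.0 - lam) * F.mse_loss(decoder(z), x))
                opt_ec.zero_grad(); loss_a.backward(); opt_ec.step()
```

Only `opt_ec` steps in stage 3, so the Decoder stays fixed while the Encoder learns to defeat it, matching line 8 of Algorithm 1.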
3 EVALUATION
In this section, we first compare RAN's performance on the privacy-utility tradeoff with three baselines, and then visualize the utility and privacy of the resulting Encoder output.

Evaluation tasks and models. We evaluate RAN, especially the resulting Encoder, with five popular classification services. Specifically, RAN is evaluated for hand-written digit recognition (T1: MNIST, LeCun, 1998), image classification (T2: CIFAR-10, Krizhevsky et al., 2014; T3: ImageNet, Deng et al., 2009), acoustic event sensing (T4: UbiSound, Sicong et al., 2017), and accelerometer- and gyroscope-based human activity recognition (T5: Har, UCI, 2017). According to the sample size, LeNet is selected as the neural architecture of RAN's Encoder plus Classifier for T1, T4 and T5, while AlexNet and VGG-16 are chosen for T2 and T3, respectively. To assume a powerful adversary that knows the Encoder during training, RAN's Decoder exactly mirrors its Encoder for each task in training.

[Figure 3: Performance comparison of RAN with three baselines on four datasets: (a) Digit (MNIST), (b) Image (ImageNet), (c) Sound (UbiSound), (d) Activity (Har). The y-axis is the test reconstruction error, normalized by a log operation; the x-axis represents the utility (accuracy). Each panel compares Noisy, DNN, DNN(resized), and RAN.]

3.1 UTILITY VS. PRIVACY TRADEOFFS
This experiment illustrates the superiority of RAN compared with three state-of-the-art data privacy preserving baselines. It does so with five tasks (T1, T2, T3, T4, T5); however, due to space limits we do not show the results for CIFAR-10 because they are similar to those for ImageNet.

- Noisy Data (Noisy) perturbs the raw data by adding random Laplace noise to the raw data I, and then submits the noisy data Ĩ to the service provider. This is a typical local differential privacy method (He & Cai, 2017; Dwork et al., 2010). The utility of the noisy data is the inference accuracy of a standard deep model, and its privacy is evaluated by the information loss metric, i.e., |I - Ĩ|^2.
- DNN encodes the raw data into deep features using a DNN-based encoder (e.g., the convolutional and pooling layers of LeNet, AlexNet, or VGG), and delivers only the deep features to the service provider (GoogleCloud, 2018; GoogleNow, 2018). Its privacy is tested by the reconstruction error of a deconvolutional model (mirroring the encoder), and its utility is evaluated by the accuracy of a DNN-based classifier (e.g., the fully-connected layers of LeNet, AlexNet, or VGG).
- DNN(resized) further perturbs the above deep features through principal component analysis and Laplace noise injection, and then delivers the perturbed deep features to the service provider (Ossia et al., 2017). Its privacy and utility are also tested by the same deep-model-based decoder and classifier as in the DNN baseline.
- RAN automatically transforms the raw data into features, i.e., the Encoder output, and then delivers them to the service provider. The privacy of RAN's Encoder output is tested by the reconstruction error of a separately trained decoder, which is taught from the binary version (input and output) of the trained RAN Encoder to simulate a malicious attacker. Its utility is tested by the inference accuracy of RAN's Classifier.

The DNN method provides a high utility standard, and the Noisy and DNN(resized) methods set a strict privacy benchmark for RAN.

Figure 3 summarises the Pareto front of the testing privacy-utility tradeoff of the three baselines and RAN.
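For reference, the two perturbation baselines reduce to a few lines. This sketch assumes NumPy and scikit-learn; the Laplace scale and the number of PCA components are the swept, unspecified parameters.

```python
import numpy as np
from sklearn.decomposition import PCA

def noisy_baseline(x, scale):
    """'Noisy': add element-wise Laplace noise to the raw data I."""
    return x + np.random.laplace(loc=0.0, scale=scale, size=x.shape)

def dnn_resized_baseline(features, n_components, scale):
    """'DNN(resized)': PCA-reduce deep features (n_samples, d), then add
    Laplace noise before sending them to the service provider."""
    reduced = PCA(n_components=n_components).fit_transform(features)
    return reduced + np.random.laplace(0.0, scale, size=reduced.shape)
```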
In this thread of experiments, we inject noise with various factors {0.1, 0.2, ..., 0.9} into each piece of testing data to test the trained Noisy and DNN(resized) baselines, which are both noise-related methods. We test RAN models trained under several settings {0.01, 0.02, ..., 0.9} of the Lagrange multiplier λ in Eq. 4 to recover its tradeoff trends. First, we see that RAN's Encoder output achieves the most stable privacy-utility tradeoff, within a constrictive range, compared to the outputs encoded by the other three baselines. Second, RAN's Encoder output achieves the best overall utility of the four methods. Specifically, RAN's output accuracy (utility) is about 95% on MNIST, UbiSound and Har, 85% on ImageNet, and 76% on CIFAR-10 under the proper setups, which is even larger than that of the original deep model (see the DNN baseline), while the accuracy of the Noisy and DNN(resized) baselines is unstable, ranging from 20% to 93%. Third, RAN's output can attain higher privacy than the usual deep features in a traditional DNN, and guarantees competitive privacy compared to the others. Moreover, RAN's privacy quantified by RAN's own Decoder (the green dashed line in Figure 3) is on average larger than that measured by a third-party Decoder (green triangles in Figure 3) trained given the binary version of RAN's Encoder.

Summary. Overall, RAN outperforms the other three baselines and attains a better privacy-utility tradeoff over five recognition tasks. Moreover, the features derived by the proposed learning algorithm to remove redundant (sensitive) information, for privacy, even surpass the accuracy of the original model. We refer readers to §A for how and why this works from a theoretical perspective. We also note that the regularization parameter λ in RAN can be further systematically fine-tuned, e.g., varied exponentially using reinforcement learning, to discover a better privacy-utility tradeoff.

3.2 UTILITY VISUALIZATION OF RAN'S ENCODER OUTPUT
To illustrate the utility of RAN's Encoder output, we visualize how the distribution of RAN's Encoder output differs from that of traditional deep features.

[Figure 4: 3D visualization of the highly separable features learned by a standard DNN and by RAN's Encoder, in the feature space, on MNIST, UbiSound, Har, and ImageNet. Each color in each panel stands for one class.]

[Figure 5: Zoom in on two categories, i.e., sailboat and car, in the feature space: clusters of "sailboat with/without water" and "car with/without road" under DNN and RAN.]

First, as shown in Figure 4, RAN's Encoder output is highly separable in the feature space, similar to the deep features from a traditional DNN. This reflects the utility for subsequent classification. Second, zooming in on two categories of images for more detail in Figure 5, we see that RAN pushes the features towards the constrictive space dominated by the data without redundant information, i.e., "sailboat without water" and "car without road", while the traditional DNN may capture the background "water" and "road" information to help the classification of "sailboat" and "car".

Summary. First, RAN's Encoder output is highly separable in the feature space, as that of a standard DNN is, which indicates high utility for the subsequent classification tasks.
Second, the learning algorithm of RAN pushes features towards essential information and away from redundant background (sensitive) information (see more interpretations in Appendix §A).

[Figure 6: From left to right: raw image from ImageNet (Raw), image with Laplace noise (Noisy), and images reconstructed from DNN's features, from resized DNN features, and from RAN's Encoder output, for two example images.]

3.3 VISUALIZATION OF RAN'S PRIVACY
In this experiment, we visualize the privacy of RAN's Encoder output, i.e., private features, in comparison to other approaches, using two example images from ImageNet. Figure 6 illustrates the pixel image of the raw data, the noisy data, the mimic data reconstructed from DNN's deep features, and the mimic data reconstructed from RAN's private features, for two "bus" images from the ImageNet dataset. We find that the images reconstructed by RAN's Decoder are dramatically corrupted, and it is hard to distinguish the exact information of the raw images. As mentioned in §3.1, RAN's Decoder is more potent than a separately trained Decoder at reconstructing RAN's hidden features.

Summary. First, the corrupted reconstructed images certify the improved privacy of RAN's Encoder output. Second, the images reconstructed from DNN features recover both object (bus) and background (road) information, while RAN's Encoder tries to retain object information and remove background information. RAN thus lends better privacy and utility (generalization) to its hidden features. More interpretation is in Appendix §A.

4 RELATED WORK
Our work is closely related to the following categories of research.

Privacy Preserving for Mobile Data: Unlike typical privacy-preserving techniques, which are adopted by data collectors (service providers) to release data for public data mining, RAN keeps the raw data under the end user's control, i.e., the user submits only private features, rather than raw data, to service providers. For example, randomized noise addition (He & Cai, 2017) and differential privacy (Dwork et al., 2014; Abadi et al., 2016) techniques have been widely used by service providers to anonymize or remove personally identifiable information, or to release only statistical information, in publicly released datasets. RAN outperforms the Noisy data baseline (a differential privacy method) with better classification utility and competitive privacy (§3.1), because RAN's Encoder is trained end to end with collaborative utility-specified deep learning and privacy-imposed adversarial learning for a good trade-off between the features' utility and privacy.

Privacy Preserving with Deep Learning: Generally, prior works adopt two classes of approaches to protect end users' raw data: the end user modifies raw data before delivering them to service providers (Ossia et al., 2017), or multiple end users cooperate to learn a global data mining result without revealing their individual raw data (Li et al., 2017). However, these segmented systematic methods inevitably incur utility drops in the subsequent recognition tasks. We have compared RAN with resized noisy deep features following Ossia et al. (2017) (§3.1), and concluded that RAN achieves better utility than altering raw data into resized deep features.
This is because RAN's Encoder is also trained along with an accuracy discriminator (the Classifier) to guarantee utility.

Deep Feature Learning Techniques: In order to generate features that facilitate the subsequent classification utility while protecting the raw data's sensitive information from being recovered by generative models, RAN is the first to present an end-to-end deep architecture that sidesteps the black box of collaborative discriminative and generative learning via an end-to-end adversarial process. Extensions of today's discriminative models, generative models, or both, have been studied to seek latent feature variables, which contribute to inference accuracy but allow easy data reconstruction by reverse techniques (Radford et al., 2015; Zhong et al., 2016). Some components used in existing generative models, such as the sensitivity penalty in the contractive autoencoder (Rifai et al., 2011), the data probability distribution in the generative adversarial network (Goodfellow et al., 2014), and the KL divergence in the variational autoencoder (Doersch, 2016), can be further integrated into RAN's framework to define and enhance application-based privacy.

5 CONCLUSION
This paper proposes to establish a deep model for mobile data contributors, i.e., mobile users, to encode raw data into perturbed features before delivering them to the data collector or miner, i.e., the service provider. To realize this, we present RAN, a novel deep model for private and useful feature transformation. RAN is the first to not only maximize the features' classification accuracy but also maximize their reconstruction error, via an end-to-end adversarial training process. In particular, RAN consists of an Encoder for feature extraction, a Decoder for quantifying the data reconstruction error (privacy) from the Encoder output, and a Classifier for accuracy (utility) discrimination. The proposed training algorithm for RAN contains three phases: a discriminative phase on the Encoder and Classifier to boost their discriminative abilities, a generative phase on the Decoder to improve its data generation capacity as the Encoder's adversary, and an adversarial phase on the Encoder, Classifier and Decoder to achieve our design objectives. Evaluations on five widely used datasets show that RAN's Encoder output attains a notable privacy-utility tradeoff. In the future, we plan to investigate finer-grained manifold learning techniques on RAN for feature generalization and privacy improvements.

A few aspects of RAN invite further research. First, the RAN framework and the training algorithm can accommodate different choices of privacy quantification, especially application-specific ones. For example, we could measure privacy by the hidden failure, i.e., the ratio between the background patterns discovered from RAN's Encoder output and the sensitive patterns found in the raw data, in an object recognition task. Second, the training of the two adversaries in RAN, i.e., the Encoder and the Decoder, must be synchronized well to avoid model degradation, because of the differing convergence rates of the Encoder and the Decoder. Therefore, some extra care is needed in RAN, e.g., setting a proper number of iteration steps k and proper learning rates.
B1xF4md62m
Nice idea. Need better experiments.
4: Ok but not good enough - rejection
Privacy concerns arise when data is shared with third parties, a common occurrence. This paper proposes a privacy-preserving classification framework that consists of an encoder that extracts features from data, a classifier that performs the actual classification, and a decoder that tries to reconstruct the original data. In a mobile computing setting, the encoder is deployed at the client side and the classification is performed on the server side, which accesses only the output features of the encoder. The adversarial training process guarantees good accuracy of the classifier while no decoder is able to reconstruct the original input sample accurately. Experimental results are provided to confirm the usefulness of the algorithm.

The problem of privacy-preserving learning is an important topic, and the paper proposes an interesting framework for it. However, I think it needs to provide more solid evaluations of the proposed algorithm, and the presentation also needs to be improved a bit.

Detailed comments:
- I don't see a significant difference between RAN and DNN in Figure 5. Maybe more explanation or better visualization would help.
- The decoder used to measure privacy is very important. Can you provide more detail about the decoders used in all four cases? If possible, evaluating the privacy with different decoders would provide stronger evidence for the proposed method.
- It seems that DNN(resized) is a generalization of DNN. If so, changing the magnitude of noise and the projection dimensions for PCA should give a DNN(resized) result (in Figure 3) that is close to DNN. If the two NNs used in DNN and DNN(resized) are different, I believe it's still possible to apply the algorithm in DNN(resized) to the NN used in DNN, and get a full trace in the figure as the noise and projection change, which would lead to a fairer comparison.
- The abstract mentioned that the proposed algorithm works as an "implicit regularization leading to better classification accuracy than the original model which completely ignores privacy", but I don't see clearly from the experimental results how the accuracy compares to a non-private classifier.
- Section 2.2 mentioned how different kinds of layers would help with the encoder's utility and privacy. It would be better to back up this argument with some experiments.
- I think it needs to be made clearer how reconstruction error works as a measure of privacy. For example, an image which is totally unreadable to the human eye might still leak sensitive information when fed into a machine learning model.
- In terms of references, it would be better to cite more articles on different kinds of privacy attacks and on how raw data can cause privacy risks. For the "Noisy Data" method, it would be better to cite more articles on differential privacy and local differential privacy.
- Some figures, like Figures 3 and 4, are hard to read. The authors may consider making the figures larger (maybe with a 2-by-2 layout), adjusting the position of the legend and the scale of the x-axis for Figure 3, and using markers with different colors for Figure 4.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Better Accuracy with Quantified Privacy: Representations Learned via Reconstructive Adversarial Network ### Paper Abstract The remarkable success of machine learning, especially deep learning, has produced a variety of cloud-based services for mobile users. Such services require an end user to send data to the service provider, which presents a serious challenge to end-user privacy. To address this concern, prior works either add noise to the data or send features extracted from the raw data. They struggle to balance between the utility and privacy because added noise reduces utility and raw data can be reconstructed from extracted features. This work represents a methodical departure from prior works: we balance between a measure of privacy and another of utility by leveraging adversarial learning to find a sweeter tradeoff. We design an encoder that optimizes against the reconstruction error (a measure of privacy), adversarially by a Decoder, and the inference accuracy (a measure of utility) by a Classifier. The result is RAN, a novel deep model with a new training algorithm that automatically extracts features for classification that are both private and useful. It turns out that adversarially forcing the extracted features to only conveys the intended information required by classification leads to an implicit regularization leading to better classification accuracy than the original model which completely ignores privacy. Thus, we achieve better privacy with better utility, a surprising possibility in machine learning! We conducted extensive experiments on five popular datasets over four training schemes, and demonstrate the superiority of RAN compared with existing alternatives. ### Paper Keywords ["end-user privacy", "utility", "feature learning", "adversarial training"] ### Paper Content ABSTRACTThe remarkable success of machine learning, especially deep learning, has pro-duced a variety of cloud-based services for mobile users. Such services require anend user to send data to the service provider, which presents a serious challenge toend-user privacy. To address this concern, prior works either add noise to the dataor send features extracted from the raw data. They struggle to balance betweenthe utility and privacy because added noise reduces utility and raw data can bereconstructed from extracted features.This work represents a methodical departure from prior works: we balance be-tween a measure of privacy and another of utility by leveraging adversarial learn-ing to find a sweeter tradeoff. We design an encoder that optimizes against thereconstruction error (a measure of privacy), adversarially by a Decoder, and theinference accuracy (a measure of utility) by a Classifier. The result is RAN , anovel deep model with a new training algorithm that automatically extracts fea-tures for classification that are both private and useful.It turns out that adversarially forcing the extracted features to only conveys theintended information required by classification leads to an implicit regularizationleading to better classification accuracy than the original model which completelyignores privacy. Thus, we achieve better privacy with better utility, a surprisingpossibility in machine learning! 
We conducted extensive experiments on five popular datasets over four training schemes, and demonstrate the superiority of RAN compared with existing alternatives.
1 INTRODUCTION
Today’s most robust and accurate models are boosted by deep learning techniques, which benefit many mobile intelligent services, such as speech-based assistants (e.g., Siri) and face-recognition-based phone unlocking (e.g., FaceID). However, the uncontrolled submission of raw sound, image, and human activity data from mobile users to the service provider carries well-known privacy risks (Abadi et al., 2016), for example correlation detection, re-identification, and other malicious mining (Dwork et al., 2017; Bhatia et al., 2016). Rather than pinning hopes on service providers to anonymize data for privacy preservation, we propose to encode each piece of raw data on the end-user side and only send the encoded data to the service provider. The encoded data must be both private and useful. Privacy can be quantified by the risk of sensitive raw data disclosure given the encoded data. For classification services, utility can be quantified by the inference accuracy achieved by the service provider using a discriminative model.
Existing solutions addressing the privacy concern struggle to balance the above two seemingly conflicting objectives: privacy vs. utility. An obvious and widely practiced solution to the above problem is to transform the raw data into features and upload the features only, like Google Now GoogleNow (2018); Google Cloud Machine Learning Engine also provides an API to preprocess the raw data into engineered features before uploading GoogleCloud (2018). This solution not only alleviates the privacy concern but also reduces mobile data usage. However, it does not provide any quantifiable privacy guarantee. It is well known that we can reconstruct the raw data from the features Mahendran & Vedaldi (2015). As a result, Ossia et al. (2017) further apply dimensionality reduction and add noise to the features before sending them to the service provider, which unfortunately results in inference accuracy degradation.
Unlike previous work, we aim to systematically derive deep features for a sweeter tradeoff between privacy and utility using deep neural networks, by leveraging adversarial training. Our key idea is to judiciously combine generative learning, for maximizing the reconstruction error, and discriminative learning, for minimizing the discriminative error. Specifically, we present the Reconstructive Adversarial Network (RAN), an end-to-end deep model with a new training algorithm. RAN controls two types of descent gradients, i.e., of the reconstruction error and of the discriminative error, in the back-propagation process to guide the training of a feature extractor, or Encoder.
Defining the exact adversarial attacker and finding the right measurement for privacy is an open problem in itself Mendes & Vilela (2017). In this paper, we quantify Privacy using an intuitive metric, i.e., the difficulty of reconstructing raw data via a generative model, or the reconstruction error. In this case, the adversarial attacker is defined as a data reconstructor. Therefore, as shown in Figure 2, a RAN consists of three parts: a feature extractor (Encoder), a utility discriminator (Classifier), and an adversarial reconstructor (Decoder). The output of the Encoder feeds the input of the Classifier and of the Decoder. We envision that the Encoder runs on mobile devices and processes raw data into features.
The Classifier runs on an untrusted platform, e.g., the cloud. A malicious party can seek to reconstruct the raw data from the features using the Decoder. There is no theoretical guarantee for end-to-end training of the combined discriminative and generative models. Therefore, we present a novel algorithm to train the RAN via an adversarial process, i.e., first training the Encoder with a Classifier to improve the intermediate features’ utility for discriminative tasks, and then confronting the Encoder with an adversarial Decoder to enhance the features’ privacy. All three parts, Encoder, Classifier and Decoder, are iteratively trained using gradient descent. From the manifold perspective, the two separate flows across RAN’s Encoder, Decoder and Classifier, i.e., the descent gradients of the discrimination error and of the reconstruction error back-propagated from the Classifier and the Decoder, guide the model parameter updates, which can iteratively derive the privacy-specific and utility-imposed feature manifold.
Using the MNIST LeCun (1998), CIFAR-10 Krizhevsky et al. (2014), ImageNet Deng et al. (2009), UbiSound Sicong et al. (2017) and Har UCI (2017) benchmark datasets, we show RAN is effective in training an Encoder for end users to generate deep features that are both private and useful. Surprisingly, we observe that adversarially learned features, which remove redundant information for privacy, even surpass the accuracy of the original model. Removing redundant information enhances generalization. See §3 and §A for more details. This better generalization is an auspicious illustration that, in practice, machine learning can gain both utility and privacy at the same time.
In the rest of this paper, we elaborate RAN’s design in §2 and evaluate the performance of RAN in §3. We next review the related work in §4 and conclude this work in §5. We finally present the theoretical interpretation of RAN in Appendix §A.
2 DESIGN OF RAN
This section first formulates the privacy-preserving problem, and then elaborates on RAN’s design.
2.1 PROBLEM DEFINITION OF MOBILE DATA PRIVACY PRESERVING
Many services exist today to analyze data from end users. In this work, we do not trust service providers with the privacy of data: they could be malicious or subject to malicious exploits. For example, as shown in Fig. 1, an end user takes a picture of a product and sends it to a cloud-based service to find a place to purchase it, which is indeed a service Amazon provides. A lot of sensitive information could accidentally come with the picture, such as personal information and the user location in the background.
Our key insight is that most services actually do not need the raw data. Therefore, the mobile user can encode raw data into features through a multi-layer Encoder (E) on the client side and only deliver features to the service provider.
Figure 1: Framework of mobile data privacy preserving. Mobile users leverage the learned Encoder to generate deep features from the raw data (i.e., a “tea bag” picture) before submitting it;
the service provider then uses the learned Classifier on the received deep features to recognize the object in the picture and recommend a seller.
Such features ideally should have the following two properties. Utility: they contain enough essential information about the raw data so that they are useful for the intended service, e.g., yielding high accuracy for object recognition. Privacy: it is hard to recover the original information of the raw data from the perturbed features through a reverse deep model Zhang et al. (2016).
2.2 UTILITY AND PRIVACY METRICS
In this work, we focus on classification services. Therefore, utility is quantified as the inference accuracy of a discriminative model employed by the service provider. Since defining the exact adversarial attacker and finding the right measurement for privacy is an open problem in itself Mendes & Vilela (2017), this paper quantifies privacy by an intuitive metric, i.e., the reconstruction error in a reversed deep model, X, employed by a malicious party. The reconstruction error measures the risk of original data disclosure. Since the Encoder is distributed to mobile users, we assume it is available to both service providers and potential attackers. That is, both the service provider and the malicious party can train their models using raw data and the corresponding Encoder output. As such we can restate the desirable properties for the Encoder output as:

Utility: $\max_E \; \mathrm{prob}(Y'_i = Y_i), \; i \in T$
Privacy: $\max_E \min_X |I_i - I'_i|^2, \; i \in T$    (1)

where $\mathrm{prob}(Y'_i = Y_i)$ denotes the correct inference probability, i.e., accuracy, of the classification service on the testing data $T$; $Y'_i$ and $Y_i$ are the inferred class and the true label, respectively; and $|I_i - I'_i|^2$ is the Euclidean distance, i.e., the reconstruction error, between the raw data $I_i$ and the mimic data $I'_i$ reconstructed by a malicious party from the Encoder output.
The first objective (Utility) is well understood for discriminative learning. It can be achieved via a standard optimization process, i.e., minimizing the cross entropy between the predicted label $Y'_i$ and the ground truth $Y_i$ in a supervised manner Kruse et al. (2013). The inner part of the second objective, $\min_X |I_i - I'_i|^2$, is also well understood for generative learning. On the other hand, the outer part $\max_E |I_i - I'_i|^2$ is the opposite, i.e., maximizing the reconstruction error. Therefore, the Encoder and the reverse deep model employed by the malicious party ($X$) are adversarial to each other in their optimization objectives.
Achieving the above two objectives at the same time is challenging, since utility, i.e., maximized accuracy, and privacy, i.e., maximized reconstruction error, are conflicting objectives for the feature extractor, i.e., the Encoder. When improving Utility, the Encoder must extract features that represent the relevant essence of the data; when improving Privacy, the Encoder can discard the utility-relevant essence of the data. If not done properly, an Encoder output optimized for Utility leads to effective data reconstruction by a reverse model and therefore poor Privacy Rifai et al. (2011).
2.3 ARCHITECTURE OF RAN
To tackle the above challenges, we present RAN to train a feature extractor, i.e., the Encoder, with good trade-offs between privacy and utility. As shown in Fig. 2, RAN employs two additional neural network modules, the Decoder (D) and the Classifier (C), to train the Encoder (E).
Figure 2: Architecture of the reconstructive adversarial network (RAN).
The Classifier simulates the intended classification service; when RAN is trained by the service provider, the Classifier can be the same discriminative model eventually used. The Decoder simulates a malicious attacker that attempts to reconstruct the raw data from the Encoder output. All three modules are trained end-to-end to establish the Encoder (E) with which end users extract deep features E(I) from raw data I. The training is an iterative process that will be elaborated in §2.4. Below we first introduce RAN’s neural network architecture, along with some empirically gained design insights.
The Encoder (E) consists of an input layer, multiple convolutional layers, pooling layers, and batch-normalization layers. We note that the judicious usage of pooling layers and batch-normalization layers contributes to the deep features’ utility and privacy. The batch-normalization layer helps the features’ utility because it normalizes the activations to avoid them becoming too high or too low and thus has a regularization effect Ioffe & Szegedy (2015). It contributes to the features’ privacy as well, since it is hard for the Decoder to recover detailed information from normalized features. The max-pooling layer, in turn, helps enhance the features’ privacy, because no un-pooling technique can recover fine details from size-reduced features by shifting small parts to precisely arrange them into a larger meaningful structure Milletari et al. (2016).
The Decoder (D) is a usual Encoder turned upside down, composed of multiple un-pooling layers Mahendran & Vedaldi (2015) and deconvolutional layers Zeiler et al. (2010). We note that the Decoder is used in training the Encoder to simulate a malicious party. After obtaining a (binary) version of the Encoder, a malicious party is free to explore any neural architecture to reconstruct the raw data. In this paper, we choose a worst-case Decoder, i.e., an exactly layer-by-layer reversed architecture mirroring the Encoder. That is, we assume a powerful adversarial Decoder that knows the Encoder’s operations and connections during training. We also note that the architecture and training algorithm of RAN can easily incorporate other architectures as the Decoder.
The Classifier (C) is a multi-layer perceptron (MLP) with several fully-connected layers that processes the deep features and outputs inference results Kruse et al. (2013). As we noted for the Decoder above, a service provider can explore any neural architecture for its discriminative model, given the Encoder. The reason we choose this specific architecture to train the Encoder is that some of the most successful CNN architectures, e.g., VGG and AlexNet, can be viewed as the Encoder plus the Classifier of our choice.
2.4 TRAINING ALGORITHM OF RAN
Our goal with RAN is to train an Encoder that produces output that is both useful, i.e., leading to high inference accuracy when used for classification tasks, and private, i.e., leading to high reconstruction error when reverse-engineered by an attacker. As we noted in §2.1, these two objectives can be competing when taken naively. The key idea of RAN’s training algorithm is to train the Encoder along with the Classifier and the Decoder, which simulate the service provider and a malicious attacker, respectively.
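To make this concrete, the following is a minimal PyTorch-style sketch of the three modules and of the three objectives O_d, O_g and O_a that are formalized as Eqs. (2)-(4) in the next paragraphs. All layer shapes and names are our own illustrative assumptions (a single-channel 28x28 input, as for MNIST), not the exact architecture used in the paper:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Conv + batch-norm + max-pool stack, run on the mobile device (shapes are assumptions)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
        )

    def forward(self, x):          # (B, 1, 28, 28) -> (B, 32, 7, 7)
        return self.net(x)

class Decoder(nn.Module):
    """Adversarial reconstructor: roughly the Encoder turned upside down."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.ConvTranspose2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.ConvTranspose2d(16, 1, 3, padding=1),
        )

    def forward(self, z):          # (B, 32, 7, 7) -> (B, 1, 28, 28)
        return self.net(z)

class Classifier(nn.Module):
    """MLP head that simulates the classification service."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(32 * 7 * 7, 128),
                                 nn.ReLU(), nn.Linear(128, n_classes))

    def forward(self, z):
        return self.net(z)

def ran_objectives(E, C, D, x, y, lam=0.5):
    """The three training objectives; lam plays the role of the Lagrange multiplier
    (its value here is an arbitrary placeholder)."""
    z = E(x)
    O_d = nn.functional.cross_entropy(C(z), y)            # discriminative stage
    O_g = ((x - D(z)) ** 2).flatten(1).sum(dim=1).mean()  # generative stage
    O_a = lam * O_d - (1.0 - lam) * O_g                   # adversarial trade-off
    return O_d, O_g, O_a
```

In each epoch, one would take k optimizer steps on O_d (updating E and C) and on O_g (updating D), followed by a single step on O_a, as summarized in Algorithm 1 below.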
Given a training dataset T of m pairs of I, the raw data, and Y, the true label, we train a RAN through an iterative process with three stages:

Algorithm 1: Mini-batch stochastic training of a reconstructive adversarial network (RAN)
Input: dataset T. Output: RAN’s weights {θe, θd, θc}.
1: Initialize θe, θd, θc;
2: for n epochs do
3:   Sample a mini-batch I of m samples from T;
4:   for k steps do
5:     Update θe and θc by gradient descent with learning rate l1 to minimize Od;
6:     Update θd by gradient descent with learning rate l2 to minimize Og;
7:   end for
8:   Update θe and θc by gradient descent with learning rate l3 to minimize Oa;
9: end for
Note: n and k are two important hyper-parameters.

1. Discriminative training maximizes the accuracy of the Classifier; mathematically, it minimizes the cross entropy H between the predicted class $C(E(I_i))$ and the true label $Y_i$:
$$O_d = \sum_{i=1}^{m} H\big(Y_i, C(E(I_i))\big) \qquad (2)$$
2. Generative training minimizes the reconstruction error of the Decoder:
$$O_g = \sum_{i=1}^{m} |I_i - D(E(I_i))|^2 \qquad (3)$$
3. Adversarial training finds a tradeoff point between utility and privacy:
$$O_a = \sum_{i=1}^{m} \Big[\lambda\, H\big(Y_i, C(E(I_i))\big) - (1-\lambda)\, |I_i - D(E(I_i))|^2\Big] \qquad (4)$$
It is essentially a Lagrangian function of the objectives of the first two stages. λ is the Lagrange multiplier that can be used to balance between utility and privacy.
Algorithm 1 summarizes the three-stage training algorithm. We leverage mini-batch techniques to balance training robustness and efficiency (line 3) Li et al. (2014). Within each epoch, we first perform the discriminative and generative stages (lines 5, 6) to initialize the model weights. We then perform the adversarial stage (line 8) to seek a balance between utility and privacy. We note that k in line 4 is a hyper-parameter of the first two stages. These k steps, followed by a single iteration of the third stage, aim to synchronize the convergence speeds of the three training stages, borrowing existing techniques from generative adversarial networks Goodfellow et al. (2014). Our implementation uses an overall optimized value, k = 3, chosen by comparing several discrete options. We use the Adam optimizer Kingma & Ba (2014) with an adaptive learning rate for all three stages (lines 5, 6 and 8).
3 EVALUATION
In this section, we first compare RAN’s performance on the privacy-utility tradeoff with three baselines and then visualize the utility and privacy of the resulting Encoder output.
Evaluation tasks and models. We evaluate RAN, especially the resulting Encoder, on five popular classification services. Specifically, RAN is evaluated for hand-written digit recognition (T1: MNIST LeCun (1998)), image classification (T2: CIFAR-10 Krizhevsky et al. (2014), T3: ImageNet Deng et al. (2009)), acoustic event sensing (T4: UbiSound Sicong et al. (2017)), and accelerometer- and gyroscope-based human activity recognition (T5: Har UCI (2017)). According to the sample sizes, LeNet is selected as the neural architecture of RAN’s Encoder plus Classifier for T1, T4 and T5, while AlexNet and VGG-16 are chosen for T2 and T3, respectively. To model a powerful adversary that knows the Encoder, RAN’s Decoder exactly mirrors its Encoder for each task during training.
Figure 3: Performance comparison of RAN with three baselines (Noisy, DNN, DNN(resized)) on four datasets: (a) digits (MNIST), (b) images (ImageNet), (c) sound (UbiSound), (d) activity (Har). The y-axis is the test reconstruction error, normalized by a log operation,
and the x-axis represents the utility (accuracy).
3.1 UTILITY VS. PRIVACY TRADEOFFS
This experiment illustrates the superiority of RAN compared with three state-of-the-art data privacy-preserving baselines. It does so with five tasks (T1, T2, T3, T4, T5). However, due to space limits we do not show the results for CIFAR-10 because they are similar to those for ImageNet.
The Noisy Data (Noisy) method perturbs the raw data by adding random Laplace noise to the raw data I and then submits the noisy data Ĩ to the service provider. This is a typical local differential privacy method He & Cai (2017); Dwork et al. (2010). The utility of the noisy data is the inference accuracy of a standard deep model, and its privacy is evaluated by the information loss metric, i.e., $|I - \tilde{I}|^2$.
The DNN method encodes the raw data into deep features using a DNN-based encoder (e.g., the convolutional and pooling layers of LeNet, AlexNet or VGG), and only delivers the deep features to the service provider GoogleCloud (2018); GoogleNow (2018). Its privacy is tested by the reconstruction error of a deconvolutional model (a mirror of the encoder), and the accuracy of a DNN-based classifier (e.g., the fully-connected layers of LeNet, AlexNet or VGG) evaluates its utility.
The DNN(resized) method further perturbs the above deep features through principal component analysis and Laplace noise injection, and then delivers the perturbed deep features to the service provider Ossia et al. (2017). Its privacy and utility are also tested by the deep-model-based decoder and classifier, the same as in the DNN baseline.
RAN automatically transforms the raw data into features, i.e., the Encoder output, and then delivers them to the service provider. The privacy of RAN’s Encoder output is tested by the reconstruction error of a separately trained decoder, which is trained on the (binary) input-output pairs of the trained RAN’s Encoder to simulate a malicious attacker. Its utility is tested by the inference accuracy of RAN’s Classifier.
The DNN method provides a high utility standard, and the Noisy and DNN(resized) methods set a strict benchmark for RAN.
Figure 3 summarizes the Pareto front of the testing privacy-utility tradeoff for the three baselines and RAN. In this thread of experiments, we inject various noise factors {0.1, 0.2, ..., 0.9} into each piece of testing data to test the trained Noisy and DNN(resized) baselines, which are both noise-related methods. We also test RAN models trained under several settings {0.01, 0.02, ..., 0.9} of the Lagrange multiplier λ in Eq. (4), to recover its tradeoff trends.
Figure 4: 3D visualization of the highly separable features learned by a standard DNN and by RAN’s Encoder in the feature space (MNIST, ImageNet, UbiSound, Har). Each color stands for one class.
Figure 5: Zooming in on two categories, i.e., sailboat and car, in the feature space.
First, we see that RAN’s Encoder output achieves the most stable privacy-utility tradeoff, within a narrow range, compared to the outputs of the other three baselines. Second, RAN’s Encoder output achieves the best overall utility among the four methods. Specifically, RAN’s output utility (accuracy) is 95% on MNIST, UbiSound and Har, 85% on ImageNet, and 76% on CIFAR-10 with proper setups, which is even larger than that of the original deep model (see the DNN baseline).
In contrast, the accuracy of the Noisy and DNN(resized) baselines is unstable, ranging from 20% to 93%. Third, RAN’s output can attain higher privacy than the usual deep features of a traditional DNN, and guarantees competitive privacy compared to the others. Moreover, RAN’s privacy quantified by RAN’s Decoder (the green dashed line in Figure 3) is on average larger than that measured by a third-party Decoder (green triangles in Figure 3) trained given the binary version of RAN’s Encoder.
Summary. Overall, RAN outperforms the other three baselines and attains a better privacy-utility tradeoff over five recognition tasks. Moreover, the features derived by the proposed learning algorithm, which removes redundant (sensitive) information for privacy, even surpass the accuracy of the original model. We refer readers to §A for how and why this works from a theoretical perspective. We also note that the regularization parameter λ in RAN can be further systematically fine-tuned, e.g., exponentially varied using reinforcement learning, so as to discover a better privacy-utility tradeoff.
3.2 UTILITY VISUALIZATION OF RAN’S ENCODER OUTPUT
To illustrate the utility of RAN’s Encoder output, we visualize how the distribution of RAN’s Encoder output differs from that of traditional deep features. First, as shown in Figure 4, RAN’s Encoder output is highly separable in the feature space, similar to the deep features from a traditional DNN. This reflects its utility for subsequent classification. Second, zooming in on two categories of images in Figure 5, we see that RAN pushes the features towards a constrained space dominated by the data without redundant information, i.e., “sailboat without water” and “car without road”, while the traditional DNN may capture the background “water” and “road” information to help the classification of “sailboat” and “car”.
Summary. First, RAN’s Encoder output is highly separable in the feature space, as that of a standard DNN is, which indicates high utility for the subsequent classification tasks. Second, the learning algorithm of RAN pushes features towards essential information and away from redundant background (sensitive) information (see more interpretations in Appendix §A).
Figure 6: From left to right: raw image from ImageNet (Raw), image with Laplace noise (Noisy), and images reconstructed from DNN’s features, resized DNN’s features, and RAN’s Encoder output.
3.3 VISUALIZATION OF RAN’S PRIVACY
In this experiment, we visualize the privacy of RAN’s Encoder output, i.e., the private features, in comparison to other approaches, using two example images from ImageNet. Figure 6 illustrates the pixel images of the raw data, the noisy data, the mimic data reconstructed from DNN’s deep features, and the mimic data reconstructed from RAN’s private features, for two “bus” images from the ImageNet dataset. We find that the images reconstructed by RAN’s Decoder are dramatically corrupted, making it hard to distinguish the exact information of the raw images. As mentioned in §3.1, RAN’s Decoder is more potent than a separately trained Decoder at reconstructing from RAN’s hidden features.
Summary. First, the corrupted images reconstructed from RAN certify the improved privacy of RAN’s Encoder output. Second, the images reconstructed from DNN’s features recover both object (bus) and background (road) information, while RAN’s Encoder tries to retain object information and remove background information.
RAN thus brings better privacy and utility (generalization) to its hidden features. More interpretation is given in Appendix §A.
4 RELATED WORK
Our work is closely related to the following categories of research.
Privacy Preserving for Mobile Data: Unlike typical privacy-preserving techniques, which are adopted by data collectors (service providers) to release data for public data mining, RAN keeps the raw data under the end-user’s control, i.e., the user submits only private features, rather than raw data, to service providers. For example, randomized noise addition He & Cai (2017) and differential privacy Dwork et al. (2014); Abadi et al. (2016) techniques have been widely used by service providers to anonymize/remove personally identifiable information or to release only statistical information in publicly released datasets. RAN outperforms Noisy data (a differential privacy method) with better classification utility and competitive privacy (§3.1), because RAN’s Encoder is trained end-to-end with collaborative utility-specified deep learning and privacy-imposed adversarial learning for a good trade-off between feature utility and privacy.
Privacy Preserving with Deep Learning: Generally, prior works adopt two classes of approaches to protect end-users’ raw data: the end user modifies the raw data before delivering it to service providers Ossia et al. (2017), or multiple end users cooperate to learn a global data mining result without revealing their individual raw data Li et al. (2017). However, these segmented systematic methods inevitably incur utility drops in subsequent recognition tasks. We have compared RAN with the resized noisy deep features of Ossia et al. (2017) (§3.1), and concluded that RAN achieves better utility than altering raw data into resized deep features. This is because RAN’s Encoder is also trained along with an accuracy discriminator (Classifier) to guarantee utility.
Deep Feature Learning Techniques: In order to generate special features that facilitate the subsequent classification utility and protect the raw data’s sensitive information from being recovered by generative models, RAN is the first to present an end-to-end deep architecture that sidesteps the black box of collaborative discriminative and generative learning via an end-to-end adversarial process. Today’s extensions of discriminative models, generative models, or both, have been studied to seek latent feature variables, which contribute to inference accuracy but allow easy data reconstruction by reverse techniques Radford et al. (2015); Zhong et al. (2016). Some components used in existing generative models, such as the sensitivity penalty in contractive autoencoders Rifai et al. (2011), the data probability distribution in generative adversarial networks Goodfellow et al. (2014) and the KL divergence in variational autoencoders Doersch (2016), can be further integrated into RAN’s framework to define and enhance application-based privacy.
5 CONCLUSION
This paper proposes a deep model that lets mobile data contributors, i.e., mobile users, encode their raw data into perturbed features before delivering it to the data collector or miner, i.e., the service provider. To realize it, we present RAN, a novel deep model for private and useful feature transformation. RAN is the first to both maximize the features’ classification accuracy and maximize their reconstruction error via an end-to-end adversarial training process.
In particular, RAN consists of an Encoder for feature extraction, a Decoder that quantifies the data reconstruction error (privacy) of the Encoder output, and a Classifier for accuracy (utility) discrimination. The proposed training algorithm for RAN contains three phases: a discriminative stage on the Encoder and Classifier to boost their discriminative abilities, a generative stage on the Decoder, which stands as the Encoder’s adversary, to improve its data generation capacity, and an adversarial stage on the Encoder, Classifier and Decoder to achieve our design objectives. Evaluations on five widely used datasets show that RAN’s Encoder output attains a notable privacy-utility tradeoff. In the future, we plan to investigate finer-grained manifold learning techniques on RAN for feature generalization and privacy improvements.
A few aspects of RAN do invite further research. First, the RAN framework and the training algorithm can accommodate different choices of privacy quantification, especially application-specific ones. For example, we could measure privacy by the hidden failure, i.e., the ratio between the background patterns discovered from RAN’s Encoder output and the sensitive patterns found in the raw data, in an object recognition task. Second, the training of the two adversaries in RAN, i.e., the Encoder and the Decoder, must be well synchronized to avoid model degradation, because of their different convergence behaviors. Therefore, some extra care is needed in RAN, e.g., setting proper iteration steps k and learning rates.<|im_end|> <|im_start|>assistant ### Review Title Nice idea. Need better experiments. ### Review Text Privacy concerns arise when data is shared with third parties, a common occurrence. This paper proposes a privacy-preserving classification framework that consists of an encoder that extracts features from data, a classifier that performs the actual classification, and a decoder that tries to reconstruct the original data. In a mobile computing setting, the encoder is deployed at the client side and the classification is performed on the server side, which accesses only the output features of the encoder. The adversarial training process guarantees good accuracy of the classifier while no decoder is able to reconstruct the original input sample accurately. Experimental results are provided to confirm the usefulness of the algorithm. The problem of privacy-preserving learning is an important topic and the paper proposes an interesting framework for it. However, I think it needs to provide more solid evaluations of the proposed algorithm, and the presentation also needs to be improved a bit. Detailed comments: I don’t see a significant difference between RAN and DNN in Figure 5. Maybe more explanation or better visualization would help. The decoder used to measure privacy is very important. Can you provide more detail about the decoders used in all four cases? If possible, evaluating the privacy with different decoders may provide stronger evidence for the proposed method. It seems that DNN(resized) is a generalization of DNN. If so, changing the magnitude of the noise and the projection dimension for PCA should give a DNN(resized) result (in Figure 3) that is close to DNN. If the two NNs used in DNN and DNN(resized) are different, I believe it’s still possible to apply the algorithm in DNN(resized) to the NN used in DNN, and get a full trace in the figure as the noise and projection change, which would lead to a fairer comparison.
The abstract mentioned that the proposed algorithm works as an “implicit regularization leading to better classification accuracy than the original model which completely ignores privacy”. But I don’t see clearly from the experimental results how the accuracy compares to a non-private classifier. Section 2.2 mentioned how different kinds of layers would help with the encoder’s utility and privacy. It would be better to back up the argument with some experiments. I think it needs to be made clearer how reconstruction error works as a measure of privacy. For example, an image which is totally unreadable to the human eye might still leak sensitive information when fed into a machine learning model. In terms of references, it would be better to cite more articles on different kinds of privacy attacks showing how raw data can cause privacy risks. For the “Noisy Data” method, it would be better to cite more articles on differential privacy and local differential privacy. Some figures, like Figures 3 and 4, are hard to read. The authors may consider making the figures larger (maybe with a 2-by-2 layout), adjusting the position of the legend and the scale of the x-axis for Figure 3, and using markers with different colors for Figure 4. ### Review Rating 4: Ok but not good enough - rejection ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
eZllW0F5aM_
ICLR.cc/2021/Conference
2021
Don't stack layers in graph neural networks, wire them randomly
["Diego Valsesia", "Giulia Fracastoro", "Enrico Magli"]
Graph neural networks have become a staple in problems addressing learning and analysis of data defined over graphs. However, several results suggest an inherent difficulty in extracting better performance by increasing the number of layers. Besides the classic vanishing gradient issues, recent works attribute this to a phenomenon peculiar to the extraction of node features in graph-based tasks, i.e., the need to consider multiple neighborhood sizes at the same time and adaptively tune them. In this paper, we investigate the recently proposed randomly wired architectures in the context of graph neural networks. Instead of building deeper networks by stacking many layers, we prove that employing a randomly-wired architecture can be a more effective way to increase the capacity of the network and obtain richer representations. We show that such architectures behave like an ensemble of paths, which are able to merge contributions from receptive fields of varied size. Moreover, these receptive fields can also be modulated to be wider or narrower through the trainable weights over the paths. We also provide extensive experimental evidence of the superior performance of randomly wired architectures over three tasks and five graph convolution definitions, using a recent benchmarking framework that addresses the reliability of previous testing methodologies.
["Graph neural networks", "random architectures"]
ABSTRACT
Graph neural networks have become a staple in problems addressing learning and analysis of data defined over graphs. However, several results suggest an inherent difficulty in extracting better performance by increasing the number of layers. Besides the classic vanishing gradient issues, recent works attribute this to a phenomenon peculiar to the extraction of node features in graph-based tasks, i.e., the need to consider multiple neighborhood sizes at the same time and adaptively tune them. In this paper, we investigate the recently proposed randomly wired architectures in the context of graph neural networks. Instead of building deeper networks by stacking many layers, we prove that employing a randomly-wired architecture can be a more effective way to increase the capacity of the network and obtain richer representations. We show that such architectures behave like an ensemble of paths, which are able to merge contributions from receptive fields of varied size. Moreover, these receptive fields can also be modulated to be wider or narrower through the trainable weights over the paths. We also provide extensive experimental evidence of the superior performance of randomly wired architectures over three tasks and five graph convolution definitions, using a recent benchmarking framework that addresses the reliability of previous testing methodologies.
1 INTRODUCTION
Data defined over the nodes of graphs are ubiquitous. Social network profiles (Hamilton et al., 2017), molecular interactions (Duvenaud et al., 2015), citation networks (Sen et al., 2008), and 3D point clouds (Simonovsky & Komodakis, 2017) are just examples of a wide variety of data types where describing the domain as a graph allows to encode constraints and patterns among the data points. Exploiting the graph structure is crucial in order to extract powerful representations of the data. However, this is not a trivial task and only recently have graph neural networks (GNNs) started showing promising approaches to the problem. GNNs (Wu et al., 2020) extend the deep learning toolbox to deal with the irregularity of the graph domain. Much of the work has been focused on defining a graph convolution operation (Bronstein et al., 2017), i.e., a layer that is well-defined over the graph domain but also retains some of the key properties of convolution such as weight reuse and locality. A wide variety of such graph convolution operators has been defined over the years, mostly based on neighborhood aggregation schemes where the features of a node are transformed by processing the features of its neighbors. Such schemes have been shown to be as powerful as the Weisfeiler-Lehman graph isomorphism test (Weisfeiler & Lehman, 1968; Xu et al., 2019), enabling them to simultaneously learn data features and graph topology.
However, contrary to classic literature on CNNs, few works (Li et al., 2019a; Dehmamy et al., 2019; Xu et al., 2018; Dwivedi et al., 2020) have addressed GNN architectures and their role in extracting powerful representations. Several works, starting with the early GCN (Kipf & Welling, 2017), noticed an inability to build deep GNNs, often resulting in worse performance than that of methods that disregard the graph domain when trying to build anything but very shallow networks. This calls for exploring whether advances on CNN architectures can be translated to the GNN space, while understanding the potentially different needs of graph representation learning.
Li et al.
(2019b) suggest that GCNs suffer from oversmoothing as several layers are stacked, resulting in the extraction of mostly low-frequency features. This is related to the lack of self-loop information in this specific graph convolution. It is suggested that ResNet-like architectures mitigate the problem as the skip connections supply high-frequency contributions. Xu et al. (2018) point out that the size of the receptive field of a node, i.e., which nodes contribute to the features of the node under consideration, plays a crucial role, but it can vary widely depending on the graph, and too large receptive fields may actually harm performance. They conclude that for graph-based problems it would be optimal to learn how to adaptively merge contributions from receptive fields of multiple sizes. For this reason they propose an architecture where each layer has a skip connection to the output so that contributions at multiple depths (hence sizes of receptive fields) can be merged. Nonetheless, the problem of finding methods for effectively increasing the capacity of graph neural networks is still standing, since stacking many layers has been proven to provide limited improvements (Li et al., 2019b; Oono & Suzuki, 2019; Alon & Yahav, 2020; NT & Maehara, 2019).
In this paper, we argue that the recently proposed randomly wired architectures (Xie et al., 2019) are ideal for GNNs. In a randomly wired architecture, “layers” are arranged according to a random directed acyclic graph and data are propagated through the paths towards the output. Such an architecture is ideal for GNNs because it realizes the intuition of Xu et al. (2018) of being able to merge receptive fields of varied size. Indeed, the randomly wired network can be seen as an extreme generalization of their jumping network approach, where layer outputs can not only jump to the network output but to other layers as well, continuously merging receptive fields. Hence, randomly wired architectures provide a way of effectively scaling up GNNs, mitigating the depth problem and creating richer representations.
Figure 1: Random architectures aggregate ensembles of paths. This creates a variety of receptive fields (effective neighborhood sizes on the domain graph) that are combined to compute the output. The figure shows the domain graph where nodes are colored (red means high weight, blue low weight) according to the receptive field weighted by the path distribution of a domain node. The receptive field is shown at all the architecture nodes directly contributing to the output. Histograms represent the distribution of path lengths from source to architecture node.
Fig. 1 shows a graphical representation of this concept by highlighting the six layers directly contributing to the output, which have different receptive fields induced by the distribution of paths from the input.
Our novel contributions can be summarized as follows: i) we are the first to analyze randomly wired architectures and show that they are generalizations of ResNets when looked at as ensembles of paths (Veit et al., 2016); ii) we show that path ensembling allows to merge receptive fields of varied size and that it can do so adaptively, i.e., trainable weights on the architecture edges can tune the desired size of the receptive fields to be merged to achieve an optimal configuration for the problem; iii) we introduce improvements to the basic design of randomly wired architectures by optionally embedding a path that sequentially goes through all layers in order to promote larger receptive fields when needed, and by presenting MonteCarlo DropPath, which decorrelates path contributions by randomly dropping architecture edges; iv) we provide extensive experimental evidence, using a recently introduced benchmarking framework (Dwivedi et al., 2020) to ensure significance and reproducibility, that randomly wired architectures consistently outperform ResNets, often by large margins, for five of the most popular graph convolution definitions on three different tasks.
2 BACKGROUND
2.1 GRAPH NEURAL NETWORKS
A major shortcoming of CNNs is that they are unable to process data defined on irregular domains. In particular, one case that is drawing attention is when the data structure can be described by a graph and the data are defined as vectors on the graph nodes. This setting can be found in many applications, including 3D point clouds (Wang et al., 2019; Valsesia et al., 2019), computational biology (Alipanahi et al., 2015; Duvenaud et al., 2015), and social networks (Kipf & Welling, 2017). However, extending CNNs from data with a regular structure, such as images and video, to graph-structured data is not straightforward if one wants to preserve useful properties such as locality and weight reuse.
GNNs redefine the convolution operation so that the new layer definition can be used on domains described by graphs. The most widely adopted graph convolutions in the literature rely on message passing, where a weighted aggregation of the feature vectors in a neighborhood is computed. The GCN (Kipf & Welling, 2017) is arguably the simplest definition, applying the same linear transformation to all the node features, followed by neighborhood aggregation and non-linear activation:
$$h_i^{(l+1)} = \sigma\left(\frac{1}{|\mathcal{N}_i|}\sum_{j\in\mathcal{N}_i} W h_j^{(l)}\right).$$
Variants of this definition have been developed, e.g., GraphSage (Hamilton et al., 2017) concatenates the feature vector of node i to the feature vectors of its neighbors, so that self-information can also be exploited; GIN (Xu et al., 2019) uses a multilayer perceptron instead of a linear transform, replaces the average with a sum to ensure injectivity, and proposes a different way of computing the output by using all the feature vectors produced by the intermediate layers. These definitions are all isotropic because they treat every edge in the same way. It has been observed that better representation capacity can be achieved using anisotropic definitions, where every edge can have a different transformation, at the cost of increased computational complexity.
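As a concrete reading of the GCN equation above, here is a minimal dense-adjacency sketch of the isotropic aggregation; the class name, the dense-matrix formulation and all variable names are our own illustrative choices, not an implementation from the paper or from any specific library:

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """h_i' = ReLU( (1/|N_i|) * sum_{j in N_i} W h_j ).
    A is a dense (N, N) 0/1 (float) adjacency matrix of the domain graph,
    h is an (N, d_in) matrix of node features."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W = nn.Linear(d_in, d_out, bias=False)

    def forward(self, h, A):
        deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)  # |N_i|, guarding isolated nodes
        return torch.relu(A @ self.W(h) / deg)           # mean over neighbors, then ReLU
```

Real systems would use sparse message passing rather than a dense adjacency matrix, but the arithmetic is the same.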
The Gated GCN (Bresson & Laurent, 2017) and GAT (Veličković et al., 2017) definitions fall in this anisotropic category.
2.2 RANDOMLY WIRED ARCHITECTURES
In recent work, Xie et al. (2019) explore whether it is possible to avoid handcrafted design of neural network architectures and, at the same time, avoid expensive neural architecture search methods (Elsken et al., 2019), by designing random architecture generators. They show that “layers” performing convolution, normalization and non-linear activation can be connected in a random architecture graph. Strong performance is observed on the traditional image classification task, outperforming state-of-the-art architectures. The authors conjecture that random architectures generalize ResNets and similar constructions, but the underlying principles of their excellent performance are unclear, as is whether the performance translates to tasks other than image recognition or to operations other than convolution on grids.
3 RANDOMLY WIRED GNNS
In this section, we first introduce randomly wired architectures and the notation we are going to use. We then analyze their behavior when viewed as ensembles of paths.
Figure 2: An architecture node is equivalent to a GNN layer.
A randomly wired architecture consists of a directed acyclic graph (DAG) connecting a source architecture node, which is fed with the input data, to a sink architecture node. One should not confuse the architecture DAG with the graph representing the GNN domain: to avoid any source of confusion we will use the terms architecture nodes (edges) and domain nodes (edges), respectively. A domain node is a node of the graph that is fed as input to the GNN. An architecture node is effectively a GNN layer performing the following operations (Fig. 2): i) aggregation of the inputs from other architecture nodes via a weighted sum as in (Xie et al., 2019):
$$h^{(i)} = \sum_{j\in\mathcal{A}_i} \omega_{ij}\, h^{(j)} = \sum_{j\in\mathcal{A}_i} \sigma(w_{ij})\, h^{(j)}, \qquad i=1,\dots,L-1, \qquad (1)$$
with σ being a sigmoid function, $\mathcal{A}_i$ the set of direct predecessors of architecture node i, and $w_{ij}$ a scalar trainable weight; ii) a non-linear activation; iii) a graph-convolution operation (without output activation); iv) batch normalization.
The architecture DAG is generated using a random graph generator. In this paper, we will focus on the Erdős-Rényi model, where the adjacency matrix of the DAG is a strictly upper triangular matrix with entries being realizations of a Bernoulli random variable with probability p. If multiple input architecture nodes are randomly generated, they are all wired to a single global input. Multiple output architecture nodes are averaged to obtain a global output. Other random generators may be used, e.g., small-world and scale-free random networks have been studied in (Xie et al., 2019). However, a different generator will display a different behavior concerning the properties we study in Sec. 3.1.
3.1 RANDOMLY WIRED ARCHITECTURES BEHAVE LIKE PATH ENSEMBLES
It has already been shown that ResNets behave like ensembles of relatively shallow networks, where one can see the ResNet architecture as a collection of paths of varied lengths (Veit et al., 2016). More specifically, in a ResNet with L layers, where all layers have a skip connection except the first one and the last one, there are exactly $2^{L-2}$ paths, whose lengths follow a Binomial distribution (i.e., the number of paths of length l from layer k to the last layer is $\binom{L-k-1}{l-2}$), and the average path length is $\frac{L}{2}+1$ (Veit et al., 2016).
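The 2^(L-2) count just recalled, as well as the expected path counts of the Erdős-Rényi construction derived next (Lemma 3.1), can be checked numerically. The following sketch is our own illustration (function names and the 0-indexed node convention are assumptions): it samples strictly upper-triangular Bernoulli(p) adjacency matrices and counts source-to-sink paths by dynamic programming:

```python
import numpy as np

def random_dag(L, p, rng):
    """Erdos-Renyi architecture DAG on L nodes: strictly upper-triangular Bernoulli(p) adjacency."""
    return np.triu(rng.random((L, L)) < p, k=1).astype(np.int64)

def count_paths(A):
    """Number of paths from node 0 (source) to node L-1 (sink), by dynamic programming."""
    L = A.shape[0]
    n = np.zeros(L, dtype=np.int64)
    n[0] = 1
    for j in range(1, L):
        n[j] = (n[:j] * A[:j, j]).sum()
    return n[-1]

rng = np.random.default_rng(0)
L, p = 10, 0.6

# p = 1 recovers the fully wired (ResNet-like) DAG with 2^(L-2) source-to-sink paths.
full = np.triu(np.ones((L, L), dtype=np.int64), k=1)
assert count_paths(full) == 2 ** (L - 2)

# For p < 1, the empirical mean path count approaches p * (1 + p)^(L - 2),
# which is the expectation given by Lemma 3.1 for the source node.
mean_paths = np.mean([count_paths(random_dag(L, p, rng)) for _ in range(20000)])
print(mean_paths, p * (1 + p) ** (L - 2))
```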
In this section, we show that a randomly wired neural network can also be considered as an ensemble of networks with varied depth. However, in this case, the distribution of the path lengths is different from the one obtained with the ResNet, as shown in the following lemma (proof in the supplementary material).
Lemma 3.1. Let us consider a randomly wired network with L architecture nodes, where the architecture DAG is generated according to an Erdős-Rényi graph generator with probability p. The average number of paths of length l from node k to the sink, where k < L, is
$$\mathbb{E}\left[N_l^{(k)}\right] = \binom{L-k-1}{l-2} p^{l-1}$$
and the average total number of paths from node k to the sink is $\mathbb{E}\left[N^{(k)}\right] = p(1+p)^{L-k-1}$.
We can observe that if p = 1, the randomly wired network converges to the ResNet architecture. This allows to think of randomly wired architectures as generalizations of ResNets as they enable increased flexibility in the number and distribution of paths instead of enforcing the use of all $2^{L-2}$.
3.2 RECEPTIVE FIELD ANALYSIS
In the case of GNNs, we define the receptive field of a domain node as the neighborhood that affects the output features of that node. As discussed in Sec. 1, the work in (Xu et al., 2018) highlights that one of the possible causes of the depth problem in GNNs is that the size of the receptive field is not adaptive and may rapidly become excessively large. Inspired by this observation, in this section we analyze the receptive field of a randomly wired neural network. We show that the receptive field of the output is a combination of the receptive fields of shallower networks, induced by each of the paths. This allows to effectively merge the contributions from receptive fields of varied size. Moreover, we show that the trainable parameters along the path edges modulate the contributions of various path lengths and enable adaptive receptive fields, that can be tuned by the training procedure.
We first introduce a definition of the receptive field of a feedforward graph neural network¹.
Definition 3.1. Given a feedforward graph neural network with L layers, the receptive field of radius L of a domain node is its L-hop neighborhood.
In a randomly wired architecture, each path induces a corresponding receptive field whose radius depends on the length of the path. Then, the receptive field at the output of the network is obtained by combining the receptive fields of all the paths. In order to analyze the contribution of paths of different lengths to the receptive field of the network, we introduce the concept of distribution of the receptive field radius of the paths. Notice that if we consider a feedforward network with L layers, the distribution of the receptive field radius is a delta centered in L.
The following lemma allows to analyze the distribution of the receptive field radius in a randomly wired architecture.
¹We use the term “feedforward neural network” to indicate an architecture made of a simple line graph, without skip connections: this is a representation of one path.
Figure 3: Distribution of receptive field radius (p = 0.4; ω_ij = 1 for the unweighted curves, ω_ij = 0.5 for the weighted ones; curves shown for unweighted, weighted, unweighted + sequential, weighted + sequential, and ResNet).
Figure 4: Path distribution as a function of the architecture edge probability (p = 0.2, 0.4, 0.6, 0.8, 1.0; p = 1.0 corresponds to a ResNet).
Lemma 3.2.
The derivative $\partial y / \partial x_0$ of the output y of a randomly wired architecture with respect to the input $x_0$ is
$$\frac{\partial y}{\partial x_0} = \sum_{p\in P}\frac{\partial y_p}{\partial x_0} = \sum_{p\in P}\,\prod_{\{i,j\}\in E_p}\omega_{ij}\,\frac{\partial \bar{y}_p}{\partial x_0} = \sum_{l=2}^{L}\,\sum_{p\in P_l}\kappa_p\,\frac{\partial \bar{y}_p}{\partial x_0},$$
where $y_p$ is the output of path p, $\bar{y}_p$ is the output of path p when we consider all the aggregation weights equal to 1, $\kappa_p = \frac{\partial y_p}{\partial x_0} \big/ \frac{\partial \bar{y}_p}{\partial x_0} = \prod_{\{i,j\}\in E_p}\omega_{ij}$, P is the set of all paths from source to sink, L is the number of architecture nodes, $P_l$ is the set of paths from source to sink of length l, and $E_p$ is the set of edges of the path p.
Proof. Direct computation.
From Lemma 3.2, we can observe that the contribution of each path to the gradient is weighted by its corresponding architecture edge weights. Thus, we can define the following distribution of the receptive field radius:
$$\kappa_l = \sum_{p\in P_l}\kappa_p = \sum_{p\in P_l}\,\prod_{\{i,j\}\in E_p}\omega_{ij} \qquad \text{for } l=2,\dots,L, \qquad (2)$$
where we have assumed that the gradient $\partial \bar{y}_p / \partial x_0$ depends only on the path length, as done in (Veit et al., 2016). This is a reasonable assumption if all the architecture nodes perform the same operation.
The distribution of the receptive field radius is therefore influenced by the architecture edge weights. Figure 3 shows an example of how such weights can modify the radius distribution. If we consider $\omega_{ij} = 1$ for all i and j, we obtain that the radius distribution is equal to the path length distribution.
In order to provide some insight into the role of the parameter p in the distribution of the receptive field radius, we focus on this special case and analyze the distribution of the path lengths in a randomly wired architecture by introducing the following lemma (proof in the supplementary material).
Lemma 3.3. Let us consider a randomly wired network with L architecture nodes, where the architecture DAG is generated according to an Erdős-Rényi graph generator with probability p. The average length of the paths from node k to the sink is $\mathbb{E}[l^{(k)}] \approx \frac{p}{1+p}(L-k-1) + 2$.
Therefore, if p = 1 and $\omega_{ij} = 1$ for all i and j, the radius distribution is a Binomial distribution centered in $\frac{L}{2}+1$ (as in ResNets); instead, when p < 1 the mean of the distribution is lower. The path length distribution for different p values is shown in Fig. 4. This shows that, differently from feedforward networks, the receptive field of ResNets and randomly wired architectures is a combination of receptive fields of varied sizes, where most of the contribution is given by shallow paths, i.e., smaller receptive fields. The parameter p of the randomly wired neural network influences the distribution of the receptive field radius: a lower p value skews the distribution towards shallower paths, while a higher p value skews the distribution towards longer paths.
After having considered the special case where $\omega_{ij} = 1$ for all i and j, we now focus on the general case. Since the architecture edge weights are trainable parameters, they can be adapted to optimize the distribution of the receptive field radius. This is one of the strongest advantages provided by randomly wired architectures with respect to ResNets. It is particularly relevant in the context of GNNs, where we may have a non-uniform growth of the receptive field caused by the irregularity of the graph structure (Xu et al., 2018). Notice that the randomly wired architecture can be seen as a generalization of the jumping knowledge networks proposed in (Xu et al., 2018), where all the architecture nodes, not only the last one, merge contributions from previous nodes.
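Eq. (2) above can be evaluated explicitly on small architecture DAGs. The sketch below is our own illustration (names and conventions are assumptions): it enumerates source-to-sink paths and accumulates the product of edge weights per path length, yielding the unnormalized receptive field radius distribution:

```python
import numpy as np
from collections import defaultdict

def radius_distribution(W):
    """kappa_l = sum_{p in P_l} prod_{(i,j) in E_p} w_ij, as in Eq. (2).
    W is a strictly upper-triangular (L, L) matrix of edge weights, with
    W[i, j] = 0 meaning there is no architecture edge. Path length l is counted
    in nodes, so it ranges over 2..L; plain enumeration is fine for small DAGs."""
    L = W.shape[0]
    kappa = defaultdict(float)
    stack = [(0, 1, 1.0)]  # (current node, nodes visited so far, product of edge weights)
    while stack:
        v, l, w = stack.pop()
        if v == L - 1:
            kappa[l] += w
            continue
        for u in range(v + 1, L):
            if W[v, u] != 0.0:
                stack.append((u, l + 1, w * W[v, u]))
    return dict(kappa)
```

With all weights set to 1, kappa_l reduces to the number of length-l paths; training the weights reshapes this distribution, which is how the network adapts its receptive field.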
We also remark that, even if we modify the ResNet architecture by adding trainable weights to each branch of the residual module, we cannot retrieve the behaviour of the randomly wired architecture. In fact, the latter has intrinsically more granularity than a ResNet: the expected number of architecture edge weights of a randomly wired network is $p\frac{L(L+1)}{2}$, whereas a weighted ResNet has only 2(L−2) weights. Ideally, we would like to weight each path independently (i.e., directly optimizing the value of $\kappa_p$ in Eq. (2)). However, this is unfeasible because the number of parameters would become excessively high, and the randomly wired architecture provides an effective tradeoff. Given an architecture node, weighting each input edge differently is important because each edge corresponds to a different length distribution of the paths going through it, as shown by the following lemma (proof in the supplementary material).
Lemma 3.4. Let us consider a randomly wired network with L architecture nodes, where the architecture DAG is generated according to an Erdős-Rényi graph generator with probability p. Given an edge {i, j} between the architecture nodes i and j, where i < j, the average length of the paths from the source to the sink going through that edge is $\mathbb{E}[l_{ij}] \approx \frac{p}{1+p}\left(L-(j-i)-3\right) + 4$.
3.3 SEQUENTIAL PATH
In the previous sections we have shown that a randomly wired architecture behaves like an ensemble of paths merging contributions from receptive fields of varied size, where most of the contribution is provided by shallow paths. As discussed previously, this provides numerous advantages with respect to feedforward networks and ResNets. However, some graph-based tasks may actually benefit from a larger receptive field (Li et al., 2019b), so it is interesting to provide randomly wired architectures with mechanisms to directly promote longer paths. Differently from ResNets, in a randomly wired neural network with L architecture nodes the longest path may be shorter than L, leading to a smaller receptive field. In order to overcome this issue, we propose to modify the generation process of the random architecture by imposing that it should also include the sequential path, i.e., the path traversing all architecture nodes. This design of the architecture skews the initial path length distribution towards longer paths, which has the effect of promoting their usage. Nevertheless, the trainable architecture edge weights will ultimately define the importance of such contributions. Fig. 3 shows an example of how including the sequential path changes the distribution of the receptive field radius.
3.4 MONTECARLO DROPPATH REGULARIZATION
The randomly wired architecture offers new degrees of freedom to introduce regularization techniques. In particular, one could delete a few architecture edges during training with probability p_drop as a way to avoid co-adaptation of architecture nodes. This is reminiscent of DropOut (Srivastava et al., 2014) and DropConnect (Wan et al., 2013), although it is carried out at a higher level of abstraction, i.e., connections between “layers” instead of neurons. It is also reminiscent of techniques used in Neural Architecture Search (Zoph et al., 2018) and of the approach used in the ImageNet experiments in (Xie et al., 2019), although implementation details are unclear for the latter.
We propose to use a MonteCarlo approach where paths are also dropped in testing. Inference is performed multiple times for different realizations of dropped architecture edges and the results are averaged. This allows to sample from the full predictive distribution induced by DropPath, as in MonteCarlo DropOut (Gal & Ghahramani, 2015). It is worth noting that MonteCarlo DropPath decorrelates the contributions of the paths in Eq. (2) even if they share architecture edges (proof in the supplementary material), thus allowing finer control over the modulation of the receptive field radius.
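A minimal sketch of what edge-level DropPath with MonteCarlo inference could look like; it assumes a randomly wired model whose forward pass accepts a per-architecture-edge keep mask (the edge_mask argument, the model interface and all names are our own assumptions, not the paper's implementation):

```python
import torch

def sample_edge_mask(edges, p_drop):
    """One independent Bernoulli keep/drop decision per architecture edge
    (connections between "layers", not individual neurons)."""
    return {e: float(torch.rand(()) > p_drop) for e in edges}

@torch.no_grad()
def mc_droppath_predict(model, x, edges, p_drop=0.1, n_samples=16):
    """Average predictions over random edge-mask realizations, keeping the
    edge dropping active at test time too, as in MonteCarlo DropOut."""
    preds = []
    for _ in range(n_samples):
        mask = sample_edge_mask(edges, p_drop)
        # Inside the model, each term sigma(w_ij) * h_j in the weighted sum of
        # Eq. (1) would be multiplied by the mask entry of edge (i, j).
        preds.append(model(x, edge_mask=mask))
    return torch.stack(preds).mean(dim=0)
```

Consistently with the experiments reported below, n_samples would be set to 16.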
4 EXPERIMENTAL RESULTS
Experimental evaluation of GNNs is a topic that has recently received great attention. The emerging consensus is that benchmarking methods routinely used in past literature are inadequate and lack reproducibility. In particular, Vignac et al. (2020) showed that commonly used citation network datasets like CORA, CITESEER, and PUBMED are too simple and skew results towards simpler architectures, or even promote ignoring the underlying graph. TU datasets are also recognized to be too small (Errica et al., 2019), and the high variability across splits does not allow for sound comparisons across methods. In order to evaluate the gains offered by randomly wired architectures across a wide variety of graph convolutions and tasks, we adopt a recently proposed GNN benchmarking framework (Dwivedi et al., 2020) that has introduced new datasets and allows for reproducible experiments.
Table 1: ZINC Mean Absolute Error.
                L = 8                L = 16               L = 32
GCN             0.465 ± 0.012        0.445 ± 0.022        0.426 ± 0.011
RAN-GCN         0.447 ± 0.019 (1.5)  0.398 ± 0.015 (2.1)  0.385 ± 0.015 (3.7)
GIN             0.444 ± 0.017        0.461 ± 0.022        0.633 ± 0.089
RAN-GIN         0.398 ± 0.004 (2.7)  0.426 ± 0.020 (1.6)  0.540 ± 0.155 (1.0)
GatedGCN        0.339 ± 0.027        0.284 ± 0.014        0.277 ± 0.025
RAN-GatedGCN    0.310 ± 0.010 (1.1)  0.218 ± 0.017 (4.7)  0.215 ± 0.025 (2.5)
GraphSage       0.363 ± 0.005        0.355 ± 0.003        0.351 ± 0.009
RAN-GraphSage   0.368 ± 0.015 (1.0)  0.340 ± 0.009 (5.0)  0.333 ± 0.008 (2.0)
GAT             0.416 ± 0.016        0.384 ± 0.011        0.357 ± 0.011
RAN-GAT         0.430 ± 0.020 (0.9)  0.392 ± 0.012 (0.7)  0.368 ± 0.011 (1.0)
Table 2: CLUSTER Accuracy.
                L = 8                L = 16               L = 32
GCN             48.71 ± 3.04         48.57 ± 7.85         55.62 ± 3.12
RAN-GCN         58.61 ± 3.15 (3.3)   62.24 ± 1.64 (1.7)   63.32 ± 0.99 (2.5)
GIN             49.93 ± 1.79         49.04 ± 2.51         44.96 ± 5.56
RAN-GIN         54.38 ± 2.52 (2.5)   56.58 ± 6.26 (3.0)   56.19 ± 2.91 (2.0)
GatedGCN        63.10 ± 2.54         70.09 ± 1.89         71.94 ± 1.51
RAN-GatedGCN    63.85 ± 2.45 (0.3)   72.13 ± 1.68 (1.1)   74.32 ± 0.89 (1.6)
GraphSage       66.22 ± 0.73         71.50 ± 1.03         70.23 ± 1.77
RAN-GraphSage   67.21 ± 3.23 (1.4)   71.90 ± 2.09 (0.4)   72.56 ± 2.08 (1.3)
GAT             54.35 ± 4.39         60.68 ± 6.10         55.41 ± 4.31
RAN-GAT         63.38 ± 2.49 (2.1)   69.68 ± 1.58 (1.5)   70.93 ± 1.18 (3.6)
We focus on testing five of the most commonly used graph convolution definitions: GCN (Kipf & Welling, 2017), GIN (Xu et al., 2019)², Gated GCN (Bresson & Laurent, 2017), GraphSage (Hamilton et al., 2017), and GAT (Veličković et al., 2017). We select three representative tasks introduced by (Dwivedi et al., 2020): graph regression on the ZINC dataset, node classification on the CLUSTER dataset, and graph classification with superpixels on CIFAR10. To ensure reproducibility, we use exactly the same setting as (Dwivedi et al., 2020). We are interested in the performance differences between the baseline ResNet architecture, i.e., a feedforward architecture with skip connections after every layer, and the randomly wired architecture. It was already shown in (Dwivedi et al., 2020) that ResNet GNNs significantly outperform architectures without residual connections. We remark that other works proposed methods to build deeper GNNs (Rong et al., 2019; Zhao & Akoglu, 2019; Gong et al., 2020), but such techniques can be regarded as complementary to our work. We do not attempt to optimize a specific method, nor are we interested in comparing one graph convolution to another. A fair comparison is ensured by running both methods with the same number of trainable parameters and with the same hyperparameters.
² GIN and RAN-GIN compute the output as in (Xu et al., 2018), using the contributions of all architecture nodes.
In particular, the learning rate of both methods is adaptively decayed between $10^{-3}$ and $10^{-5}$ by halving according to the value of the validation loss, with a patience of 5 epochs. The stopping criterion is the validation loss not improving for 5 epochs after reaching the minimum learning rate. We average the results of all experiments over 4 runs with different weight initializations and different random architecture graphs, drawn with $p = 0.6$. We also evaluate results for multiple values of the total number of layers (architecture nodes) $L$, in order to show that randomly wired GNNs allow a more effective increase in capacity. The random architectures use sequential paths (Sec. 3.3) in the ZINC experiment, sequential paths and DropPath in the CLUSTER experiment, and only DropPath in CIFAR10³. The reason for these choices is that the regression task on ZINC and the node classification task on CLUSTER are particularly sensitive to the size of the receptive field, as observed by analyzing the experimental receptive radius (supplementary material). On the other hand, CIFAR10 is bottlenecked by overfitting, and it greatly benefits from the regularizing effect of DropPath, as also observed on CLUSTER. The number of DropPath iterations in testing was fixed to 16.
³ We do not use DropPath for RAN-GIN in any experiment, as we observed unstable behavior.
4.1 RANDOM GNN BENCHMARKING
The results presented in this section show that randomly wired GNNs have compelling performance in many regards. First of all, they typically provide higher accuracy or lower error than their ResNet counterparts for the same number of parameters. Moreover, they are more effective at increasing capacity than stacking layers: while they are essentially equivalent to ResNets for very short networks, they enable larger gains when additional layers are introduced.
Table 3: CIFAR10 Accuracy.
                L = 8                 L = 16                L = 32
GCN             54.85 ± 0.20          54.74 ± 0.52          54.76 ± 0.53
RAN-GCN         57.81 ± 0.08 (14.8)   57.29 ± 0.44 (4.9)    58.49 ± 0.21 (7.0)
GIN             48.59 ± 1.60          47.14 ± 1.75          36.90 ± 4.71
RAN-GIN         52.52 ± 0.66 (2.5)    52.07 ± 1.78 (2.8)    42.73 ± 7.93 (1.2)
GatedGCN        68.27 ± 0.80          69.16 ± 0.66          69.46 ± 0.47
RAN-GatedGCN    68.86 ± 1.64 (0.7)    72.00 ± 0.44 (4.3)    73.50 ± 0.68 (8.6)
GraphSage       65.58 ± 0.46          66.12 ± 0.11          65.33 ± 0.34
RAN-GraphSage   65.31 ± 0.38 (0.6)    66.10 ± 1.11 (0.2)    67.68 ± 0.37 (6.9)
GAT             64.43 ± 0.33          63.61 ± 0.66          64.62 ± 0.65
RAN-GAT         66.18 ± 0.65 (5.3)    66.27 ± 0.16 (4.0)    66.01 ± 0.38 (2.1)
Table 4: Median relative gain over L = 4.
                    L = 8      L = 16     L = 32
ZINC     ResNet     +7.88%     +17.06%    +17.99%
         Random     +14.22%    +21.81%    +24.36%
CLUSTER  ResNet     +17.90%    +15.80%    +14.26%
         Random     +20.75%    +30.07%    +32.41%
CIFAR10  ResNet     -0.84%     -0.14%     -1.22%
         Random     +1.31%     +3.58%     +4.10%
Table 5: Comparison against SIGN and PPNP.
            Num. param.   GCN (no residuals)   GCN     RAN-GCN   PPNP    SIGN
ZINC        180k          0.526                0.465   0.447     0.746   0.566
CLUSTER     180k          22.23                48.71   58.61     33.00   48.35
CIFAR10     180k          51.16                54.85   57.81     36.37   52.49
ZINC        360k          0.537                0.445   0.398     0.750   0.555
CLUSTER     360k          19.26                48.57   62.24     37.37   48.51
CIFAR10     360k          49.86                54.74   57.29     36.68   53.55
ZINC        720k          0.649                0.426   0.385     0.804   0.574
CLUSTER     720k          20.90                55.62   63.32     28.77   49.14
CIFAR10     720k          47.47                54.76   58.49     38.54   53.72
Table 1 shows the results obtained on the ZINC dataset. The metric is mean absolute error (MAE), so lower is better. In Tables 1-3, each entry reports the mean ± the standard deviation among runs; for the randomly wired variants, the parenthesized value is the level of significance, measured as the number of baseline standard deviations by which the average value of the random architecture deviates from the average value of the baseline. We consider results significant if they deviate by at least $1\sigma$.
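For readers who want to reproduce the significance levels reported in the tables, the computation described above amounts to the following snippet (our reading of the notation; all names are hypothetical):

```python
def significance(base_mean, base_std, ran_mean, higher_is_better=True):
    """Number of baseline standard deviations by which the randomly wired
    architecture's mean deviates from the baseline mean."""
    gain = ran_mean - base_mean if higher_is_better else base_mean - ran_mean
    return gain / base_std

# Example: RAN-GCN vs. GCN on ZINC at L = 32 (MAE, so lower is better):
print(round(significance(0.426, 0.011, 0.385, higher_is_better=False), 1))  # 3.7
```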
The results show that the randomly wired GNNs typically outperform the ResNet baseline by significant margins. Table 2 reports the node classification accuracy on the CLUSTER dataset. It can be seen that the random architectures achieve very significant improvements on this dataset, especially for RAN-GCN, RAN-GIN, and RAN-GAT. Table 3 reports the classification accuracy on CIFAR10 when the images are converted to graphs using superpixels. Also in this case, the randomly wired architecture greatly outperforms the baseline, in some cases achieving gains higher than $5\sigma$. Finally, Table 4 shows the relative performance gain (relative improvement in accuracy or mean absolute error), averaged over all the graph convolution definitions, with respect to a short 4-layer network, where random wiring and ResNets are almost equivalent (results in the supplementary material). We can notice that deeper ResNets always provide lower gains over their shallow counterparts than the randomly wired GNNs do. Moreover, we observe monotonically increasing gains for random GNNs, while deeper ResNets are either unable to significantly extract more performance beyond $L = 16$ or even provide worse results than the $L = 4$ network. This supports our claim that the randomly wired architecture is a superior way to increase GNN capacity.
Finally, we compare the proposed method against two other frameworks for GNNs, namely PPNP (Klicpera et al., 2018) and SIGN (Rossi et al., 2020), which propose different approaches for solving the oversmoothing problem. Due to the significant differences among the approaches, providing a fair comparison is challenging. We decided to equalize the number of parameters across the methods, since notions such as number of layers or features cannot be translated into all the frameworks (180k, 360k, and 720k parameters correspond to the $L = 8, 16, 32$ settings in the previous tables). Table 5 shows the obtained results. We can observe that both PPNP on node classification and SIGN on all tasks outperform the standard GCN architecture without skip connections, but they cannot outperform GCN with residual connections and the randomly wired GCN.
Table 6: Edge probability, L = 16, RAN-GCN.
            p = 0.2          p = 0.4          p = 0.6          p = 0.8
ZINC        0.440 ± 0.025    0.427 ± 0.025    0.409 ± 0.010    0.415 ± 0.012
CLUSTER     59.87 ± 1.64     60.71 ± 2.27     62.75 ± 2.32     62.93 ± 2.75
CIFAR10     56.53 ± 0.61     56.21 ± 0.48     57.44 ± 0.46     56.06 ± 0.48
Table 7: DropPath on CIFAR10, RAN-GatedGCN. No sequential path embedding.
            L = 8            L = 16           L = 32
None        68.07 ± 0.94     70.78 ± 0.38     72.75 ± 0.37
DropPath    68.86 ± 1.64     72.00 ± 0.44     73.50 ± 0.68
Table 8: DropPath rate on CIFAR10, RAN-GatedGCN. No sequential path embedding.
p_drop      0                0.005            0.01             0.02             0.03
            70.78 ± 0.38     70.90 ± 0.46     72.00 ± 0.44     71.55 ± 0.83     71.09 ± 1.79
Table 9: Sequential path embedding on CLUSTER, RAN-GatedGCN. No DropPath.
                     L = 8            L = 16           L = 32
Fully random         56.93 ± 5.17     66.50 ± 5.10     70.38 ± 1.07
Random + Sequential  63.30 ± 2.15     68.89 ± 1.87     71.65 ± 0.97
4.2 ABLATION STUDY
4.2.1 EDGE PROBABILITY
We first investigate the impact of the probability $p$ of drawing an edge in the random architecture. Table 6 shows the results for a basic random architecture without DropPath or an embedded sequential path. It appears that an optimal value of $p$ exists that maximizes performance. This could be explained by a tradeoff between the size of the receptive field and the ability to modulate it.
4.2.2 DROPPATH
The impact of DropPath on CIFAR10 is shown in Table 7. We found the improvement due to DropPath to be increasingly significant for a higher number of architecture nodes, as expected due to the increased number of edges.
The value of the drop probability $p_{\text{drop}} = 0.01$ was not extensively cross-validated. However, Table 8 shows that higher drop rates lowered performance.
4.2.3 EMBEDDED SEQUENTIAL PATH
The impact of embedding a sequential path, as explained in Sec. 3.3, is shown in Table 9. It can be observed that its effect of promoting receptive fields with a larger radius is useful on this task. We remark that, while we do not report the results due to space constraints, this is not always the case, and some tasks (e.g., CIFAR10) do not benefit from promoting larger receptive fields.
5 CONCLUSIONS
We showed how randomly wired architectures can boost the performance of GNNs by merging receptive fields of multiple sizes. Consistent and statistically significant improvements over a wide range of tasks and graph convolutions suggest considering them as the go-to choice for new models.
9q8J98wcmro
More details are needed.
5: Marginally below acceptance threshold
This paper utilizes randomly wired architectures to boost deep GNNs. Theoretical analyses verify that randomly wired architectures behave like path ensembles and enable an adaptive receptive field. Experimental results on three less popular datasets demonstrate the strength of the proposed model. Overall, the idea is interesting. Yet this paper can be made better through the following aspects: 1. This paper contains confusing equations and notations. For example, in Eq. 1, why does w_ij equal \sigma(w_ij)? What is the meaning of domain nodes, and how do we connect the architecture nodes and the domain nodes? What is the definition of \mathcal{A}? 2. This paper only proposes the recursion formula but omits some basic definitions, e.g., the definition of h^{(i)}. Where does the recursion start? Is there an initialized h^{(0)}? 3. The algorithm framework is not clearly depicted. How do R-GCNs accomplish the graph propagation process? 4. Insufficient experimental comparisons. How do R-GCNs and GCNs perform when L=2? How do R-GCNs perform on standard node classification datasets, such as Cora, Citeseer, and Reddit, since deep GCNs fail particularly on node classification? How do R-GCNs perform against other deep frameworks such as APPNP and JKNet, both of which resort to more sophisticated skip connections than ResGCN? #############post-rebuttal############ I have carefully checked all the other reviewers' comments, the authors' response, and the revised version. I thank the authors for their detailed feedback. They have addressed my concerns about the unclear presentation. However, joining the comments from the other reviewers (particularly R3), I still think there are two major issues that prevent me from further increasing my score. Q1. It is still unclear why the proposed model can tackle the over-smoothing issue in existing deep GCNs. This paper has theoretically revealed the benefit of adaptive ensemble paths towards better trainability. Given the claim in the Introduction, it is still unclear why such a benefit can be used to relieve over-smoothing, particularly due to the missing analysis of the output dynamics. As already pointed out by R3, [3] has set up a nice framework for explaining how over-smoothing happens and why deep GCNs fail. It is a pity that this paper has not put its analyses into this framework and discussed the relation with the over-smoothing issue. Actually, a more in-depth discussion of over-smoothing on general GCNs (including ResGCN and APPNP) has also been provided in an arXiv preprint [4]. It does show that residual networks are capable of slowing down the convergence speed to the subspace and thus alleviating over-smoothing. Since the idea of random wiring was initially proposed for CNNs, the contribution of this paper that we expect is to answer how this idea can be utilized to solve the specific weaknesses of the graph domain. Q2. The experimental evaluations are still unconvincing. I appreciate that the authors have additionally provided the performance of SIGN and APPNP in the revised version. Yet, the reported accuracies of APPNP seem weird and much worse than the other baselines. I do not agree with the authors' response that APPNP is not intended to address the over-smoothing problem. As experimentally shown in [5] and theoretically analyzed in [4], keeping the connection between each middle layer and the input layer is able to prevent the output from converging to the subspace caused by over-smoothing, and thereby deliver the desired performance as depth increases.
As this paper has conducted experiments on a newly released benchmark under an inconsistent experimental setup (raised by R3), it is hard to justify the significance of the proposed idea compared with previous methods, especially given the irrational observations on APPNP. Hence, I still believe this paper is below the acceptance line. [3] Kenta Oono and Taiji Suzuki. Graph neural networks exponentially lose expressive power for node classification. In International Conference on Learning Representations, 2020. [4] Tackling Over-Smoothing for General Graph Convolutional Networks, arXiv 2020. [5] Simple and Deep Graph Convolutional Networks, NeurIPS 2020.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
rJljdh4KDH
ICLR.cc/2020/Conference
2020
Multi-Scale Representation Learning for Spatial Feature Distributions using Grid Cells
["Gengchen Mai", "Krzysztof Janowicz", "Bo Yan", "Rui Zhu", "Ling Cai", "Ni Lao"]
Unsupervised text encoding models have recently fueled substantial progress in NLP. The key idea is to use neural networks to convert words in texts to vector space representations (embeddings) based on word positions in a sentence and their contexts, which are suitable for end-to-end training of downstream tasks. We see a strikingly similar situation in spatial analysis, which focuses on incorporating both absolute positions and spatial contexts of geographic objects such as POIs into models. A general-purpose representation model for space is valuable for a multitude of tasks. However, no such general model exists to date beyond simply applying discretization or feed-forward nets to coordinates, and little effort has been put into jointly modeling distributions with vastly different characteristics, which commonly emerge from GIS data. Meanwhile, Nobel Prize-winning neuroscience research shows that grid cells in mammals provide a multi-scale periodic representation that functions as a metric for location encoding and is critical for recognizing places and for path integration. Therefore, we propose a representation learning model called Space2Vec to encode the absolute positions and spatial relationships of places. We conduct experiments on two real-world geographic datasets for two different tasks: 1) predicting types of POIs given their positions and context, and 2) image classification leveraging geo-locations. Results show that, because of its multi-scale representations, Space2Vec outperforms well-established ML approaches such as RBF kernels, multi-layer feed-forward nets, and tile embedding approaches for location modeling and image classification tasks. Detailed analysis shows that each baseline can handle the distribution at only one scale well but performs poorly at other scales. In contrast, Space2Vec's multi-scale representation can handle distributions at different scales.
["Grid cell", "space encoding", "spatially explicit model", "multi-scale periodic representation", "unsupervised learning"]
ABSTRACT

Unsupervised text encoding models have recently fueled substantial progress in Natural Language Processing (NLP). The key idea is to use neural networks to convert words in texts to vector space representations (embeddings) based on word positions in a sentence and their contexts, which are suitable for end-to-end training of downstream tasks. We see a strikingly similar situation in spatial analysis, which focuses on incorporating both absolute positions and spatial contexts of geographic objects such as Points of Interest (POIs) into models. A general-purpose representation model for space is valuable for a multitude of tasks. However, no such general model exists to date beyond simply applying discretization or feed-forward nets to coordinates, and little effort has been put into jointly modeling distributions with vastly different characteristics, which commonly emerge from GIS data. Meanwhile, Nobel Prize-winning neuroscience research shows that grid cells in mammals provide a multi-scale periodic representation that functions as a metric for location encoding and is critical for recognizing places and for path integration. Therefore, we propose a representation learning model called Space2Vec to encode the absolute positions and spatial relationships of places. We conduct experiments on two real-world geographic datasets for two different tasks: 1) predicting types of POIs given their positions and context, and 2) image classification leveraging geo-locations. Results show that, because of its multi-scale representations, Space2Vec outperforms well-established ML approaches such as RBF kernels, multi-layer feed-forward nets, and tile embedding approaches for location modeling and image classification tasks. Detailed analysis shows that each baseline can handle the distribution at only one scale well but performs poorly at other scales. In contrast, Space2Vec's multi-scale representation can handle distributions at different scales.

1 INTRODUCTION

Unsupervised text encoding models such as Word2Vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), ELMo (Peters et al., 2018), and BERT (Devlin et al., 2018) have been effectively utilized in many Natural Language Processing (NLP) tasks. At their core, they train models which encode words into vector space representations based on their positions in the text and their context. A similar situation can be encountered in the field of Geographic Information Science (GIScience). For example, spatial interpolation aims at predicting an attribute value, e.g., elevation, at an unsampled location based on the known attribute values of nearby samples. Geographic information has become an important component of many tasks such as fine-grained image classification (Mac Aodha et al., 2019), point cloud classification and semantic segmentation (Qi et al., 2017), reasoning about Point of Interest (POI) type similarity (Yan et al., 2017), land cover classification (Kussul et al., 2017), and geographic question answering (Mai et al., 2019b). Developing a general model for the vector space representation of any point in space would pave the way for many future applications. [1]

[Footnote 1] Link to project repository: https://github.com/gengchenmai/space2vec

[Figure 1 panels: (a) Women's Clothing, (b) Education, (c) Ripley's K, (d) Renormalized Ripley's K]
Figure 1: The challenge of jointly modeling distributions with very different characteristics. (a)(b) The POI locations (red dots) in Las Vegas and the Space2Vec-predicted conditional likelihood of Women's Clothing (with a clustered distribution) and Education (with an even distribution). The dark area in (b) indicates that the downtown area has more POIs of other types than education. (c) Ripley's K curves of the POI types for which Space2Vec has the largest and smallest improvement over wrap (Mac Aodha et al., 2019). Each curve represents the number of POIs of a certain type inside a certain radius centered at every POI of that type. (d) Ripley's K curves renormalized by POI densities and shown in log scale. To efficiently achieve a multi-scale representation, Space2Vec concatenates the grid cell encodings of 64 scales (with wavelengths ranging from 50 meters to 40 kilometers) as the first layer of a deep model, and trains with POI data in an unsupervised fashion.

However, existing models often utilize specific methods to deal with geographic information and often disregard geographic coordinates. For example, Place2Vec (Yan et al., 2017) converts the coordinates of POIs into spatially collocated POI pairs within certain distance bins, and does not preserve information about the (cardinal) direction between points. Li et al. (2017) propose DCRNN for traffic forecasting, in which the traffic sensor network is converted to a distance-weighted graph, which necessarily forfeits information about the spatial layout of the sensors. There is, however, no general representation model beyond simply applying discretization (Berg et al., 2014; Tang et al., 2015) or feed-forward nets (Chu et al., 2019; Mac Aodha et al., 2019) to coordinates.

A key challenge in developing a general-purpose representation model for space is how to deal with mixtures of distributions with very different characteristics (see the example in Figure 1), which often emerge in spatial datasets (McKenzie et al., 2015). For example, there are POI types with clustered distributions, such as women's clothing, while there are other POI types with regular distributions, such as education. These feature distributions co-exist in the same space, and yet we want a single representation to accommodate all of them in a task such as location-aware image classification (Mac Aodha et al., 2019). Ripley's K is a spatial analysis method used to describe point patterns over a given area of interest. Figure 1c shows the K plot of several POI types in Las Vegas. One can see that, as the radius grows, the numbers of POIs increase at different rates for different POI types. In order to see the relative change of density at different scales, we renormalize the curves by each POI type's density and show them in log scale in Figure 1d. One can see two distinct POI type groups with clustered and even distribution patterns. If we want to model the distribution of these POIs by discretizing the study area into tiles, we have to use small grid sizes for women's clothing while using larger grid sizes for education, because smaller grid sizes lead to over-parameterization of the model and overfitting. In order to jointly describe these distributions and their patterns, we need an encoding method which supports multi-scale representations.
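To make the analysis behind Figure 1 concrete, the following is a minimal sketch (not the authors' code) of how a Ripley's K curve can be estimated for one POI type. The function name, the naive O(N^2) pair-counting estimator, and the omission of edge corrections are our own simplifying assumptions.

```python
import numpy as np

def ripleys_k(points, radii, area):
    """Naive Ripley's K estimate for the points of one POI type.

    points: (N, 2) array of projected coordinates (e.g., meters);
    radii:  1D array of radii r at which to evaluate K(r);
    area:   area of the study region.
    K(r) = area / (N * (N - 1)) * (number of ordered pairs within distance r).
    Edge corrections are omitted for brevity.
    """
    n = len(points)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)  # exclude self-pairs
    counts = (dists[None, :, :] <= radii[:, None, None]).sum(axis=(1, 2))
    return area * counts / (n * (n - 1))
```

Renormalizing each curve by the type's overall density and plotting it in log scale, as in Figure 1d, is what makes clustered and even types directly comparable across scales.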
Nobel Prize-winning neuroscience research (Abbott & Callaway, 2014) has demonstrated that grid cells in mammals provide a multi-scale periodic representation that functions as a metric for location encoding, which is critical for integrating self-motion. Moreover, Blair et al. (2007) show that the multi-scale periodic representation of grid cells can be simulated by summing three cosine grating functions oriented 60° apart, which may be regarded as a simple Fourier model of the hexagonal lattice. This research inspired us to encode locations with multi-scale periodic representations. Our assumption is that decomposed geographic coordinates help machine learning models such as deep neural nets, and that multi-scale representations deal with the inefficiency of intrinsically single-scale methods such as RBF kernels or discretization (tile embeddings). To validate this intuition, we propose an encoder-decoder framework to encode the distribution of point-features [2] in space and train such a model in an unsupervised manner. This idea of using sinusoid functions with different frequencies to encode positions is similar to the position encoding proposed in the Transformer model (Vaswani et al., 2017). However, the position encoding model of the Transformer deals with a discrete 1D space – the positions of words in a sentence – while our model works on higher-dimensional continuous spaces such as the surface of the earth.

[Footnote 2] In GIS and spatial analysis, 'features' are representations of real-world entities. A tree can, for instance, be modeled by a point-feature, while a street would be represented as a line string feature.

In summary, the contributions of our work are as follows:

1. We propose an encoder-decoder framework called Space2Vec that uses sinusoid functions with different frequencies to model absolute positions and spatial contexts. We also propose a multi-head attention mechanism based on context points. To the best of our knowledge, this is the first attention model that explicitly considers the spatial relationships between the query point and the context points.
2. We conduct experiments on two real-world geographic datasets for two different tasks: 1) predicting the types of POIs given their positions and context, and 2) image classification leveraging geo-locations. Space2Vec outperforms well-established encoding methods such as RBF kernels, multi-layer feed-forward nets, and tile embedding approaches for location modeling and image classification.
3. To understand the advantages of Space2Vec, we visualize the firing patterns (response maps) of the location models' encoding-layer neurons and show how they handle spatial structures at different scales by integrating multi-scale representations. Furthermore, the firing patterns of the spatial context models' neurons give insight into how the grid-like cells capture the decreasing distance effect with multi-scale representations.

2 PROBLEM FORMULATION

The distributed representation of point-features in space can be formulated as follows. Given a set of points $P = \{p_i\}$, e.g., Points of Interest (POIs), in $L$-D space ($L = 2, 3$), define a function $f_{P,\theta}(x): \mathbb{R}^{L} \rightarrow \mathbb{R}^{d}$ ($L \ll d$), which is parameterized by $\theta$ and maps any coordinate $x$ in space to a vector representation of $d$ dimensions. Each point (e.g., a restaurant) $p_i = (x_i, v_i)$ is associated with a location $x_i$ and attributes $v_i$ (i.e., POI features such as type, name, capacity, etc.). The function $f_{P,\theta}(x)$ encodes the probability distribution of point features over space and can give a representation of any point in the space.
Attributes (e.g., place types such as Museum) and the coordinates of points can be seen as analogies to words and word positions in commonly used word embedding models.

3 RELATED WORK

There has been theoretical research on neural-network-based path integration / spatial localization models and their relationships with grid cells. Both Cueva & Wei (2018) and Banino et al. (2018) showed that grid-like spatial response patterns emerge in networks trained for navigation tasks, which demonstrates that grid cells are critical for vector-based navigation. Moreover, Gao et al. (2019) propose a representational model for grid cells in navigation tasks which has good qualities such as magnified local isometry. All of this research focuses on understanding the relationship between grid-like spatial response patterns and navigation tasks from a theoretical perspective. In contrast, our goal is to utilize these theoretical results on real-world data in geoinformatics.

The Radial Basis Function (RBF) kernel is a well-established approach to generating learning-friendly representations from points in space for machine learning algorithms such as SVM classification (Baudat & Anouar, 2001) and regression (Bierens, 1994). However, the representation is example-based, i.e., the resultant model uses the positions of training examples as the centers of Gaussian kernel functions (Maz'ya & Schmidt, 1996). In comparison, the grid cell based location encoding relies on sine and cosine functions, and the resultant model is inductive and does not store training examples.

Recently, the computer vision community has shown increasing interest in incorporating geographic information (e.g., coordinate encoding) into neural network architectures for multiple tasks such as image classification (Tang et al., 2015) and fine-grained recognition (Berg et al., 2014; Chu et al., 2019; Mac Aodha et al., 2019). Both Berg et al. (2014) and Tang et al. (2015) proposed to discretize the study area into regular grids. To model the geographical prior distribution of the image categories, the grid id is used for GPS encoding instead of the raw coordinates. However, choosing the correct discretization is challenging (Openshaw, 1984; Fotheringham & Wong, 1991), and incorrect choices can significantly affect the final performance (Moat et al., 2018; Lechner et al., 2012). In addition, discretization does not scale well in terms of memory use. To overcome these difficulties, both Chu et al. (2019) and Mac Aodha et al. (2019) advocated the idea of inductive location encoders which directly encode coordinates into a location embedding. However, both of them directly feed the coordinates into a feed-forward neural network (Chu et al., 2019) or residual blocks (Mac Aodha et al., 2019) without any feature decomposition strategy. Our experiments show that this direct encoding approach is insufficient to capture the spatial feature distribution, and that Space2Vec significantly outperforms them by integrating spatial representations of different scales.

4 METHOD

We solve the distributed representation of point-features in space (defined in Section 2) with an encoder-decoder architecture:

1. Given a point $p_i = (x_i, v_i)$, a point space encoder $Enc^{(x)}(\cdot)$ encodes the location $x_i$ into a location embedding $\mathbf{e}[x_i] \in \mathbb{R}^{d^{(x)}}$, and a point feature encoder $Enc^{(v)}(\cdot)$ encodes its features into a feature embedding $\mathbf{e}[v_i] \in \mathbb{R}^{d^{(v)}}$. $\mathbf{e} = [\mathbf{e}[x_i]; \mathbf{e}[v_i]] \in \mathbb{R}^{d}$ is the full representation of the point $p_i \in P$, where $d = d^{(x)} + d^{(v)}$ and $[\cdot\,;\cdot]$ represents vector concatenation.
In contrast, geographic entities not in $P$ within the studied space can be represented by their location embedding $\mathbf{e}[x_j]$, since their $v_j$ is unknown.

2. We developed two types of decoders, which can be used independently or jointly. A location decoder $Dec^{s}(\cdot)$ reconstructs the point feature embedding $\mathbf{e}[v_i]$ given the location embedding $\mathbf{e}[x_i]$, and a spatial context decoder $Dec^{c}(\cdot)$ reconstructs the feature embedding $\mathbf{e}[v_i]$ of point $p_i$ based on the space and feature embeddings $\{\mathbf{e}_{i1}, \ldots, \mathbf{e}_{ij}, \ldots, \mathbf{e}_{in}\}$ of its nearest neighboring points $\{p_{i1}, \ldots, p_{ij}, \ldots, p_{in}\}$, where $n$ is a hyper-parameter.

4.1 ENCODER

Point Feature Encoder. Each point $p_i = (x_i, v_i)$ in a point set $P$ is often associated with features, such as air pollution stations with air quality measures, POIs with POI types and names, survey and mapping points with elevation values, geological survey points with mineral content measures, and so on. The point feature encoder $Enc^{(v)}(\cdot)$ encodes such features $v_i$ into a feature embedding $\mathbf{e}[v_i] \in \mathbb{R}^{d^{(v)}}$. The implementation of $Enc^{(v)}(\cdot)$ depends on the nature of these features. For example, if each point represents a POI with multiple POI types (as in this study), the feature embedding $\mathbf{e}[v_i]$ can simply be the mean of the POI type embeddings: $\mathbf{e}[v_i] = \frac{1}{H}\sum_{h=1}^{H} \mathbf{t}_h$, where $\mathbf{t}_h$ indicates the $h$-th POI type embedding of a POI $p_i$ with $H$ POI types. We apply L2 normalization to the POI type embedding matrix.

Point Space Encoder. A part of the novelty of this paper comes from the point space encoder $Enc^{(x)}(\cdot)$. We first introduce Theorem 1, which provides an analytical solution $\psi(x)$ as the basis for encoding any location $x \in \mathbb{R}^2$ in 2D space into a distributed representation:

Theorem 1. Let $\psi(x) = (e^{i\langle a_j, x\rangle},\, j = 1, 2, 3)^T \in \mathbb{C}^3$, where $e^{i\theta} = \cos\theta + i\sin\theta$ is the Euler notation of complex values and $\langle a_j, x\rangle$ is the inner product of $a_j$ and $x$. $a_1, a_2, a_3 \in \mathbb{R}^2$ are 2D vectors such that the angle between $a_k$ and $a_l$ is $2\pi/3$ and $\forall j,\ \|a_j\|_2 = \sqrt{2}$. Let $C \in \mathbb{C}^{3\times3}$ be a random complex matrix such that $C^{*}C = I$. Then $\varphi(x) = C\psi(x)$ and $M(\Delta x) = C\,\mathrm{diag}(\psi(\Delta x))\,C^{*}$ satisfy

$$\varphi(x + \Delta x) = M(\Delta x)\,\varphi(x) \quad (1)$$

and

$$\langle \varphi(x + \Delta x), \varphi(x)\rangle \approx d\,(1 - \|\Delta x\|^2), \quad (2)$$

where $d = 3$ is the dimension of $\varphi(x)$ and $\Delta x$ is a small displacement from $x$.

The proof of Theorem 1 can be seen in Gao et al. (2019). $\varphi(x) = C\psi(x) \in \mathbb{C}^3$ amounts to a 6-dimensional real-valued vector, and each dimension shows a hexagonal firing pattern which models the grid cell behavior. Because of the periodicity of $\sin(\cdot)$ and $\cos(\cdot)$, this single-scale representation $\psi(x)$ does not form a global codebook of 2D positions, i.e., there can be $x \neq y$ but $\psi(x) = \psi(y)$.

Inspired by Theorem 1 and the multi-scale periodic representation of grid cells in mammals (Abbott & Callaway, 2014), we set up our point space encoder $\mathbf{e}[x] = Enc^{(x)}_{theory}(x)$ to use sine and cosine functions of different frequencies to encode positions in space. Given any point $x$ in the studied 2D space, the space encoder is $Enc^{(x)}_{theory}(x) = \mathbf{NN}(PE^{(t)}(x))$, where $PE^{(t)}(x) = [PE^{(t)}_0(x); \ldots; PE^{(t)}_s(x); \ldots; PE^{(t)}_{S-1}(x)]$ is a concatenation of multi-scale representations of $d^{(x)} = 6S$ dimensions. Here $S$ is the total number of grid scales and $s = 0, 1, 2, \ldots, S-1$. $\mathbf{NN}(\cdot)$ represents fully connected ReLU layers. Let $a_1 = [1, 0]^T$, $a_2 = [-1/2, \sqrt{3}/2]^T$, $a_3 = [-1/2, -\sqrt{3}/2]^T \in \mathbb{R}^2$ be three unit vectors such that the angle between any two of them is $2\pi/3$. $\lambda_{min}$ and $\lambda_{max}$ are the minimum and maximum grid scales, and $g = \lambda_{max}/\lambda_{min}$. At each scale $s$, $PE^{(t)}_s(x) = [PE^{(t)}_{s,1}(x); PE^{(t)}_{s,2}(x); PE^{(t)}_{s,3}(x)]$ is a concatenation of three components, where

$$PE^{(t)}_{s,j}(x) = \left[\cos\!\left(\frac{\langle x, a_j\rangle}{\lambda_{min} \cdot g^{s/(S-1)}}\right);\ \sin\!\left(\frac{\langle x, a_j\rangle}{\lambda_{min} \cdot g^{s/(S-1)}}\right)\right] \quad \forall j = 1, 2, 3. \quad (3)$$

$\mathbf{NN}(\cdot)$ and $PE^{(t)}(x)$ are analogies of $C$ and $\psi(x)$ in Theorem 1.

Similarly, we can define another space encoder $Enc^{(x)}_{grid}(x) = \mathbf{NN}(PE^{(g)}(x))$, inspired by the position encoding model of the Transformer (Vaswani et al., 2017), where $PE^{(g)}(x) = [PE^{(g)}_0(x); \ldots; PE^{(g)}_s(x); \ldots; PE^{(g)}_{S-1}(x)]$ is still a concatenation of multi-scale representations, while $PE^{(g)}_s(x) = [PE^{(g)}_{s,1}(x); PE^{(g)}_{s,2}(x)]$ handles each component $x[l]$ of $x$ separately:

$$PE^{(g)}_{s,l}(x) = \left[\cos\!\left(\frac{x[l]}{\lambda_{min} \cdot g^{s/(S-1)}}\right);\ \sin\!\left(\frac{x[l]}{\lambda_{min} \cdot g^{s/(S-1)}}\right)\right] \quad \forall l = 1, 2. \quad (4)$$
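As a concrete illustration of Eq. (3), here is a minimal NumPy sketch of the first (deterministic) layer of $Enc^{(x)}_{theory}$. The function name and the default values (λmin = 50 m, λmax = 40 km, S = 64, borrowed from the location-modeling setup in Figure 1) are our own choices, and the learnable $\mathbf{NN}(\cdot)$ layers that follow this encoding are omitted.

```python
import numpy as np

def theory_position_encoding(coords, lambda_min=50.0, lambda_max=40000.0, n_scales=64):
    """Multi-scale grid-cell position encoding PE^(t)(x) of Eq. (3).

    coords: (N, 2) array of projected 2D locations.
    Returns an (N, 6 * n_scales) array: sin/cos of the projections of each
    location onto three axes 120 degrees apart, at geometrically spaced scales.
    """
    # a_1, a_2, a_3: three unit vectors with pairwise angles of 2*pi/3.
    axes = np.array([[1.0, 0.0],
                     [-0.5, np.sqrt(3.0) / 2.0],
                     [-0.5, -np.sqrt(3.0) / 2.0]])
    proj = coords @ axes.T                            # <x, a_j> for j = 1, 2, 3
    g = lambda_max / lambda_min
    feats = []
    for s in range(n_scales):
        lam = lambda_min * g ** (s / (n_scales - 1))  # wavelength at scale s
        feats.append(np.cos(proj / lam))
        feats.append(np.sin(proj / lam))
    return np.concatenate(feats, axis=1)
```

The grid variant of Eq. (4) is identical except that `proj` is replaced by the raw coordinates themselves, yielding 4S instead of 6S dimensions. Either encoding is then passed through fully connected ReLU layers to produce $\mathbf{e}[x]$.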
4.2 DECODER

Two types of decoders are designed for the two major types of GIS problems: location modeling and spatial context modeling (see Section 5.1).

Location Decoder. $Dec^{s}(\cdot)$ directly reconstructs the point feature embedding $\mathbf{e}[v_i]$ given its space embedding $\mathbf{e}[x_i]$. We use a one-layer feed-forward neural network $\mathbf{NN}_{dec}(\cdot)$:

$$\mathbf{e}[v_i]' = Dec^{s}(x_i; \theta_{dec,s}) = \mathbf{NN}_{dec}(\mathbf{e}[x_i]). \quad (5)$$

For training, we use the inner product to compare the reconstructed feature embedding $\mathbf{e}[v_i]'$ against the real feature embedding $\mathbf{e}[v_i]$ and those of other negative points (see the training details in Section 4.3).

Spatial Context Decoder. $Dec^{c}(\cdot)$ reconstructs the feature embedding $\mathbf{e}[v_i]$ of the center point $p_i$ based on the space and feature embeddings $\{\mathbf{e}_{i1}, \ldots, \mathbf{e}_{ij}, \ldots, \mathbf{e}_{in}\}$ of the $n$ nearby points $\{p_{i1}, \ldots, p_{ij}, \ldots, p_{in}\}$. Note that the feed-in order of the context points should not affect the prediction results, which can be achieved by permutation-invariant neural network architectures (Zaheer et al., 2017) like PointNet (Qi et al., 2017):

$$\mathbf{e}[v_i]' = Dec^{c}(x_i, \{\mathbf{e}_{i1}, \ldots, \mathbf{e}_{ij}, \ldots, \mathbf{e}_{in}\}; \theta_{dec,c}) = g\!\left(\frac{1}{K}\sum_{k=1}^{K}\sum_{j=1}^{n} \alpha_{ijk}\, \mathbf{e}[v_{ij}]\right). \quad (6)$$

Here $g$ is an activation function such as the sigmoid. $\alpha_{ijk} = \frac{\exp(\sigma_{ijk})}{\sum_{o=1}^{n}\exp(\sigma_{iok})}$ is the attention of $p_i$ to its $j$-th neighbor through the $k$-th attention head, and

$$\sigma_{ijk} = \mathrm{LeakyReLU}\!\left(\mathbf{a}_k^T\,[\mathbf{e}[v_i]_{init}; \mathbf{e}[v_{ij}]; \mathbf{e}[x_i - x_{ij}]]\right), \quad (7)$$

where $\mathbf{a}_k \in \mathbb{R}^{2d^{(v)} + d^{(x)}}$ is the attention parameter of the $k$-th attention head. The multi-head attention mechanism is inspired by the Graph Attention Network (Veličković et al., 2018) and Mai et al. (2019a). To represent the spatial relationship (distance and direction) between each context point $p_{ij} = (x_{ij}, v_{ij})$ and the center point $p_i = (x_i, v_i)$, we use the space encoder $Enc^{(x)}(\cdot)$ to encode the displacement between them, $\Delta x_{ij} = x_i - x_{ij}$. Note that we are modeling the spatial interactions between the center point and the $n$ context points simultaneously.

In Eq. 7, $\mathbf{e}[v_i]_{init}$ indicates an initial guess of the feature embedding $\mathbf{e}[v_i]$ of point $p_i$, which is computed by another multi-head attention layer of the same form as Eq. 6, with weights $\alpha'_{ijk} = \frac{\exp(\sigma'_{ijk})}{\sum_{o=1}^{n}\exp(\sigma'_{iok})}$. Here, $\sigma'_{ijk}$ is computed as in Eq. 8, where the query embedding $\mathbf{e}[v_i]$ is excluded:

$$\sigma'_{ijk} = \mathrm{LeakyReLU}\!\left(\mathbf{a}'^{T}_k\,[\mathbf{e}[v_{ij}]; \mathbf{e}[x_i - x_{ij}]]\right). \quad (8)$$
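The two-stage attention of Eqs. (6)-(8) can be sketched in PyTorch as follows. This is an illustrative reading of the equations rather than the authors' implementation; the module name, the head count, and the LeakyReLU slope are assumptions.

```python
import torch
import torch.nn as nn

class SpatialContextDecoder(nn.Module):
    """Sketch of the multi-head attention context decoder of Eqs. (6)-(8)."""

    def __init__(self, d_v, d_x, n_heads=4):
        super().__init__()
        # a'_k of Eq. (8): scores neighbor features plus relative-position encodings.
        self.att_init = nn.Linear(d_v + d_x, n_heads, bias=False)
        # a_k of Eq. (7): additionally sees the initial guess e[v_i]_init.
        self.att = nn.Linear(2 * d_v + d_x, n_heads, bias=False)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, ctx_feat, rel_pos_enc):
        # ctx_feat:     (B, n, d_v) neighbor feature embeddings e[v_ij]
        # rel_pos_enc:  (B, n, d_x) space encodings of the displacements x_i - x_ij
        h = torch.cat([ctx_feat, rel_pos_enc], dim=-1)
        w1 = torch.softmax(self.act(self.att_init(h)), dim=1)       # alpha'_ijk
        k = w1.shape[-1]
        init = torch.einsum('bnk,bnd->bd', w1, ctx_feat) / k        # e[v_i]_init

        h2 = torch.cat([init.unsqueeze(1).expand_as(ctx_feat), h], dim=-1)
        w2 = torch.softmax(self.act(self.att(h2)), dim=1)           # alpha_ijk
        out = torch.einsum('bnk,bnd->bd', w2, ctx_feat) / k
        return torch.sigmoid(out)                                   # g(.) of Eq. (6)
```

The softmax over the neighbor dimension normalizes each head's scores over the n context points, matching the normalization of the attention weights in the text.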
4.3 UNSUPERVISED TRAINING

The unsupervised learning task can simply be maximizing the log-likelihood of observing the true point $p_i$ at position $x_i$ among all the points in $P$:

$$L_P(\theta) = \sum_{p_i \in P} \log P(p_i \mid p_{i1}, \ldots, p_{ij}, \ldots, p_{in}) = \sum_{p_i \in P} \log \frac{\exp(\mathbf{e}[v_i]^T \mathbf{e}[v_i]')}{\sum_{p_o \in P} \exp(\mathbf{e}[v_o]^T \mathbf{e}[v_i]')}. \quad (9)$$

Here only the feature embedding of $p_i$ is used (without the location embedding) to prevent revealing the identities of the point candidates, and $\theta = \{\theta_{enc}, \theta_{dec}\}$. Negative sampling (Mikolov et al., 2013) can be used to improve the efficiency of training:

$$L'_P(\theta) = \sum_{p_i \in P} \left[\log \sigma(\mathbf{e}[v_i]^T \mathbf{e}[v_i]') + \frac{1}{|N_i|} \sum_{p_o \in N_i} \log \sigma(-\mathbf{e}[v_o]^T \mathbf{e}[v_i]')\right]. \quad (10)$$

Here $N_i \subset P$ is a set of sampled negative points for $p_i$ ($p_i \notin N_i$) and $\sigma(x) = 1/(1 + e^{-x})$.
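A minimal PyTorch rendering of the objective of Eq. (10) (negated into a loss to minimize, with the conventional flipped sign on the negative-sample scores) might look as follows; the function name and batch layout are our own assumptions.

```python
import torch
import torch.nn.functional as F

def neg_sampling_loss(pred, pos_feat, neg_feat):
    """Negative-sampling objective of Eq. (10), negated into a minimized loss.

    pred:     (B, d) reconstructed embeddings e[v_i]' from a decoder
    pos_feat: (B, d) true feature embeddings e[v_i]
    neg_feat: (B, M, d) feature embeddings of M sampled negative points N_i
    """
    pos = F.logsigmoid((pred * pos_feat).sum(dim=-1))      # log sigma(e[v_i]^T e[v_i]')
    neg_scores = torch.einsum('bmd,bd->bm', neg_feat, pred)
    neg = F.logsigmoid(-neg_scores).mean(dim=-1)           # averaged over the |N_i| negatives
    return -(pos + neg).mean()
```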
5 EXPERIMENT

In this section we compare Space2Vec with commonly used position encoding methods, and analyze them both quantitatively and qualitatively.

Baselines. Our baselines include: 1) direct, directly applying feed-forward nets (Chu et al., 2019); 2) tile, discretization (Berg et al., 2014; Adams et al., 2015; Tang et al., 2015); 3) wrap, feed-forward nets with coordinate wrapping (Mac Aodha et al., 2019); and 4) rbf, Radial Basis Function (RBF) kernels (Baudat & Anouar, 2001; Bierens, 1994). See Appendix A.1 for details of the baselines.

5.1 POI TYPE CLASSIFICATION TASKS

Dataset and Tasks. To test the proposed model, we conduct experiments on geographic datasets with POI position and type information. We utilize the open-source dataset published by the Yelp Data Challenge and select all POIs within the Las Vegas downtown area [3]. There are 21,830 POIs with 1,191 different POI types in this dataset. Note that each POI may be associated with one or more types, and we do not use any other metadata such as business names or reviews for this study. We project geographic coordinates into projection coordinates using the NAD83/Conus Albers projected coordinate system [4]. The POIs are split into training, validation, and test datasets with ratios 80%:10%:10%. We create two task setups which represent different types of modeling needs in Geographic Information Science:

- Location Modeling predicts the feature information associated with a POI based on its location $x_i$, represented by the location decoder $Dec^{s}(\cdot)$. This represents a large number of location prediction problems, such as fine-grained image recognition with a geographic prior (Chu et al., 2019) and species potential distribution prediction (Zuo et al., 2008).
- Spatial Context Modeling predicts the feature information associated with a POI based on its context $\{\mathbf{e}_{i1}, \ldots, \mathbf{e}_{ij}, \ldots, \mathbf{e}_{in}\}$, represented by the spatial context decoder $Dec^{c}(\cdot)$. This represents a collection of spatial context prediction problems, such as spatial-context-based facade image classification (Yan et al., 2018) and all spatial interpolation problems.

[Footnote 3] The geographic range is (35.989438, 36.270897) for latitude and (-115.047977, -115.3290609) for longitude.
[Footnote 4] https://epsg.io/5070-1252

We use POI prediction metrics to evaluate these models. Given the real point feature embedding $\mathbf{e}[v_i]$ and $N$ negative feature embeddings, we compare the predicted $\mathbf{e}[v_i]'$ with them by cosine distance. The cosine scores are used to rank $\mathbf{e}[v_i]$ and the $N$ negative samples. The negative feature embeddings are the feature embeddings of points $p_j$ randomly sampled from $P$ with $p_j \neq p_i$. We evaluate each model using Negative Log-Likelihood (NLL), Mean Reciprocal Rank (MRR), and HIT@5 (the chance of the true POI being ranked in the top 5). We train and test each model 10 times to estimate standard deviations. See Appendix A.2 for hyper-parameter selection details.

5.1.1 LOCATION MODELING EVALUATION

We first study location modeling with the location decoder $Dec^{s}(\cdot)$ from Section 4.2. We use a negative sample size of $N = 100$. Table 1 shows the average metrics of the different models with their best hyper-parameter settings on the validation set.

[Figure 2 panels: (a) direct, (b) tile, (c) wrap, (d) rbf (σ=1k), (e) theory λmin=1k, (f) theory λmin=500, (g) theory λmin=50]
Figure 2: Embedding clustering of (a) direct; (b) tile with the best cell size c=500; (c) wrap (h=3, o=512); (d) rbf with the best σ (1k) and 200 anchor points (red); and (e)(f)(g) theory models with different λmin but fixed λmax=40k and S=64. All models use 1 hidden ReLU layer of 512 neurons, except wrap.

Table 1: The evaluation results of different location models on the validation and test datasets.

| Model | Train NLL | Valid NLL | Valid MRR | Valid HIT@5 | Test MRR | Test HIT@5 |
|---|---|---|---|---|---|---|
| random | - | - | 0.052 (0.002) | 4.8 (0.5) | 0.051 (0.002) | 5.0 (0.5) |
| direct | 1.285 | 1.332 | 0.089 (0.001) | 10.6 (0.2) | 0.090 (0.001) | 11.3 (0.2) |
| tile (c=500) | 1.118 | 1.261 | 0.123 (0.001) | 16.8 (0.2) | 0.120 (0.001) | 17.1 (0.3) |
| wrap (h=3, o=512) | 1.222 | 1.288 | 0.112 (0.001) | 14.6 (0.1) | 0.119 (0.001) | 15.8 (0.2) |
| rbf (σ=1k) | 1.209 | 1.279 | 0.115 (0.001) | 15.2 (0.2) | 0.123 (0.001) | 16.8 (0.3) |
| grid (λmin=50) | 1.156 | 1.258 | 0.128 (0.001) | 18.1 (0.3) | 0.139 (0.001) | 20.0 (0.2) |
| hexa (λmin=50) | 1.230 | 1.297 | 0.107 (0.001) | 14.0 (0.2) | 0.105 (0.001) | 14.5 (0.2) |
| theory_diag (λmin=50) | 1.277 | 1.324 | 0.094 (0.001) | 12.3 (0.3) | 0.094 (0.002) | 11.2 (0.3) |
| theory (λmin=1k) | 1.207 | 1.281 | 0.123 (0.002) | 16.3 (0.5) | 0.121 (0.001) | 16.2 (0.1) |
| theory (λmin=500) | 1.188 | 1.269 | 0.132 (0.001) | 17.6 (0.3) | 0.129 (0.001) | 17.7 (0.2) |
| theory (λmin=50) | 1.098 | 1.249 | 0.137 (0.002) | 19.4 (0.1) | 0.144 (0.001) | 20.0 (0.2) |

We can see that direct and theory_diag are less competitive, only beating the random-selection baseline. Other methods with single-scale representations, including tile, wrap, and rbf, perform better. The best results come from various versions of the grid cell models, which are capable of dealing with multi-scale representations.

In order to understand the reason for the superiority of the grid cell models, we provide a qualitative analysis of their representations. We apply hierarchical clustering to the location embeddings produced by the studied models, using cosine distance as the distance metric (see Figure 2). We can see that when restricted to large grid sizes (λmin = 1k), theory has a similar representation (Fig. 2d, 2e, and Fig. 4d, 4e) and performance compared to rbf (σ=1k). However, it is able to significantly outperform rbf (σ=1k) (and tile and wrap) when small grid sizes (λmin = 500, 50) are available. The relative improvements over rbf (σ=1k) are -0.2%, +0.6%, and +2.1% MRR for λmin = 1k, 500, and 50, respectively.

5.1.2 MULTI-SCALE ANALYSIS OF LOCATION MODELING

In order to show how our multi-scale location representation model affects the prediction of POI types with different distribution patterns, we classify all 1,191 POI types into three groups based on a radius r*, which is derived from each POI type's renormalized Ripley's K curve (see Figure 1d for examples). It indicates the x-axis value of the intersection between the curve and the line y = 3.0. A lower r* indicates a more clustered distribution pattern. The three groups are listed below (a sketch of this thresholding follows the list):

1. Clustered (r* ≤ 100m): POI types with clustered distribution patterns;
2. Middle (100m < r* < 200m): POI types with less extreme scales;
3. Even (r* ≥ 200m): POI types with even distribution patterns.
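For illustration, the grouping rule can be implemented as a scan for the first crossing between the renormalized curve and the line y = 3.0. The function below is our own sketch; the sign-change detection and the fallback when no crossing occurs are assumptions.

```python
import numpy as np

def distribution_radius(radii, k_renorm, threshold=3.0):
    """Radius r* where a renormalized Ripley's K curve first crosses y = 3.0.

    radii:    1D array of radii, sorted ascending;
    k_renorm: 1D array, the renormalized (log-scale) K curve at those radii.
    Returns the first crossing radius, or the largest radius if none occurs.
    A lower r* indicates a more clustered POI type.
    """
    sign_change = np.nonzero(np.diff(np.sign(k_renorm - threshold)))[0]
    return radii[sign_change[0] + 1] if len(sign_change) else radii[-1]
```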
Table 2 shows the performance (MRR) of direct, tile, wrap, rbf, and our theory model on the test dataset of the location modeling task with respect to these three POI distribution groups. The numbers in parentheses indicate the MRR difference between a baseline and theory. "# POI" refers to the total number of POIs belonging to each group [5].

[Footnote 5] The # POI counts of the three groups do not sum to the total number of POIs because one POI can have multiple types, which may belong to different groups.

Table 2: Comparing performances in different POI groups. We classify all 1,191 POI types into three groups based on the radius r* of their root types, where their renormalized Ripley's K curve (see Figure 1d) reaches 3.0: 1) Clustered (r* ≤ 100m): POI types with clustered distribution patterns; 2) Middle (100m < r* < 200m): POI types with unclear distribution patterns; 3) Even (r* ≥ 200m): POI types with even distribution patterns. The MRRs of the baselines and theory on those three groups are shown; the numbers in parentheses indicate the difference between a baseline's MRR and theory's MRR for that group. "Root Types" indicates the root categories of the POI types in each group.

| POI Groups | Clustered (r* ≤ 100m) | Middle (100m < r* < 200m) | Even (r* ≥ 200m) |
|---|---|---|---|
| direct | 0.080 (-0.047) | 0.108 (-0.030) | 0.084 (-0.047) |
| wrap | 0.106 (-0.021) | 0.126 (-0.012) | 0.122 (-0.009) |
| tile | 0.108 (-0.019) | 0.135 (-0.003) | 0.111 (-0.020) |
| rbf | 0.112 (-0.015) | 0.136 (-0.002) | 0.119 (-0.012) |
| theory | 0.127 (-) | 0.138 (-) | 0.131 (-) |
| # POI | 16,016 | 7,443 | 3,915 |
| Root Types | Restaurants; Shopping; Food; Nightlife; Automotive; Active Life; Arts & Entertainment; Financial Services | Beauty & Spas; Health & Medical; Local Services; Hotels & Travel; Professional Services; Public Services & Government | Home Services; Event Planning & Services; Pets; Education |

We can see that: 1) the two neural net approaches (direct and wrap) have no scale-related parameter and do not perform ideally across all scales, with direct performing worse because of its simple single-layer network; 2) the two approaches with a built-in scale parameter (tile and rbf) have to trade off performance across scales; their best parameter settings lead to performance close to that of Space2Vec at the middle scale, while performing poorly in both the clustered and even groups. These observations clearly show that all baselines can handle the distribution at only one scale well and show poor performance at other scales. In contrast, Space2Vec's multi-scale representation can handle distributions at different scales.

5.1.3 SPATIAL CONTEXT MODELING EVALUATION

Next, we evaluate the spatial context decoder $Dec^{c}(\cdot)$ from Section 4.2. We use the same evaluation setup as in location modeling. The context points are obtained by querying the n nearest points using PostGIS (n = 10). For the validation and test datasets, we make sure the center points are all unseen during the training phase. Table 3 shows the evaluation results of different models for spatial context modeling. The baseline approaches (direct, tile, wrap, rbf) generally perform poorly in context modeling. We designed specialized versions of these approaches (polar, polar_tile, scaled_rbf) with polar coordinates, which led to significant improvements.
Note that these are models proposed by us, specialized for context modeling, and therefore less general than the grid cell approaches.

Table 3: The evaluation results of different spatial context models on the validation and test datasets. All encoders contain a 1-hidden-layer FFN. All grid cell encoders set λmin=10 and λmax=10k.

| Model | Train NLL | Valid NLL | Valid MRR | Valid HIT@5 | Test MRR | Test HIT@5 |
|---|---|---|---|---|---|---|
| none | 1.163 | 1.297 | 0.159 (0.002) | 22.4 (0.5) | 0.167 (0.006) | 23.4 (0.7) |
| direct | 1.151 | 1.282 | 0.170 (0.002) | 24.6 (0.4) | 0.175 (0.003) | 24.7 (0.5) |
| polar | 1.157 | 1.283 | 0.176 (0.004) | 25.4 (0.4) | 0.178 (0.006) | 24.9 (0.1) |
| tile (c=50) | 1.163 | 1.298 | 0.173 (0.004) | 24.0 (0.6) | 0.173 (0.001) | 23.4 (0.1) |
| polar_tile (S=64) | 1.161 | 1.282 | 0.173 (0.003) | 25.0 (0.1) | 0.177 (0.001) | 24.5 (0.3) |
| wrap (h=2, o=512) | 1.167 | 1.291 | 0.159 (0.001) | 23.0 (0.1) | 0.170 (0.001) | 23.9 (0.2) |
| rbf (σ=50) | 1.160 | 1.281 | 0.179 (0.002) | 25.2 (0.6) | 0.172 (0.001) | 25.0 (0.1) |
| scaled_rbf (σ=40, ·=0.1) | 1.150 | 1.272 | 0.177 (0.002) | 25.7 (0.1) | 0.181 (0.001) | 25.3 (0.1) |
| grid (λmin=10) | 1.172 | 1.285 | 0.178 (0.004) | 24.9 (0.5) | 0.181 (0.001) | 25.1 (0.3) |
| hexa (λmin=10) | 1.156 | 1.289 | 0.173 (0.002) | 24.0 (0.2) | 0.183 (0.002) | 25.3 (0.2) |
| theory_diag (λmin=10) | 1.156 | 1.287 | 0.168 (0.001) | 24.1 (0.4) | 0.174 (0.005) | 24.9 (0.1) |
| theory (λmin=200) | 1.168 | 1.295 | 0.159 (0.001) | 23.1 (0.2) | 0.170 (0.001) | 23.2 (0.2) |
| theory (λmin=50) | 1.157 | 1.275 | 0.171 (0.001) | 24.2 (0.3) | 0.173 (0.001) | 24.8 (0.4) |
| theory (λmin=10) | 1.158 | 1.280 | 0.177 (0.003) | 25.2 (0.3) | 0.185 (0.002) | 25.7 (0.3) |

[Figure 3 panels: (a)-(f) direct, polar, wrap, polar_tile, scaled_rbf, theory in the original space; (g)-(l) the same models in polar-distance space]
Figure 3: Embedding clustering in the original space of (a) direct; (b) polar; (c) wrap, h=2, o=512; (d) polar_tile, S=64; (e) scaled_rbf, σ=40, ·=0.1; and (f) theory, λmin=10, λmax=10k, S=64. (g)(h)(i)(j)(k)(l) are the clustering results of the same models in the polar-distance space using log(‖Δx_ij‖ + 1). All models use 1 hidden ReLU layer (except wrap) of 512 neurons. Most models except wrap can capture a shift when the distance is around e^5 ≈ 150 meters.

Nevertheless, the grid cell approaches perform better than the specialized approaches on the test dataset while having competitive performance on the validation dataset. See Appendix ?? for the visualization of the context models. The gains are small for all baseline approaches as well; the reason is that we expect location encoding to be less important when context information is accessible. Similarly, as discussed in Gao et al. (2019), it is when there is a lack of visual cues that the grid cells of animals are most helpful for navigation.

Figure 3 shows the location embedding clustering results in both Cartesian and polar coordinate systems. We can see that direct (Fig. 3a, 3g) only captures distance information when the context POI is very close (log(‖Δx_ij‖ + 1) ≤ 5), while for farther spatial context it purely models direction information. polar (Fig. 3b, 3h) has similar behavior but captures distance information in a more fine-grained manner. wrap (Fig. 3c, 3i) mainly focuses on differentiating relative positions in the farther spatial context, which might explain its lower performance [6]. polar_tile (Fig. 3d) mostly responds to distance information. Interestingly, scaled_rbf and theory have similar representations in the polar coordinate system (Fig. 3k, 3l) and similar performance (Table 3). While scaled_rbf captures the gradually decreasing distance effect with a scaled kernel size that becomes larger at farther distances, theory achieves this by integrating representations of different scales.

[Footnote 6] Note that wrap was originally proposed by Mac Aodha et al. (2019) for location modeling, not spatial context modeling. This result indicates that wrap is not well suited to this task.
5.2 FINE-GRAINED IMAGE CLASSIFICATION TASKS

To demonstrate the generalizability of Space2Vec for space representation, we utilized the proposed point space encoder $Enc^{(x)}(\cdot)$ in a well-known computer vision task: fine-grained image classification. As discussed in Section 3, many studies (Berg et al., 2014; Chu et al., 2019; Mac Aodha et al., 2019) have shown that geographic prior information – where (and when) the image was taken – is very important additional information for the fine-grained image classification task and can substantially improve model performance. For example, appearance information is usually not sufficient to differentiate two visually similar species. In this case, the geographic prior becomes much more important, because the two species may have very different spatial prior distributions, such as the example of European Toads and Spiny Toads in Figure 1 of Mac Aodha et al. (2019).

We adopt the task setup of Mac Aodha et al. (2019). During training we have a set of tuples $D = \{(I_i, x_i, y_i, p_i) \mid i = 1, \ldots, N\}$, where $I_i$ indicates an image, $y_i \in \{1, 2, \ldots, C\}$ is the corresponding class label (species category), $x_i = [longitude_i; latitude_i]$ are the geographic coordinates where the image was taken, and $p_i$ is the id of the photographer who took the image. At training time, a location encoder is trained to capture the spatial prior $P(y|x)$. At inference time, the $p_i$ information is not available, and the final image classification prediction is calculated based on the combination of two models: 1) the trained location encoder, which captures the spatial prior $P(y|x)$, and 2) the pretrained image classification model, an InceptionV3 network (Szegedy et al., 2016), which captures $P(y|I)$. Bayesian theory is used to derive the joint distribution $P(y|I, x)$. See Mac Aodha et al. (2019) for a detailed explanation as well as the loss function. Note that while Space2Vec outperforms specialized density estimation methods such as the Adaptive Kernel (Berg et al., 2014), it would be interesting to explore early fusion of Space2Vec's representations with the image module.

Table 4: Fine-grained image classification results on two datasets, BirdSnap† and NABirds†. The classification accuracy is calculated by combining the image classification predictions $P(y|I)$ with different spatial priors $P(y|x)$. The grid and theory models use 1 hidden ReLU layer of 512 neurons. The evaluation results of the baseline models are from Table 1 of Mac Aodha et al. (2019).

| Spatial Prior | BirdSnap† | NABirds† |
|---|---|---|
| No Prior (i.e., uniform) | 70.07 | 76.08 |
| Nearest Neighbor (num) | 77.76 | 79.99 |
| Nearest Neighbor (spatial) | 77.98 | 80.79 |
| Adaptive Kernel (Berg et al., 2014) | 78.65 | 81.11 |
| tile (Tang et al., 2015) (location only) | 77.19 | 79.58 |
| wrap (Mac Aodha et al., 2019) (location only) | 78.65 | 81.15 |
| rbf (σ=1k) | 78.56 | 81.13 |
| grid (λmin=0.0001, λmax=360, S=64) | 79.44 | 81.28 |
| theory (λmin=0.0001, λmax=360, S=64) | 79.35 | 81.59 |

We use two versions of our point space encoder $Enc^{(x)}(\cdot)$ (grid and theory) as the location encoder to capture the spatial prior $P(y|x)$. The evaluation results of our models, as well as multiple baselines, are shown in Table 4.
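The inference-time combination can be sketched as a per-class product of the two predictive distributions; assuming (as in Mac Aodha et al., 2019) that image and location are conditionally independent given the class and that P(y) is uniform, P(y|I, x) is proportional to P(y|I) P(y|x). The helper below is our own illustration, not the released code.

```python
import torch

def combine_with_location_prior(img_probs, loc_prior):
    """Combine image predictions P(y|I) with a spatial prior P(y|x).

    img_probs: (B, C) class probabilities from the image classifier
    loc_prior: (B, C) class probabilities from the location encoder
    With a uniform P(y), P(y|I, x) is proportional to P(y|I) * P(y|x),
    so a renormalized elementwise product suffices.
    """
    joint = img_probs * loc_prior
    return joint / joint.sum(dim=-1, keepdim=True)
```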
We can see that both grid and theory outperform the previous models, including that of Mac Aodha et al. (2019), on two fine-grained image classification datasets of significant size: BirdSnap† and NABirds†. theory shows superiority over grid on NABirds† while failing to outperform grid on BirdSnap†. Note that we only pick baseline models which capture a spatial-only prior and drop models which additionally consider time information. Both grid and theory use 1 hidden ReLU layer of 512 neurons for $\mathbf{NN}(\cdot)$, and they have the same hyperparameters: λmin=0.0001, λmax=360, S=64. As in Mac Aodha et al. (2019), the location embedding size $d^{(x)}$ is 1024, and we train the location encoder for 30 epochs. Our implementation is based on the original code [7] of Mac Aodha et al. (2019) for both the model training and evaluation phases.

6 CONCLUSION

We introduced an encoder-decoder framework as a general-purpose representation model for space, inspired by biological grid cells' multi-scale periodic representations. The model is an inductive learning model and can be trained in an unsupervised manner. We conduct two experiments on POI type prediction based on 1) POI locations and 2) nearby POIs. The evaluation results demonstrate the effectiveness of our model. Our analysis reveals that it is the ability to integrate representations of different scales that makes the grid cell models outperform the other baselines on these two tasks. In the future, we hope to incorporate the presented framework into more complex GIS tasks such as social network analysis and sea surface temperature prediction.

ACKNOWLEDGMENTS

The presented work is partially funded by NSF award 1936677, C-Accel Pilot - Track A1 (Open Knowledge Network): Spatially-Explicit Models, Methods, And Services For Open Knowledge Networks; Esri Inc.; and a Microsoft AI for Earth Grant: Deep Species Spatio-temporal Distribution Modeling for Biodiversity Hotspot Prediction. We thank Dr. Ruiqi Gao for discussions about grid cells, Dr. Wenyun Zuo for discussions about species potential distribution prediction, and Dr. Yingjie Hu for his suggestions about the introduction section.

[Footnote 7] https://github.com/macaodha/geo_prior/
Hye56-ChYr
Official Blind Review #3
6: Weak Accept
This paper presents a new method called "Space2Vec" to compute spatial embeddings of a point in spatial data. The primary motivation of Space2Vec is to integrate representations of different spatial scales, which could potentially make the spatial representations more informative and meaningful as features. Space2Vec is trained as part of an encoder-decoder framework, where Space2Vec encodes the spatial features of all the points that are fed as input to the framework. The authors conducted experiments on real-world geographic data where they predict the types of points of interest (POIs) at given positions based on their 1) locations (location modeling) and 2) spatial neighborhood (spatial context modeling). They evaluated Space2Vec against other ML approaches for encoding spatial information, including RBF kernels, multi-layer feed-forward nets, and tile embedding approaches. Their results indicate that the Space2Vec approach performs better (albeit marginally) than the other ML methods. I am giving this paper a weak reject rating mainly because of weak results and a lack of motivation for the location modeling problem (where their approach performs significantly better than the baselines). I explain my concerns below under detailed comments. Detailed Comments: 1) The motivation for the location modeling problem does not sound compelling to me, especially in the context of the Point of Interest (POI) classification approach. I cannot imagine a scenario where access to information from the spatial neighborhood would be denied. If the authors could present strong motivating examples for this problem and demonstrate the utility of their proposed approach in that setting, that would make the paper much stronger. 2) In the spatial context modeling problem, the improvements in the results (Table 2) appear to be marginal (0.185 against 0.181, 25.7 against 25.3). The authors should try more datasets to convincingly justify the superiority of their approach over other methods. EDIT (AFTER RECEIVING THE AUTHORS' RESPONSE): I am satisfied with the authors' response to my comments. I am updating my rating to Weak Accept.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Multi-Scale Representation Learning for Spatial Feature Distributions using Grid Cells ### Paper Abstract Unsupervised text encoding models have recently fueled substantial progress in NLP. The key idea is to use neural networks to convert words in texts to vector space representations (embeddings) based on word positions in a sentence and their contexts, which are suitable for end-to-end training of downstream tasks. We see a strikingly similar situation in spatial analysis, which focuses on incorporating both absolute positions and spatial contexts of geographic objects such as POIs into models. A general-purpose representation model for space is valuable for a multitude of tasks. However, no such general model exists to date beyond simply applying discretization or feed-forward nets to coordinates, and little effort has been put into jointly modeling distributions with vastly different characteristics, which commonly emerges from GIS data. Meanwhile, Nobel Prize-winning Neuroscience research shows that grid cells in mammals provide a multi-scale periodic representation that functions as a metric for location encoding and is critical for recognizing places and for path-integration. Therefore, we propose a representation learning model called Space2Vec to encode the absolute positions and spatial relationships of places. We conduct experiments on two real-world geographic data for two different tasks: 1) predicting types of POIs given their positions and context, 2) image classification leveraging their geo-locations. Results show that because of its multi-scale representations, Space2Vec outperforms well-established ML approaches such as RBF kernels, multi-layer feed-forward nets, and tile embedding approaches for location modeling and image classification tasks. Detailed analysis shows that all baselines can at most well handle distribution at one scale but show poor performances in other scales. In contrast, Space2Vec ’s multi-scale representation can handle distributions at different scales. ### Paper Keywords ["Grid cell", "space encoding", "spatially explicit model", "multi-scale periodic representation", "unsupervised learning"] ### Paper Content ABSTRACTUnsupervised text encoding models have recently fueled substantial progress inNatural Language Processing (NLP). The key idea is to use neural networks toconvert words in texts to vector space representations (embeddings) based on wordpositions in a sentence and their contexts, which are suitable for end-to-end trainingof downstream tasks. We see a strikingly similar situation in spatial analysis,which focuses on incorporating both absolute positions and spatial contexts ofgeographic objects such as Points of Interest (POIs) into models. A general-purposerepresentation model for space is valuable for a multitude of tasks. However, nosuch general model exists to date beyond simply applying discretization or feed-forward nets to coordinates, and little effort has been put into jointly modelingdistributions with vastly different characteristics, which commonly emerges fromGIS data. Meanwhile, Nobel Prize-winning Neuroscience research shows thatgrid cells in mammals provide a multi-scale periodic representation that functionsas a metric for location encoding and is critical for recognizing places and forpath-integration. 
Therefore, we propose a representation learning model calledSpace2Vec to encode the absolute positions and spatial relationships of places.We conduct experiments on two real-world geographic data for two differenttasks: 1) predicting types of POIs given their positions and context, 2) imageclassification leveraging their geo-locations. Results show that because of its multi-scale representations, Space2Vec outperforms well-established ML approachessuch as RBF kernels, multi-layer feed-forward nets, and tile embedding approachesfor location modeling and image classification tasks. Detailed analysis showsthat all baselines can at most well handle distribution at one scale but show poorperformances in other scales. In contrast, Space2Vec ’s multi-scale representationcan handle distributions at different scales.11 I NTRODUCTIONUnsupervised text encoding models such as Word2Vec (Mikolov et al., 2013), Glove (Penningtonet al., 2014), ELMo (Peters et al., 2018), and BERT (Devlin et al., 2018) have been effectively utilizedin many Natural Language Processing (NLP) tasks. At their core they train models which encodewords into vector space representations based on their positions in the text and their context. Asimilar situation can be encountered in the field of Geographic Information Science (GIScience). Forexample, spatial interpolation aims at predicting an attribute value, e.g., elevation, at an unsampledlocation based on the known attribute values of nearby samples. Geographic information has becomean important component to many tasks such as fine-grained image classification (Mac Aodha et al.,2019), point cloud classification and semantic segmentation (Qi et al., 2017), reasoning about Pointof Interest (POI) type similarity (Yan et al., 2017), land cover classification (Kussul et al., 2017), andgeographic question answering (Mai et al., 2019b). Developing a general model for vector spacerepresentation of any point in space would pave the way for many future applications.1Link to project repository: https://github.com/gengchenmai/space2vec1Published as a conference paper at ICLR 2020(a) Women’s Cloth (b) Education (c) Ripley’s K (d) Renormalized Ripley’s KFigure 1: The challenge of joint modeling distributions with very different characteristics. (a)(b) The POIlocations (red dots) in Las Vegas and Space2Vec predicted conditional likelihood of Women’s Clothing (witha clustered distribution) and Education (with an even distribution). The dark area in (b) indicates that thedowntown area has more POIs of other types than education. (c) Ripley’s K curves of POI types for whichSpace2Vec has the largest and smallest improvement over wrap (Mac Aodha et al., 2019). Each curve representsthe number of POIs of a certain type inside certain radios centered at every POI of that type; (d) Ripley’s Kcurves renormalized by POI densities and shown in log-scale. To efficiently achieve multi-scale representationSpace2Vec concatenates the grid cell encoding of 64 scales (with wave lengths ranging from 50 meters to 40kmeters) as the first layer of a deep model, and trains with POI data in an unsupervised fashion.However, existing models often utilize specific methods to deal with geographic information andoften disregards geographic coordinates. For example, Place2Vec (Yan et al., 2017) converts thecoordinates of POIs into spatially collocated POI pairs within certain distance bins, and does notpreserve information about the (cardinal) direction between points. Li et al. 
(2017) propose DCRNNfor traffic forecasting in which the traffic sensor network is converted to a distance weighted graphwhich necessarily forfeits information about the spatial layout of sensors. There is, however, nogeneral representation model beyond simply applying discretization (Berg et al., 2014; Tang et al.,2015) or feed-forward nets (Chu et al., 2019; Mac Aodha et al., 2019) to coordinates.A key challenge in developing a general-purpose representation model for space is how to deal withmixtures of distributions with very different characteristics (see an example in Figure 1), whichoften emerges in spatial datasets (McKenzie et al., 2015). For example, there are POI types withclustered distributions such as women’s clothing, while there are other POI types with regulardistributions such as education. These feature distributions co-exist in the same space, and yet wewant a single representation to accommodate all of them in a task such as location-aware imageclassification (Mac Aodha et al., 2019). Ripley’s K is a spatial analysis method used to describe pointpatterns over a given area of interest. Figure 1c shows the K plot of several POI types in Las Vegas.One can see that as the radius grows the numbers of POIs increase at different rates for different POItypes. In order to see the relative change of density at different scales, we renormalize the curvesby each POI type’s density and show it in log scale in Figure 1d. One can see two distinct POI typegroups with different distribution patterns with clustered and even distributions. If we want to modelthe distribution of these POIs by discretizing the study area into tiles, we have to use small grid sizesfor women’s clothing while using larger grid sizes for educations because smaller grid sizes lead toover- parameterization of the model and overfitting. In order to jointly describe these distributionsand their patterns, we need an encoding method which supports multi-scale representations .Nobel Prize winning Neuroscience research (Abbott & Callaway, 2014) has demonstrated that gridcells in mammals provide a multi-scale periodic representation that functions as a metric for locationencoding, which is critical for integrating self-motion. Moreover, Blair et al. (2007) show that themulti-scale periodic representation of grid cells can be simulated by summing three cosine gratingfunctions oriented 60apart, which may be regarded as a simple Fourier model of the hexagonallattice. This research inspired us to encode locations with multi-scale periodic representations. Ourassumption is that decomposed geographic coordinates helps machine learning models, such as deepneural nets, and multi-scale representations deal with the inefficiency of intrinsically single-scalemethods such as RFB kernels or discretization (tile embeddings). To validate this intuition, wepropose an encoder-decoder framework to encode the distribution of point-features2in space and2In GIS and spatial analysis, ‘features’ are representations of real-world entities. A tree can, for instance, bemodeled by a point-feature, while a street would be represented as a line string feature.2Published as a conference paper at ICLR 2020train such a model in an unsupervised manner. This idea of using sinusoid functions with differentfrequencies to encode positions is similar to the position encoding proposed in the Transformermodel (Vaswani et al., 2017). 
However, the position encoding model of Transformer deals witha discrete 1D space – the positions of words in a sentence – while our model works on higherdimensional continuous spaces such as the surface of earth.In summary, the contributions of our work are as follows:1.We propose an encoder-decoder encoding framework called Space2Vec using sinusoidfunctions with different frequencies to model absolute positions and spatial contexts. Wealso propose a multi-head attention mechanism based on context points. To the best of ourknowledge, this is the first attention model that explicitly considers the spatial relationshipsbetween the query point and context points.2.We conduct experiments on two real world geographic data for two different tasks: 1)predicting types of POIs given their positions and context, 2) image classification leveragingtheir geo-locations. Space2Vec outperforms well-established encoding methods such asRBF kernels, multi-layer feed-forward nets, and tile embedding approaches for locationmodeling and image classification.3.To understand the advantages of Space2Vec we visualize the firing patterns (response maps)of location models’ encoding layer neurons and show how they handle spatial structures atdifferent scales by integrating multi-scale representations. Furthermore the firing patternsfor the spatial context models neurons give insight into how the grid-like cells capture thedecreasing distance effect with multi-scale representations.2 P ROBLEM FORMULATIONDistributed representation of point-features in space can be formulated as follows. Given a setof pointsP tpiu, i.e., Points of Interests (POIs), in L-D space (L2;3) define a functionfP;pxq:RLÑRd(L!d), which is parameterized by and maps any coordinate xin space to avector representation of ddimension. Each point (e.g., a restaurant) pipxi;viqis associated witha location xiand attributes vi(i.e., POI features such as type, name, capacity, etc.). The functionfP;pxqencodes the probability distribution of point features over space and can give a representationof any point in the space. Attributes (e.g. place types such as Museum ) and coordinate of point canbe seen as analogies to words and word positions in commonly used word embedding models.3 R ELATED WORKThere has been theoretical research on neural network based path integration/spatial localizationmodels and their relationships with grid cells. Both Cueva & Wei (2018) and Banino et al. (2018)showed that grid-like spatial response patterns emerge in trained networks for navigation tasks whichdemonstrate that grid cells are critical for vector-based navigation. Moreover, Gao et al. (2019)propose a representational model for grid cells in navigation tasks which has good quality such asmagnified local isometry. All these research is focusing on understanding the relationship betweenthe grid-like spatial response patterns and navigation tasks from a theoretical perspective. In contrast,our goal focuses on utilizing these theoretical results on real world data in geoinformatics.Radial Basis Function (RBF) kernel is a well-established approach to generating learning friendlyrepresentation from points in space for machine learning algorithms such as SVM classification (Bau-dat & Anouar, 2001) and regression (Bierens, 1994). However, the representation is example based– i.e., the resultant model uses the positions of training examples as the centers of Gaussian kernelfunctions (Maz’ya & Schmidt, 1996). 
In comparison, the grid cell based location encoding relies on sine and cosine functions, and the resultant model is inductive and does not store training examples.

Recently the computer vision community has shown increasing interest in incorporating geographic information (e.g., coordinate encoding) into neural network architectures for multiple tasks such as image classification (Tang et al., 2015) and fine-grained recognition (Berg et al., 2014; Chu et al., 2019; Mac Aodha et al., 2019). Both Berg et al. (2014) and Tang et al. (2015) proposed to discretize the study area into regular grids. To model the geographical prior distribution of the image categories, the grid id is used for GPS encoding instead of the raw coordinates. However, choosing the correct discretization is challenging (Openshaw, 1984; Fotheringham & Wong, 1991), and incorrect choices can significantly affect the final performance (Moat et al., 2018; Lechner et al., 2012). In addition, discretization does not scale well in terms of memory use. To overcome these difficulties, both Chu et al. (2019) and Mac Aodha et al. (2019) advocated the idea of inductive location encoders which directly encode coordinates into a location embedding. However, both of them directly feed the coordinates into a feed-forward neural network (Chu et al., 2019) or residual blocks (Mac Aodha et al., 2019) without any feature decomposition strategy. Our experiments show that this direct encoding approach is insufficient to capture the spatial feature distribution, and Space2Vec significantly outperforms them by integrating spatial representations of different scales.

4 METHOD

We solve distributed representation of point-features in space (defined in Section 2) with an encoder-decoder architecture:

1. Given a point p_i = (x_i, v_i), a point space encoder Enc^(x)(·) encodes location x_i into a location embedding e[x_i] ∈ R^{d^(x)} and a point feature encoder Enc^(v)(·) encodes its feature into a feature embedding e[v_i] ∈ R^{d^(v)}. e_i = [e[x_i]; e[v_i]] ∈ R^d is the full representation of point p_i ∈ P, where d = d^(x) + d^(v) and [·;·] represents vector concatenation. In contrast, geographic entities not in P within the studied space can be represented by their location embedding e[x_j], since their v_j is unknown.
2. We developed two types of decoders which can be used independently or jointly. A location decoder Dec^s(·) reconstructs the point feature embedding e[v_i] given the location embedding e[x_i], and a spatial context decoder Dec^c(·) reconstructs the feature embedding e[v_i] of point p_i based on the space and feature embeddings {e_{i1}, ..., e_{ij}, ..., e_{in}} of its nearest neighboring points {p_{i1}, ..., p_{ij}, ..., p_{in}}, where n is a hyper-parameter.

4.1 ENCODER

Point Feature Encoder. Each point p_i = (x_i, v_i) in a point set P is often associated with features, such as air pollution station data associated with air quality measures, a set of POIs with POI types and names, a set of points from survey and mapping with elevation values, a set of points from a geological survey with mineral content measures, and so on. The point feature encoder Enc^(v)(·) encodes such features v_i into a feature embedding e[v_i] ∈ R^{d^(v)}. The implementation of Enc^(v)(·) depends on the nature of these features. For example, if each point represents a POI with multiple POI types (as in this study), the feature embedding e[v_i] can simply be the mean of the POI type embeddings:

e[v_i] = (1/H) ∑_{h=1}^{H} t_h,

where t_h indicates the h-th POI type embedding of a POI p_i with H POI types. We apply L2 normalization to the POI type embedding matrix.
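A minimal numpy sketch of this feature encoder (function names ours), assuming each POI is given as a list of type ids indexing a learned, row-wise L2-normalized type embedding matrix:

```python
import numpy as np

def l2_normalize(E, eps=1e-12):
    """Row-wise L2 normalization of the POI type embedding matrix."""
    return E / (np.linalg.norm(E, axis=1, keepdims=True) + eps)

def encode_poi_features(type_ids, type_embedding):
    """Enc^(v): mean of the POI's type embeddings, e[v_i] = (1/H) sum_h t_h."""
    rows = type_embedding[type_ids]   # (H, d_v), one row per POI type
    return rows.mean(axis=0)          # e[v_i] of dimension d_v
```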
Point Space Encoder. A part of the novelty of this paper comes from the point space encoder Enc^(x)(·). We first introduce Theorem 1, which provides an analytical solution φ(x) as the basis for encoding any location x ∈ R² in 2D space as a distributed representation:

Theorem 1. Let φ(x) = (e^{i⟨a_j, x⟩}; j = 1, 2, 3)^T ∈ C³, where e^{iθ} = cos θ + i sin θ is the Euler notation of complex values and ⟨a_j, x⟩ is the inner product of a_j and x. a_1, a_2, a_3 ∈ R² are 2D vectors such that the angle between a_k and a_l is 2π/3 and ∀j, ‖a_j‖ = √2. Let C ∈ C^{3×3} be a random complex matrix such that C*C = I. Then ψ(x) = Cφ(x) and M(Δx) = C diag(φ(Δx)) C* satisfy

ψ(x + Δx) = M(Δx) ψ(x)    (1)

and

⟨ψ(x + Δx), ψ(x)⟩ = d (1 − ‖Δx‖²),    (2)

where d = 3 is the dimension of ψ(x) and Δx is a small displacement from x.

The proof of Theorem 1 can be seen in Gao et al. (2019). ψ(x) = Cφ(x) ∈ C³ amounts to a 6-dimensional real-valued vector, and each dimension shows a hexagonal firing pattern which models grid cell behavior. Because of the periodicity of sin(·) and cos(·), this single-scale representation ψ(x) does not form a global codebook of 2D positions, i.e., there can be x ≠ y but ψ(x) = ψ(y).

Inspired by Theorem 1 and the multi-scale periodic representation of grid cells in mammals (Abbott & Callaway, 2014), we set up our point space encoder e[x] = Enc^(x)_theory(x) to use sine and cosine functions of different frequencies to encode positions in space. Given any point x in the studied 2D space, the space encoder is Enc^(x)_theory(x) = NN(PE^(t)(x)), where PE^(t)(x) = [PE^(t)_0(x); ...; PE^(t)_s(x); ...; PE^(t)_{S−1}(x)] is a concatenation of multi-scale representations of d^(x) = 6S dimensions. Here S is the total number of grid scales, s = 0, 1, 2, ..., S−1, and NN(·) represents fully connected ReLU layers. Let a_1 = [1, 0]^T, a_2 = [−1/2, √3/2]^T, a_3 = [−1/2, −√3/2]^T ∈ R² be three unit vectors such that the angle between any two of them is 2π/3. λ_min and λ_max are the minimum and maximum grid scales, and g = λ_max/λ_min. At each scale s, PE^(t)_s(x) = [PE^(t)_{s,1}(x); PE^(t)_{s,2}(x); PE^(t)_{s,3}(x)] is a concatenation of three components, where

PE^(t)_{s,j}(x) = [cos(⟨x, a_j⟩ / (λ_min · g^{s/(S−1)})); sin(⟨x, a_j⟩ / (λ_min · g^{s/(S−1)}))], ∀ j = 1, 2, 3.    (3)

NN(·) and PE^(t)(x) are analogies of C and φ(x) in Theorem 1.

Similarly, we can define another space encoder Enc^(x)_grid(x) = NN(PE^(g)(x)), inspired by the position encoding model of Transformer (Vaswani et al., 2017), where PE^(g)(x) = [PE^(g)_0(x); ...; PE^(g)_s(x); ...; PE^(g)_{S−1}(x)] is still a concatenation of multi-scale representations, while PE^(g)_s(x) = [PE^(g)_{s,1}(x); PE^(g)_{s,2}(x)] handles each component x[l] of x separately:

PE^(g)_{s,l}(x) = [cos(x[l] / (λ_min · g^{s/(S−1)})); sin(x[l] / (λ_min · g^{s/(S−1)}))], ∀ l = 1, 2.    (4)
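The "theory" encoding of Eq. 3 can be written down explicitly. Below is a minimal numpy sketch (function names and default values ours; the fully connected ReLU layers NN(·) are omitted, and S > 1 is assumed):

```python
import numpy as np

def theory_position_encoding(x, lambda_min=50.0, lambda_max=40000.0, S=64):
    """Multi-scale 'theory' encoding PE^(t)(x) for x in R^2 (Eq. 3).

    Returns a vector of 6*S values: at each scale s, x is projected onto three
    unit vectors 2*pi/3 apart and passed through cos/sin at wavelength
    lambda_min * g**(s/(S-1)), where g = lambda_max / lambda_min.
    """
    a = np.array([[1.0, 0.0],
                  [-0.5,  np.sqrt(3) / 2],
                  [-0.5, -np.sqrt(3) / 2]])              # (3, 2) unit vectors
    g = lambda_max / lambda_min
    scales = lambda_min * g ** (np.arange(S) / (S - 1))  # (S,) wavelengths
    proj = a @ np.asarray(x, dtype=float)                # (3,) inner products <a_j, x>
    phase = proj[None, :] / scales[:, None]              # (S, 3)
    return np.stack([np.cos(phase), np.sin(phase)], axis=-1).ravel()  # (6*S,)
```

The "grid" variant of Eq. 4 is obtained by replacing the three projections ⟨x, a_j⟩ with the two raw coordinates x[0] and x[1], yielding a 4S-dimensional vector.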
4.2 DECODER

Two types of decoders are designed for two major types of GIS problems: location modeling and spatial context modeling (see Section 5.1).

Location Decoder. Dec^s(·) directly reconstructs the point feature embedding e[v_i] given its space embedding e[x_i]. We use a one-layer feed-forward neural network NN_dec(·):

e[v_i]' = Dec^s(e[x_i]; θ^s_dec) = NN_dec(e[x_i]).    (5)

For training, we use the inner product to compare the reconstructed feature embedding e[v_i]' against the real feature embedding e[v_i] and those of other, negative points (see the training details in Sec. 4.3).

Spatial Context Decoder. Dec^c(·) reconstructs the feature embedding e[v_i] of the center point p_i based on the space and feature embeddings {e_{i1}, ..., e_{ij}, ..., e_{in}} of the n nearby points {p_{i1}, ..., p_{ij}, ..., p_{in}}. Note that the feed-in order of context points should not affect the prediction results, which can be achieved by permutation-invariant neural network architectures (Zaheer et al., 2017) like PointNet (Qi et al., 2017):

e[v_i]' = Dec^c(x_i, {e_{i1}, ..., e_{ij}, ..., e_{in}}; θ^c_dec) = g( (1/K) ∑_{k=1}^{K} ∑_{j=1}^{n} α_{ijk} e[v_{ij}] ).    (6)

Here g is an activation function such as the sigmoid. α_{ijk} = exp(σ_{ijk}) / ∑_{o=1}^{n} exp(σ_{iok}) is the attention of p_i to its j-th neighbor through the k-th attention head, and

σ_{ijk} = LeakyReLU( a_k^T [e[v_i]_init; e[v_{ij}]; e[x_i − x_{ij}]] ),    (7)

where a_k ∈ R^{2 d^(v) + d^(x)} is the attention parameter of the k-th attention head. The multi-head attention mechanism is inspired by Graph Attention Networks (Veličković et al., 2018) and Mai et al. (2019a). To represent the spatial relationship (distance and direction) between each context point p_{ij} = (x_{ij}, v_{ij}) and the center point p_i = (x_i, v_i), we use the space encoder Enc^(x)(·) to encode the displacement between them, Δx_{ij} = x_i − x_{ij}. Note that we are modeling the spatial interactions between the center point and the n context points simultaneously.

In Eq. 7, e[v_i]_init indicates the initial guess of the feature embedding e[v_i] of point p_i, which is computed by using another multi-head attention layer as in Eq. 6 with weights α'_{ijk} = exp(σ'_{ijk}) / ∑_{o=1}^{n} exp(σ'_{iok}). Here, σ'_{ijk} is computed as in Eq. 8, where the query embedding e[v_i] is excluded:

σ'_{ijk} = LeakyReLU( a'_k^T [e[v_{ij}]; e[x_i − x_{ij}]] ).    (8)
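A minimal numpy sketch of one attention aggregation step (Eqs. 6-7; function names ours). The initial guess e[v_i]_init is assumed to be given; in the full model it comes from a first attention layer that omits the query, as in Eq. 8:

```python
import numpy as np

def leaky_relu(z, slope=0.2):
    return np.where(z > 0, z, slope * z)

def context_attention(e_v_init, e_v_ctx, e_dx, A):
    """One multi-head attention aggregation over n context points (Eqs. 6-7).

    e_v_init: (d_v,)  initial guess of the center point's feature embedding
    e_v_ctx:  (n, d_v) context feature embeddings e[v_ij]
    e_dx:     (n, d_x) space embeddings of the displacements x_i - x_ij
    A:        (K, 2*d_v + d_x) one attention vector a_k per head
    """
    n = e_v_ctx.shape[0]
    q = np.tile(e_v_init, (n, 1))                       # broadcast query to each neighbor
    feat = np.concatenate([q, e_v_ctx, e_dx], axis=1)   # (n, 2*d_v + d_x)
    scores = leaky_relu(feat @ A.T)                     # (n, K): sigma_ijk
    e = np.exp(scores - scores.max(axis=0, keepdims=True))
    alpha = e / e.sum(axis=0, keepdims=True)            # softmax over neighbors, per head
    heads = alpha.T @ e_v_ctx                           # (K, d_v): per-head weighted sums
    return 1.0 / (1.0 + np.exp(-heads.mean(axis=0)))    # g = sigmoid of the head average
```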
4.3 UNSUPERVISED TRAINING

The unsupervised learning task can simply be maximizing the log likelihood of observing the true point p_i at position x_i among all the points in P:

L_P(θ) = ∑_{p_i ∈ P} log P(p_i | p_{i1}, ..., p_{ij}, ..., p_{in}) = ∑_{p_i ∈ P} log [ exp(e[v_i]^T e[v_i]') / ∑_{p_o ∈ P} exp(e[v_o]^T e[v_i]') ].    (9)

Here only the feature embedding of p_i is used (without the location embedding) to prevent revealing the identities of the point candidates, and θ = [θ_enc; θ_dec]. Negative sampling (Mikolov et al., 2013) can be used to improve the efficiency of training:

L'_P(θ) = ∑_{p_i ∈ P} [ log σ(e[v_i]^T e[v_i]') + (1/|N_i|) ∑_{p_o ∈ N_i} log σ(−e[v_o]^T e[v_i]') ].    (10)

Here N_i ⊂ P is a set of sampled negative points for p_i (p_i ∉ N_i) and σ(x) = 1/(1 + e^{−x}).
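A minimal sketch of the per-point negative-sampling term of Eq. 10, written as a loss to be minimized (numpy; function names ours):

```python
import numpy as np

def log_sigmoid(z):
    # numerically stable log(sigmoid(z))
    return -np.logaddexp(0.0, -z)

def neg_sampling_loss(e_v_true, e_v_pred, e_v_neg):
    """Negative of one point's term in Eq. 10.

    e_v_true: (d_v,)      true feature embedding e[v_i]
    e_v_pred: (d_v,)      reconstructed embedding e[v_i]' from the decoder
    e_v_neg:  (|N_i|, d_v) feature embeddings of sampled negative points
    """
    pos = log_sigmoid(e_v_true @ e_v_pred)
    neg = log_sigmoid(-(e_v_neg @ e_v_pred)).mean()
    return -(pos + neg)
```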
5 EXPERIMENT

In this section we compare Space2Vec with commonly used position encoding methods and analyze them both quantitatively and qualitatively.

Baselines. Our baselines include 1) direct: directly applying feed-forward nets (Chu et al., 2019); 2) tile: discretization (Berg et al., 2014; Adams et al., 2015; Tang et al., 2015); 3) wrap: feed-forward nets with coordinate wrapping (Mac Aodha et al., 2019); and 4) rbf: Radial Basis Function (RBF) kernels (Baudat & Anouar, 2001; Bierens, 1994). See Appendix A.1 for details of the baselines.

5.1 POI TYPE CLASSIFICATION TASKS

Dataset and Tasks. To test the proposed model, we conduct experiments on geographic datasets with POI position and type information. We utilize the open-source dataset published by the Yelp Data Challenge and select all POIs within the Las Vegas downtown area [3]. There are 21,830 POIs with 1,191 different POI types in this dataset. Note that each POI may be associated with one or more types, and we do not use any other meta-data such as business names or reviews for this study. We project geographic coordinates into projection coordinates using the NAD83/Conus Albers projection coordinate system [4]. The POIs are split into training, validation, and test datasets with ratios 80%:10%:10%. We create two task setups which represent different types of modeling needs in Geographic Information Science:

- Location Modeling predicts the feature information associated with a POI based on its location x_i, represented by the location decoder Dec^s(·). This represents a large number of location prediction problems such as fine-grained image recognition with a geographic prior (Chu et al., 2019) and species potential distribution prediction (Zuo et al., 2008).
- Spatial Context Modeling predicts the feature information associated with a POI based on its context {e_{i1}, ..., e_{ij}, ..., e_{in}}, represented by the spatial context decoder Dec^c(·). This represents a collection of spatial context prediction problems such as spatial-context-based facade image classification (Yan et al., 2018) and all spatial interpolation problems.

[3] The geographic range is (35.989438, 36.270897) for latitude and (-115.047977, -115.3290609) for longitude.
[4] https://epsg.io/5070-1252

We use POI prediction metrics to evaluate these models. Given the real point feature embedding e[v_i] and N negative feature embeddings N_i = {e[v_o]}, we compare the predicted e[v_i]' with them by cosine distance. The cosine scores are used to rank e[v_i] and the N negative samples. The negative feature embeddings are the feature embeddings of points p_j randomly sampled from P with p_j ≠ p_i. We evaluate each model using Negative Log-Likelihood (NLL), Mean Reciprocal Rank (MRR), and HIT@5 (the chance of the true POI being ranked in the top 5). We train and test each model 10 times to estimate standard deviations. See Appendix A.2 for hyper-parameter selection details.

5.1.1 LOCATION MODELING EVALUATION

We first study location modeling with the location decoder Dec^s(·) of Section 4.2. We use a negative sample size of N = 100. Table 1 shows the average metrics of different models with their best hyper-parameter settings on the validation set.

Figure 2: Embedding clustering of (a) direct; (b) tile with the best cell size c = 500; (c) wrap (h = 3, o = 512); (d) rbf with the best σ (1k) and 200 anchor points (red); and (e)(f)(g) theory models with different λ_min but fixed λ_max = 40k and S = 64. All models use 1 hidden ReLU layer of 512 neurons except wrap.

Table 1: The evaluation results of different location models on the validation and test datasets.

Model | Train NLL | Val NLL | Val MRR | Val HIT@5 | Test MRR | Test HIT@5
random | - | - | 0.052 (0.002) | 4.8 (0.5) | 0.051 (0.002) | 5.0 (0.5)
direct | 1.285 | 1.332 | 0.089 (0.001) | 10.6 (0.2) | 0.090 (0.001) | 11.3 (0.2)
tile (c=500) | 1.118 | 1.261 | 0.123 (0.001) | 16.8 (0.2) | 0.120 (0.001) | 17.1 (0.3)
wrap (h=3, o=512) | 1.222 | 1.288 | 0.112 (0.001) | 14.6 (0.1) | 0.119 (0.001) | 15.8 (0.2)
rbf (σ=1k) | 1.209 | 1.279 | 0.115 (0.001) | 15.2 (0.2) | 0.123 (0.001) | 16.8 (0.3)
grid (λ_min=50) | 1.156 | 1.258 | 0.128 (0.001) | 18.1 (0.3) | 0.139 (0.001) | 20.0 (0.2)
hexa (λ_min=50) | 1.230 | 1.297 | 0.107 (0.001) | 14.0 (0.2) | 0.105 (0.001) | 14.5 (0.2)
theory_diag (λ_min=50) | 1.277 | 1.324 | 0.094 (0.001) | 12.3 (0.3) | 0.094 (0.002) | 11.2 (0.3)
theory (λ_min=1k) | 1.207 | 1.281 | 0.123 (0.002) | 16.3 (0.5) | 0.121 (0.001) | 16.2 (0.1)
theory (λ_min=500) | 1.188 | 1.269 | 0.132 (0.001) | 17.6 (0.3) | 0.129 (0.001) | 17.7 (0.2)
theory (λ_min=50) | 1.098 | 1.249 | 0.137 (0.002) | 19.4 (0.1) | 0.144 (0.001) | 20.0 (0.2)

We can see that direct and theory_diag are less competitive, only beating the random selection baseline. Other methods with single-scale representations – including tile, wrap, and rbf – perform better. The best results come from various versions of the grid cell models, which are capable of dealing with multi-scale representations.

In order to understand the reason for the superiority of the grid cell models, we provide a qualitative analysis of their representations. We apply hierarchical clustering to the location embeddings produced by the studied models, using cosine distance as the distance metric (see Fig. 2). We can see that when restricted to large grid sizes (λ_min = 1k), theory has a similar representation (Fig. 2d, 2e, and Fig. 4d, 4e) and performance compared to rbf (σ=1k). However, it is able to significantly outperform rbf (σ=1k) (and tile and wrap) when small grid sizes (λ_min = 500, 50) are available. The relative improvements over rbf (σ=1k) are -0.2%, +0.6%, and +2.1% MRR for λ_min = 1k, 500, and 50, respectively.

5.1.2 MULTI-SCALE ANALYSIS OF LOCATION MODELING

In order to show how our multi-scale location representation model affects the prediction of POI types with different distribution patterns, we classify all 1,191 POI types into three groups based on a radius r, which is derived from each POI type's renormalized Ripley's K curve (see Figure 1d for examples). It indicates the x-axis value of the intersection between the curve and the line y = 3.0. A lower r indicates a more clustered distribution pattern. These three groups are listed below:

1. Clustered (r ≤ 100m): POI types with clustered distribution patterns;
2. Middle (100m < r < 200m): POI types with less extreme scales;
3. Even (r ≥ 200m): POI types with even distribution patterns.

Table 2 shows the performance (MRR) of direct, tile, wrap, rbf, and our theory model on the test dataset of the location modeling task with respect to these three POI distribution groups. The numbers in parentheses indicate the MRR difference between a baseline and theory. "# POI" refers to the total number of POIs belonging to each group [5]. We can see that 1) the two neural net approaches (direct and wrap) have no scale-related parameter and do not perform ideally across all scales, with direct performing worse because of its simple single-layer network; 2) the two approaches with a built-in scale parameter (tile and rbf) have to trade off the performance of different scales. Their best parameter settings lead to performances close to that of Space2Vec at the middle scale, while performing poorly in both the clustered and regular groups. These observations clearly show that all baselines can at most handle the distribution at one scale well but show poor performance at other scales.

[5] The reason why the sum of # POI over these three groups does not equal the total number of POIs is that one POI can have multiple types, which may belong to different groups.

Table 2: Comparing performances in different POI groups. We classify all 1,191 POI types into three groups based on the radius r of their root types, where their renormalized Ripley's K curve (see Figure 1d) reaches 3.0: 1) Clustered (r ≤ 100m): POI types with clustered distribution patterns; 2) Middle (100m < r < 200m): POI types with unclear distribution patterns; 3) Even (r ≥ 200m): POI types with even distribution patterns. The MRR of each model on these three groups is shown. The numbers in parentheses indicate the difference between the MRR of a baseline model and the MRR of theory for a specific group. "# POI" refers to the total number of POIs belonging to each group. "Root Types" indicates the root categories of the POI types belonging to each group.

Model | Clustered (r ≤ 100m) | Middle (100m < r < 200m) | Even (r ≥ 200m)
direct | 0.080 (-0.047) | 0.108 (-0.030) | 0.084 (-0.047)
wrap | 0.106 (-0.021) | 0.126 (-0.012) | 0.122 (-0.009)
tile | 0.108 (-0.019) | 0.135 (-0.003) | 0.111 (-0.020)
rbf | 0.112 (-0.015) | 0.136 (-0.002) | 0.119 (-0.012)
theory | 0.127 (-) | 0.138 (-) | 0.131 (-)
# POI | 16,016 | 7,443 | 3,915
Root Types | Restaurants; Shopping; Food; Nightlife; Automotive; Active Life; Arts & Entertainment; Financial Services | Beauty & Spas; Health & Medical; Local Services; Hotels & Travel; Professional Services; Public Services & Government | Home Services; Event Planning & Services; Pets; Education
In contrast, Space2Vec's multi-scale representation can handle distributions at different scales.

5.1.3 SPATIAL CONTEXT MODELING EVALUATION

Next, we evaluate the spatial context decoder Dec^c(·) of Sec. 4.2. We use the same evaluation setup as for location modeling. The context points are obtained by querying the n nearest points using PostGIS (n = 10). As for the validation and test datasets, we make sure the center points are all unknown during the training phase. Table 3 shows the evaluation results of different models for spatial context modeling. The baseline approaches (direct, tile, wrap, rbf) generally perform poorly in context modeling. We designed specialized versions of these approaches (polar, polar_tile, scaled_rbf) with polar coordinates, which lead to significant improvements. Note that these are models proposed by us, specialized for context modeling, and therefore less general than the grid cell approaches.

Table 3: The evaluation results of different spatial context models on the validation and test datasets. All encoders contain a 1-hidden-layer FFN. All grid cell encoders set λ_min = 10, λ_max = 10k.

Space2Vec | Train NLL | Val NLL | Val MRR | Val HIT@5 | Test MRR | Test HIT@5
none | 1.163 | 1.297 | 0.159 (0.002) | 22.4 (0.5) | 0.167 (0.006) | 23.4 (0.7)
direct | 1.151 | 1.282 | 0.170 (0.002) | 24.6 (0.4) | 0.175 (0.003) | 24.7 (0.5)
polar | 1.157 | 1.283 | 0.176 (0.004) | 25.4 (0.4) | 0.178 (0.006) | 24.9 (0.1)
tile (c=50) | 1.163 | 1.298 | 0.173 (0.004) | 24.0 (0.6) | 0.173 (0.001) | 23.4 (0.1)
polar_tile (S=64) | 1.161 | 1.282 | 0.173 (0.003) | 25.0 (0.1) | 0.177 (0.001) | 24.5 (0.3)
wrap (h=2, o=512) | 1.167 | 1.291 | 0.159 (0.001) | 23.0 (0.1) | 0.170 (0.001) | 23.9 (0.2)
rbf (σ=50) | 1.160 | 1.281 | 0.179 (0.002) | 25.2 (0.6) | 0.172 (0.001) | 25.0 (0.1)
scaled_rbf (40, 0.1) | 1.150 | 1.272 | 0.177 (0.002) | 25.7 (0.1) | 0.181 (0.001) | 25.3 (0.1)
grid (λ_min=10) | 1.172 | 1.285 | 0.178 (0.004) | 24.9 (0.5) | 0.181 (0.001) | 25.1 (0.3)
hexa (λ_min=10) | 1.156 | 1.289 | 0.173 (0.002) | 24.0 (0.2) | 0.183 (0.002) | 25.3 (0.2)
theory_diag (λ_min=10) | 1.156 | 1.287 | 0.168 (0.001) | 24.1 (0.4) | 0.174 (0.005) | 24.9 (0.1)
theory (λ_min=200) | 1.168 | 1.295 | 0.159 (0.001) | 23.1 (0.2) | 0.170 (0.001) | 23.2 (0.2)
theory (λ_min=50) | 1.157 | 1.275 | 0.171 (0.001) | 24.2 (0.3) | 0.173 (0.001) | 24.8 (0.4)
theory (λ_min=10) | 1.158 | 1.280 | 0.177 (0.003) | 25.2 (0.3) | 0.185 (0.002) | 25.7 (0.3)

Figure 3: Embedding clustering in the original space of (a) direct; (b) polar; (c) wrap (h=2, o=512); (d) polar_tile (S=64); (e) scaled_rbf (40, 0.1); and (f) theory (λ_min = 10, λ_max = 10k, S = 64). Panels (g)(h)(i)(j)(k)(l) are the clustering results of the same models in the polar-distance space using log(‖Δx_ij‖ + 1). All models use 1 hidden ReLU layer (except wrap) of 512 neurons. Most models except wrap can capture a shift when the distance is around e^5 ≈ 1150 meters.

Nevertheless, the grid cell approaches are able to perform better than the specialized approaches on the test dataset while having competitive performance on the validation dataset. See the Appendix for the visualization of the context models. The gains are in fact small for all baseline approaches as well. The reason is that we expect location encoding to be less important when context information is accessible. Similarly, as discussed in Gao et al. (2019), it is when there is a lack of visual clues that the grid cells of animals are most helpful for their navigation.

Figure 3 shows the location embedding clustering results in both the Cartesian and polar coordinate systems.
We can see that direct (Fig. 3a, 3g) only captures the distance information when the context POI is very close (log(‖Δx_ij‖ + 1) ≤ 5), while for the farther spatial context it purely models the direction information. polar (Fig. 3b, 3h) has similar behavior but captures the distance information in a more fine-grained manner. wrap (Fig. 3c, 3i) mainly focuses on differentiating relative positions in the farther spatial context, which might explain its lower performance [6]. polar_tile (Fig. 3d) mostly responds to distance information. Interestingly, scaled_rbf and theory have similar representations in the polar coordinate system (Fig. 3k, 3l) and similar performance (Table 3). While scaled_rbf captures the gradually decreasing distance effect with a scaled kernel size which becomes larger at farther distances, theory achieves this by integrating representations of different scales.

[6] Note that wrap was originally proposed by Mac Aodha et al. (2019) for location modeling, not spatial context modeling. This result indicates that wrap is not good at this task.

5.2 FINE-GRAINED IMAGE CLASSIFICATION TASKS

To demonstrate the generalizability of Space2Vec for space representation, we utilized the proposed point space encoder Enc^(x)(·) model in a well-known computer vision task: fine-grained image classification. As we discussed in Section 3, many studies (Berg et al., 2014; Chu et al., 2019; Mac Aodha et al., 2019) have shown that geographic prior information – where (and when) the image was taken – is very important additional information for the fine-grained image classification task and can substantially improve model performance. For example, the appearance information is usually not sufficient to differentiate two visually similar species. In this case, the geographic prior becomes much more important, because the two species may have very different spatial prior distributions, such as the example of European Toads and Spiny Toads in Figure 1 of Mac Aodha et al. (2019).

We adopt the task setup of Mac Aodha et al. (2019). During training we have a set of tuples D = {(I_i, x_i, y_i, p_i) | i = 1, ..., N}, where I_i indicates an image, y_i ∈ {1, 2, ..., C} is the corresponding class label (species category), x_i = [longitude_i; latitude_i] gives the geographic coordinates where the image was taken, and p_i is the id of the photographer who took the image. At training time, a location encoder is trained to capture the spatial prior information P(y|x). At inference time, the p_i information is not available, and the final image classification prediction is calculated based on the combination of two models: 1) the trained location encoder, which captures the spatial prior P(y|x), and 2) the pretrained image classification model, an InceptionV3 network (Szegedy et al., 2016), which captures P(y|I). Bayesian theory is used to derive the joint distribution P(y|I, x); see Mac Aodha et al. (2019) for a detailed explanation as well as the loss function.

Table 4: Fine-grained image classification results on two datasets: BirdSnap† and NABirds†. The classification accuracy is calculated by combining the image classification predictions P(y|I) with different spatial priors P(y|x). The grid and theory models use 1 hidden ReLU layer of 512 neurons. The evaluation results of the baseline models are from Table 1 of Mac Aodha et al. (2019).

Model | BirdSnap† | NABirds†
No Prior (i.e. uniform) | 70.07 | 76.08
Nearest Neighbor (num) | 77.76 | 79.99
Nearest Neighbor (spatial) | 77.98 | 80.79
Adaptive Kernel (Berg et al., 2014) | 78.65 | 81.11
tile (Tang et al., 2015) (location only) | 77.19 | 79.58
wrap (Mac Aodha et al., 2019) (location only) | 78.65 | 81.15
rbf (σ=1k) | 78.56 | 81.13
grid (λ_min=0.0001, λ_max=360, S=64) | 79.44 | 81.28
theory (λ_min=0.0001, λ_max=360, S=64) | 79.35 | 81.59
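A minimal sketch of this late-fusion step (numpy; function names ours). Under the conditional-independence assumption used by Mac Aodha et al. (2019), P(y|I, x) is proportional to P(y|I) · P(y|x) / P(y); with a roughly uniform class prior this reduces to renormalizing the product:

```python
import numpy as np

def combine_with_geo_prior(p_y_given_image, p_y_given_loc):
    """Combine image-classifier scores P(y|I) with a learned spatial prior P(y|x).

    Both inputs are length-C probability vectors over the classes. The exact
    derivation (including the P(y) term) is given in Mac Aodha et al. (2019).
    """
    joint = p_y_given_image * p_y_given_loc
    return joint / joint.sum()
```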
Note that while Space2Vec outperforms specialized density estimation methods such as the Adaptive Kernel (Berg et al., 2014), it would be interesting to explore early fusion of Space2Vec's representations with the image module.

We use two versions of our point space encoder Enc^(x)(·) (grid, theory) as the location encoder to capture the spatial prior information P(y|x). The evaluation results of our models as well as multiple baselines are shown in Table 4. We can see that both grid and theory outperform previous models, including that of Mac Aodha et al. (2019), on two fine-grained image classification datasets of significant size: BirdSnap† and NABirds†. theory shows superiority over grid on NABirds† while failing to outperform grid on BirdSnap†. Note that we only pick baseline models which capture a spatial-only prior and drop models which additionally consider time information. Both grid and theory use 1 hidden ReLU layer of 512 neurons for NN(·), and they have the same hyperparameters: λ_min = 0.0001, λ_max = 360, S = 64. Like Mac Aodha et al. (2019), the location embedding size d^(x) is 1024, and we train the location encoder for 30 epochs. Our implementation is based on the original code [7] of Mac Aodha et al. (2019) for both the model training and evaluation phases.

[7] https://github.com/macaodha/geo_prior/

6 CONCLUSION

We introduced an encoder-decoder framework as a general-purpose representation model for space, inspired by biological grid cells' multi-scale periodic representations. The model is an inductive learning model and can be trained in an unsupervised manner. We conduct two experiments on POI type prediction based on 1) POI locations and 2) nearby POIs. The evaluation results demonstrate the effectiveness of our model. Our analysis reveals that it is the ability to integrate representations of different scales that makes the grid cell models outperform the other baselines on these two tasks. In the future, we hope to incorporate the presented framework into more complex GIS tasks such as social network analysis and sea surface temperature prediction.

ACKNOWLEDGMENTS

The presented work is partially funded by the NSF award 1936677 C-Accel Pilot - Track A1 (Open Knowledge Network): Spatially-Explicit Models, Methods, And Services For Open Knowledge Networks, Esri Inc., and the Microsoft AI for Earth Grant: Deep Species Spatio-temporal Distribution Modeling for Biodiversity Hotspot Prediction. We thank Dr. Ruiqi Gao for discussions about grid cells, Dr. Wenyun Zuo for discussions about species potential distribution prediction, and Dr. Yingjie Hu for his suggestions about the introduction section.
<|im_end|> <|im_start|>assistant ### Review Title Official Blind Review #3 ### Review Text This paper presents a new method called "Space2Vec" to compute spatial embeddings of a pixel in spatial data. The primary motivation of Space2Vec is to integrate representations of different spatial scales, which could potentially make the spatial representations more informative and meaningful as features. Space2Vec is trained as part of an encoder-decoder framework, where Space2Vec encodes the spatial features of all the points that are fed as input to the framework.
They conducted experiments on real-world geographic data where they predict the types of points of interest (POIs) at given positions based on their 1) locations (location modeling) and 2) spatial neighborhood (spatial context modeling). They evaluated Space2Vec against other ML approaches for encoding spatial information, including RBF kernels, multi-layer feed-forward nets, and tile embedding approaches. Their results indicate that the Space2Vec approach performs better (albeit marginally) than the other ML methods. I am giving this paper a weak reject rating mainly because of weak results and the lack of motivation for the location modeling problem (where their approach performs significantly better than baselines). I explain my concerns below under detailed comments. Detailed Comments: 1) The motivation of the location modeling problem does not sound compelling enough to me, especially in the context of the point-of-interest (POI) classification approach. I could not imagine any scenario where access to information from the spatial neighborhood would be denied. If the authors could present strong motivating examples for this problem and demonstrate the utility of their proposed approach in that setting, that would make the paper much stronger. 2) In the spatial context modeling problem, the improvements in the results (Table 2) appear to be marginal (0.185 against 0.181, 25.7 against 25.3). The authors should try out more datasets to convincingly justify the superiority of their approach over other methods. EDIT: AFTER RECEIVING AUTHOR'S RESPONSE I am satisfied with the authors' response to my comments. I am updating my rating to Weak Accept. ### Review Rating 6: Weak Accept ### Review Confidence <|im_end|>
BJepX2A9tX
ICLR.cc/2019/Conference
2019
Rotation Equivariant Networks via Conic Convolution and the DFT
["Benjamin Chidester", "Minh N. Do", "Jian Ma"]
Performance of neural networks can be significantly improved by encoding known invariance for particular tasks. Many image classification tasks, such as those related to cellular imaging, exhibit invariance to rotation. In particular, to aid convolutional neural networks in learning rotation invariance, we consider a simple, efficient conic convolutional scheme that encodes rotational equivariance, along with a method for integrating the magnitude response of the 2D-discrete-Fourier transform (2D-DFT) to encode global rotational invariance. We call our new method the Conic Convolution and DFT Network (CFNet). We evaluated the efficacy of CFNet as compared to a standard CNN and group-equivariant CNN (G-CNN) for several different image classification tasks and demonstrated improved performance, including classification accuracy, computational efficiency, and its robustness to hyperparameter selection. Taken together, we believe CFNet represents a new scheme that has the potential to improve many imaging analysis applications.
["deep learning", "rotation equivariance", "bioimaging analysis"]
ABSTRACT

Performance of neural networks can be significantly improved by encoding known invariance for particular tasks. In particular, to aid convolutional neural networks in learning rotation invariance, we consider a simple, efficient conic convolutional scheme that encodes rotational equivariance, along with a method for integrating the magnitude response of the 2D-discrete-Fourier transform (2D-DFT) to encode global rotational invariance. We call our new method the Conic Convolution and DFT Network (CFNet). We evaluated the efficacy of CFNet as compared to a standard CNN and group-equivariant CNN (G-CNN) for several different image classification tasks and demonstrated improved performance, including classification accuracy, computational efficiency, and its robustness to hyperparameter selection. Taken together, we believe CFNet represents a new scheme that has the potential to improve many imaging analysis applications.

1 INTRODUCTION

Though the appeal of neural networks is their versatility for arbitrary classification tasks, there is still much benefit in designing them for particular problem settings. In particular, their effectiveness can be greatly increased by encoding invariance to uninformative augmentations of the data (LeCun et al., 1989). If such invariance is not explicitly encoded, the network must learn it from the data, perhaps with the help of data augmentation, requiring more parameters and thereby increasing its susceptibility to overfitting.

A key invariance inherent to several computer vision settings, including satellite imagery and all forms of microscopy imagery, is rotation (Cheng et al., 2016; Boland & Murphy, 2001). Recently, there have been a variety of proposed approaches for encoding rotation equivariance and invariance, the most promising of which have formulated convolution over groups (Cohen & Welling, 2016; Weiler et al., 2018). Notably, G-CNNs have been applied to several biological imaging tasks, producing state-of-the-art results (Weiler et al., 2018; Bekkers et al., 2018; Li et al., 2018).

Here we propose a new rotation-equivariant convolutional scheme, called conic convolution, which, in contrast to group convolution, encodes equivariance while still operating over only the spatial domain. Rather than convolving each filter across the entire image, as in standard convolution, rotated filters are convolved over corresponding conic regions of the input feature map that emanate from the origin, thereby transforming rotations in the input directly to rotations in the output. This scheme is intuitive, simple to implement, and computationally efficient. We also show that the method yields improved performance over group convolution on several relevant applications.

Additionally, we propose the integration of the magnitude response of the 2D-discrete-Fourier transform (2D-DFT) into a transition layer between convolutional and fully-connected layers to encode rotational invariance. Though the insight of using the DFT to encode rotational invariance has been employed for texture classification using wavelets (Do & Vetterli, 2002; Jafari-Khouzani & Soltanian-Zadeh, 2005; Ojala et al., 2002; Charalampidis & Kasparis, 2002) and for general image classification (Schmidt & Roth, 2012), as of yet its application to CNNs has been overlooked. As in these prior works, rotations of the input are transformed to circular shifts, to which the magnitude response of the 2D-DFT is invariant, in the transformed space.
Most other recently proposed rotation-invariant CNNs impose this invariance by applying a permutation-invariant operation, such as the average or maximum, over the rotation group, but since this operation is applied for each filter individually, possibly valuable pose information between filters is lost. In contrast, the 2D-DFT is able to integrate mutual pose information between different filter responses, yielding richer features for subsequent layers.

We demonstrate the effectiveness of these two novel contributions for various applications: classifying rotated MNIST images, classifying synthetic images that model biomarker expression in microscopy images of cells, and localizing proteins in budding yeast cells (Kraus et al., 2017). We show that CFNet improves classification accuracy generally over the standard raster convolution formulation and over the equivariant method of G-CNN across these settings. We also show that the 2D-DFT clearly improves performance across these diverse data sets, not only for conic convolution, but also for group convolution. Source code for the implementation of CFNet will be made available on GitHub.

2 RELATED WORK

Cohen & Welling (2016) introduced G-CNNs by formulating convolution over groups, including rotation, translation, and flips, for neural networks, which has inspired many subsequent improvements. By convolving over groups, equivariance to these groups is maintained throughout the convolutional layers, and invariance is enforced at the end of the network by pooling over groups. This work was improved upon by the design of steerable filters (Weiler et al., 2018) for convolution, similar to those proposed by Worrall et al. (2017), which allow for finer sampling of rotations of filters without inducing artifacts. Steerable filters were first proposed by Freeman & Adelson (1991) and had been explored previously for image classification (Liu et al., 2014), but as shallow features in the context of HOG descriptors.

An alternative means of encoding rotational equivariance is to transform the domain of the image to an alternative domain, such as the log-polar domain (Schmidt & Roth, 2012; Henriques & Vedaldi, 2017), in which rotation becomes some other transformation that is easier to manage, in this case translation. The suitability of this transformation depends upon the signal of interest, since this warping will introduce distortion, as pixels near the center of the image are sampled more densely than pixels near the perimeter. In addition, its stability to translations in the original domain is of concern. Our proposed CFNet, by convolving over conic regions, also encodes global rotation equivariance about the origin, but without introducing such distortion, which greatly helps mitigate its susceptibility to translation. The recently developed spatial transform layer (Jaderberg et al., 2015) and deformable convolutional layer (Dai et al., 2017) allow the network to learn non-regular sampling patterns and can potentially help in learning rotation invariance, though invariance is not explicitly enforced, which would most likely be a challenge for tasks with small training data.

A simple means for achieving rotation equivariance and invariance was proposed by Dieleman et al. (2016), in which feature maps of standard CNNs are made equivariant or invariant to rotation by combinations of cyclic slicing, stacking, rolling, and pooling.
RotEqNet (Marcos et al., 2017) improved upon this idea by storing, for each feature map for a corresponding filter, only the maximal response across rotations and the value of the corresponding rotation, to preserve pose information. This approach yielded improved results and considerable storage savings over Dieleman et al. (2016) and G-CNN. These methods are most similar to our proposed conic convolution. However, in contrast, our method applies each filter only at the appropriate rotation within each conic region, which further saves on storage.

To enforce rotation invariance, as noted, most of the previous methods apply some permutation-invariant, or pooling, operation over rotations. Cheng et al. (2016) recently proposed a strategy of encouraging a network to learn a rotation-invariant transform, and follow-up work improved this learning process by incorporating a Fisher discriminant penalty (Cheng et al., 2018). However, the convolutional layers of the network do not maintain the property of rotation equivariance with the input image, which requires that the network learn this equivariance and could therefore hinder performance. Also, learning such a transform that generalizes to unseen data could prove difficult for settings with limited training data. Schmidt & Roth (2012) previously proposed the 2D-DFT for rotational invariance. However, no method has yet been proposed to integrate the 2D-DFT into a rotation-equivariant CNN.

Figure 1: Comparison of convolution schemes (standard convolution, convolution on the log-polar transform, conic convolution, and group convolution). The domain of filter 'F' in the input and its corresponding outputs in the feature map are colored red. That of the rotation of 'F' by 180 degrees is colored blue. The local support on the domain for the convolution at a few points for each scheme is shown in gray. Conic convolution, with rotations of 90 degrees in this example, encodes rotation equivariance without introducing distortion to the support of the filter in the original domain (unlike the log-polar transform) and without requiring additional storage for feature maps (unlike group convolution). The example shown for group convolution is the first layer of a G-CNN, mapping from Z² to the roto-translation group.

3 FORMULATION OF ROTATION EQUIVARIANCE AND INVARIANCE

3.1 ROTATION-EQUIVARIANT QUADRANT CONVOLUTIONAL LAYERS

We begin our formulation with a simpler, special case of conic convolution, which we call quadrant convolution. Its only difference from standard convolution is that the filter being convolved is rotated by rπ/2, r ∈ {0, 1, 2, 3}, depending upon the corresponding quadrant of the domain. We show that for quadrant convolution, rotations of π/2 in the input are straightforwardly associated with rotations in the output feature map, which is a special form of equivariance called same-equivariance (as coined by Dieleman et al. (2016)).

For convenience, we represent feature maps, f: Z² → R^K, and filters, ψ: Z² → R^K, of dimension K, of a network as functions over 2D space, as in Cohen & Welling (2016).
Relevant to our formulation is the set, or group, G of two-dimensional rotation matrices of π/2, which can be easily parameterized by g(r), and which acts on points in Z² by matrix multiplication, i.e., for a given point x = (u, v) ∈ Z²,

g(r)x = [cos(rπ/2), −sin(rπ/2); sin(rπ/2), cos(rπ/2)] [u; v].    (1)

Let T_g denote the transformation of a function by a rotation in G, where T_g f(x) ≜ f(g^{−1}x) applies the inverse of g to an element of the domain of f. For an operation Φ: F → F, F being the set of K-dimensional functions f, to exhibit same-equivariance, applying rotation either before or after the operation yields the same result, i.e.,

T_g Φ(f) = Φ(T_g f).    (2)

We now define quadrant convolution. The expression for convolution in a standard CNN is given by

[ψ ⋆ f](x) = ∑_k ∑_{x' ∈ Z²} ψ_k(x') f_k(x − x').    (3)

As noted in Cohen & Welling (2016), standard convolution does not exhibit rotational equivariance unless certain constraints on the filters are met.

Figure 2: The overall architecture of the proposed CFNet. (a) Filtering the image by various filters at rotations in corresponding conic regions preserves rotation equivariance. (b) Subsequent convolutional feature maps are filtered similarly. Rotation invariance is encoded by the transition from convolutional to fully-connected layers, which consists of (c) element-wise multiplication and sum, denoted by ⊙, with rotated weight tensors, transforming rotation to circular shift, and (d) application of the magnitude response of the 2D-DFT to encode invariance to such shifts. (e) This output is reshaped and passed through the final, fully-connected layers.

Quadrant convolution can be interpreted as weighting the convolution for each rotation with a function ω: Z² → [0, 1] that simply "selects" the appropriate quadrant of the domain, which we define as

ω_q(u, v) ≜ { 1 if u > 0, v ≥ 0; 1/4 if (u, v) = (0, 0); 0 otherwise }.    (4)

Since the origin does not strictly belong to a particular quadrant, it is handled simply by averaging the response of the filter at all four rotations. Boundary values are assigned arbitrarily, but consistently, by the placement of the equality for either u or v. The output of the layer is then given by

Φ(f) ≜ ∑_{g ∈ G} [T_g ω] ⊙ [[T_g ψ] ⋆ f].    (5)

Example convolutional regions with appropriate filter rotations are shown in Fig. 1.

The equivariance property is established (see Appendix) independent of the definition of ω, yet its definition will greatly influence the performance of the network. For example, if ω is simply the constant 1/4, we have the simple example of equivariance mentioned above, equivalent to averaging the filter responses.

3.2 GENERALIZATION TO CONIC CONVOLUTIONAL LAYERS

The above formulation can be generalized to conic convolution, in which the rotation angle is decreased to π/2R, for some positive integer R, instead of being fixed to π/2. Rather than considering quadrants of the domain, we can consider conic regions emanating from the origin, defined by

C = {(x, y) ∈ Z²: 0 ≤ arccot(x/y) + π·I(y < 0) < π/2R},    (6)

where I(·) is the indicator function.
The weighting function is changed to have value one only over this conic region:

ω_R(u, v) ≜ { 1 if (u, v) ∈ C; 1/(4R) if (u, v) = (0, 0); 0 otherwise },    (7)

of which ω_1 = ω_q is a special case.

If we consider feature maps to be functions over the continuous domain R², instead of Z², and define the group G_R, with parameterization

g_R(r)x = [cos(rπ/2R), −sin(rπ/2R); sin(rπ/2R), cos(rπ/2R)] [u; v],    (8)

for r ∈ {0, 1, ..., 4R − 1} and x = (u, v) ∈ R², it is easy to show, similarly as above, that

Φ_R(f) ≜ ∑_{g ∈ G_R} [T_g ω_R] ⊙ [[T_g ψ] ⋆ f]    (9)

is equivariant to G_R.

However, due to subsampling artifacts when discretizing R² to Z², as in an image, rotation equivariance for arbitrary values of R cannot be guaranteed and can only be approximated. In particular, the filters have to be interpolated for rotations that are not a multiple of π/2. In our experiments, we chose nearest-neighbor interpolation, which at least preserves the energy of the filter under rotations. This defect notwithstanding, it can be shown that conic convolution maintains equivariance to rotations of π/2, and as our experiments show in the following section, the approximation of finer angles of rotation can still improve performance. Additionally, we note that R need not be the same for each layer, and it may be advantageous to use a finer discretization of rotations for early layers, when the feature maps are larger, and gradually decrease R.
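To make the scheme concrete, here is a minimal single-channel numpy/scipy sketch of conic convolution in the sense of Eq. 9, applied to a discrete input (function names ours). The angular regions are binned with atan2 rather than the arccot convention of Eq. 6, filters are rotated with nearest-neighbor interpolation as in the experiments, an odd square input is assumed so the origin falls on a pixel, and the sign of the rotation direction may need flipping depending on the axis convention:

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import convolve2d

def conic_masks(size, R):
    """Indicator masks (the omega of Eq. 7) for the 4R conic regions of a
    (size x size) map; assumes size is odd so the origin is a pixel."""
    c = (size - 1) // 2
    u, v = np.meshgrid(np.arange(size) - c, np.arange(size) - c, indexing="ij")
    theta = np.mod(np.arctan2(v, u), 2 * np.pi)
    masks = np.array([(theta >= r * np.pi / (2 * R)) &
                      (theta < (r + 1) * np.pi / (2 * R))
                      for r in range(4 * R)], dtype=float)
    masks[:, c, c] = 1.0 / (4 * R)  # the origin is shared across all rotations
    return masks

def conic_conv(f, psi, R):
    """Conic convolution of a single-channel square map f with filter psi.

    Each rotated copy of psi contributes only inside its conic region, so one
    output map is produced per filter, as in standard convolution."""
    out = np.zeros_like(f, dtype=float)
    for r, mask in enumerate(conic_masks(f.shape[0], R)):
        psi_r = rotate(psi, angle=r * 90.0 / R, reshape=False, order=0)  # NN interp.
        out += mask * convolve2d(f, psi_r, mode="same")
    return out
```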
In general, fully-connected layers willnot maintain rotation equivariance or invariance properties. In a fully-convolutional network, con-volution and downsampling are applied until the spatial dimensions are eliminated and the resultingfeature map of the final convolutional layer is merely a vector, with dimension equal to the numberof filters.Rather than encoding invariance for each filter separately, as in most other recent works (Cohen &Welling, 2016; Weiler et al., 2018), we consider instead to transform the collective filter responsesto a space in which rotation becomes circular shift so that the 2D-DFT can be applied to encode5Under review as a conference paper at ICLR 2019invariance. The primary merit of the 2D-DFT as an invariant transform is that each output node is afunction of every input node, and not just the nodes of a particular filter response, thereby capturingmutual information across responses.Since the formulation of this transition involves the DFT, which is defined only for finite-lengthsignals, we switch to represent feature maps as tensors, rather than functions. We denote the featuremap generated by the penultimate convolutional layer by f2RMMK, whereM2Z>1.In a fully-convolutional network, the final convolutional layer is in reality just a fully-connectedlayer, in which the input fis passed through Nfully-connected filters, (n)2RMMK,n2f0;1;:::;N1g. The operation of this layer can be interpreted as the inner product of the functionand filter,h(n);fi. If we again consider rotations of the filter from the group GR,(n;r),hTgR(r)(n);fi; (10)this is equivalent to the first layer of a G-CNN, mapping from the spatial domain to GR(though thisgroup does not include the translation group since the convolution is only applied at the origin), androtations of the final convolutional layer fwill correspond to permutations of GR, which are justcircular shifts in of the second dimension of the matrix .The magnitude response of the 2D-DFT can be applied to to transform these circular shifts to aninvariant space,jDFTf gj(n;r) =N1Xn0=0R1Xr0=0(n0;r0)ej2n0nN+r0r4R: (11)This process of encoding rotation invariance corresponds to the ‘Convolutional-to-Full Transition’in Fig. 2. The result is then vectorized and passed into fully-connected layers that precede the finaloutput layer, as in a standard CNN. We note that it helped in practice to apply batch normalizationafter vectorizing, since the output of the magnitude of the 2D-DFT will not be normalized as such.The 2D-DFT, as a rotation invariant transform, can also be integrated into other rotation-equivariantnetworks, such as G-CNN. At the final layer of a fully-convolutional G-CNN, since the spatialdimension has been eliminated through successive convolutions and spatial downsampling, rotationis encoded along contiguous stacks of feature maps f2RL4of each filter at four rotations. In thisway, rotations similarly correspond to circular shifts in the final dimension. This representation isthen passed through the 2D-DFT, as in Eqn. 11.4 R ESULTS4.1 A PPLICATION TO ROTATED MNISTThe rotated MNIST data set (Larochelle et al., 2007) has been used as a benchmark for severalprevious works on rotation invariance. 
4 RESULTS

4.1 APPLICATION TO ROTATED MNIST

The rotated MNIST data set (Larochelle et al., 2007) has been used as a benchmark for several previous works on rotation invariance. As in previous works, to tune the parameters of each method, we first trained various models on a set of 10,000 images, using training augmentation with rotations of arbitrary angles as in Cohen & Welling (2016) [1], and then selected the best model based on the accuracy on a separate validation set of 5,000 images.

[1] Though the paper did not state the use of training augmentation, code posted by the authors at https://github.com/tscohen/gconv_experiments indicates rotations of arbitrary angles were used.

Our best CFNet architecture consisted of six convolution layers; the first were conic convolutions, with R = 8 for the first three layers and R = 4 for the next four, with spatial max-pooling after the second layer. We used a filter size of three pixels, with 15 filters per layer. The final convolutional layer was the DFT transition layer described in the previous section, which was followed by an output layer of ten nodes. This architecture was similar, in terms of the number of layers and filters per layer, to that of the G-CNN of Cohen & Welling (2016). To evaluate the G-CNN with the DFT, the only change we made from the reported G-CNN architecture was to reduce the number of filters in each layer to 7, to offset the addition of the 2D-DFT, which was applied to the output of the final convolutional layer.

The results on a held-out set of 50,000 test images are shown in Table 1. Adding the DFT transition to the output of G-CNN reduces the test error by 0.28%, demonstrating the value of incorporating mutual rotational information between filters when encoding invariance. The replacement of group convolution with conic convolution in CFNet leads to an even further reduction in error of 0.25%. Even with its simple conic convolutional scheme, CFNet is able to perform comparably to H-Net [2], which constructs filters from the circular harmonic basis and operates on complex feature maps.

[2] This result is without training augmentation; since H-Net can learn equivariance to arbitrary angles, augmenting with rotations might not improve performance.

Table 1: Test error on the rotated MNIST data set.

Algorithm | Test Error (%)
Schmidt & Roth (2012) | 3.98
Cohen & Welling (2016) (CNN) | 5.03
Cohen & Welling (2016) (G-CNN) | 2.28
G-CNN + DFT | 2.00
CFNet | 1.75
Worrall et al. (2017) (H-Net) | 1.69

Figure 3: Comparison of CFNet, CNet, G-CNN, G-CNN+DFT, and a standard CNN on the GMM synthetic biomarker images and on images of protein localization. (a, b) Example images, shown as heat maps for detail, showing inter-class variation (three of the 50 classes) and intra-class variation (rotated examples from a single class). (e) Example images of cells of four of the 22 yeast phenotypes from Kraus et al. (2017). Testing classification accuracy of the methods on the synthetic GMM images (panels c, d) and protein localization (panels f, g) with varying numbers N of training examples per class (N = 50 and N = 100).

4.2 APPLICATION TO SYNTHETIC BIOMARKER IMAGES

In order to explicitly control the manifestation of rotational invariance, we created a set of synthetic images, based upon Gaussian-mixture models (GMMs), which can also be used to emulate real-world microscopy images of biological signals (Zhao & Murphy, 2007). Example synthetic images from across and within classes are shown in Fig. 3a and Fig. 3b, respectively. We defined 50 distribution patterns and generated 50 and 100 examples per class for training and 200 examples per class for testing. Each class was defined by a mixture of ten Gaussians. The image size was 50 pixels.
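This generation procedure lends itself to a short sketch (numpy; all rendering details such as point counts and covariances are our assumptions, since they are not specified here):

```python
import numpy as np

def gmm_class_image(means, covs, weights, size=50, n_points=2000, rng=None):
    """Render one synthetic biomarker image from a class-defining 10-component GMM.

    Samples points from the mixture and bins them into a (size x size) intensity
    image; per-image rotation and small jitter (as in the augmentation described
    below) can be applied to the sampled points before binning.
    """
    if rng is None:
        rng = np.random.default_rng()
    comp = rng.choice(len(weights), size=n_points, p=weights)
    pts = np.array([rng.multivariate_normal(means[c], covs[c]) for c in comp])
    img, _, _ = np.histogram2d(pts[:, 0], pts[:, 1],
                               bins=size, range=[[0, size], [0, size]])
    return img / img.max()
```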
A batch size of 50 examples, a learning rate of 5×10⁻³, and a weight-decay ℓ2 penalty of 5×10⁻⁴ were used during training. To help all methods, we augmented the training data with rotations and random jitter of up to three pixels, as was done during image generation.

Classification accuracies on the test set over training steps for various numbers of training samples, denoted by N, for several methods are shown in Figs. 3c-3d. A variety of configurations were trained for each network, and each configuration was trained three times. The darkest line shows the accuracy of the configuration that achieved the highest moving average, with a window size of 100 steps, for each method. The spread of each method, which is the area between the point-wise maximum and minimum of the error, is shaded with a light color, and three standard deviations around the mean is shaded darker.

We observe a consistent trend of CFNet outperforming G-CNN, which in turn marginally outperforms the CNN, both in overall accuracy and in terms of the number of steps required to attain that accuracy. Additionally, the spread of CFNet is mostly above even the best performing models of G-CNN and the CNN, demonstrating that an instance of CFNet will outperform the other methods even if the best set of hyperparameters has not been chosen. We also included a network consisting of conic convolutional layers, but without the DFT, denoted 'CNet', to show the relative merit of the DFT. CNet performs comparably to the standard CNN while requiring significantly fewer parameters to attain the same performance, though the true advantage of conic convolution is shown when it is integrated with the DFT to achieve global rotation invariance. In comparison, including the 2D-DFT increases the performance of G-CNN, in fact to a level comparable with CFNet, though it does not train as quickly.

4.3 APPLICATION TO PROTEIN LOCALIZATION IN BUDDING YEAST CELLS

We extended our analysis to real biomarker images of budding yeast cells (Kraus et al., 2017), shown in Fig. 3e. Each image consists of four stains, where blue shows the cytoplasmic region, pink the nuclear region, red the bud neck, and green the protein of interest. The classification for each image is the cellular subcompartmental region in which the protein is expressed, such as the cell periphery, mitochondria, or eisosomes, some of which exhibit very subtle differences.

Fig. 3f-g shows the results of using CFNet, G-CNN, and a standard CNN to classify the protein localization for each image. We used the same architecture as reported in Kraus et al. (2017) for all methods, except that we removed the last convolutional layer and reduced the number of filters per layer by roughly half for CFNet and G-CNN, to offset the encoding of equivariance and invariance. The same training parameters and data augmentation were used as for the synthetic data, except that a dropout probability of 0.8 was applied at the final layer and the maximum jitter was increased to five pixels, since many examples were not well centered. For each method, several iterations were run, and the spread and the best performing model are shown.
3f-g), demonstrating that the gains of the 2D-DFT and proposed convolutional layers translate to real-world microscopy data. We note that the best reported algorithm that did not use deep learning, called ensLOC (Chong et al., 2015; Koh et al., 2015), was only able to achieve an average precision of 0.49 for a less challenging set of yeast phenotypes and with 20,000 samples, whereas all runs of CFNet achieved an average precision of between 0.60 and 0.67 with 10% of the data used for training.

5 CONCLUSION

We have demonstrated the effectiveness of enforcing rotation equivariance and invariance in CNNs by means of the proposed conic convolutional layer and the 2D-DFT, even for group convolution. We believe that the proposed enhancements to the standard CNN will have much utility for future applications in relevant problem settings, in particular, high-throughput molecular and cellular imaging data (Rozenblatt-Rosen et al., 2017), where training data is usually sparse, especially for rare cellular events. In future work, we believe CFNet could be even further improved by constructing steerable filters, as in (Weiler et al., 2018), to overcome sampling artifacts from rotating filters.
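The key invariance mechanism above, that the magnitude of the 2D-DFT is unchanged when input rotations appear as circular shifts along the rotation axis of the transition-layer response, can be checked with a minimal NumPy sketch. This is our own illustration, not code from the paper; the shape of the hypothetical response matrix psi (N filters by 4R rotations) and all names are illustrative assumptions:

```python
import numpy as np

# Hypothetical transition-layer response: N filters x 4R sampled rotations.
N, four_R = 8, 16
psi = np.random.randn(N, four_R)

# A rotation of the input manifests as a circular shift along the rotation axis.
psi_rotated = np.roll(psi, shift=3, axis=1)

# The 2D-DFT magnitude is invariant to such circular shifts, so both inputs
# yield the same rotation-invariant feature for the fully-connected layers.
feat = np.abs(np.fft.fft2(psi))
feat_rotated = np.abs(np.fft.fft2(psi_rotated))
assert np.allclose(feat, feat_rotated)
```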
Hkgm-rB22m
Okay paper with limited novelty and lacking experimental evidence for main supposed advantages of proposed method
4: Ok but not good enough - rejection
Disclaimer: I've already reviewed an earlier version of this manuscript for NIPS 2018. # Summary In the context of image classification, the paper proposes a convolutional neural network architecture with rotation-equivariant feature maps that are eventually made rotation-invariant by using the magnitude of the 2D discrete Fourier transform (DFT). Classification experiments are conducted on three different datasets. The problem of rotation-invariant image classification addressed by the paper is important, since structures in images may appear at arbitrary orientations in many applications (e.g. microscopy). # Novelty The general ideas of the paper are sound, however, they seem to be of rather minor novelty. The paper claims two main novelties: (1) conic convolutions and (2) using the DFT magnitude for rotation-invariant classification in the context of CNNs. - While (1) seems novel, the paper doesn't convince me that conic convolutions would be useful in practice. While they are more memory-efficient, they achieve this by actually computing fewer features (each conic region is only processed by a particular orientation of a convolution kernel). Hence, they should (theoretically) be less powerful than group convolutions (i.e. G-CNN; the whole image is processed by every orientation of a convolution kernel). Furthermore, there are no experiments that demonstrate the advantages of the lower memory footprint of conic convolutions. - Novelty (2) holds only because it hasn't been used in the context of CNNs before, but there is no technical novelty here. Using the DFT magnitude for rotational invariance is an orthogonal contribution that can also be applied to G-CNN, which the paper also evaluates (this is good). # Experiments - Rotation-equivariant CNNs are expected to perform better than standard CNNs when only limited training data is available. However, this is not thoroughly evaluated in the paper, which only does a very coarse comparison (trained with N=50 or N=100 images). Since this is the main advantage of CNNs with built-in rotation-equivariance, I expect a more thorough evaluation showing the results for several different training data sizes. - The savings in terms of computational and especially storage efficiency of CFNet are not really evaluated, only briefly mentioned in the very short section 3.4. Again, this needs to be expanded since computational/memory efficiency is a supposed main advantage of conic convolutions over group convolutions. - In section 4.2: Why are image rotations used for data augmentation? (The whole point of the compared classification methods (except for "CNN") is to be rotation-invariant.) It would be interesting to show the results with and without image rotation augmentation. - G-CNN+DFT is missing for the budding yeast cell classification experiment (section 4.3). As mentioned before, I suspect it would perform well or even better than CFNet. Also, why is Worrall et al. (2017) not compared to in sections 4.2 and 4.3? (It was the best method for rotated MNIST.) # Clarity Although the writing is grammatically well done, I found it difficult to follow the explanation of the proposed method. In particular, the mathematics often add to my confusion instead of clearing it up. Given that the proposed concepts are not actually that complicated, I feel the paper makes heavy use of *mathiness* ("the use of mathematics that obfuscates or impresses rather than clarifies" [1]). [1]: Zachary C. Lipton, Jacob Steinhardt.
"Troubling Trends in Machine Learning Scholarship.", https://arxiv.org/abs/1807.03341 # Missing explanations - Standard pooling will not preserve rotation-equivariance (RE). While section 3.2 mentions this, it doesn't really explain how pooling is changed to preserve RE. Furthermore, it is also not explained why a deep network based on conic convolutions remains RE after several downsampling and conic convolution layers. I feel there's a problem when the conic regions become tiny after several downsampling operations. Fig. 2 shows that fewer conic regions are used then, limiting the equivariance to 90 degree rotations. This seems like a conceptual limitation. - The paper says that the conic convolution layer uses a "filter size of three pixels", but fails to mention that this means there are currently strong interpolation artifacts, especially for finer degrees of rotation (the paper only rotates at most 8 times). Section 4 only briefly mentions that this would be alleviated by using steerable filters.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Rotation Equivariant Networks via Conic Convolution and the DFT ### Paper Abstract Performance of neural networks can be significantly improved by encoding known invariance for particular tasks. Many image classification tasks, such as those related to cellular imaging, exhibit invariance to rotation. In particular, to aid convolutional neural networks in learning rotation invariance, we consider a simple, efficient conic convolutional scheme that encodes rotational equivariance, along with a method for integrating the magnitude response of the 2D-discrete-Fourier transform (2D-DFT) to encode global rotational invariance. We call our new method the Conic Convolution and DFT Network (CFNet). We evaluated the efficacy of CFNet as compared to a standard CNN and group-equivariant CNN (G-CNN) for several different image classification tasks and demonstrated improved performance, including classification accuracy, computational efficiency, and its robustness to hyperparameter selection. Taken together, we believe CFNet represents a new scheme that has the potential to improve many imaging analysis applications. ### Paper Keywords ["deep learning", "rotation equivariance", "bioimaging analysis"] ### Paper Content ABSTRACT

Performance of neural networks can be significantly improved by encoding known invariance for particular tasks. In particular, to aid convolutional neural networks in learning rotation invariance, we consider a simple, efficient conic convolutional scheme that encodes rotational equivariance, along with a method for integrating the magnitude response of the 2D-discrete-Fourier transform (2D-DFT) to encode global rotational invariance. We call our new method the Conic Convolution and DFT Network (CFNet). We evaluated the efficacy of CFNet as compared to a standard CNN and group-equivariant CNN (G-CNN) for several different image classification tasks and demonstrated improved performance, including classification accuracy, computational efficiency, and its robustness to hyperparameter selection. Taken together, we believe CFNet represents a new scheme that has the potential to improve many imaging analysis applications.

1 INTRODUCTION

Though the appeal of neural networks is their versatility for arbitrary classification tasks, there is still much benefit in designing them for particular problem settings. In particular, their effectiveness can be greatly increased by encoding invariance to uninformative augmentations of the data (LeCun et al., 1989). If such invariance is not explicitly encoded, the network must learn it from the data, perhaps with the help of data augmentation, requiring more parameters and thereby increasing its susceptibility to overfitting.

A key invariance inherent to several computer vision settings, including satellite imagery and all forms of microscopy imagery, is rotation (Cheng et al., 2016; Boland & Murphy, 2001). Recently, there have been a variety of proposed approaches for encoding rotation equivariance and invariance, the most promising of which have formulated convolution over groups (Cohen & Welling, 2016; Weiler et al., 2018).
Notably, G-CNNs have been applied to several biological imaging tasks, producing state-of-the-art results (Weiler et al., 2018; Bekkers et al., 2018; Li et al., 2018).

Here we propose a new rotation-equivariant convolutional scheme, called conic convolution, which, in contrast to group convolution, encodes equivariance while still operating over only the spatial domain. Rather than convolving each filter across the entire image, as in standard convolution, rotated filters are convolved over corresponding conic regions of the input feature map that emanate from the origin, thereby transforming rotations in the input directly to rotations in the output. This scheme is intuitive, simple to implement, and computationally efficient. We also show that the method yields improved performance over group convolution on several relevant applications.

Additionally, we propose the integration of the magnitude response of the 2D-discrete-Fourier transform (2D-DFT) into a transition layer between convolutional and fully-connected layers to encode rotational invariance. Though the insight of using the DFT to encode rotational invariance has been employed for texture classification using wavelets (Do & Vetterli, 2002; Jafari-Khouzani & Soltanian-Zadeh, 2005; Ojala et al., 2002; Charalampidis & Kasparis, 2002) and for general image classification (Schmidt & Roth, 2012), as of yet, its application to CNNs has been overlooked. As in these prior works, rotations of the input are transformed to circular shifts, to which the magnitude response of the 2D-DFT is invariant, in the transformed space. Most other recently proposed rotation-invariant CNNs impose this invariance by applying a permutation-invariant operation, such as the average or maximum, over the rotation group, but since this operation is applied for each filter individually, possibly valuable pose information between filters is lost. In contrast, the 2D-DFT is able to integrate mutual pose information between different filter responses, yielding richer features for subsequent layers.

We demonstrate the effectiveness of these two novel contributions for various applications: classifying rotated MNIST images, classifying synthetic images that model biomarker expression in microscopy images of cells, and localizing proteins in budding yeast cells (Kraus et al., 2017). We show that CFNet improves classification accuracy generally over the standard raster convolution formulation and over the equivariant method of G-CNN across these settings. We also show that the 2D-DFT clearly improves performance across these diverse data sets, and it does so not only for conic convolution, but also for group convolution. Source code for the implementation of CFNet will be made available on GitHub.

2 RELATED WORK

Cohen & Welling (2016) introduced G-CNNs by formulating convolution over groups, including rotation, translation, and flips, for neural networks, which has inspired many subsequent improvements. By convolving over groups, equivariance to these groups is maintained throughout the convolutional layers, and invariance is enforced at the end of the network by pooling over groups. This work was improved upon by the design of steerable filters (Weiler et al., 2018) for convolution, similar to those proposed by Worrall et al. (2017), which allow for finer sampling of rotations of filters without inducing artifacts.
Steerable filters were first proposed by Freeman & Adelson (1991) and had been explored previously for image classification (Liu et al., 2014), but as shallow features in the context of HOG descriptors.

An alternative means of encoding rotational equivariance is to transform the domain of the image to an alternative domain, such as the log-polar domain (Schmidt & Roth, 2012; Henriques & Vedaldi, 2017), in which rotation becomes some other transformation that is easier to manage, in this case, translations. The suitability of this transformation depends upon the signal of interest, since this warping will introduce distortion, as pixels near the center of the image are sampled more densely than pixels near the perimeter. In addition, its stability to translations in the original domain is of concern. Our proposed CFNet, by convolving over conic regions, also encodes global rotation equivariance about the origin, but without introducing such distortion, which greatly helps mitigate its susceptibility to translation. The recently developed spatial transform layer (Jaderberg et al., 2015) and deformable convolutional layer (Dai et al., 2017) allow the network to learn non-regular sampling patterns and can potentially help learning rotation invariance, though invariance is not explicitly enforced, which would most likely be a challenge for tasks with small training data.

A simple means for achieving rotation equivariance and invariance was proposed by Dieleman et al. (2016), in which feature maps of standard CNNs are made equivariant or invariant to rotation by combinations of cyclic slicing, stacking, rolling, and pooling. RotEqNet (Marcos et al., 2017) improved upon this idea by storing, for each feature map for a corresponding filter, only the maximal response across rotations and the value of the corresponding rotation, to preserve pose information. This approach yielded improved results and considerable storage savings over (Dieleman et al., 2016) and G-CNN. These methods are most similar to our proposed conic convolution. However, in contrast, our method applies each filter only at the appropriate rotation within each conic region, which further saves on storage.

To enforce rotation invariance, as noted, most of the previous methods apply some permutation-invariant, or pooling, operation over rotations. Cheng et al. (2016) recently proposed a strategy of encouraging a network to learn a rotation-invariant transform, and follow-up work improved this learning process by incorporating a Fisher discriminant penalty (Cheng et al., 2018). However, the convolutional layers of the network do not maintain the property of rotation equivariance with the input image, which requires that the network learn this equivariance and could therefore hinder performance. Also, learning such a transform that generalizes to unseen data could prove difficult for settings with limited training data. Schmidt & Roth (2012) previously proposed the 2D-DFT for rotational invariance. However, no method has yet been proposed to integrate the 2D-DFT into a rotation-equivariant CNN.

[Figure 1: Comparison of convolution schemes (standard convolution, group convolution, conic convolution, and convolution on the log-polar transform). The domain of filter 'F' in the input and its corresponding outputs in the feature map are colored red. That of the rotation of 'F' by 180 degrees is colored blue.
The local support on the domain for the convolution at a few points for each scheme is shown in gray. Conic convolution, with rotations of 90 degrees in this example, encodes rotation equivariance without introducing distortion to the support of the filter in the original domain (unlike the log-polar transform) and without requiring additional storage for feature maps (unlike group convolution). The example shown for group convolution is the first layer of a G-CNN, mapping from $\mathbb{Z}^2$ to the roto-translation group.]

3 FORMULATION OF ROTATION EQUIVARIANCE AND INVARIANCE

3.1 ROTATION-EQUIVARIANT QUADRANT CONVOLUTIONAL LAYERS

We begin our formulation with a simpler, special case of conic convolution, which we call quadrant convolution. Its only difference from standard convolution is that the filter being convolved is rotated by $r\pi/2$, $r \in \{0,1,2,3\}$, depending upon the corresponding quadrant of the domain. We show that for quadrant convolution, rotations of $\pi/2$ in the input are straightforwardly associated with rotations in the output feature map, which is a special form of equivariance called same-equivariance (as coined by Dieleman et al. (2016)).

For convenience, we represent feature maps of dimension $K$, $f : \mathbb{Z}^2 \to \mathbb{R}^K$, and filters, $\psi : \mathbb{Z}^2 \to \mathbb{R}^K$, of a network as functions over 2D space, as in (Cohen & Welling, 2016). Relevant to our formulation is the set, or group, $G$ of two-dimensional rotation matrices of multiples of $\pi/2$, which can be easily parameterized by $g(r)$, and which acts on points in $\mathbb{Z}^2$ by matrix multiplication, i.e., for a given point $x = (u, v) \in \mathbb{Z}^2$,

$$g(r)x = \begin{pmatrix} \cos(r\pi/2) & -\sin(r\pi/2) \\ \sin(r\pi/2) & \cos(r\pi/2) \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix}. \quad (1)$$

Let $T_g$ denote the transformation of a function by a rotation in $G$, where $T_g f(x) \triangleq f(g^{-1}x)$ applies the inverse of $g$ to an element of the domain of $f$. For an operation $\Phi : \mathcal{F} \to \mathcal{F}$, $\mathcal{F}$ being the set of $K$-dimensional functions $f$, to exhibit same-equivariance, applying rotation either before or after the operation yields the same result, i.e.

$$T_g \Phi(f) = \Phi(T_g f). \quad (2)$$

We now define quadrant convolution. The expression for convolution in a standard CNN is given by

$$[\psi \star f](x) = \sum_k \sum_{x' \in \mathbb{Z}^2} \psi_k(x')\, f_k(x - x'). \quad (3)$$

As noted in Cohen & Welling (2016), standard convolution does not exhibit rotational equivariance unless certain constraints on the filters are met.

[Figure 2: The overall architecture of the proposed CFNet. (a) Filtering the image by various filters at rotations in corresponding conic regions preserves rotation equivariance. (b) Subsequent convolutional feature maps are filtered similarly. Rotation invariance is encoded by the transition from convolutional to fully-connected layers, which consists of (c) element-wise multiplication and sum, denoted by $\odot$, with rotated weight tensors, transforming rotation to circular shift, and (d) application of the magnitude response of the 2D-DFT to encode invariance to such shifts. (e) This output is reshaped and passed through the final, fully-connected layers.]

Quadrant convolution can be interpreted as weighting the convolution for each rotation with a function $\omega : \mathbb{Z}^2 \to [0,1]$ that simply "selects" the appropriate quadrant of the domain, which we define as

$$\omega_q(u, v) \triangleq \begin{cases} 1 & u > 0,\ v \geq 0; \\ 1/4 & (u, v) = (0, 0); \\ 0 & \text{else}. \end{cases} \quad (4)$$

Since the origin does not strictly belong to a particular quadrant, it is handled simply by averaging the response of the filter at all four rotations.
Boundary values are assigned arbitrarily, but consistently, by the placement of the equality for either $u$ or $v$. The output of the layer is then given by

$$\Phi(f) \triangleq \sum_{g \in G} [T_g \omega] \odot [[T_g \psi] \star f]. \quad (5)$$

Example convolutional regions with appropriate filter rotations are shown in Fig. 1. The equivariance property is established (see Appendix) independent of the definition of $\omega$, yet its definition will greatly influence the performance of the network. For example, if $\omega$ is simply the constant $1/4$, we have the simple example of equivariance mentioned above, equivalent to averaging the filter responses.

3.2 GENERALIZATION TO CONIC CONVOLUTIONAL LAYERS

The above formulation can be generalized to conic convolution, in which the rotation angle is decreased by an arbitrary factor to $\pi/2R$, for some positive integer $R$, instead of being fixed to $\pi/2$. Rather than considering quadrants of the domain, we can consider conic regions emanating from the origin, defined by

$$C = \{(x, y) \in \mathbb{Z}^2 : 0 \leq \operatorname{arccot}(x/y) + \pi I(y < 0) < \pi/2R\}, \quad (6)$$

where $I(\cdot)$ is the indicator function. The weighting function is changed to have value one only over this conic region:

$$\omega_R(u, v) \triangleq \begin{cases} 1 & (u, v) \in C; \\ 1/4R & (u, v) = (0, 0); \\ 0 & \text{else}, \end{cases} \quad (7)$$

of which $\omega_1 = \omega_q$ is a special case.

If we consider feature maps to be functions over the continuous domain $\mathbb{R}^2$, instead of $\mathbb{Z}^2$, and define the group $G_R$, with parameterization

$$g_R(r)x = \begin{pmatrix} \cos(r\pi/2R) & -\sin(r\pi/2R) \\ \sin(r\pi/2R) & \cos(r\pi/2R) \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix}, \quad (8)$$

for $r \in \{0, 1, \ldots, 4R-1\}$ and $x = (u, v) \in \mathbb{R}^2$, it is easy to show similarly as above that

$$\Phi_R(f) \triangleq \sum_{g \in G_R} [T_g \omega_R] \odot [[T_g \psi] \star f] \quad (9)$$

is equivariant to $G_R$. However, due to subsampling artifacts when discretizing $\mathbb{R}^2$ to $\mathbb{Z}^2$, as in an image, rotation equivariance for arbitrary values of $R$ cannot be guaranteed and can only be approximated. In particular, the filters will have to be interpolated for rotations that are not a multiple of $\pi/2$. In our experiments, we chose nearest-neighbor interpolation, which at least preserves the energy of the filter under rotations. This defect notwithstanding, it can be shown that conic convolution maintains equivariance to rotations of $\pi/2$, and as our experiments show in the following section, the approximation of finer angles of rotation can still improve performance. Additionally, we note that $R$ need not be the same for each layer, and it may be advantageous to use a finer discretization of rotations for early layers, when the feature maps are larger, and gradually decrease $R$.

3.3 NON-LINEAR OPERATIONS

A note must be made about subsequent nonlinear operations for a convolutional layer. It is typical in convolutional networks to perform subsampling, either by striding the convolution or by spatial pooling, to reduce the dimensionality of subsequent layers. Again, due to downsampling artifacts, rotational equivariance to rotations smaller than $\pi/2$ is not guaranteed. However, given that the indices of the plane of the feature map are in $\mathbb{Z}^2$ and are therefore centered about the origin, a downsampling of $D \in \mathbb{Z}_{>0}$ can be applied while maintaining rotational equivariance for rotations of $\pi/2$, regardless of the choice of $R$. After subsampling, the result is passed through a non-linear activation function $\sigma : \mathbb{R} \to \mathbb{R}$, such as ReLU, with an added offset $c_k \in \mathbb{R}$.

3.4 EFFICIENCY OF COMPUTATION AND MEMORY USAGE

In theory, the response for each rotation in conic convolution is only needed over its corresponding conic region. However, since GPUs are more efficient operating on rectangular inputs, it is faster to compute the convolution over each quadrant in which the conic region resides.
In current neural network libraries, the output of conic convolution can be achieved by convolving over the corresponding quadrant, multiplying by the weighting function, summing the responses in each quadrant together, and then concatenating the responses of the quadrants (a minimal sketch of this masked-rotation computation is given below). For the special case of quadrant convolution, this process incurs negligible additional computation beyond standard convolution. Additionally, conic convolution produces only one feature map per filter, as in standard convolution, and therefore incurs no additional storage costs, in contrast to G-CNN and cyclic slicing, which both produce one map per rotation (Cohen & Welling, 2016; Dieleman et al., 2016), and two for RotEqNet, one for the filter response and one for the orientation (Marcos et al., 2017).

3.5 ROTATION-INVARIANT TRANSITION USING THE MAGNITUDE OF THE 2D-DFT

After the final convolutional layer of a CNN, some number of fully-connected layers will be applied to combine information from the various filter responses. In general, fully-connected layers will not maintain rotation equivariance or invariance properties. In a fully-convolutional network, convolution and downsampling are applied until the spatial dimensions are eliminated and the resulting feature map of the final convolutional layer is merely a vector, with dimension equal to the number of filters.

Rather than encoding invariance for each filter separately, as in most other recent works (Cohen & Welling, 2016; Weiler et al., 2018), we consider instead to transform the collective filter responses to a space in which rotation becomes circular shift, so that the 2D-DFT can be applied to encode invariance. The primary merit of the 2D-DFT as an invariant transform is that each output node is a function of every input node, and not just the nodes of a particular filter response, thereby capturing mutual information across responses.

Since the formulation of this transition involves the DFT, which is defined only for finite-length signals, we switch to representing feature maps as tensors, rather than functions. We denote the feature map generated by the penultimate convolutional layer by $f \in \mathbb{R}^{M \times M \times K}$, where $M \in \mathbb{Z}_{>1}$. In a fully-convolutional network, the final convolutional layer is in reality just a fully-connected layer, in which the input $f$ is passed through $N$ fully-connected filters, $\psi^{(n)} \in \mathbb{R}^{M \times M \times K}$, $n \in \{0, 1, \ldots, N-1\}$. The operation of this layer can be interpreted as the inner product of the function and filter, $\langle \psi^{(n)}, f \rangle$. If we again consider rotations of the filter from the group $G_R$,

$$\Psi(n, r) \triangleq \langle T_{g_R(r)} \psi^{(n)}, f \rangle, \quad (10)$$

this is equivalent to the first layer of a G-CNN, mapping from the spatial domain to $G_R$ (though this group does not include the translation group, since the convolution is only applied at the origin), and rotations of the input to the final convolutional layer $f$ will correspond to permutations of $G_R$, which are just circular shifts in the second dimension of the matrix $\Psi$. The magnitude response of the 2D-DFT can be applied to $\Psi$ to transform these circular shifts to an invariant space,

$$|\mathrm{DFT}\{\Psi\}|(n, r) = \left| \sum_{n'=0}^{N-1} \sum_{r'=0}^{4R-1} \Psi(n', r')\, e^{-j 2\pi \left( \frac{n' n}{N} + \frac{r' r}{4R} \right)} \right|. \quad (11)$$

This process of encoding rotation invariance corresponds to the 'Convolutional-to-Full Transition' in Fig. 2. The result is then vectorized and passed into fully-connected layers that precede the final output layer, as in a standard CNN.
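As a rough, illustrative sketch of the quadrant convolution of Eq. (5), the following assumes a square, single-channel feature map; the names, shapes, and the use of SciPy's correlation are our own assumptions rather than the actual implementation:

```python
import numpy as np
from scipy.signal import correlate2d

def quadrant_convolution(f, psi):
    """Sketch of Eq. (5) for one channel: sum over the four 90-degree
    rotations g of (rotated quadrant mask) * (rotated filter correlated
    with the feature map). Origin handling of Eq. (4) is ignored here."""
    h, w = f.shape
    assert h == w, "sketch assumes a square map centered on the origin"
    u = np.arange(w) - (w - 1) / 2.0       # x-coordinate of each column
    v = (h - 1) / 2.0 - np.arange(h)       # y-coordinate of each row (top = +v)
    vv, uu = np.meshgrid(v, u, indexing="ij")
    omega = (uu > 0) & (vv >= 0)           # omega_q of Eq. (4)

    out = np.zeros_like(f, dtype=float)
    for r in range(4):
        mask = np.rot90(omega, k=r)        # T_g omega: rotate the quadrant mask
        psi_r = np.rot90(psi, k=r)         # T_g psi: rotate the filter by r*pi/2
        out += mask * correlate2d(f, psi_r, mode="same")
    return out
```

Because each rotated filter contributes only inside its own masked region, restricting the four correlations to their quadrants is what keeps the cost close to a single standard convolution.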
We note that it helped in practice to apply batch normalization after vectorizing, since the output of the magnitude of the 2D-DFT will not be normalized as such. The 2D-DFT, as a rotation-invariant transform, can also be integrated into other rotation-equivariant networks, such as G-CNN. At the final layer of a fully-convolutional G-CNN, since the spatial dimension has been eliminated through successive convolutions and spatial downsampling, rotation is encoded along contiguous stacks of feature maps $f \in \mathbb{R}^{L \times 4}$ of each filter at four rotations. In this way, rotations similarly correspond to circular shifts in the final dimension. This representation is then passed through the 2D-DFT, as in Eqn. 11.

4 RESULTS

4.1 APPLICATION TO ROTATED MNIST

The rotated MNIST data set (Larochelle et al., 2007) has been used as a benchmark for several previous works on rotation invariance. As in previous works, to tune the parameters of each method, we first trained various models on a set of 10,000 images, using training augmentation of rotations of arbitrary angles as in Cohen & Welling (2016) (Footnote 1: Though the paper did not state the use of training augmentation, code posted by the authors at https://github.com/tscohen/gconv_experiments indicates rotations of arbitrary angles were used.), and then selected the best model based on the accuracy on a separate validation set of 5,000 images.

Our best CFNet architecture consisted of six convolution layers; the first were conic convolutions of R = 8 for the first three layers and R = 4 for the next four, with spatial max-pooling after the second layer. We used a filter size of three pixels, with 15 filters per layer. The final convolutional layer was the DFT transition layer as described in the previous section, which was followed by an output layer of ten nodes. This architecture was similar in terms of number of layers and filters per layer to that of the G-CNN of Cohen & Welling (2016). To evaluate the G-CNN with the DFT, the only change we made from the reported architecture for G-CNN was to reduce the number of filters for each layer to 7, to offset the addition of the 2D-DFT, which was applied to the output of the final convolutional layer.

The results on a held-out set of 50,000 test images are shown in Table 1. Adding the DFT transition to the output of G-CNN reduces the test error by 0.28%, demonstrating the value of incorporating mutual rotational information between filters when encoding invariance.

Table 1: Test error on the rotated MNIST data set.
  Algorithm                        Test Error (%)
  Schmidt & Roth (2012)            3.98
  Cohen & Welling (2016) (CNN)     5.03
  Cohen & Welling (2016) (G-CNN)   2.28
  G-CNN + DFT                      2.00
  CFNet                            1.75
  Worrall et al. (2017) (H-Net)    1.69

[Figure 3: Comparison of CFNet, CNet, G-CNN, G-CNN+DFT, and a standard CNN on the GMM synthetic biomarker images and on images of protein localization. (a) Example images of three of the 50 classes and (b) rotated examples from a single class, shown as heat maps for detail, showing inter- and intra-class variation. (c,d) Test classification accuracy over training steps on synthetic GMM images for N = 50 and N = 100 training examples per class. (e) Example images of cells of four of the 22 yeast phenotypes from Kraus et al. (2017). (f,g) Test classification accuracy on protein localization for N = 50 and N = 100.]
The replacement of group convolution with conic convolution in CFNet leads to an even further reduction in error of 0.25%. Even with its simple conic convolutional scheme, CFNet is able to perform comparably to H-Net (Footnote 2: This result is without training augmentation; since H-Net can learn equivariance to arbitrary angles, augmenting with rotations might not improve performance.), which constructs filters from the circular harmonic basis and operates on complex feature maps.

4.2 APPLICATION TO SYNTHETIC BIOMARKER IMAGES

In order to explicitly control the manifestation of rotational invariance, we created a set of synthetic images, based upon Gaussian-mixture models (GMMs), which can also be used to emulate real-world microscopy images of biological signals (Zhao & Murphy, 2007). Example synthetic images from across and within classes are shown in Fig. 3a and Fig. 3b, respectively. We defined 50 distribution patterns and generated 50 and 100 examples per class for training and 200 examples per class for testing. Each class was defined by a mixture of ten Gaussians. The image size was 50 pixels. A batch size of 50 examples, a learning rate of 5 × 10^-3, and a weight decay (ℓ2 penalty) of 5 × 10^-4 were used during training. To help all methods, we augmented the training data by rotations and random jitter of up to three pixels, as was done during image generation.

Classification accuracies on the test set over training steps for various numbers of training samples, denoted by N, for several methods are shown in Figs. 3c-3d. A variety of configurations were trained for each network, and each configuration was trained three times. The darkest line shows the accuracy of the configuration that achieved the highest moving average, with a window size of 100 steps, for each method. The spread of each method, which is the area between the point-wise maximum and minimum of the error, is shaded with a light color, and three standard deviations around the mean is shaded darker.

We observe a consistent trend of CFNet outperforming G-CNN, which in turn marginally outperforms the CNN, both in overall accuracy and in terms of the number of steps required to attain that accuracy. Additionally, the spread of CFNet is mostly above even the best performing models of G-CNN and the CNN, demonstrating that an instance of CFNet will outperform the other methods even if the best set of hyperparameters has not been chosen. We also included a network consisting of conic convolutional layers, but without the DFT, noted as 'CNet', to show the relative merit of the DFT. CNet performs comparably to the standard CNN while requiring significantly fewer parameters to attain the same performance, though the true advantage of conic convolution is shown when integrated with the DFT to achieve global rotation invariance. In comparison, including the 2D-DFT increases the performance of G-CNN, to a comparable level with CFNet in fact, though it does not train as quickly.

4.3 APPLICATION TO PROTEIN LOCALIZATION IN BUDDING YEAST CELLS

We extended our analysis to real biomarker images of budding yeast cells (Kraus et al., 2017), shown in Fig. 3e. Each image consists of four stains, where blue shows the cytoplasmic region, pink the nuclear region, red the bud neck, and green the protein of interest. The classification for each image is the cellular subcompartmental region in which the protein is expressed, such as the cell periphery, mitochondria, or eisosomes, some of which exhibit very subtle differences.

Fig.
3f-g shows the results of using CFNet, G-CNN, and a standard CNN to classify the protein localization for each image. We used the same architecture as reported in Kraus et al. (2017) for all methods, except that we removed the last convolutional layer and reduced the number of filters per layer by roughly half for CFNet and G-CNN, to offset the encoding of equivariance and invariance. The same training parameters and data augmentation were used as for the synthetic data, except that a dropout probability of 0.8 was applied at the final layer and the maximum jitter was increased to five pixels, since many examples were not well-centered. For each method, several iterations were run and the spread and the best performing model are shown. Again, CFNet outperforms G-CNN and a standard CNN when the number of training examples per class is either 50 or 100 (see Fig. 3f-g), demonstrating that the gains of the 2D-DFT and proposed convolutional layers translate to real-world microscopy data. We note that the best reported algorithm that did not use deep learning, called ensLOC (Chong et al., 2015; Koh et al., 2015), was only able to achieve an average precision of 0.49 for a less challenging set of yeast phenotypes and with 20,000 samples, whereas all runs of CFNet achieved an average precision of between 0.60 and 0.67 with 10% of the data used for training.

5 CONCLUSION

We have demonstrated the effectiveness of enforcing rotation equivariance and invariance in CNNs by means of the proposed conic convolutional layer and the 2D-DFT, even for group convolution. We believe that the proposed enhancements to the standard CNN will have much utility for future applications in relevant problem settings, in particular, high-throughput molecular and cellular imaging data (Rozenblatt-Rosen et al., 2017), where training data is usually sparse, especially for rare cellular events. In future work, we believe CFNet could be even further improved by constructing steerable filters, as in (Weiler et al., 2018), to overcome sampling artifacts from rotating filters.<|im_end|> <|im_start|>assistant ### Review Title Okay paper with limited novelty and lacking experimental evidence for main supposed advantages of proposed method ### Review Text Disclaimer: I've already reviewed an earlier version of this manuscript for NIPS 2018. # Summary In the context of image classification, the paper proposes a convolutional neural network architecture with rotation-equivariant feature maps that are eventually made rotation-invariant by using the magnitude of the 2D discrete Fourier transform (DFT). Classification experiments are conducted on three different datasets. The problem of rotation-invariant image classification addressed by the paper is important, since structures in images may appear at arbitrary orientations in many applications (e.g. microscopy). # Novelty The general ideas of the paper are sound, however, they seem to be of rather minor novelty. The paper claims two main novelties: (1) conic convolutions and (2) using the DFT magnitude for rotation-invariant classification in the context of CNNs. - While (1) seems novel, the paper doesn't convince me that conic convolutions would be useful in practice. While they are more memory-efficient, they achieve this by actually computing fewer features (each conic region is only processed by a particular orientation of a convolution kernel). Hence, they should (theoretically) be less powerful than group convolutions (i.e.
G-CNN; the whole image is processed by every orientation of a convolution kernel). Furthermore, there are no experiments that demonstrate the advantages of the lower memory footprint of conic convolutions. - Novelty (2) holds only because it hasn't been used in the context of CNNs before, but there is no technical novelty here. Using the DFT magnitude for rotational invariance is an orthogonal contribution that can also be applied to G-CNN, which the paper also evaluates (this is good). # Experiments - Rotation-equivariant CNNs are expected to perform better than standard CNNs when only limited training data is available. However, this is not thoroughly evaluated in the paper, which only does a very coarse comparison (trained with N=50 or N=100 images). Since this is the main advantage of CNNs with built-in rotation-equivariance, I expect a more thorough evaluation showing the results for several different training data sizes. - The savings in terms of computational and especially storage efficiency of CFNet are not really evaluated, only briefly mentioned in the very short section 3.4. Again, this needs to be expanded since computational/memory efficiency is a supposed main advantage of conic convolutions over group convolutions. - In section 4.2: Why are image rotations used for data augmentation? (The whole point of the compared classification methods (except for "CNN") is to be rotation-invariant.) It would be interesting to show the results with and without image rotation augmentation. - G-CNN+DFT is missing for the budding yeast cell classification experiment (section 4.3). As mentioned before, I suspect it would perform well or even better than CFNet. Also, why is Worrall et al. (2017) not compared to in sections 4.2 and 4.3? (It was the best method for rotated MNIST.) # Clarity Although the writing is grammatically well done, I found it difficult to follow the explanation of the proposed method. In particular, the mathematics often add to my confusion instead of clearing it up. Given that the proposed concepts are not actually that complicated, I feel the paper makes heavy use of *mathiness* ("the use of mathematics that obfuscates or impresses rather than clarifies" [1]). [1]: Zachary C. Lipton, Jacob Steinhardt. "Troubling Trends in Machine Learning Scholarship.", https://arxiv.org/abs/1807.03341 # Missing explanations - Standard pooling will not preserve rotation-equivariance (RE). While section 3.2 mentions this, it doesn't really explain how pooling is changed to preserve RE. Furthermore, it is not explained why a deep network based on conic convolutions remains RE after several downsampling and conic convolution layers. I feel there's a problem when the conic regions become tiny after several downsampling operations. Fig. 2 shows that fewer conic regions are used then, limiting the equivariance to 90-degree rotations. This seems like a conceptual limitation. - The paper says that the conic convolution layer uses a "filter size of three pixels", but fails to mention that this means there are currently strong interpolation artifacts, especially for finer degrees of rotation (the paper only rotates at most 8 times). Section 4 only briefly mentions that this would be alleviated by using steerable filters. ### Review Rating 4: Ok but not good enough - rejection ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
S1lVhxSYPH
ICLR.cc/2020/Conference
2020
Ternary MobileNets via Per-Layer Hybrid Filter Banks
["Dibakar Gope", "Jesse G Beu", "Urmish Thakker", "Matthew Mattina"]
The MobileNets family of computer vision neural networks has fueled tremendous progress in the design and organization of resource-efficient architectures in recent years. New applications with stringent real-time requirements in highly constrained devices require further compression of MobileNets-like already compute-efficient networks. Model quantization is a widely used technique to compress and accelerate neural network inference, and prior works have quantized MobileNets to 4-6 bits, albeit with a modest to significant drop in accuracy. While quantization to sub-byte values (i.e. precision ≤ 8 bits) has been valuable, even further quantization of MobileNets to binary or ternary values is necessary to realize significant energy savings and possibly runtime speedups on specialized hardware, such as ASICs and FPGAs. Under the key observation that convolutional filters at each layer of a deep neural network may respond differently to ternary quantization, we propose a novel quantization method that generates per-layer hybrid filter banks consisting of full-precision and ternary weight filters for MobileNets. The layer-wise hybrid filter banks essentially combine the strengths of full-precision and ternary weight filters to derive a compact, energy-efficient architecture for MobileNets. Using this proposed quantization method, we quantized a substantial portion of the weight filters of MobileNets to ternary values, resulting in 27.98% savings in energy and a 51.07% reduction in the model size, while achieving comparable accuracy and no degradation in throughput on specialized hardware in comparison to the baseline full-precision MobileNets.
["Model compression", "ternary quantization", "energy-efficient models"]
ABSTRACT

The MobileNets family of computer vision neural networks has fueled tremendous progress in the design and organization of resource-efficient architectures in recent years. New applications with stringent real-time requirements in highly constrained devices require further compression of MobileNets-like already compute-efficient networks. Model quantization is a widely used technique to compress and accelerate neural network inference, and prior works have quantized MobileNets to 4-6 bits, albeit with a modest to significant drop in accuracy. While quantization to sub-byte values (i.e. precision ≤ 8 bits) has been valuable, even further quantization of MobileNets to binary or ternary values is necessary to realize significant energy savings and possibly runtime speedups on specialized hardware, such as ASICs and FPGAs. Under the key observation that convolutional filters at each layer of a deep neural network may respond differently to ternary quantization, we propose a novel quantization method that generates per-layer hybrid filter banks consisting of full-precision and ternary weight filters for MobileNets. The layer-wise hybrid filter banks essentially combine the strengths of full-precision and ternary weight filters to derive a compact, energy-efficient architecture for MobileNets. Using this proposed quantization method, we quantized a substantial portion of the weight filters of MobileNets to ternary values, resulting in 27.98% savings in energy and a 51.07% reduction in the model size, while achieving comparable accuracy and no degradation in throughput on specialized hardware in comparison to the baseline full-precision MobileNets.

1 INTRODUCTION

Deeper and wider convolutional neural networks (CNNs) have led to outstanding predictive performance in many machine learning tasks, such as image classification (He et al. (2016); Krizhevsky et al. (2012)), object detection (Redmon et al. (2016); Ren et al. (2015)), and semantic segmentation (Chen et al. (2018); Long et al. (2015)). However, the large model size and corresponding computational inefficiency of these networks often make it infeasible to run many real-time machine learning applications on resource-constrained mobile and embedded hardware, such as smartphones, AR/VR devices, etc. To enable this computation and size compression of CNN models, one particularly effective approach has been the use of the resource-efficient MobileNets architecture. MobileNets introduces depthwise-separable (DS) convolution as an efficient alternative to the standard 3-D convolution operation. While the MobileNets architecture has been transformative, even further compression of MobileNets is valuable in order to make a wider range of applications available on constrained platforms (Gope et al. (2019)).

Model quantization has been a popular technique to facilitate that. Quantizing the weights of MobileNets to binary (-1,1) or ternary (-1,0,1) values in particular has the potential to achieve significant improvement in energy savings and possibly overall throughput, especially on custom hardware such as ASICs and FPGAs, while reducing the resultant model size considerably. This is attributed to the replacement of multiplications by additions in binary- and ternary-weight networks. Multipliers occupy considerably more area on chip than adders (Li & Liu (2016)), and consume significantly more energy than addition operations (Horowitz (2014); Andri et al. (2018)).
A specialized hardware accelerator can therefore trade off multiplications against additions and potentially accommodate considerably more adders than multipliers to achieve a high throughput and significant savings in energy for binary- and ternary-weight networks.

However, prior approaches to binary and ternary quantization (Rastegari et al. (2016); Alemdar et al. (2016); Li & Liu (2016); Tschannen et al. (2018)) incur a significant drop in prediction accuracy for MobileNets. Recent work on StrassenNets (Tschannen et al. (2018)) presents a more mathematically profound way to approximate matrix multiplication computation (and, in turn, convolutions) using mostly ternary weights and a few full-precision weights. It essentially exploits Strassen's algorithm to approximate a matrix multiplication of a weight matrix with feature maps, where the elements of the product matrix are generated by different combinations of a few intermediate terms through additions. Computation of each of the intermediate terms requires a multiplication, along with a combination of different elements of weights and feature maps through additions. The number of intermediate terms (also called the hidden layer width) in StrassenNets therefore determines the addition and multiplication budget of a convolutional layer and in turn decides the approximation error of the corresponding convolution operation. While the results in (Tschannen et al. (2018)) using StrassenNets demonstrate no loss in predictive performance when compared to full-precision models for a few networks, the effectiveness of StrassenNets is quite variable, however, depending on the neural network architecture. We observe, for example, that while strassenifying is effective in reducing the model size of DS convolutional layers, this might come with a prohibitive increase in the number of addition operations, reducing the energy efficiency of neural network inference.

The exorbitant increase in additions primarily stems from the use of wide hidden layers for closely approximating each convolutional filter in a network layer. While this might be required for some of the convolutional filters in a layer, our observations indicate that all filters may not require wide strassenified hidden layers. As different filters in a network layer tend to capture different features, they may respond differently to ternary quantization, and, in turn, to strassenified convolution with a specific number of hidden layer units. Some filters can be harder to approximate using ternary bits than others, and have a larger impact on the model accuracy loss. Furthermore, given a constrained hidden layer budget for StrassenNets, a group of filters extracting fairly similar features at a layer may respond favorably to ternary quantization, while other filters of the layer extracting significantly different features from those may not.

Guided by these insights, we propose layer-wise hybrid filter banks for the MobileNets architecture capable of giving state-of-the-art accuracy levels, while requiring a fraction of the model size and considerably fewer MAC and multiplication operations per inference. The end-to-end learning of hybrid filter banks makes this possible by keeping precision-critical convolutional filters in full-precision values and strassenifying quantization-tolerant filters only to ternary values.
The filters that are most sensitive to quantization errors perform traditional convolutions with input feature maps, whereas ternary quantization-tolerant filters can perform strassenified convolutions using narrow hidden layers. We apply this proposed quantization scheme to the state-of-the-art MobileNets-V1 architecture. The hybrid filter banks for MobileNets achieve a 46.4% reduction in multiplications and a 51.07% reduction in model size while incurring a modest increase in additions. This translates into a 27.98% savings in energy required per inference while ensuring no degradation in throughput on a DNN hardware accelerator consisting of both MACs and adders, when compared to the execution of baseline MobileNets on a MAC-only hardware accelerator. The hybrid filter banks accomplish this with a very minimal loss in accuracy of 0.51%. To the best of our knowledge, the hybrid filter banks proposed in this work are a first step towards quantizing the already compute-efficient MobileNets architecture to ternary values with a negligible loss in accuracy on a large-scale dataset, such as ImageNet.

The remainder of the paper is organized as follows. Section 2 elaborates on the incentives behind the use of per-layer hybrid filter banks for the MobileNets architecture and provides a brief overview of current quantization algorithms, along with our observations from applying them to the MobileNets architecture. Failing to find a good balance between accuracy and computation costs shifts our focus towards designing layer-wise hybrid filter banks for MobileNets. Section 3 describes our hybrid filter banks. Section 4 presents results. Section 5 compares hybrid filter banks against prior works and Section 6 concludes the paper.

2 MODEL QUANTIZATION LIMITATIONS FOR MOBILENETS

Quantization is an extremely popular approach to make DNNs, in particular convolutional neural networks (CNNs), less resource demanding. This section briefly reviews the important existing works on ternary quantization, which we focus on in this paper, and illustrates their limitations to motivate the development of per-layer hybrid filter banks for quantizing MobileNets to ternary values.

2.1 TERNARY QUANTIZATION OF WEIGHTS

In order to observe the impact of ternary quantization (Courbariaux et al. (2015); Rastegari et al. (2016); Lin et al. (2017); Cai et al. (2017); Li & Liu (2016); Zhu et al. (2016); Zhou et al. (2016)), we apply the ternary weight quantization method from (Li & Liu (2016)) over the baseline MobileNets-V1 architecture. It approximates a full-precision weight $W_{fp}$ by a ternary-valued $W_t$ and a scaling factor $\alpha$ such that $W_{fp} \approx \alpha \cdot W_t$. Ternary quantization of the weights of MobileNets achieves a substantial reduction in model size but at the cost of a significant drop (by 9.66%, see Table 1) in predictive performance when compared to the full-precision model. Any increase in the size of the MobileNets architecture to recover the accuracy loss while using ternary quantization will lead to a significant increase in the number of addition operations. Recent work on StrassenNets (Tschannen et al.
(2018)), which we describe next, has shown the potential to achieve near state-of-the-art accuracy for a number of deep CNNs while maintaining an acceptable increase in addition operations.

2.2 STRASSENNETS

Given two 2×2 matrices, Strassen's matrix multiplication algorithm computes their product using 7 multiplications instead of the 8 required by a naïve implementation of matrix multiplication. It essentially converts the matrix multiplication operation to a 2-layer sum-product network (SPN) computation, as shown below:

$$\mathrm{vec}(C) = W_c [(W_b\, \mathrm{vec}(B)) \odot (W_a\, \mathrm{vec}(A))] \quad (1)$$

$W_a, W_b \in K^{r \times n^2}$ and $W_c \in K^{n^2 \times r}$ are ternary matrices with $K = \{-1, 0, 1\}$, $\mathrm{vec}(A)$ and $\mathrm{vec}(B)$ are the vectorizations of the two input square matrices $A, B \in \mathbb{R}^{n \times n}$, and $\mathrm{vec}(C)$ represents the vectorized form of the product $AB$. $\odot$ denotes the element-wise product. The $(W_b\, \mathrm{vec}(B))$ and $(W_a\, \mathrm{vec}(A))$ of the SPN compute $r$ intermediate factors each from additions and/or subtractions of elements of $A$ and $B$, realized by the two associated ternary matrices $W_a$ and $W_b$ respectively. The two generated $r$-length intermediate factors are then element-wise multiplied to produce the $r$-length $(W_b\, \mathrm{vec}(B)) \odot (W_a\, \mathrm{vec}(A))$. The outermost ternary matrix $W_c$ later combines the $r$ elements of the product $(W_b\, \mathrm{vec}(B)) \odot (W_a\, \mathrm{vec}(A))$ in different ways to generate the vectorized form of the product matrix $C$. Therefore, the width of the hidden layer of the SPN, $r$, decides the number of multiplications required for Strassen's matrix multiplication algorithm. For example, given two 2×2 matrices, ternary matrices $W_a$ and $W_b$ with sizes of 7×4 can multiply them using 7 multiplications and 36 additions. It is important to note that Strassen's algorithm requires a hidden layer with 7 units here to compute the exact product matrix that a naïve matrix multiplication algorithm can obtain using 8 multiplications.

Building on top of Strassen's matrix multiplication algorithm, the StrassenNets work (Tschannen et al. (2018)) instead realizes approximate matrix multiplications in DNN layers (Footnote 1: A convolutional operation in DNN layers can be reduced to a general matrix multiplication (GEMM). In the context of strassenified matrix multiplications of a network layer, A is associated with the weights or filters of the layer and B is associated with the corresponding activations or feature maps. As a result, after training, $W_a$ and $\mathrm{vec}(A)$ can be collapsed into a vector $\hat{a} = W_a\, \mathrm{vec}(A)$, as they are both fixed during inference.) using fewer hidden layer units than the standard Strassen's algorithm requires to achieve the exact product matrix. StrassenNets makes this possible by training an SPN-based DNN framework end-to-end to learn the ternary weight matrices from the training data. The learned ternary weight matrices can then approximate the otherwise exact matrix multiplications of the DNN layers with significantly fewer multiplications than Strassen's algorithm. The approximate transforms realized by the SPNs, adapted to the DNN architecture and application data under consideration, can enable precise control over the number of multiplications and additions required per inference, creating an opportunity to tune DNN models to strike an optimal balance between accuracy and computational complexity.

Table 1: Test accuracy along with the number of multiplications, additions, operations and model size for MobileNets-V1 and strassenified MobileNets-V1 (ST-MobileNets) with the width multiplier 0.5 on the ImageNet dataset. r is the hidden layer width of a strassenified convolution layer, c_out is the number of output channels of the corresponding convolution layer.
A multiply-accumulate operation is abbreviated as MAC.

  Network                              Accuracy (%)  Muls   Adds     MACs     Model size  Energy/inference (normalized)  Throughput (normalized)
  MobileNets (float16)                 65.2          -      -        149.49M  2590.07KB   1                              1
  MobileNets (TWN (Li & Liu (2016)))   55.54         -      149.49M  -        323.75KB    0.2                            2
  ST-MobileNets (r = 0.5c_out)         48.92         0.77M  158.54M  8.69M    522.33KB    0.27                           1.69
  ST-MobileNets (r = 0.75c_out)        56.95         1.16M  236.16M  8.69M    631.76KB    0.37                           1.17
  ST-MobileNets (r = c_out)            61.8          1.55M  313.78M  8.69M    741.19KB    0.48                           0.9
  ST-MobileNets (r = 2c_out)           65.14         3.11M  624.27M  8.69M    1178.92KB   0.9                            0.46

The success of StrassenNets in achieving significant compression for 3×3 convolutions (Tschannen et al. (2018)) and the increasing visibility of DS convolutions in resource-constrained networks inspired us to apply StrassenNets over the already compute-efficient MobileNets architecture to reduce its computational costs and model size even further. Further compression of DS layers will not only enable more energy-efficient networks leading to longer-lasting batteries, but will also open up the opportunities for more complex use-cases to fit in the limited memory budget of emergent DNN hardware accelerators. Among the various MobileNets architectures (Howard et al. (2017); Sandler et al. (2018); Howard et al. (2019)), in this work we extensively study the quantization of MobileNets-V1 (Howard et al. (2017)). MobileNets-V1 stacks one 3×3 convolutional layer and 13 DS convolutional layers. A DS convolution first convolves each channel in the input feature map with a separate 2-D filter (depthwise convolution) and then uses 1×1 pointwise convolutions to combine the outputs in the depth dimension.

2.2.1 STRASSENNETS FOR MOBILENETS

We observe that although strassenifying MobileNets reduces multiplications significantly as expected, it increases additions considerably in order to achieve an accuracy comparable to that of the state-of-the-art MobileNets with 16-bit floating-point weights. Table 1 captures our observation. The strassenified network with the r = 2c_out configuration achieves a comparable accuracy to that of the full-precision MobileNets while reducing multiplications by 97.91% but increasing additions by 317.59% (149.49M MACs of MobileNets vs. 3.11M multiplications and 624.27M additions of ST-MobileNets with r = 2c_out). This in turn offers modest savings in energy required per inference but causes significant degradation in throughput (see Section 4 for details). As shown in Table 1, a number of potential values for the hidden layer width (r) were explored. Using fewer hidden units than this, e.g. r = c_out, incurs a significant accuracy loss of 3.4%.

2.2.2 COMPUTE INEFFICIENCY OF STRASSENNETS FOR MOBILENETS

It is important to note here that although the number of additions does increase marginally when strassenifying standard 3×3 or 5×5 convolutional layers (Tschannen et al. (2018)), that trend does not hold true when strassenifying MobileNets, which is dominated by DS layers. This stems from the fact that 1×1 pointwise convolutions dominate the compute bandwidth of a neural network with DS layers (Howard et al. (2017)), and strassenifying a 1×1 pointwise convolution requires executing two equal-sized (for r = c_out) 1×1 convolution operations (with ternary weight filters) in place of the standard 1×1 convolution, as shown in Figure 2(a). This results in a significant increase (2 : 1, or 100%) in additions in comparison to the execution of the standard 1×1 convolution.
[Figure 1: Understanding the sensitivity of individual filters and groups of filters to ternary quantization. (a) Variance in the sensitivity of filters to quantization: the L2 reconstruction loss of a strassenified 3x3 vertical-lines detector ([-1 2 -1; -1 2 -1; -1 2 -1]) versus a sharpen filter ([0 -1 0; -1 5 -1; 0 -1 0]) applied to a small feature map, for 2, 4, and 8 hidden units. (b) Ease of ternary quantization for a filter bank with common values.]

This overhead of addition operations with strassenified DS convolutions increases in proportion to the width of the strassenified hidden layers, i.e. to the size of the ternary convolution operations, as observed in Table 1. As a result, a strassenified DS convolution layer may incur enough overhead in additions to offset the benefit of strassenification.

While Tschannen et al. (2018) demonstrate better trade-offs, requiring only a modest (29.63%) increase in additions when strassenifying the ResNet-18 architecture dominated by $3 \times 3$ convolutions, this does not carry over when StrassenNets is applied to MobileNets. This also indicates that DS convolutions, owing to their greater parameter efficiency compared to $3 \times 3$ convolutions, are more prone to quantization error, and this manifests when StrassenNets is applied. Considering that MAC operations typically consume about five times more energy than addition operations for 16-bit floating-point values (Horowitz (2014); Andri et al. (2018)) (see Section 4 for details), an approximately 317.59% increase in additions in exchange for an approximately 98% saving on multiplications results in diminishing or no returns in terms of energy savings and runtime speedups, even on specialized hardware dominated by adders. This increase in the computational costs associated with strassenified DS convolutions, in conjunction with the high accuracy and low latency requirements of mobile applications, calls for a model architecture exploration that can leverage the compute efficiency of DS layers and the model size reduction of strassenified convolutions while maintaining an acceptable or no increase in additions.

The accuracy drop of a strassenified MobileNets with the $r = c_{out}$ configuration essentially indicates that each layer introduces a certain amount of quantization error owing to the lower hidden width, and that this error accrues over multiple quantized layers. On the other hand, although a strassenified MobileNets with $r = 2c_{out}$ recovers the accuracy loss of the $r = c_{out}$ configuration, it makes the strong assumption that all filters require wider strassenified hidden layers when quantized to ternary bits in order to preserve the representational power of the baseline full-precision network. While this might be true for some of the convolutional filters, not all filters need to be quantized using the $r = 2c_{out}$ configuration. This observation stems from the following two reasons:

(a) Different sensitivity of individual filters to StrassenNets.
Different convolutional filters tend to extract different types of features, ranging from simple features (e.g. edge detection) to more complicated higher-level (e.g. facial shapes) or object-specific features. As a result, different filters may respond differently to ternary quantization. This means that some filters are easy to quantize to ternary values using narrower hidden layers while still ensuring low L2 reconstruction error in the output feature maps. On the other hand, there are weight filters that require wider strassenified hidden layers to ensure a low or modest L2 loss.

Given a feature map, Figure 1(a) presents a scenario where a strassenified vertical-lines detector with few hidden layer units can closely approximate the output map (with low L2 reconstruction loss) produced otherwise by its full-precision counterpart. However, a convolutional filter that sharpens images requires a wider hidden layer to ensure a low L2 loss (see Appendix C.1 for more details). Note that we only consider 2-D filters for illustration purposes, whereas this difference in complexity should also exist in the 3-D filters common to CNNs.

(b) Different sensitivity of groups of filters to StrassenNets. Furthermore, there exist groups of convolutional filters at each layer that either tend to extract fairly similar features with slightly different orientations (e.g. two filters attempting to detect edges rotated by a few degrees) or have other numerical-structural similarities. As a result, when these groups of convolutional filters are quantized to ternary values using StrassenNets, they may share many hidden layer elements. These groups of convolutional filters with similar value structure are in turn more amenable to quantization using fewer hidden layer units than filters with no common value structure. Given a constrained hidden layer budget for StrassenNets (e.g. $r = c_{out}$), these groups of convolutional filters may together respond well to ternary quantization, while other, dissimilar filters struggle to be strassenified alongside them with low quantization error, due to the restricted hidden layer bandwidth.

Figure 1(b) illustrates a case where two filters $f_j$ and $f_k$, having some common value structure, can learn to perform exact convolution with a $2 \times 2$ feature map using only 6 multiplications instead of the 7 required otherwise for unique filters lacking common value structure. A set of ternary weight matrices with fewer hidden units implementing an exact convolution in this case is shown in Figure 1(b); a sketch in the same spirit follows this paragraph (see Appendix A for more details).
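To show the sharing effect with concrete numbers, the sketch below uses two hypothetical filters of our own choosing (not the exact ones in Figure 1(b)) that agree on three of their four weights, and computes both outputs with 5 shared products instead of the 8 a naive evaluation would use. Here $\hat{a}$ plays the role of the collapsed $W_a\, \mathrm{vec}(A)$, while $W_b$ and $W_c$ stay ternary.

```python
import numpy as np

# Two hypothetical 2x2 filters (flattened) sharing three of four weights:
f_j = np.array([1., 2., 3., 4.])
f_k = np.array([1., 2., 3., 5.])   # differs from f_j only in the last weight

# SPN with r = 5 hidden units instead of the 8 products a naive evaluation
# of both filters would need; the first four products are shared.
a_hat = np.array([1., 2., 3., 4., 1.])  # collapsed Wa vec(A), real-valued
Wb = np.array([[1, 0, 0, 0],   # selects x1
               [0, 1, 0, 0],   # selects x2
               [0, 0, 1, 0],   # selects x3
               [0, 0, 0, 1],   # selects x4
               [0, 0, 0, 1]])  # selects x4 again for the extra unit of f_k
Wc = np.array([[1, 1, 1, 1, 0],   # f_j . x = m1 + m2 + m3 + m4
               [1, 1, 1, 1, 1]])  # f_k . x = f_j . x + 1 * x4

x = np.random.randn(4)             # a flattened 2x2 feature-map patch
m = a_hat * (Wb @ x)               # 5 multiplications, shared by both filters
y = Wc @ m                         # ternary combination: additions only
assert np.allclose(y, [f_j @ x, f_k @ x])
```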
Motivated by these observations, we propose a novel quantization method: one that quantizes only the easy-to-quantize weight filters of a network layer to ternary values (to restrict the increase in additions) while preserving the representational ability of the overall network by relying on a few full-precision, difficult-to-quantize weight filters. This layer-wise hybrid filter bank strategy exploits a full-precision network's strength as a highly-accurate classifier and couples that with StrassenNets to achieve significant reductions in model size and number of multiplications. This quantization technique essentially maintains a good balance between the overall computational costs and the predictive performance of the network.

3 PER-LAYER HYBRID FILTER BANKS

We propose a quantization method that quantizes a substantial fraction of the convolutional filters at each layer to ternary values while relying on the few remaining full-precision filters to preserve the representational power of the original full-precision network. As only the easy-to-quantize filters are quantized using StrassenNets, leaving the difficult-to-quantize filters in full precision, narrow hidden layers should suffice for quantizing them, resulting in an overall reduction in computations (additions along with MAC operations) and memory footprint while ensuring no loss in accuracy. This is in sharp contrast to quantizing all the filters of each layer using wide hidden layers to preserve the representational power of MobileNets, which led to a significant increase in additions as we have seen in Section 2.2.1.

Architecture. The proposed quantization method convolves the same input feature map with full-precision weight filters and ternary weight filters in parallel, concatenating the feature maps from the two convolutions into a unified feature map. This concatenated feature map is fed as input to the next network layer. At each layer, the two convolutions from full-precision and ternary filters combine to form an output feature map of identical shape as in the baseline full-precision network. For instance, given an input feature map with $c_{in}$ channels, the quantization technique applies a traditional convolution with $k$ full-precision weight filters $W_{fp}$ of shape $c_{in} \times w_k \times h_k$ and a strassen convolution with $c_{out} - k$ ternary weight filters $W_t$ to produce a feature map with a total of $c_{out}$ channels for the layer. Here $c_{out}$ is the number of channels in the output volume of the corresponding convolution layer in the baseline full-precision network, and $w_k$, $h_k$ are the kernel width and height. For the sake of simplicity, the bias term is not included in this discussion. The fraction of channels in an output feature map generated from the full-precision weight filters, $\alpha$ (or, in other words, the fraction generated from the ternary weight filters, $1 - \alpha$), is a hyperparameter of our quantization technique, and it decides the representational power and computational costs of MobileNets with hybrid filter banks.

Figure 2(b) shows the organization of the hybrid filter bank for a MobileNets layer; a minimal code sketch of such a layer follows below. Each of the convolutional layers of MobileNets, including the first $3 \times 3$ layer and the $1 \times 1$ pointwise convolutions of the following 13 depthwise-separable layers, is quantized using hybrid filter banks, where a fraction $\alpha$ of the output channels at each layer is generated using full-precision weight filters and the remaining output channels using ternary weight filters. The depthwise convolutions of the depthwise-separable layers are not quantized using either StrassenNets or our hybrid filter banks. This is primarily due to the following reasons: (a) they do not dominate the compute bandwidth of MobileNets (Howard et al. (2017)); (b) as per our observations, quantizing them to ternary values hurts the accuracy significantly without offering any significant savings in either model size or computational costs.
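The sketch below gives one way the hybrid pointwise layer of Figure 2(b) could be organized in PyTorch. It is a minimal illustration under our own naming, not the authors' released code; the element-wise product with $\hat{a} = W_a\, \mathrm{vec}(A)$ is folded in as a per-unit scale, as described in Section 2.2, and the ternary matrices are kept as float tensors for simplicity.

```python
import torch
import torch.nn as nn

class HybridPointwiseBank(nn.Module):
    """Hypothetical sketch of the hybrid 1x1 layer in Figure 2(b):
    alpha * c_out channels come from a full-precision 1x1 convolution,
    the rest from a strassenified 1x1 convolution (ternary Wb, Wc with
    hidden width r; a_hat = Wa vec(A) folded in as a per-unit scale)."""
    def __init__(self, c_in, c_out, alpha=0.5, r=None):
        super().__init__()
        k = int(alpha * c_out)                   # full-precision output channels
        r = r if r is not None else c_out - k    # hidden width, e.g. r = c_out
        self.fp = nn.Conv2d(c_in, k, kernel_size=1, bias=False)
        self.wb = nn.Conv2d(c_in, r, kernel_size=1, bias=False)   # ternary after training
        self.a_hat = nn.Parameter(torch.ones(1, r, 1, 1))         # folded Wa vec(A)
        self.wc = nn.Conv2d(r, c_out - k, kernel_size=1, bias=False)  # ternary

    def forward(self, x):
        y_fp = self.fp(x)                         # difficult-to-quantize filters
        y_st = self.wc(self.a_hat * self.wb(x))   # strassenified filters
        return torch.cat([y_fp, y_st], dim=1)     # unified c_out-channel map

# Example: a hybrid bank mimicking a 256 -> 256 pointwise layer with alpha = 0.5.
layer = HybridPointwiseBank(256, 256, alpha=0.5, r=128)
out = layer(torch.randn(1, 256, 14, 14))
assert out.shape == (1, 256, 14, 14)
```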
[Figure 2: MobileNets with hybrid filter banks. (a) Application of StrassenNets to $3 \times 3$ and $1 \times 1$ convolutions, with the add ratios #ADDs(3x3 conv + 1x1 conv)/#MACs(3x3 traditional conv) = 10/9 and #ADDs(1x1 conv + 1x1 conv)/#MACs(1x1 traditional conv) = 2/1; the cost of the element-wise multiplication with the intermediate $W_a\, \mathrm{vec}(A)$ is comparably negligible and hence ignored in estimating the increase in additions. (b) A MobileNets pointwise layer with a hybrid filter bank.]

The strassenified-convolution portion of the hybrid filter banks at each layer is quantized using a range of $r$ values, where $r$ is the hidden layer width of a strassenified convolution layer. An $r \ll 2c_{out}$ configuration, in conjunction with an optimal non-zero $\alpha$, should offer substantial savings in model size and addition operations without compromising accuracy in comparison to a fully strassenified MobileNets architecture with the $r = 2c_{out}$ configuration. The presented quantization technique can also be applied to the fully-connected layer parameters; however, we focus only on convolution layers in this work. We compress the last fully-connected layer of MobileNets uniformly using StrassenNets. The per-layer hybrid filter banks proposed here are inspired by the Inception module from the GoogLeNet architecture (Szegedy et al. (2015)) (see Appendix B for more details).

End-to-end training. The full-precision filters and the strassenified weight filters of each layer are trained jointly so as to maximize accuracy. A gradient-descent (GD) based training algorithm is used to train the network with hybrid filter banks end-to-end. Before the training begins, depending on the value of $\alpha$, the top $\alpha \cdot c_{out}$ channels of a feature map are configured to be generated from full-precision traditional convolutions, and the remaining $(1 - \alpha) \cdot c_{out}$ channels are forced to be generated from ternary strassenified convolutions. Note that the order of the channels generated in the output feature volume by either full-precision filters or ternary filters is not important, as the output feature map comprising all the generated channels forms the input of the subsequent layer, and the weights in the subsequent layer can adjust to accommodate that. During the end-to-end training process, the organization of hybrid filter banks tends to push the difficult-to-quantize filters (those that require full-precision filters to extract features) to be trained with full-precision values, and the filters that are less susceptible to ternary quantization to be trained with ternary values from strassenified convolutions. Furthermore, in order to recover any accuracy loss of the hybrid network compressed with strassenified matrix computations, knowledge distillation (KD) is exploited during training, as described in (Tschannen et al. (2018)). Using KD, an uncompressed teacher network can transfer its prediction ability to a compressed student network by navigating its training. We use the uncompressed hybrid network as the teacher network and the compressed strassenified network as the student network here.
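Tschannen et al. (2018) give the exact distillation objective; the sketch below shows a common KD formulation of the kind referred to above, with the temperature `T` and mixing weight `lam` as assumed hyperparameters rather than values reported in this paper.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, lam=0.5):
    """Common knowledge-distillation objective (assumed form): cross-entropy
    on the ground truth plus a soft-target term matching the uncompressed
    teacher's temperature-smoothed predictions."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    return lam * hard + (1.0 - lam) * soft

# Usage while training the compressed (student) hybrid network:
student_logits = torch.randn(8, 1000, requires_grad=True)
teacher_logits = torch.randn(8, 1000)       # from the uncompressed teacher
labels = torch.randint(0, 1000, (8,))
loss = kd_loss(student_logits, teacher_logits, labels)
loss.backward()
```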
4 EXPERIMENTS AND RESULTS

Datasets and experimental setup. We evaluate the MobileNets-V1 architecture comprising the proposed per-layer hybrid filter banks (Hybrid MobileNets) on the ImageNet (ILSVRC2012) dataset (Deng et al. (2009)) and compare it against the state-of-the-art MobileNets (Howard et al. (2017)) with 16-bit floating-point weights. The baseline and other network architectures presented here use a width multiplier of 0.5 [2] to reduce training costs with limited GPU resources. We use the MXNet framework (Chen et al. (2015)) based GluonCV toolkit [3] to train the networks. This is primarily attributed to the better top-1 accuracy (65.2%) of MobileNets-V1 (width multiplier of 0.5) achieved by the GluonCV toolkit [4] when compared to the top-1 accuracy of 63.3% observed with the corresponding publicly available model in the TensorFlow framework (Abadi et al. (2016)). In this work, the baseline MobileNets and the full-precision filters of the hybrid filter banks use 16-bit floating-point weights. We quantize the activations of the baseline and proposed architectures to 16-bit floating-point values. An 8-bit representation of weights and activations should not alter the conclusions made in this work. At the time of writing this paper, the GluonCV toolkit does not support training with 8-bit weights and activations.

Hybrid MobileNets architecture training. We use the Nesterov accelerated gradient (NAG) optimization algorithm and follow the other training hyperparameters described in the GluonCV framework for training the baseline full-precision MobileNets, the strassenified MobileNets, and our proposed Hybrid MobileNets. We begin by training the Hybrid MobileNets with full-precision strassen matrices ($W_a$, $W_b$, and $W_c$) for 200 epochs. With a mini-batch size per GPU of 128 on a 4-GPU system, the learning rate is initially chosen as 0.2 and later gradually reduced to zero following a cosine decay function, as used in the GluonCV framework for training the baseline full-precision MobileNets (see Appendix C.2 for more details).

We then activate quantization for these strassen matrices and the training continues for another 75 epochs with an initial learning rate of 0.02 and progressively smaller learning rates. Quantization converts a full-precision strassen matrix to a ternary-valued matrix along with a scaling factor (e.g., $W_b = \text{scaling factor} \cdot W_b^t$). To evaluate our hypothesis that some full-precision filters change significantly to recover features lost due to quantization, we measured the L2 distance between their pre- and post-quantization weight vectors. We found that the L2 distances fit a normal distribution: most filters experience low-to-moderate changes to their weight vectors, while a few exceptional filters see very significant movement. This supports our claim that the full-precision filters preserve the overall representational power of the network.

Finally, we fix the strassen matrices of the hybrid filter banks to their learned ternary values and continue training for another 25 epochs to ensure that the scaling factors associated with these matrices can be absorbed by the full-precision $\mathrm{vec}(A)$ portion of the strassenified matrix multiplication.
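The paper does not spell out the exact ternarization rule; a threshold-based scheme in the spirit of TWN (Li & Liu (2016)), shown below as an assumed sketch, maps a full-precision strassen matrix to $\{-1, 0, 1\}$ entries plus a single scaling factor of the form $W_b \approx s \cdot W_b^t$.

```python
import torch

def ternarize(w, delta_scale=0.7):
    """Assumed TWN-style ternarization of a full-precision strassen matrix:
    entries with |w| above a threshold become +/-1, the rest 0, and one
    scaling factor absorbs the magnitude (W ~ scale * W_t)."""
    delta = delta_scale * w.abs().mean()      # magnitude threshold
    w_t = torch.zeros_like(w)
    w_t[w > delta] = 1.0
    w_t[w < -delta] = -1.0
    mask = w_t != 0
    scale = w[mask].abs().mean() if mask.any() else w.new_tensor(0.0)
    return w_t, scale

wb = torch.randn(128, 256)                    # a full-precision Wb
wb_t, s = ternarize(wb)
print(sorted(wb_t.unique().tolist()), float(s))  # [-1.0, 0.0, 1.0] and the scale
```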
Energy and throughput modeling for hybrid filter banks. The proposed per-layer hybrid filter banks for MobileNets can be executed by existing DNN hardware accelerators, such as DaDianNao (Chen et al. (2014)) and the TPU (Jouppi et al. (2017)), which consist of only MAC units. However, in order to achieve an energy- and runtime-efficient execution of hybrid filter banks dominated by additions, we propose a custom hardware accelerator where a fraction of the MAC units are replaced by low-cost adders within the same silicon area. A 16-bit floating-point MAC unit takes about twice the area of a 16-bit floating-point adder (Lutz (2019)). Given a fixed silicon area and a model configuration for Hybrid MobileNets, the ratio of MAC units to adders in the proposed hardware accelerator is chosen such that the maximum possible throughput is achieved for that configuration. In order to estimate the energy required per inference of the baseline and proposed models, we use the energy consumption numbers of a 16-bit floating-point adder and MAC unit given in (Horowitz (2014)).

Hybrid MobileNets architecture evaluation. One of the main focuses of our evaluation is the study of how $\alpha$ impacts the performance of our models. This parameter, which can be set independently for each convolutional layer in the network, is directly proportional to the number of learnable parameters in a given layer. In this work, we use an identical value of $\alpha$ for all the layers of Hybrid MobileNets. We believe the use of different $\alpha$ values for different layers may result in better cost-accuracy trade-offs; we leave this exploration for future work. Ideally, small values of $\alpha$ and $r$ are desired to achieve a significant reduction in MAC along with addition operations while preserving the baseline accuracy.

[2] Using a width multiplier of 0.5 halves the number of channels used in each layer of the original MobileNets architecture (Howard et al. (2017)).
[3] GluonCV: a Deep Learning Toolkit for Computer Vision, https://gluon-cv.mxnet.io/index.html
[4] https://gluon-cv.mxnet.io/model_zoo/classification.html#mobilenet
[5] As this configuration is likely to observe an accuracy of 63.47, we did not collect the accuracy result for this configuration.

Table 2: Top-1 accuracy along with the computational costs, model size, and energy per inference for baseline MobileNets-V1, ST-MobileNets, and Hybrid MobileNets on the ImageNet dataset. $\alpha$ is the fraction of channels generated by the full-precision weight filters at each layer; $c_{out}$ is the number of remaining channels generated by the ternary strassen filters at the corresponding convolutional layer; $r$ is the hidden layer width of the strassenified convolutions. The last column shows the throughput of the proposed models on an area-equivalent hardware accelerator comprising both MAC and adder units, compared to the throughput of the baseline MobileNets with 16-bit floating-point weights on a MAC-only accelerator.

Network | Alpha | r | Acc. (%) | Muls, Adds | MACs | Model size | Energy/inference (normalized) | Throughput (normalized)
MobileNets (float16) | - | - | 65.2 | - | 149.49M | 2590.07KB | 1 | 1
ST-MobileNets | 0 | 2c_out | 65.14 | 3.11M, 624.27M | 8.69M | 1178.92KB | 0.9 | 0.46
MobileNets (Hybrid filter banks) | 0.25 | c_out [5] | - | 1.16M, 204.63M | 43.76M | 1004.67KB | 0.56 | 1.02
MobileNets (Hybrid filter banks) | 0.25 | 1.33c_out | 63.47 | 1.55M, 270.95M | 43.76M | 1097.07KB | 0.65 | 0.83
MobileNets (Hybrid filter banks) | 0.25 | 2c_out | 65.2 | 2.33M, 405.59M | 43.76M | 1284.65KB | 0.84 | 0.6
MobileNets (Hybrid filter banks) | 0.375 | c_out | 64.13 | 0.97M, 157.84M | 61.3M | 1131.43KB | 0.62 | 1.06
MobileNets (Hybrid filter banks) | 0.375 | 1.6c_out | 64.17 | 1.55M, 250.34M | 61.3M | 1260.44KB | 0.74 | 0.8
MobileNets (Hybrid filter banks) | 0.375 | 2c_out | 65.2 | 1.94M, 312.01M | 61.3M | 1346.45KB | 0.83 | 0.68
MobileNets (Hybrid filter banks) | 0.5 | c_out | 64.69 | 1.28M, 142.37M | 78.83M | 1267.13KB | 0.72 | 1
MobileNets (Hybrid filter banks) | 0.5 | 2c_out | 65.17 | 1.55M, 228.68M | 78.83M | 1327.88KB | 0.83 | 0.77

We searched the model hyperparameter space systematically to develop Hybrid MobileNets.
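As a sanity check on the normalized energy column, the sketch below recomputes it from the operation counts in Table 2 using relative 16-bit floating-point energy costs that we assume to be add : mul : MAC = 1 : 4 : 5, consistent with the roughly 5x MAC-to-add ratio cited above (the underlying picojoule figures are in Horowitz (2014)); the results match the table to rounding.

```python
# Assumed relative 16-bit floating-point energy costs: add : mul : MAC = 1 : 4 : 5.
E_ADD, E_MUL, E_MAC = 1.0, 4.0, 5.0

def energy(muls, adds, macs):
    return muls * E_MUL + adds * E_ADD + macs * E_MAC

baseline = energy(0, 0, 149.49)                      # MobileNets (float16)
configs = {                                          # (muls, adds, macs) in millions
    "TWN":                     (0.0,  149.49, 0.0),
    "ST-MobileNets r=2c_out":  (3.11, 624.27, 8.69),
    "Hybrid a=0.5, r=c_out":   (1.28, 142.37, 78.83),
}
for name, ops in configs.items():
    print(f"{name}: {energy(*ops) / baseline:.2f}")
# TWN: 0.20, ST-MobileNets r=2c_out: 0.91, Hybrid a=0.5, r=c_out: 0.72
```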
Table 2 captures the top-1 accuracy of the Hybrid MobileNets for various configurations of $\alpha$ and hidden layer width $r$, along with their impact on computational costs, model size, energy required per inference, and throughput, and compares them against the baseline full-precision MobileNets and ST-MobileNets. As shown in Table 2, ST-MobileNets and the various configurations of Hybrid MobileNets offer comparable reductions (about 50%) in model size over the baseline full-precision MobileNets. While the $r = 2c_{out}$ configurations for the different values of $\alpha$ (0.25, 0.375, and 0.5) can preserve the baseline top-1 accuracy of 65.2% and offer modest savings in energy required per inference, this comes at the cost of a large increase in additions. This in turn causes significant degradation in throughput on the proposed hardware accelerator when compared to the throughput of the baseline full-precision MobileNets on an existing DNN accelerator consisting of only MAC units. On the other end, the $c_{out} \leq r < 2c_{out}$ configurations with $\alpha$ of 0.25 and 0.375 incur modest to significant drops in top-1 accuracy, possibly owing to the lack of enough full-precision weight filters at each hybrid filter bank to preserve the representational ability of the overall network. The $r < c_{out}$ configurations for the different values of $\alpha$ lead to large drops in prediction accuracy and hence are not shown in Table 2.

The Hybrid MobileNets with the $\alpha = 0.5$ and $r = c_{out}$ configuration strikes an optimal balance between accuracy, computational costs, energy, and throughput. It achieves accuracy comparable to that of the baseline MobileNets and of the strassenified and Hybrid MobileNets with the $r = 2c_{out}$ configuration, while reducing the number of MACs and multiplications by 47.26% and 46.4% respectively and requiring only a modest (45.51%) increase in additions over the baseline MobileNets architecture. Of particular note is that it reduces the number of additions to about 142.37M, compared to the 624.27M additions of the ST-MobileNets described in Section 2. The significant reduction in MAC operations and the modest increase in additions over the baseline full-precision MobileNets in turn translate into 27.98% savings in energy required per inference while ensuring no degradation in throughput in comparison to the execution of the baseline MobileNets on a MAC-only hardware accelerator. This reduction in additions is primarily attributed to strassenifying the easy-to-quantize filters using fewer hidden units ($r = c_{out}$) while relying on full-precision filters to generate 50% of the channels at each layer and preserve the representational ability of the overall MobileNets architecture. Owing to the substantial presence of ternary weight matrices, the Hybrid MobileNets with the $\alpha = 0.5$ and $r = c_{out}$ configuration reduces the model size to 1267.13KB, compared to 2590.07KB for the baseline MobileNets network, thus enabling a 51.07% saving in model size. The use of knowledge distillation in training the ST-MobileNets and Hybrid MobileNets does not result in any tangible change in accuracy.

In summary, the Hybrid MobileNets reduce model size by 51.07% and energy required per inference by 27.98% while incurring a negligible loss in accuracy and no degradation in throughput when compared to the baseline full-precision MobileNets.
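The throughput column can be reproduced with a similarly simple area model. The sketch below is our own reconstruction: it assumes one operation per unit per cycle and a MAC unit costing the area of two adders (per Lutz (2019)), sweeps the MAC-to-adder split to balance the two workloads (multiplications run on the MAC units), and normalizes against a MAC-only accelerator of the same area.

```python
def best_time(mac_ops, add_ops, area=1000.0):
    """Minimum inference time on an accelerator of fixed area, sweeping the
    split between MAC units (area 2) and adders (area 1); one op/unit/cycle."""
    best = float("inf")
    for m in range(0 if mac_ops == 0 else 1, int(area // 2) + 1):
        a = area - 2 * m                  # remaining area spent on adders
        if add_ops > 0 and a <= 0:
            continue
        t = max(mac_ops / m if m else 0.0,
                add_ops / a if add_ops else 0.0)
        best = min(best, t)
    return best

baseline = 149.49 / (1000.0 // 2)         # MAC-only accelerator of the same area
configs = {                               # mac_ops = MACs + Muls (in millions)
    "TWN":                     (0.0,          149.49),
    "ST-MobileNets r=2c_out":  (8.69 + 3.11,  624.27),
    "Hybrid a=0.5, r=c_out":   (78.83 + 1.28, 142.37),
}
for name, (mac_ops, add_ops) in configs.items():
    print(f"{name}: {baseline / best_time(mac_ops, add_ops):.2f}")
# TWN: 2.00, ST-MobileNets r=2c_out: 0.46, Hybrid a=0.5, r=c_out: 0.99
```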
It is important to note that, because of the large savings in model size, our Hybrid MobileNets will have significantly fewer accesses to the energy/power-hungry DRAM. This, in conjunction with skipping the ineffectual computations of zero-valued weights in our proposed hardware accelerator (as exploited by Zhang et al. (2016)), owing to the roughly 40-50% sparsity we observe in the ternary weight matrices of strassenified layers, will improve the energy savings and run-time performance even further. Our current energy and throughput modeling does not take this into account. We leave this exploration for future work.

5 RELATED WORK

Weight pruning. Sparsifying filters and pruning channels are widely used methods to make neural networks more resource-efficient. Unstructured filter-sparsity-inducing techniques either exhibit poor hardware characteristics or incur modest to significant drops in model accuracy for MobileNets (Zhu & Gupta (2017)). Recent work on channel pruning (He et al. (2018)) demonstrates a negligible drop in accuracy for MobileNets while achieving significant reductions in computational costs. As the various channel pruning (He et al. (2018); Zhuang et al. (2018); He et al. (2017)) and filter pruning techniques (Han et al. (2015); Narang et al. (2017); Zhu & Gupta (2017); Guo et al. (2016); Aghasi et al. (2017); Wen et al. (2016); Luo et al. (2017); Yang et al. (2018); Gordon et al. (2018)) are orthogonal to our compression scheme, they can be used in conjunction with Hybrid MobileNets to further reduce model size and computational complexity.

Network quantization. Recent works on binary/ternary quantization either do not demonstrate their potential to quantize MobileNets on the ImageNet dataset (Yang et al. (2019); Zhuang et al. (2019); Zhu et al. (2019); Sun et al. (2019); Zhang et al. (2018a); Guo et al. (2017)) or incur modest to significant drops in accuracy while quantizing MobileNets with 4-6-bit weights (Wang et al. (2019); Liu & Mattina (2019); Louizos et al. (2019)) (see Appendix D for more details). The hybrid filter banks successfully quantize a significant fraction of the weight filters of MobileNets to ternary values while achieving accuracy comparable to that of the baseline full-precision model on ImageNet. Nevertheless, the hybrid filter banks can benefit further by adopting these prior proposals.

Tensor decomposition. Besides pruning and quantization, tensor decomposition techniques (Jaderberg et al. (2014); Tai et al. (2015); Wen et al. (2017)) exploit parameter redundancy to obtain low-rank approximations of weight matrices without compromising model accuracy. The full-precision weight filters and strassen matrices of our hybrid filter banks can adopt these prior proposals to further reduce model size and computational complexity.

Compact network architectures. While we show promising results for MobileNets-V1 here, the benefits of hybrid filter banks should scale when extended to other popular resource-efficient architectures dominated by either DS convolutions, such as MobileNets-V2 (Sandler et al. (2018)), ShuffleNet (Zhang et al. (2018b)), and Xception (Chollet (2017)), or standard $3 \times 3$ convolutions.

6 CONCLUSION AND FUTURE WORK

In this work, we propose per-layer hybrid filter banks for MobileNets capable of quantizing its weights to ternary values while exhibiting state-of-the-art accuracy on a large-scale dataset and requiring a fraction of the model size and considerably lower energy per inference pass. We use the 16-bit floating-point format to represent the intermediate activations and the traditional weight filters of the hybrid filter banks in this work.
In the future, we plan to explore the impact of quantizing them to 8 bits or less. In addition, it will be interesting to see how channel pruning (He et al. (2018); Zhuang et al. (2018)) can assist in reducing the computational complexity of strassenified MobileNets.
ryxEfElTYB
Official Blind Review #2
3: Weak Reject
The authors focus on quantizing the MobileNets architecture to ternary values, resulting in less space and compute. The space of making neural networks more energy efficient is vital towards their deployment in the real world. I think the authors overstate their claims of no loss in accuracy; in Table 2 we see a clear loss in accuracy from MobileNets to MobileNets + Hybrid Filter Banks. I think this research is quite incremental over MobileNets and is unlikely to spur further research strains. I think a better venue for this research may be a more systems-focused conference or journal. There is a significant amount of compute and training complexity required to reduce the model size, e.g. versus model pruning or tensor decomposition. It seems this research would be incredibly difficult to reproduce.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Ternary MobileNets via Per-Layer Hybrid Filter Banks ### Paper Abstract MobileNets family of computer vision neural networks have fueled tremendous progress in the design and organization of resource-efficient architectures in recent years. New applications with stringent real-time requirements in highly constrained devices require further compression of MobileNets-like already computeefficient networks. Model quantization is a widely used technique to compress and accelerate neural network inference and prior works have quantized MobileNets to 4 − 6 bits albeit with a modest to significant drop in accuracy. While quantization to sub-byte values (i.e. precision ≤ 8 bits) has been valuable, even further quantization of MobileNets to binary or ternary values is necessary to realize significant energy savings and possibly runtime speedups on specialized hardware, such as ASICs and FPGAs. Under the key observation that convolutional filters at each layer of a deep neural network may respond differently to ternary quantization, we propose a novel quantization method that generates per-layer hybrid filter banks consisting of full-precision and ternary weight filters for MobileNets. The layer-wise hybrid filter banks essentially combine the strengths of full-precision and ternary weight filters to derive a compact, energy-efficient architecture for MobileNets. Using this proposed quantization method, we quantized a substantial portion of weight filters of MobileNets to ternary values resulting in 27.98% savings in energy, and a 51.07% reduction in the model size, while achieving comparable accuracy and no degradation in throughput on specialized hardware in comparison to the baseline full-precision MobileNets. ### Paper Keywords ["Model compression", "ternary quantization", "energy-efficient models"] ### Paper Content ABSTRACTMobileNets family of computer vision neural networks have fueled tremendousprogress in the design and organization of resource-efficient architectures in re-cent years. New applications with stringent real-time requirements in highly con-strained devices require further compression of MobileNets-like already compute-efficient networks. Model quantization is a widely used technique to compress andaccelerate neural network inference and prior works have quantized MobileNetsto46bits albeit with a modest to significant drop in accuracy. While quanti-zation to sub-byte values (i.e. precision 8bits) has been valuable, even furtherquantization of MobileNets to binary or ternary values is necessary to realize sig-nificant energy savings and possibly runtime speedups on specialized hardware,such as ASICs and FPGAs. Under the key observation that convolutional filters ateach layer of a deep neural network may respond differently to ternary quantiza-tion, we propose a novel quantization method that generates per-layer hybrid filterbanks consisting of full-precision and ternary weight filters for MobileNets. Thelayer-wise hybrid filter banks essentially combine the strengths of full-precisionand ternary weight filters to derive a compact, energy-efficient architecture forMobileNets. 
Using this proposed quantization method, we quantized a substan-tial portion of weight filters of MobileNets to ternary values resulting in 27:98%savings in energy, and a 51:07% reduction in the model size, while achievingcomparable accuracy and no degradation in throughput on specialized hardwarein comparison to the baseline full-precision MobileNets.1 I NTRODUCTIONDeeper and wider convolutional neural networks (CNNs) has led to outstanding predictive perfor-mance in many machine learning tasks, such as image classification (He et al. (2016); Krizhevskyet al. (2012)), object detection (Redmon et al. (2016); Ren et al. (2015)), and semantic segmen-tation (Chen et al. (2018); Long et al. (2015)). However, the large model size and correspondingcomputational inefficiency of these networks often make it infeasible to run many real-time ma-chine learning applications on resource-constrained mobile and embedded hardware, such as smart-phones, AR/VR devices etc. To enable this computation and size compression of CNN models,one particularly effective approach has been the use of resource-efficient MobileNets architecture.MobileNets introduces depthwise-separable (DS) convolution as an efficient alternative to the stan-dard3-D convolution operation.While MobileNets architecture has been transformative, even furthercompression of MobileNets is valuable in order to make a wider range of applications available onconstrained platforms (Gope et al. (2019)).Model quantization has been a popular technique to facilitate that. Quantizing the weights of Mo-bileNets to binary (-1,1) or ternary (-1,0,1) values in particular has the potential to achieve significantimprovement in energy savings and possibly overall throughput especially on custom hardware, suchas ASICs and FPGAs while reducing the resultant model size considerably. This is attributed to thereplacement of multiplications by additions in binary- and ternary-weight networks. Multipliers oc-cupy considerably more area on chip than adders (Li & Liu (2016)), and consume significantly moreenergy than addition operations (Horowitz (2014); Andri et al. (2018)). A specialized hardware cantherefore trade off multiplications against additions and potentially accommodate considerably moreadders than multipliers to achieve a high throughput and significant savings in energy for binary- andternary-weight networks.1Under review as a conference paper at ICLR 2020However, prior approaches to binary and ternary quantization (Rastegari et al. (2016); Alemdar et al.(2016); Li & Liu (2016); Tschannen et al. (2018)) incur significant drop in prediction accuracy forMobileNets. Recent work on StrassenNets (Tschannen et al. (2018)) presents a more mathemati-cally profound way to approximate matrix multiplication computation (and, in turn, convolutions)using mostly ternary weights and a few full-precision weights. It essentially exploits Strassen’s al-gorithm to approximate a matrix multiplication of a weight matrix with feature maps, where theelements of the product matrix are generated by different combination of few intermediate termsthrough additions. Computation of each of the intermediate terms requires a multiplication alongwith combination of different elements of weights and feature maps through additions. The num-ber of intermediate terms (also called hidden layer width ) in StrassenNets therefore determines theaddition and multiplication budget of a convolutional layer and in turn decides the approximationerror of the corresponding convolution operation. 
While the results in (Tschannen et al. (2018))using StrassenNets demonstrates no loss in predictive performance when compared to full-precisionmodels for few networks, the effectiveness of StrassenNets is quite variable, however, depending onthe neural network architecture. We observe, for example, that while strassenifying is effective inreducing the model size of DS convolutional layers, this might come with a prohibitive increase inthe number of addition operations, reducing the energy efficiency of neural network inference.The exorbitant increase in additions primarily stems from the use of wide hidden layers for closelyapproximating each convolutional filter in a network layer. While this might be required for someof the convolutional filters in a layer, our observations indicate that all filters may not require widestrassenified hidden layers. As different filters in a network layer tend to capture different features,they may respond differently to ternary quantization, and, in turn, to strassenified convolution with aspecific hidden layer units. Some filters can be harder to approximate using ternary bits than others,and have larger impact on the model accuracy loss. Furthermore, given a constrained hidden layerbudget for StrassenNets, a group of filters extracting fairly similar features at a layer may respondfavorably to ternary quantization, while other filters of the layer extracting significantly differentfeatures from those may not.Guided by these insights, we propose a layer-wise hybrid filter banks for the MobileNets architecturecapable of giving start-of-the-art accuracy levels, while requiring a fraction of the model size andconsiderably fewer MAC and multiplication operations per inference. The end-to-end learning ofhybrid filter banks makes this possible by keeping precision critical convolutional filters in full-precision values and strassenifying quantization tolerant filters only to ternary values. The filters thatare most sensitive to quantization errors perform traditional convolutions with input feature maps,whereas ternary quantization tolerant filters can perform strassenified convolutions using narrowhidden layers. We apply this proposed quantization scheme to the state-of-the-art MobileNets-V1architecture. The hybrid filter banks for MobileNets achieves a 46:4%reduction in multiplications,and a 51:07% reduction in model size while incurring modest increase in additions. This translatesinto a 27:98% savings in energy required per inference while ensuring no degradation in throughputon a DNN hardware accelerator consisting of both MAC and adders when compared to the executionof baseline MobileNets on a MAC-only hardware accelerator. The hybrid filter banks accomplishesthis with a very minimal loss in accuracy of 0:51%.To the best of our knowledge, the hybridfilter banks proposed in this work is a first step towards quantizing the already compute-efficientMobileNets architecture to ternary values with a negligible loss in accuracy on a large-scale dataset,such as ImageNet.The remainder of the paper is organized as follows. Section 2 elaborates on the incentives behindthe use of per-layer hybrid filter banks for the MobileNets architecture and provides a brief overviewof current quantization algorithms along with our observations of applying them to the MobileNetsarchitecture. Failing to find a good balance between accuracy and computation costs shifts our focustowards designing layer-wise hybrid filter banks for MobileNets. 
Section 3 describes our hybridfilter banks. Section 4 presents results. Section 5 compares hybrid filter banks against prior worksand Section 6 concludes the paper.2 M ODEL QUANTIZATION LIMITATIONS FOR MOBILE NETSQuantization is an extremely popular approach to make DNNs, in particular convolutional neuralnetworks (CNNs), less resource demanding. This section briefly reviews the important existing2Under review as a conference paper at ICLR 2020works on ternary quantization, which we focus on in this paper, and illustrates their limitationsto motivate the development of per-layer hybrid filter banks for quantizing MobileNets to ternaryvalues.2.1 T ERNARY QUANTIZATION OF WEIGHTSIn order to observe the impact of ternary quantization (Courbariaux et al. (2015); Rastegari et al.(2016); Lin et al. (2017); Cai et al. (2017); Li & Liu (2016); Zhu et al. (2016); Zhou et al. (2016)), weapply the ternary weight quantization method from (Li & Liu (2016)) over the baseline MobileNets-V1 architecture. It approximates a full-precision weight Wfpby a ternary-valued Wtand a scalingfactor such that Wfpscaling factorWt. Ternary quantization of the weights of MobileNetsachieves substantial reduction in model size but at the cost of significant drop (by 9:66%, see Ta-ble 1) in predictive performance when compared to the full-precision model. Any increase in thesize of the MobileNets architecture to recover the accuracy loss while using ternary quantizationwill lead to a significant increase in the number of addition operations. Recent work on Strassen-Nets (Tschannen et al. (2018)), which we describe next, has shown the potential to achieve nearstate-of-the-art accuracy for a number of deep CNNs while maintaining acceptable increase in addi-tion operations.2.2 S TRASSEN NETSGiven two 22matrices, Strassen’s matrix multiplication algorithm computes their product using7multiplications instead of the 8required with a na ̈ıve implementation of matrix multiplication.It essentially converts the matrix multiplication operation to a 2-layer sum-product network (SPN)computation as shown below:vec(C) =Wc[(Wbvec(B))(Wavec(A))] (1)Wa,Wb2Krn2andWc2Kn2rare ternary matrices with K2f1;0;1g,vec(A)andvec(B)are the vectorization of the two input square matrices A,B2Rnn; andvec(C)represents thevectorized form of the product AB.denotes the element-wise product. The (Wbvec(B))and(Wavec(A))of the SPN compute rintermediate factors each from additions, and/or subtractionsof elements of AandBrealized by the two associated ternary matrices WaandWbrespectively.The two generated r-length intermediate factors are then element-wise multiplied to produce the r-length (Wbvec(B))(Wavec(A)). The outmost ternary matrix Wclater combines the relementsof the product (Wbvec(B))(Wavec(A))in different ways to generate the vectorized form ofproduct matrix C. Therefore, the width of the hidden layer of the SPN, r, decides the numberof multiplications required for the Strassen’s matrix multiplication algorithm. For example, giventwo22matrices, ternary matrices WaandWbwith sizes of 74can multiply them using7multiplications and 36additions. It is important to note that Strasssen’s algorithm requires ahidden layer with 7units here to compute the exact product matrix that a na ̈ıve matrix multiplicationalgorithm can obtain using 8multiplications.Building on top of Strassen’s matrix multiplication algorithm, the StrassenNets work (Tschannenet al. 
(2018)) instead realizes approximate matrix multiplications in DNN layers1using fewer hid-den layer units compared to the standard Strassen’s algorithm required to achieve the exact productmatrix. StrassenNets makes this possible by training a SPN-based DNN framework end-to-end tolearn the ternary weight matrices from the training data. The learned ternary weight matrices canthen approximate the otherwise exact matrix multiplications of the DNN layers with significantlyfewer multiplications than Strassen’s algorithm. The approximate transforms realized by the SPNs,adapted to the DNN architecture and application data under consideration, can enable precise con-trol over the number of multiplications and additions required per inference, creating an opportunityto tune DNN models to strike an optimal balance between accuracy and computational complexity.1A convolutional operation in DNN layers can be reduced to a general matrix multiplication (GEMM). Inthe context of strassenified matrix multiplications of a network layer, Ais associated with the weights or filtersof the layer and Bis associated with the corresponding activations or feature maps. As a result, after training,Waandvec(A)can be collapsed into a vector ^a=Wavec(A), as they are both fixed during inference.3Under review as a conference paper at ICLR 2020Table 1: Test accuracy along with the number of multiplications, additions, operations and modelsize for MobileNets-V1 and strassenified MobileNets-V1 (ST-MobileNets) with the width multiplier0:5on ImageNet dataset. ris the hidden layer width of a strassenified convolution layer, coutis thenumber of output channels of the corresponding convolution layer. A multiply-accumulate operationis abbreviated as MAC.Network Accuracy Muls Adds MACs Model Energy/inference Throughput(%) size (normalized) (normalized)MobileNets 65.2 - - 149.49M 2590.07KB 1 1(float16)MobileNets 55.54 - 149.49 - 323.75KB 0.2 2(TWN (Li & Liu (2016)))ST-MobileNets 48.92 0.77M 158.54M 8.69M 522.33KB 0.27 1.69(r= 0:5cout)ST-MobileNets 56.95 1.16M 236.16M 8.69M 631.76KB 0.37 1.17(r= 0:75cout)ST-MobileNets 61.8 1.55M 313.78M 8.69M 741.19KB 0.48 0.9(r=cout)ST-MobileNets 65.14 3.11M 624.27M 8.69M 1178.92KB 0.9 0.46(r= 2cout)The success of StrassenNets in achieving significant compression for 33convolutions (Tschan-nen et al. (2018)) and increasing visibility of DS convolutions in resource-constrained networksinspired us to apply StrassenNets over the already compute-efficient MobileNets architecture to re-duce its computational costs and model size even further. Further compression of DS layers will notonly enable more energy-efficient networks leading to longer lasting batteries, but also will openup the opportunities for more complex use-cases to fit in the limited memory budget of emergentDNN hardware accelerators. Among the various MobileNets architectures (Howard et al. (2017);Sandler et al. (2018); Howard et al. (2019)), in this work we extensively study the quantization ofMobileNets-V1 (Howard et al. (2017)). MobileNets-V1 stacks one 3x3and13DS convolutionallayers. 
A DS convolution first convolves each channel in the input feature map with a separate 2-Dfilter (depthwise convolution) and then uses 1x1pointwise convolutions to combine the outputs inthe depth dimension.2.2.1 S TRASSEN NETS FOR MOBILE NETSWe observe that although strassenifying MobileNets reduces multiplications significantly as ex-pected, it increases additions considerably in order to achieve an accuracy comparable to that ofthe state-of-the-art MobileNets with 16-bit floating-point weights. Table 1 captures our observation.The strassenified network with the r= 2coutconfiguration achieves a comparable accuracy to thatof the full-precision MobileNets while reducing multiplications by 97:91% but increasing additionsby317:59% (149:49M MACs of MobileNets vs. 3:11M multiplications and 624:27M additions ofST-MobileNets with r= 2cout). This in turn offers modest savings in energy required per inferencebut causes significant degradation in throughput (see Section 4 for details). As shown in Table 1, anumber of potential values for the hidden layer width ( r) were explored. Using fewer hidden unitse.g.r=coutthan this incurs a siginificant accuracy loss of 3:4%.2.2.2 C OMPUTE INEFFICIENCY OF STRASSEN NETS FOR MOBILE NETSIt is important to note here that although the number of additions does increase marginally withstrassenifying standard 33or55convolutional layers (Tschannen et al. (2018)), that trend doesnot hold true with strassenifying MobileNets dominated with DS layers. This stems from the factthat11pointwise convolutions dominate the compute bandwidth of a neural network with DSlayers (Howard et al. (2017)) and strassenifying a 11pointwise convolution requires executingtwo equal-sized (for r=cout)11convolution operations (with ternary weight filters) in place ofthe standard 11convolution, as shown in Figure 2(a). This results in a significant increase ( 2 : 1or100% ) in additions in comparison to the execution of the standard 11convolution. In contrastto that, as Figure 2(a) illustrates, a 33strassenified convolution with r=coutinstead requiresexecuting a 33convolution and a 11convolution with ternary weight filters, causing a marginalincrease ( 10 : 9 or11:1%) in additions compared to the execution of the standard 33convolution.4Under review as a conference paper at ICLR 2020L2-loss with 2 hidden units: 0.02, 4 hidden units: 0.08hidden units: 0.0-0.88 0.92 -0.45Feature map*-0.12 -0.40 0.780.24 0.29 -0.23-1 2 -1-1 2 -1-1 2 -1Vertical lines detector-0.88 0.92 -0.45Feature map*-0.12 -0.40 0.780.24 0.29 -0.230 -1 0-1 5 -10 -1 0Sharpen filterL2-loss with 2hidden units: 0.094 hidden units: 0.09, 8hidden units: 0.01(a) Variance in the sensitivity of filters toquantization.fjfka b1 0 0 1 -1 1Wc=*a ce fg hFeature map Convolutional filters1 0 0 1Wa=01 0 11 0 0 000 0 11 0 1000 1-11 0 0 1Wb=1 00 00 0 1-1-1 10 00 0 0 101 0 10 10 1 0 00 0 1 01 01 -1 1000(b) Ease of ternary quantization for a filterbank with common values.Figure 1: Understanding the sensitivity of individual and group of filters to ternary quantization.This overhead of addition operations with strassenified DS convolutions increases in proportion tothe width of the strassenified hidden layers, i.e. to the size of the ternary convolution operations, asobserved in Table 1. As a result, a strassenified DS convolution layer may incur enough overhead tooffset the benefit of strassenifying a DS convolution layer.While Tschannen et al. 
(2018) demonstrates better trade-offs requiring a modest ( 29:63%) increasein additions when strassenifying ResNet-18 architecture dominated with 33convolutions, thisdoes not continue once StrassenNets is applied over MobileNets. This also indicates that the DSconvolutions, owing to efficiency in number of parameters than 33convolutions, are more proneto quantization error and this manifests when StrassenNets is applied. Considering the fact thatMAC operations typically consume about five times more energy than addition operations for 16-bitfloating-point values (Horowitz (2014); Andri et al. (2018)) (see Section 4 for details), an about317:59% increase in additions in place of about 98% saving on multiplications will result in dimin-ishing or no returns in terms of energy savings and runtime speedups even on specialized hardwaredominated with adders. This increase in computational costs associated with strassenified DS con-volutions in conjunction with the high accuracy and low latency requirements of mobile applicationscall for a model architecture exploration that can leverage the compute efficiency of DS layers andmodel size reduction of strassenified convolutions while maintaining acceptable or no increase inadditions.The accuracy drop using a strassenified MobileNets with the r=coutconfiguration essentiallyindicates that each layer perhaps introduces a certain amount of quantization error owing to lowerhidden width and that error accrues over multiple quantized layers. On the other hand, although astrassenified MobileNets with r= 2coutrecovers the accuracy loss of the r=coutconfiguration,it makes a strong assumption that all filters require wider strassenified hidden layers to quantize toternary bits to preserve the representational power of the baseline full-precision network. Whilethis might be true for some of the convolutional filters, not all filters need to be quantized using ther= 2coutconfiguration. This observation stems from the following two reasons:(a) Different sensitivity of individual filters to StrassenNets. Different convolutional filters tendto extract different type of features, ranging from simple features (e.g. edge detection) to morecomplicated higher-level (e.g. facial shapes) or object specific features. As a result, different filtersmay respond differently to ternary quantization. That basically means there are filters that are easyto quantize to ternary values using narrower hidden layers while still ensuring low L2 reconstructionerror in output feature maps.On the other hand, there are weight filters that require wider strassenifiedhidden layers to ensure a low or modest L2 loss.Given a feature map, Figure 1(a) presents a scenario where a strassenified vertical lines detector withfewer hidden layer units can closely approximate the output map (with low L2 reconstruction loss)produced otherwise using its full-precision counterpart. However a convolutional filter that sharpenimages requires a wider hidden layer to ensure a low L2 loss (see Appendix C.1 for more details).Note that we only consider 2D filters for illustration purpose, whereas this difference in complexityshould exist in 3D filters common to CNNs.5Under review as a conference paper at ICLR 2020(b) Different sensitivity of group of filters to StrassenNets. Furthermore, there exists groupsof convolutional filters at each layer that either tend to extract fairly similar features with slightlydifferent orientations (e.g. 
two filters attempting to detect edges rotated by few degrees) or haveother numerical-structural similarities. As a result, when these groups of convolutional filters arequantized to ternary values using StrassenNets, they may share many hidden layer elements. Thesegroups of convolutional filters with similar value structure in turn are more amenable to quantizationusing fewer hidden layer units than filters with no common value structure. Given a constrainedhidden layer budget for StrassenNets (e.g. r=cout), these groups of convolutional filters maytogether respond well to ternary quantization while other dissimilar filters struggle to be strassenifiedalongside them with low quantization error, due to the restricted hidden layer bandwidth.Figure 1(b) illustrates a case when two filters fjandfk, having some common value structure, canlearn to perform exact convolution with a 22feature map using only 6multiplications insteadof the 7required otherwise for unique filters lacking common value structure. A set of ternaryweight matrices with fewer hidden units implementing an exact convolution in this case is shown inFigure 1(b) (see Appendix A for more details).Motivated by these observations, we propose a novel quantization method – one that will only quan-tizeeasy-to-quantize weight filters of a network layer to ternary values (to restrict the increase inadditions) while also preserving the representational ability of the overall network by relying on fewfull-precision difficult-to-quantize weight filters. This layer-wise hybrid filter bank strategy exploitsa full-precision network’s strength as a highly-accurate classifier and couples that with Strassen-Nets to achieve significant reduction in model size and number of multiplications. This quantizationtechnique essentially maintains a good balance between overall computational costs and predictiveperformance of the overall network.3 P ER-LAYER HYBRID FILTER BANKSWe propose a quantization method that can quantize a substantial fraction of convolutional filters toternary values at each layer while relying on few remaining full-precision filters to preserve the rep-resentational power of the original full-precision network. As easy-to-quantize filters are quantizedonly using StrassenNets leaving the difficult-to-quantize filters in full-precision values, this shouldin turn require narrow hidden layers for quantizing them resulting in an overall reduction in com-putations (additions along with MAC operations) and memory footprint while ensuring no loss inaccuracy. This is in sharp contrast to quantizing all the filters of each layer using wide hidden layersto preserve the representational power of MobileNets which led to significant increase in additionsas we have seen in Section 2.2.1.Architecture. The proposed quantization method convolves the same input feature map with fullprecision weight filters and ternary weight filters in parallel, concatenating the feature maps fromeach convolutions into an unified feature map. This concatenated feature map is fed as input to thenext network layer. At each layer, the combination of the two convolutions from full-precision andternary filters ensures that they combine to form a output feature map of identical shape as in thebaseline full-precision network. 
For instance, given an input feature map with cinchannels, thequantization technique applies traditional convolution with kfull-precision weight filters Wfpofshapecinwkhkand strassen convolution with coutkternary weight filters Wtto producea feature map of total coutchannels for a layer. Here coutis the number of channels in the outputvolume of the corresponding convolution layer in the baseline full-precision network, and wk,hkare the kernel size. For the sake of simplicity, bias term is not included in this discussion. Thefraction of channels generated in an output feature map from the full-precision weight filters, (orin others words the channels generated from the ternary weight filters, 1) is a hyperparameterin our quantization technique and it decides the representational power and computational costs ofMobileNets with hybrid filter banks.Figure 2(b) shows the organization of the hybrid filter bank for a MobileNets layer. Each of theconvolutional layers of MobileNets, including the 33layer and the 11pointwise convolutionsof the following 13depthwise-separable layers, are quantized using hybrid filter banks, where %of output channels at each layer is generated using full-precision weight filters and the remainingoutput channels using ternary weight filters. The depthwise convolutions of the depthwise-separablelayers are not quantized using either StrassenNets or our hybrid filter banks. This is primarily due6Under review as a conference paper at ICLR 20203x3 conv using ternary WbWavec(A)StrassenNets1x1convusingternaryWcWavec(A)StrassenNetsTraditional 3x3 convolution using full -precision weightsIncrease in ADDS= #ADDsof(3x3conv+1x1conv)#MACsof3x3traditional conv= 109Traditional 1x1 convolution using full -precision weights1x1convusingternaryWb1x1convusingternaryWcIncrease in ADDS= #ADDsof(1x1conv+1x1conv)#MACsof1x1traditional conv= 21(a) Application of StrassenNets to 3 x 3 and 1 x 1 convolu-tion. The cost of elementwise multiplication with intermedi-ateWavec(A)is comparably negligible and hence is ignoredin estimating the increase in additions.WbPrevious Depthwiseconvolutional layerTraditional 1x1 convolution using full-precision weightsStrassen 1x1 convolution using ternary weightsChannel concatenationWcWavec(A)(b) A MobileNets pointwise layer with hy-brid filter bank.Figure 2: MobileNets with hybrid filter banks.to the following reasons: (a) they do not dominate the compute bandwidth of MobileNets (Howardet al. (2017)), (b) as per our observations, quantizing those to ternary values hurt the accuracy sig-nificantly without offering any significant savings in either model size or computational costs. Thestrassenified convolutions portion of hybrid filter banks at each layer are quantized using a numberofrvalues, where ris the hidden layer width of a strassenified convolution layer. Ther << 2coutconfiguration in conjunction with an optimal non-zero should offer substantial savings in modelsize and addition operations without compromising accuracy in comparison to a fully strassenifiedMobileNets architecture with r= 2coutconfiguration. The presented quantization technique canalso be applied to the fully-connected layer parameters, however, we only focus on convolutionlayers in this work. We compress the last fully-connected layer of MobileNets uniformly usingStrassenNets. The per-layer hybrid filter banks proposed here is inspired by the Inception modulefrom the GoogLeNet architecture (Szegedy et al. (2015)) (see Appendix B for more details).End-to-end training. 
End-to-end training. The full-precision filters along with the strassenified weight filters of each layer are trained jointly so as to maximize accuracy. A gradient-descent (GD) based training algorithm is used to train the network with hybrid filter banks end-to-end. Before training begins, depending on the value of α, the top α·c_out channels of a feature map are configured to be generated by full-precision traditional convolutions, and the remaining (1 − α)·c_out channels are forced to be generated by ternary strassenified convolutions. Note that the order of the channels generated in the output feature volume by either full-precision filters or ternary filters is not important, as the output feature map comprising all the generated channels forms the input of the subsequent layer, and the weights in the subsequent layer can adjust to accommodate that. During the end-to-end training process, the organization of hybrid filter banks tends to steer the difficult-to-quantize filters (those that require full-precision values to extract features) to be trained using full-precision values, and the filters that are less susceptible to ternary quantization to be trained using ternary values from strassenified convolutions. Furthermore, in order to recover any accuracy loss of the hybrid network compressed with strassenified matrix computations, knowledge distillation (KD) is exploited during training, as described in (Tschannen et al. (2018)). Using KD, an uncompressed teacher network can transfer its prediction ability to a compressed student network by guiding its training. We use the uncompressed hybrid network as the teacher network and the compressed strassenified network as the student network here.

4 EXPERIMENTS AND RESULTS

Datasets and experimental setup. We evaluate the MobileNets-V1 architecture comprising the proposed per-layer hybrid filter banks (Hybrid MobileNets) on the ImageNet (ILSVRC2012) dataset (Deng et al. (2009)) and compare it against the state-of-the-art MobileNets (Howard et al. (2017)) with 16-bit floating-point weights. The baseline and the other network architectures presented here use a width multiplier of 0.5 [2] to reduce training costs with limited GPU resources. We use the MXNet framework (Chen et al. (2015)) based GluonCV toolkit [3] to train the networks. This is primarily attributed to the better top-1 accuracy (65.2%) of MobileNets-V1 (width multiplier of 0.5) achieved by the GluonCV toolkit [4], compared to the top-1 accuracy of 63.3% observed for the corresponding publicly available model in the Tensorflow framework (Abadi et al. (2016)). In this work, the baseline MobileNets and the full-precision filters of the hybrid filter banks use 16-bit floating-point weights. We quantize the activations of the baseline and proposed architectures to 16-bit floating-point values. An 8-bit representation of weights and activations should not alter the conclusions made in this work. At the time of writing this paper, the GluonCV toolkit does not support training with 8-bit weights and activations.

Hybrid MobileNets architecture training. We use the Nesterov accelerated gradient (NAG) optimization algorithm and follow the other training hyperparameters described in the GluonCV framework for training the baseline full-precision MobileNets, the strassenified MobileNets, and our proposed Hybrid MobileNets. We begin by training the Hybrid MobileNets with full-precision strassen matrices (W_a, W_b, and W_c) for 200 epochs. With a mini-batch size per GPU of 128 on a 4-GPU system, the learning rate is initially set to 0.2 and later gradually reduced to zero following a cosine decay function, as used in the GluonCV framework for training the baseline full-precision MobileNets (see Appendix C.2 for more details). We then activate quantization for these strassen matrices, and training continues for another 75 epochs with an initial learning rate of 0.02 and progressively smaller learning rates. Quantization converts a full-precision strassen matrix to a ternary-valued matrix along with a scaling factor (e.g., W_b = scaling factor * W_tb).
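The paper states the ternary form W = scale * W_t but does not spell out the quantization rule, so the following is a minimal sketch under an assumed TWN-style threshold ternarization (Li et al., Ternary Weight Networks); the function name and the threshold factor t are our own.

```python
import torch

def ternarize(W, t=0.7):
    """Threshold-based ternarization with a per-matrix scaling factor.

    Returns (scale, Wt) with Wt in {-1, 0, +1} such that W ~= scale * Wt.
    The threshold rule is an assumption; the paper only specifies the
    ternary-plus-scale decomposition itself.
    """
    delta = t * W.abs().mean()                      # magnitude threshold
    Wt = torch.where(W.abs() > delta, W.sign(), torch.zeros_like(W))
    mask = Wt != 0
    scale = W[mask].abs().mean() if mask.any() else W.new_tensor(0.0)
    return scale, Wt
```

At the end of training, the per-matrix scaling factor can be folded into the full-precision W_a vec(A) path of the strassenified product, which is exactly what the final 25-epoch phase described below is meant to enable.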
To evaluate our hypothesis that some full-precision filters change significantly to recover features lost due to quantization, we measured the L2 distance between their pre- and post-quantization weight vectors. We found the L2 distances to fit a normal distribution: most filters experience low-to-moderate changes to their weight vectors, while a few exceptional filters see very significant movement. This supports our claim that the full-precision filters preserve the overall representational power of the network.

Finally, we fix the strassen matrices of the hybrid filter banks to their learned ternary values and continue training for another 25 epochs to ensure that the scaling factors associated with these matrices can be absorbed by the full-precision vec(A) portion of the strassenified matrix multiplication.

Energy and throughput modeling for hybrid filter banks. The proposed per-layer hybrid filter banks for MobileNets can be executed by existing DNN hardware accelerators, such as DaDianNao (Chen et al. (2014)) and TPU (Jouppi et al. (2017)), which consist of only MAC units. However, in order to achieve an energy- and runtime-efficient execution of hybrid filter banks dominated by additions, we propose a custom hardware accelerator in which a fraction of the MAC units are replaced by low-cost adders within the same silicon area. A 16-bit floating-point MAC unit takes about twice the area of a 16-bit floating-point adder (Lutz (2019)). Given a fixed silicon area and a model configuration for Hybrid MobileNets, the ratio of MAC units to adders in the proposed hardware accelerator is chosen such that the maximum possible throughput is achieved for that configuration. In order to estimate the energy required per inference of the baseline and proposed models, we use the energy consumption numbers for a 16-bit floating-point adder and MAC unit reported in (Horowitz (2014)).

Hybrid MobileNets architecture evaluation. One of the main focuses of our evaluation is the study of how α impacts the performance of our models. This parameter, which can be set independently for each convolutional layer in the network, is directly proportional to the number of learnable parameters in a given layer. In this work, we use an identical value of α for all the layers of Hybrid MobileNets. We believe that using different α values for different layers may result in better cost-accuracy trade-offs; we leave this exploration for future work. Ideally, small values of α and r are desired to achieve a significant reduction in MAC and addition operations while preserving the baseline accuracy.

[2] Using a width multiplier of 0.5 halves the number of channels used in each layer of the original MobileNets architecture (Howard et al. (2017)).
[3] GluonCV: a Deep Learning Toolkit for Computer Vision, https://gluon-cv.mxnet.io/index.html
[4] https://gluon-cv.mxnet.io/model_zoo/classification.html#mobilenet
[5] As this configuration is likely to observe an accuracy of 63.47, we did not collect the accuracy result for this configuration.

Table 2: Top-1 accuracy along with the computational costs, model size, and energy per inference for the baseline MobileNets-V1, ST-MobileNets, and Hybrid MobileNets on the ImageNet dataset. α is the fraction of channels generated by the full-precision weight filters at each layer; the remaining channels are generated by the ternary strassen filters at the corresponding convolutional layer; r is the hidden layer width of the strassenified convolutions. The last column shows the throughput of the proposed models on an area-equivalent hardware accelerator comprising both MAC and adder units, compared to the throughput of the baseline MobileNets with 16-bit floating-point weights on a MAC-only accelerator.

Network | α | r | Acc. (%) | Muls, Adds | MACs | Model size | Energy/inference (normalized) | Throughput (normalized)
MobileNets (float16) | - | - | 65.2 | - | 149.49M | 2590.07KB | 1 | 1
ST-MobileNets | 0 | 2c_out | 65.14 | 3.11M, 624.27M | 8.69M | 1178.92KB | 0.9 | 0.46
MobileNets (Hybrid filter banks) | 0.25 | c_out | - [5] | 1.16M, 204.63M | 43.76M | 1004.67KB | 0.56 | 1.02
MobileNets (Hybrid filter banks) | 0.25 | 1.33c_out | 63.47 | 1.55M, 270.95M | 43.76M | 1097.07KB | 0.65 | 0.83
MobileNets (Hybrid filter banks) | 0.25 | 2c_out | 65.2 | 2.33M, 405.59M | 43.76M | 1284.65KB | 0.84 | 0.6
MobileNets (Hybrid filter banks) | 0.375 | c_out | 64.13 | 0.97M, 157.84M | 61.3M | 1131.43KB | 0.62 | 1.06
MobileNets (Hybrid filter banks) | 0.375 | 1.6c_out | 64.17 | 1.55M, 250.34M | 61.3M | 1260.44KB | 0.74 | 0.8
MobileNets (Hybrid filter banks) | 0.375 | 2c_out | 65.2 | 1.94M, 312.01M | 61.3M | 1346.45KB | 0.83 | 0.68
MobileNets (Hybrid filter banks) | 0.5 | c_out | 64.69 | 1.28M, 142.37M | 78.83M | 1267.13KB | 0.72 | 1
MobileNets (Hybrid filter banks) | 0.5 | 2c_out | 65.17 | 1.55M, 228.68M | 78.83M | 1327.88KB | 0.83 | 0.77

We search the model hyperparameter space systematically to develop Hybrid MobileNets. Table 2 captures the top-1 accuracy of the Hybrid MobileNets for various configurations of α and hidden layer width r, along with their impact on computational costs, model size, energy required per inference, and throughput, and compares them against the baseline full-precision MobileNets and ST-MobileNets. As shown in Table 2, ST-MobileNets and the various configurations of Hybrid MobileNets offer a comparable reduction (about 50%) in model size over the baseline full-precision MobileNets. While the r = 2c_out configurations for different values of α (0.25, 0.375, and 0.5) can preserve the baseline top-1 accuracy of 65.2% and offer modest savings in energy required per inference, that comes at the cost of a large increase in additions. This in turn causes significant degradation in throughput on the proposed hardware accelerator, compared to the throughput of the baseline full-precision MobileNets on an existing DNN accelerator consisting of only MAC units. On the other end, the c_out ≤ r < 2c_out configurations with α of 0.25 and 0.375 incur a modest to significant drop in top-1 accuracy, possibly owing to a lack of enough full-precision weight filters at each hybrid filter bank to preserve the representational ability of the overall network. The r < c_out configurations for the different values of α lead to a large drop in prediction accuracy and hence are not shown in Table 2.

The Hybrid MobileNets with the α = 0.5 and r = c_out configuration strikes an optimal balance between accuracy, computational costs, energy, and throughput.
It achieves comparable accuracy to that of the baseline MobileNets and the strassenified and Hybrid MobileNets with the r = 2c_out configuration, while reducing the number of MACs and multiplications by 47.26% and 46.4%, respectively, and requiring only a modest (45.51%) increase in additions over the baseline MobileNets architecture. Of particular note is that it reduces the number of additions to about 142.37M, compared to the 624.27M additions of the ST-MobileNets described in Section 2. The significant reduction in MAC operations and modest increase in additions over the baseline full-precision MobileNets in turn translate into 27.98% savings in energy required per inference while ensuring no degradation in throughput, compared to executing the baseline MobileNets on a MAC-only hardware accelerator. This reduction in additions is primarily attributed to strassenifying the easy-to-quantize filters using fewer hidden units (r = c_out) while relying on full-precision filters to generate 50% of the channels at each layer and preserve the representational ability of the overall MobileNets architecture. Owing to the substantial presence of ternary weight matrices, the Hybrid MobileNets with the α = 0.5 and r = c_out configuration reduces the model size to 1267.13KB, compared to 2590.07KB for the baseline MobileNets network, thus enabling a 51.07% savings in model size. The use of knowledge distillation in training the ST-MobileNets and Hybrid MobileNets does not result in any tangible change in accuracy.

In summary, the Hybrid MobileNets reduce model size by 51.07% and energy required per inference by 27.98% while incurring a negligible loss in accuracy and no degradation in throughput, compared to the baseline full-precision MobileNets. It is important to note that, because of the large savings in model size, our Hybrid MobileNets will have significantly fewer accesses to the energy/power-hungry DRAM. This, in conjunction with skipping the ineffectual computations of zero-valued weights in our proposed hardware accelerator (as exploited by (Zhang et al. (2016))), owing to the roughly 40-50% sparsity we observe in the ternary weight matrices of strassenified layers, will improve the energy savings and run-time performance even further. Our current energy and throughput modeling does not take this into account. We leave this exploration for future work.

5 RELATED WORK

Weight pruning. Sparsifying filters and pruning channels are widely used methods to make neural networks more resource-efficient. Unstructured filter-sparsity-inducing techniques either exhibit poor hardware characteristics or incur a modest to significant drop in model accuracy for MobileNets (Zhu & Gupta (2017)). Recent work on channel pruning (He et al. (2018)) demonstrates a negligible drop in accuracy for MobileNets while achieving a significant reduction in computational costs. As the various channel pruning (He et al. (2018); Zhuang et al. (2018); He et al. (2017)) and filter pruning techniques (Han et al. (2015); Narang et al. (2017); Zhu & Gupta (2017); Guo et al. (2016); Aghasi et al. (2017); Wen et al. (2016); Luo et al. (2017); Yang et al. (2018); Gordon et al. (2018)) are orthogonal to our compression scheme, they can be used in conjunction with Hybrid MobileNets to further reduce model size and computational complexity.

Network quantization. Recent works on binary/ternary quantization either do not demonstrate their potential to quantize MobileNets on the ImageNet dataset (Yang et al. (2019); Zhuang et al. (2019); Zhu et al.
(2019); Sun et al. (2019); Zhang et al. (2018a); Guo et al. (2017)) or incur a modest to significant drop in accuracy while quantizing MobileNets with 4-6-bit weights (Wang et al. (2019); Liu & Mattina (2019); Louizos et al. (2019)) (see Appendix D for more details). The hybrid filter banks successfully quantize a significant fraction of the weight filters of MobileNets to ternary values while achieving comparable accuracy to that of the baseline full-precision model on ImageNet. Nevertheless, the hybrid filter banks can benefit further by adopting these prior proposals.

Tensor decomposition. Besides pruning and quantization, tensor decomposition techniques (Jaderberg et al. (2014); Tai et al. (2015); Wen et al. (2017)) exploit parameter redundancy to obtain low-rank approximations of weight matrices without compromising model accuracy. The full-precision weight filters and strassen matrices of our hybrid filter banks can adopt these prior proposals to further reduce model size and computational complexity.

Compact network architectures. While we show promising results for MobileNets-V1 here, the benefits of hybrid filter banks should scale when extended to other popular resource-efficient architectures dominated by either DS convolutions, such as MobileNets-V2 (Sandler et al. (2018)), ShuffleNet (Zhang et al. (2018b)), and Xception (Chollet (2017)), or standard 3×3 convolutions.

6 CONCLUSION AND FUTURE WORK

In this work, we propose per-layer hybrid filter banks for MobileNets capable of quantizing its weights to ternary values while exhibiting state-of-the-art accuracy on a large-scale dataset and requiring a fraction of the model size and considerably lower energy per inference pass. We use the 16-bit floating-point format to represent the intermediate activations and the traditional weight filters of the hybrid filter banks in this work. In the future, we plan to explore the impact of quantizing them to 8 bits or less. In addition, it will be interesting to see how channel pruning (He et al. (2018); Zhuang et al. (2018)) assists in reducing the computational complexity of strassenified MobileNets.<|im_end|> <|im_start|>assistant ### Review Title Official Blind Review #2 ### Review Text The authors focus on quantizing the MobileNets architecture to ternary values, resulting in less space and compute. The space of making neural networks more energy efficient is vital towards their deployment in the real world. I think the authors over-state their claims of no loss in accuracy, in Table 2 we see a clear loss in accuracy from MobileNets to MobileNets + Hybrid Filter Banks. I think this research is quite incremental over MobileNets and is unlikely to spur further research strains. I think a better venue for this research may be a more systems-focused conference or journal. There is a significant amount of compute and training complexity required to reduce the model size, e.g. versus model pruning or tensor decomposition. It seems this research would be incredibly difficult to reproduce. ### Review Rating 3: Weak Reject ### Review Confidence <|im_end|> <|im_end|>
MPO4oML_JC
ICLR.cc/2021/Conference
2021
Coordinated Multi-Agent Exploration Using Shared Goals
["Iou-Jen Liu", "Unnat Jain", "Alex Schwing"]
Exploration is critical for good results of deep reinforcement learning algorithms and has drawn much attention. However, existing multi-agent deep reinforcement learning algorithms still use mostly noise-based techniques. It was recognized recently that noise-based exploration is suboptimal in multi-agent settings, and exploration methods that consider agents' cooperation have been developed. However, existing methods suffer from a common challenge: agents struggle to identify states that are worth exploring, and don't coordinate their exploration efforts toward those states. To address this shortcoming, in this paper, we proposed coordinated multi-agent exploration (CMAE): agents share a common goal while exploring. The goal is selected by a normalized entropy-based technique from multiple projected state spaces. Then, agents are trained to reach the goal in a coordinated manner. We demonstrated that our approach needs only $1\%-5\%$ of the environment steps to achieve similar or better returns than state-of-the-art baselines on various sparse-reward tasks, including a sparse-reward version of the Starcraft multi-agent challenge (SMAC).
["Multi-agent RL", "Deep RL", "Exploration"]
ABSTRACT

Exploration is critical for good results of deep reinforcement learning algorithms and has attracted much attention. However, existing multi-agent deep reinforcement learning algorithms still use mostly noise-based techniques. It was recognized recently that noise-based exploration is suboptimal in multi-agent settings, and exploration methods that consider agents' cooperation have been developed. However, existing methods suffer from a common challenge: agents struggle to identify states that are worth exploring, and don't coordinate their exploration efforts toward those states. To address this shortcoming, in this paper, we proposed coordinated multi-agent exploration (CMAE): agents share a common goal while exploring. The goal is selected by a normalized entropy-based technique from multiple projected state spaces. Then, agents are trained to reach the goal in a coordinated manner. We demonstrated that our approach needs only 1%-5% of the environment steps to achieve similar or better returns than state-of-the-art baselines on various sparse-reward tasks, including a sparse-reward version of the Starcraft multi-agent challenge (SMAC).

1 INTRODUCTION

Cooperative multi-agent reinforcement learning (MARL) is an increasingly important field. Indeed, many real-world problems are naturally modeled using MARL techniques. For instance, tasks from areas as diverse as robot fleet coordination (Swamy et al., 2020; Hüttenrauch et al., 2019) and autonomous traffic control (Bazzan, 2008; Sunehag et al., 2018) fit MARL formulations.

To address MARL problems, early work followed the independent single-agent reinforcement learning paradigm (Tampuu et al., 2015; Tan, 1993; Matignon et al., 2012). However, more recently, specifically tailored techniques such as monotonic value function factorization (QMIX) (Rashid et al., 2018), multi-agent deep deterministic policy gradient (MADDPG) (Lowe et al., 2017), and counterfactual multi-agent policy gradients (COMA) (Foerster et al., 2018) have been developed. Those methods excel in a multi-agent setting because they address the non-stationarity issue of MARL and develop communication protocols between agents. Despite those advances and the resulting reported performance improvements, a common issue remained: all of the aforementioned methods use exploration techniques from classical algorithms. Specifically, these methods employ noise-based exploration, i.e., the exploration policy is a noisy version of the actor policy. For instance, Lowe et al. (2017) add Ornstein-Uhlenbeck (OU) (Uhlenbeck & Ornstein, 1930) noise or Gaussian noise to the actor policy. Foerster et al. (2016); Rashid et al. (2018); Yang et al. (2018); Foerster et al. (2017) use variants of ε-greedy exploration, where a random suboptimal action is selected with probability ε.

It was recognized recently that the use of classical exploration techniques is suboptimal in a multi-agent reinforcement learning setting. Specifically, Mahajan et al. (2019) show that QMIX with ε-greedy exploration results in slow exploration and sub-optimality. Mahajan et al. (2019) improve exploration by conditioning an agent's behavior on a shared latent variable controlled by a hierarchical policy. Even more recently, Wang et al.
(2020) encourage coordinated exploration by considering the influence of one agent's behavior on other agents' behaviors.

While all of the aforementioned exploration techniques for multi-agent reinforcement learning significantly improve results, they suffer from a common challenge: agents struggle to identify states that are worth exploring, and don't coordinate their exploration efforts toward those states. To give an example, consider a push-box task, where two agents need to jointly push a heavy box to a specific location before observing a reward. In this situation, instead of exploring the environment independently, the agents need to coordinate pushing the box within the environment to find the specific location.

To address this issue, we propose coordinated multi-agent exploration (CMAE), where multiple agents share a common goal. We achieve this by first projecting the joint state space to multiple subspaces. We develop a normalized entropy (Cover & Thomas, 1991)-based technique to select a goal from the under-explored subspaces. Then, exploration policies are trained to reach the goals in a coordinated manner.

To show that CMAE improves results, we evaluate our approach on various sparse-reward environments from Wang et al. (2020), and on the sparse-reward version of the Starcraft multi-agent challenge (SMAC) (Samvelyan et al., 2019), which requires coordinated actions among agents over extended time steps before observing a reward. The experimental results show that our approach needs only 1%-5% of the environment steps to achieve similar or better average test episode returns than current state-of-the-art baselines.

2 PRELIMINARIES

In this section, we define the multi-agent Markov decision process (MDP) in Sec. 2.1 and introduce the multi-agent reinforcement learning setting in Sec. 2.2.

2.1 MULTI-AGENT MARKOV DECISION PROCESS

We model a cooperative multi-agent system as a multi-agent Markov decision process (MDP). An n-agent MDP is defined by a tuple G = (S, A, T, R, Z, O, n, γ, H). S is the global state space of the environment. A is the action space. At each time step t, each agent's policy π_i, i ∈ {1,...,n}, selects an action a_i^t ∈ A. All selected actions form a joint action a^t ∈ A^n. The transition function T maps the current state s^t and the joint action a^t to the next state s^{t+1}, i.e., T : S × A^n → S. All agents receive a collective reward r^t ∈ ℝ according to the reward function R : S × A^n → ℝ. The goal of all agents' policies is to maximize the collective expected return $\sum_{t=0}^{H} \gamma^t r^t$, where γ ∈ [0, 1] is the discount factor, H is the horizon, and r^t is the collective reward obtained at time step t. Each agent i observes a local observation o_i^t ∈ Z according to the observation function O : S → Z. Note, observations usually reveal only partial information about the global state. For instance, suppose the global state contains the locations of all agents, while the local observation of an agent may only contain the locations of other agents within a limited distance. All agents' local observations form a joint observation, denoted by o^t.

A global state space S is the product of component spaces V_i, i.e., $S = \prod_{i=1}^{M} V_i$, where V_i ⊆ ℝ (Samvelyan et al., 2019; Lowe et al., 2017; Rashid et al., 2018; Foerster et al., 2018; Mahajan et al., 2019). We refer to V_i as a 'state component.' The set of all component spaces of a product space is referred to as the component set.
For instance, the component set of S is {V_i | i ∈ {1,...,M}}. Each entity, e.g., agents, objects, etc., in the environment is described by a set of state components. We refer to a set of state components that is associated with an entity in the environment as an 'entity set.' For instance, in a 2-agent push-box environment, where two agents can only collaboratively push a box to a goal location, we have the global state space $S = \prod_{i=1}^{6} V_i$, where {V_1, V_2}, {V_3, V_4}, {V_5, V_6} represent the locations of agent one, agent two, and the box, respectively. Consequently, {V_1, V_2}, {V_3, V_4}, {V_5, V_6} are three entity sets.

2.2 MULTI-AGENT REINFORCEMENT LEARNING

In this paper, we follow the standard centralized training and decentralized execution (CTDE) paradigm (Lowe et al., 2017; Rashid et al., 2018; Foerster et al., 2018; Mahajan et al., 2019). That is, at training time, the learning algorithm has access to all agents' local observations, actions, and the global state. At execution time, i.e., at test time, each individual agent's policy only has access to its own local observation.

The proposed CMAE is applicable to off-policy MARL methods (e.g., Rashid et al., 2018; Lowe et al., 2017; Sunehag et al., 2018; Matignon et al., 2012). In off-policy MARL, exploration policies μ_i, i ∈ {1,...,n}, are responsible for collecting data from the environment. The data, in the form of transition tuples (s^t, o^t, a^t, s^{t+1}, o^{t+1}), is stored in a replay memory D, i.e., D = {(s^t, o^t, a^t, s^{t+1}, o^{t+1})}_t. The target policies are trained using transition tuples from the replay memory.

Algorithm 1: Training with Coordinated Multi-Agent Exploration (CMAE)
  Initialize exploration policies {μ_i}_{i=1}^n, target policies {π_i}_{i=1}^n, counters {c_k}_{k=1}^K;
  Initialize the environment and replay buffer D;
  Initialize α = 1;
  for episode = 1...M do
    Reset the environment, and observe global state s^t and observations o^t = (o_1^t, ..., o_n^t);
    for t = 1...T do
      UpdateCounter({c_k}_{k=1}^K, s^t, o^t);
      Select actions a^t using a mixture of exploration and target policies α μ_i + (1−α) π_i, where α decreases linearly to 0;
      Apply a^t to the environment;
      Observe rewards r^t, state s^{t+1}, and local observations o^{t+1};
      Add transition tuple {s^t, o^t, a^t, s^{t+1}, o^{t+1}, r^t} to D;
      TrainTarget({π_i}_{i=1}^n, D);
    end
    TrainExp({μ_i}_{i=1}^n, {c_k}_{k=1}^K, D);
  end
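A compact Python rendering of Algorithm 1 may help. This is a sketch under our own naming (env, buffer, the helper functions), and the policy mixture α μ_i + (1−α) π_i is made explicit as sampling from the exploration policy with probability α and from the target policy otherwise, which is one reasonable reading of that line.

```python
import random

def cmae_training(env, mu, pi, counters, buffer, n_episodes, horizon):
    """Sketch of Algorithm 1. mu/pi: per-agent exploration/target policies
    with an .act(obs) method; counters: per-subspace visitation counters."""
    alpha, decay = 1.0, 1.0 / n_episodes           # alpha -> 0 linearly
    for episode in range(n_episodes):
        state, obs = env.reset()
        for t in range(horizon):
            update_counters(counters, state)       # assumed helper
            # Mix policies: follow the exploration policy w.p. alpha.
            actions = [
                (mu[i] if random.random() < alpha else pi[i]).act(obs[i])
                for i in range(len(pi))
            ]
            next_state, next_obs, reward = env.step(actions)
            buffer.add((state, obs, actions, next_state, next_obs, reward))
            train_target(pi, buffer)               # off-policy update
            state, obs = next_state, next_obs
        train_exploration(mu, counters, buffer)    # Alg. 2, once per episode
        alpha = max(0.0, alpha - decay)
```

update_counters, train_target, and train_exploration are placeholders for the counter update, the standard off-policy target update, and the exploration-policy update of Alg. 2, respectively.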
The target policies are trained directly using the data within the replay memoryDat each step. Note that the exploration policies will only be updated at the end of each episodeusing a reshaped reward that encourages exploration polices to explore under-explored subspaces ina collaborative manner.In the following we will provide details about how we propose to train the exploration policies.3.1 T RAINING OF EXPLORATION POLICIESTo train the exploration policies μi,i∈ {1,...,n}we use a modified reward ˆr. This modifiedreward specifies the goal of the exploration. For example, in the two-agent push-box task, we specifya specific joint location of both agents and the box as a goal. Note, the agents will ignore all externalrewards and only see positive reward when the goal, i.e., the specified position is reached. Thereward for the goal is set to bwhile the rewards for all other situations are zero.To find the goal situation we use Kcounters ck,k∈ {1,...,K}. A counter ckoperates on alow-dimensional subspace Skof the state space S,i.e.,Sk⊆ S. Occurrence of every configurationsk∈ Skwithin the low-dimensional subspace will be recorded using the current replay buffer D.3Under review as a conference paper at ICLR 2021Algorithm 2: Train Exploration Policies (TrainExp)Input: exploration policies {μi}ni=1, counters {ck}Kk=1, replay buffer D;Initialize bonus b;Compute normalized entropy η(k)of subspace kbased on associated counter ck;k∗= argminkη(k);Sample a batch B={si}Mi=1fromD;g= argmins∈Bck∗(projk∗(s));for{st,ot,a,st+1,ot+1,rt} ∈ D doifst==gthenrt=b;elsert= 0;endUpdate{μi}ni=1by{st,ot,a,st+1,ot+1,rt}endLet projkbe the projection from global state space to the subspace k. Formally, we obtainck(sk) =/summationdisplays∈D✶[projk(s) =sk],where ✶[·]is the indicator function (1 if argument is true; 0 otherwise) and projk(s)denotes therestriction of state s∈ S to subspace Sk. Note that we are successively incrementing the counts,i.e., the counters ckare not recomputed from scratch every time we train the exploration policies.We subsequently normalize the counters ck(sk)into a probability distribution pk(sk) =ck(sk)//summationtextˆsk∈Skck(ˆsk)which is then used to compute a normalized entropy ηk=H/H max=−(/summationtexts∈Skpk(s)logpk(s))/log(|Sk|). We select the subspace k∗with the smallest normalized en-tropy. From this subspace we choose the joint goal state gby first sampling a batch of states Bfromthe replay buffer. From those states we select in a second step the state with the smallest count as thegoal state g,i.e.,g= argmin s∈Bck∗(s). Sampling of states is performed in order to avoid selectingunreachable states as a goal, i.e., we encourage to explore states that we have seen rarely but at leastonce.Given the goal state g, we train the exploration policies μiusing the replay buffer Dmodified by arevised reward ˆr=bifsk∗=g. Note,ˆr= 0otherwise. Consequently, the exploration policies μifocus exclusively on achieving the desired goal g. This strategy is summarized in Alg. 2.As an alternative to the aforementioned subspace selection method, one could use probabilisticsubspace selection, where the probability of a subspace being chosen is inversely proportional to itsnormalized entropy. The two different subspace selection approaches result in different explorationbehaviors. 
3.2 SELECTING SUBSPACES

Which K subspaces do we choose? As discussed in Sec. 2.1, we assume the global state space S to be composed of a set of M component spaces V_i. The number of possible subspaces is equivalent to the size of the powerset, i.e., 2^M. This is clearly intractable.

To address this, we select a subset of subspaces in levels. In each level l, we consider l entities jointly. Recall that entities are agents, objects, etc., that are represented within the state space S. Suppose the global state space has N entity sets A_1,...,A_N. In level l ≤ N, a subspace's component set is the union of l distinct entity sets. Formally, let D_E be the component set of a subspace in level l; we have

$D_E = \bigcup_{i \in E} A_i, \quad \forall E \in \binom{\{1,\dots,N\}}{l},$

where $\binom{\{1,\dots,N\}}{l}$ represents the set of all l-combinations of {1,...,N}.

Note that there are many equivalent component sets in a level if the agents are homogeneous, i.e., if the agents have identical action and observation spaces and are controlled by the same policy. To see this, consider the two-agent push-box task again. The state space S is composed of the component set {V_1, V_2, V_3, V_4, V_5, V_6}, with three entity sets {V_1, V_2}, {V_3, V_4}, {V_5, V_6} representing the locations of agent one, agent two, and the box. Suppose the two agents are homogeneous. The component sets {V_1, V_2, V_5, V_6} and {V_3, V_4, V_5, V_6} are then equivalent, because both consider the locations of one agent and the box jointly. Since the agents are homogeneous, it is irrelevant which agent is considered. Assigning different counters to equivalent subspaces would encourage an exploration policy to visit states that are already visited by fellow homogeneous agents, resulting in less efficient exploration. Therefore, equivalent subspaces share one common counter. The subspace S_k a counter c_k is associated with is defined by $S_k = \prod_{V_i \in D_{E_k}} V_i$, where E_k is the component set of subspace k.

In addition, we also consider level 0, where the component set of each subspace has only one element. Empirically, we found that level-0 subspaces lead to very efficient exploration in some tasks. Note that this strategy for selecting subspaces is still relatively simple and does not scale well. We defer the development of more complex selection strategies to future work. Here we are primarily interested in studying the efficacy of training an exploration strategy with such rewards, which we study in the next section.

4 EXPERIMENTAL RESULTS

We evaluate the proposed CMAE approach in two challenging settings: 1) the sparse-reward cooperative tasks from Wang et al. (2020); 2) the sparse-reward version of the Starcraft multi-agent challenge (SMAC) (Samvelyan et al., 2019). In both settings, agents need to coordinate their behavior over extended periods of time to obtain a reward.

Environments: We first consider the following four tasks in the sparse-reward environments provided by Wang et al. (2020):

• Pass: Two agents operate within two rooms of a 30×30 grid. There is one switch in each room, the rooms are separated by a door, and the agents start in the same room. The door opens only when one of the switches is occupied. The agents see a collective positive reward, and the episode terminates, only when both agents have moved to the other room. The task is considered solved if both agents are in the right room.
• Secret-room: Secret-room extends Pass. There are two agents and four rooms: one large room on the left and three small rooms on the right. There is one door between each small room and the large room. The switch in the large room controls all three doors; the switch in each small room only controls that room's door. The agents need to navigate to one of the three small rooms, i.e., the target room, to receive a positive reward. The grid size is 25×25. The task is considered solved if both agents are in the target room.

• Push-box: There are two agents and one box in a 15×15 grid. Agents need to push the box to the wall to receive a positive reward. The box is heavy, so both agents need to push the box in the same direction at the same time to move it. The task is considered solved if the box is pushed to the wall.

• Island: Two agents, nine treasures, and a wolf operate in a 10×10 grid. Agents get a collective reward of 10 for collecting a treasure and a collective reward of 300 for crushing the wolf. The wolf and the agents have a maximum energy of eight and five, respectively. The energy decreases by one when being attacked; therefore, one agent alone cannot crush the wolf, and the agents need to collaborate to complete the task. The task is considered solved if the wolf is killed.

We also consider four tasks in SMAC (Samvelyan et al., 2019):

• 3m-dense: There are three marines in each team. Agents need to collaboratively eliminate the three marines of the other team. An agent sees a reward of +1 when it causes damage to an enemy's health. A reward of −1 is received when its own health decreases. All rewards are collective. A reward of +200 is obtained when all enemies are eliminated.

• 8m-dense: Similar to 3m-dense, but with eight marines on each team.

• 3m-sparse: Similar to 3m-dense, but the reward is sparse. Agents only see a reward of +1 when all enemies are crushed.

• 8m-sparse: Similar to 3m-sparse, but with eight marines on each team.

[Figure 1: Results on the Pass, Secret room, Push-box, and Island environments (success rate vs. environment steps for Ours, IDQ+RND, IDQ+ε-greedy, EDTI, and EITI).]

Table 1: Environment steps required to achieve the indicated target success rate on the Pass, Secret Room, Push-Box, and Island environments.

Method (target success rate) | Ours | EITI | EDTI | IDQ | IDQ + RND
Pass (80%) | 2.61M±0.10M | 384M±1.2M | 381M±2.8M | >500M | >500M
Secret-Room (80%) | 0.71M±0.05M | 448M±10.0M | 382M±9.4M | >500M | >500M
Push-Box (10%) | 0.52M±0.04M | 307M±2.3M | 160M±12.1M | >500M | >500M
Push-Box (80%) | 0.68M±0.02M | 307M±3.9M | 160M±8.2M | >500M | >500M
Island (20%) | 7.50M±0.12M | 480M±5.2M | 322M±1.4M | >500M | >500M
Island (50%) | 13.9M±0.21M | >500M | >500M | >500M | >500M

Experimental Setup: For the grid-world tasks, we combine CMAE with Q-learning. For Pass, Secret-room, and Push-box, the Q value function is represented via a table. For Island we use a DQN (Mnih et al., 2013; 2015). The Q-function is parameterized by a three-layer perceptron (MLP) with 64 hidden units per layer and ReLU activation function.
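As a concrete reference for the function approximator just described, here is a PyTorch sketch of such a Q-network; the input and output dimensions are placeholders, and "three-layer" is read here as three linear layers (two hidden), which is one plausible interpretation.

```python
import torch.nn as nn

def make_q_network(obs_dim, n_actions):
    """Three-layer MLP with 64 hidden units per layer and ReLU activations,
    mapping an observation to one Q-value per discrete action."""
    return nn.Sequential(
        nn.Linear(obs_dim, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, n_actions),
    )
```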
We compare CMAE with exploration via information-theoretic influence (EITI) and exploration via decision-theoretic influence (EDTI) (Wang et al., 2020), which are the state-of-the-art algorithms on these four tasks. The EITI and EDTI (Wang et al., 2020) results are obtained using the official code. For a more complete comparison, we also show results for independent Q-learning (IDQ) with ε-greedy exploration and for independent Q-learning with a popular single-agent exploration technique, random network distillation (Burda et al., 2019).

For the SMAC tasks, we combine CMAE with the official code for QMIX (Rashid et al., 2018). We compare with MAVEN (Mahajan et al., 2019), QMIX (Rashid et al., 2018), QTRAN (Son et al., 2019), and VDN (Sunehag et al., 2018). All of the aforementioned methods reported impressive performance on the dense-reward version of SMAC. To the best of our knowledge, CMAE is the first to report results on the sparse-reward version of any SMAC task.

For all the experiments, we consider level-0 to level-3 subspaces. Please see the Appendix for more details.

Evaluation Protocol: To assess the efficacy of CMAE, we use the following evaluation procedure: we test the target policies in an independent test environment every 400k environment steps during training. Each test consists of ten testing episodes. We repeat all experiments using three runs with different seeds.

Results: We first compare CMAE with EITI and EDTI on Pass, Secret Room, Push-Box, and Island. The results are summarized in Fig. 1, which shows the test task success rate versus the number of environment steps. We observe CMAE to achieve a 100% success rate on Pass, Secret Room, and Push-Box within 2M environment steps. In contrast, EITI and EDTI (Wang et al., 2020) need more than 300M steps to achieve an 80% success rate (Tab. 1). On Island, CMAE achieves a success rate (capture rate) above 50% within 20M environment steps (Fig. 1). In contrast, EITI and EDTI need more than 480M and 322M steps, respectively, to achieve a 20% success rate (Tab. 1). The main reason EITI and EDTI need many more environment steps is that they require a large number of samples to estimate the influence of one agent's behavior on other agents' behaviors between updates.

[Figure 2: Results on SMAC: 3m-sparse, 8m-sparse, 3m-dense, 8m-dense environments (success rate vs. environment steps for Ours, QMIX, VDN, QTRAN, MAVEN).]
[Figure 3: Normalized entropy of different subspace levels l on the Push-box, Pass, and Island environments (per-entity and joint subspaces such as a1_xy, box_xy, a1_xy + box_xy, etc.).]
Specifically, EITI and EDTI need 64,000 environment steps between updates, which makes them less sample efficient. IDQ with ε-greedy and IDQ with RND do not achieve any success on these tasks.

On SMAC, we compare CMAE with MAVEN (Mahajan et al., 2019), QMIX (Rashid et al., 2018), QTRAN (Son et al., 2019), and VDN (Sunehag et al., 2018). The results are summarized in Fig. 2. On the sparse-reward tasks, MAVEN, QMIX, QTRAN, and VDN reach at most a 2% win rate. In contrast, CMAE achieves a win rate higher than 80%. Recently, Taiga et al. (2020) pointed out that many existing exploration strategies excel in challenging sparse-reward tasks but fail in simple tasks that can be solved by classical methods such as ε-greedy. To ensure CMAE doesn't fail in simpler tasks, we run experiments on the dense-reward SMAC tasks. As shown in Fig. 2, CMAE achieves performance similar to the state-of-the-art baselines on the simpler dense-reward SMAC tasks.

To investigate which subspaces CMAE selects, we plot the normalized entropy of different subspaces in Fig. 3. In the Push-Box task, CMAE mostly chooses the box's location to explore. In the Island task, CMAE mostly explores the health of the wolf. For the Pass task, instead of exploring subspaces, CMAE explores the full global space.

We also compare the exploration behavior of CMAE to ε-greedy using the Secret-room environment. As shown in Fig. 4, early during training, both CMAE and ε-greedy explore only locations in the left room. However, after 1.5M steps, CMAE agents are able to frequently visit the right three rooms, while ε-greedy still mostly visits the left room.

Following the reviewers' suggestions, we also consider a shared count-based bonus on the group observations as a baseline. We study Q-learning with this shared count-based bonus on the group observations for the Secret-room and Push-box tasks. The shared-count method achieves a 5.1%±1.3% and 2.2%±1.1% success rate on Secret-room and Push-box, respectively. In contrast, our approach achieves a 100%±0.1% success rate on both tasks. The training curves are shown in Fig. 5. The count-based bonus method is sub-optimal because the group observation is not necessarily the most efficient subspace to explore. This demonstrates the effectiveness of our subspace selection mechanism.

In addition, to demonstrate that CMAE is applicable to a wide variety of challenging tasks, we conduct experiments on the SMAC 6h_vs_8z-dense (super hard) and 6h_vs_8z-sparse (super hard) tasks, where the opponent AI is set to the 'super hard' level. In 6h_vs_8z-dense (super hard), an agent sees a reward of +1 when it causes damage to an enemy's health; a reward of −1 is received when its own health decreases. In 6h_vs_8z-sparse (super hard), an agent only sees a non-zero reward when an opponent or a teammate is eliminated. We compare our approach to MAVEN (Mahajan et al., 2019), which reports the state-of-the-art results on this task. All approaches are trained for 8M steps. On 6h_vs_8z-sparse (super hard), CMAE achieves a 45.6%±3.2% win rate while MAVEN achieves a 4.3%±0.9% win rate.

[Figure 4: Visitation maps of Ours (CMAE) and the ε-greedy baseline on the Secret-Room environment (agents 1 and 2, at 300K and 1.5M environment steps).]
[Figure 5: Results of our approach and the shared count baseline on Secret-room and Push-box.]
On 6h_vs_8z-dense (super hard), CMAE and MAVEN achieve a 60.9%±1.3% and 61.2%±2.3% success rate, respectively. This illustrates that dense-reward environments tend to be easier than sparse ones. The training curves are shown in Fig. 6.

[Figure 6: Results of our approach and MAVEN on 6h_vs_8z-sparse (super hard) and 6h_vs_8z-dense (super hard).]

5 RELATED WORK

We discuss recently developed methods for exploration in reinforcement learning, multi-agent reinforcement learning, and concurrent reinforcement learning subsequently.

Exploration for Reinforcement Learning: A wide variety of exploration techniques for deep reinforcement learning have been studied, deviating from classical noise-based methods. Generalizations of count-based approaches, which give near-optimal results in tabular reinforcement learning, to environments with continuous state spaces have been proposed. For instance, Bellemare et al. (2016) propose a density model to measure the agent's uncertainty. Pseudo-counts are derived from the density model, which give rise to an exploration bonus encouraging visits to rarely visited states. Inspired by Bellemare et al. (2016), Ostrovski et al. (2017) discussed a neural density model to estimate the pseudo-count, and Tang et al. (2017) use a hash function to estimate the count. Besides count-based approaches, meta-policy gradient (Xu et al., 2018) uses the actor policy's improvement as the reward to train an exploration policy. The resulting exploration policy differs from the actor policy and enables more global exploration. Stadie et al. (2016) propose an exploration strategy based on assigning an exploration bonus from a concurrently learned environment model. Lee et al. (2020) cast exploration as a state marginal matching (SMM) problem and aim to learn a policy for which the state marginal distribution matches a uniform distribution. Other related works on exploration include curiosity-driven exploration (Pathak et al., 2017), diversity-driven exploration (Hong et al., 2018), GEP-PG (Colas et al., 2018), EX2 (Fu et al., 2017), and bootstrapped DQN (Osband et al., 2016). In contrast to our approach, all the techniques mentioned above target single-agent deep reinforcement learning.

Multi-agent Reinforcement Learning: MADDPG (Lowe et al., 2017) uses a central critic that considers other agents' action policies to handle the non-stationary environment issues of the multi-agent setting. DIAL (Foerster et al., 2016) uses an end-to-end differentiable architecture that allows agents to learn to communicate. Jiang & Lu (2018) propose an attentional communication model that learns when communication is helpful in a cooperative setting. Foerster et al. (2017) add a 'fingerprint' to each transition tuple in the replay memory to track the age of the transition tuple and stabilize training. In 'Self-Other-Modeling' (SOM) (Raileanu et al., 2018), an agent uses its own policy to predict other agents' behavior and states. While inter-agent communication (Lowe et al., 2017; Jiang & Lu, 2018; Foerster et al., 2016; Rashid et al., 2018; Omidshafiei et al., 2017; Jain et al., 2019) has been considered, for exploration, multi-agent approaches rely on classical noise-based exploration. As discussed in Sec. 1, a noise-based approach prevents the agents from sharing their understanding of the environment.
A team of cooperative agents with a noise-based exploration policy can only explore local regions that are close to their individual actor policies, which contrasts with the proposed method. Recently, approaches that consider coordinated exploration have been proposed. Multi-agent variational exploration (MAVEN) (Mahajan et al., 2019) introduces a latent space for hierarchical control. Agents condition their behavior on the latent variable to perform committed exploration. Influence-based exploration (Wang et al., 2020) captures the influence of one agent's behavior on others. Agents are encouraged to visit 'interaction points' that change other agents' behavior.

Concurrent Reinforcement Learning: Dimakopoulou & Roy (2018) study coordinated exploration in concurrent reinforcement learning, maintaining an environment model and extending posterior sampling such that agents explore in a coordinated fashion. Parisotto et al. (2019) proposed concurrent meta reinforcement learning (CMRL), which permits a set of parallel agents to communicate with each other and find efficient exploration strategies. The concurrent setting fundamentally differs from the multi-agent setting of our approach. In a concurrent setting, agents operate in different instances of an environment, i.e., one agent's action has no effect on the observations and rewards received by other agents. In contrast, in the multi-agent setting, agents use the same instance of an environment; an agent's action changes the observations and rewards observed by other agents.

6 CONCLUSION

We propose coordinated multi-agent exploration (CMAE). It defines shared goals and learns coordinated exploration policies. We studied subspace selection, which helps to find a goal for efficient exploration. Empirically, we demonstrate that CMAE increases exploration efficiency significantly. Compared to state-of-the-art baselines, CMAE needs only 1-5% of the data to achieve similar or better results on various sparse-reward tasks. We hope this is a first step toward efficient coordinated MARL exploration. Going forward, we will study more complex subspace selection techniques and scale to more agents.
AFPeknCI_oT
Proposed technique has limited applicability
5: Marginally below acceptance threshold
The paper proposes to improve the exponential sample complexity of finding a coordinated multi-agent strategy by learning an exploration policy for each agent that conditions on a shared goal. The exploration policy is mixed with the normal RL policy according to a parameter alpha, which is scaled down over time. The shared goal that agents pursue is selected by using an explicit counter mechanism over objects in the environment. Strengths: - The paper is well written. - The reduction in sample complexity due to this technique is very large. - Algorithms 1 and 2 are clear. Weaknesses: - The proposed counter mechanism relies on being able to manually identify entities in the environment, such as the box in the push-box environment. This has limited applicability to real-world problems with large-dimensional or visual state spaces, in which entities are not obvious a priori. Being able to explicitly count the number of times an agent has experienced an entity in a specific configuration is not a realistic expectation for interesting, real-world problems. Therefore, it is unclear how this method can be applied beyond simple tabular settings and video games. - Similarly, it seems that deciding which subspaces are equivalent requires a significant amount of domain knowledge into each problem, and does not seem to be generally applicable. - Why not benchmark against QMIX + RND, since both are tested independently? Other suggestions: - Typo on p. 3 "tuple is then store in a replay memory" -> stored - Why was the number of steps between updates for EITI and EDTI held constant at 64,000? How many steps between updates were used for the proposed technique?
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Coordinated Multi-Agent Exploration Using Shared Goals ### Paper Abstract Exploration is critical for good results of deep reinforcement learning algorithms and has drawn much attention. However, existing multi-agent deep reinforcement learning algorithms still use mostly noise-based techniques. It was recognized recently that noise-based exploration is suboptimal in multi-agent settings, and exploration methods that consider agents' cooperation have been developed. However, existing methods suffer from a common challenge: agents struggle to identify states that are worth exploring, and don't coordinate their exploration efforts toward those states. To address this shortcoming, in this paper, we proposed coordinated multi-agent exploration (CMAE): agents share a common goal while exploring. The goal is selected by a normalized entropy-based technique from multiple projected state spaces. Then, agents are trained to reach the goal in a coordinated manner. We demonstrated that our approach needs only $1\%-5\%$ of the environment steps to achieve similar or better returns than state-of-the-art baselines on various sparse-reward tasks, including a sparse-reward version of the Starcraft multi-agent challenge (SMAC). ### Paper Keywords ["Multi-agent RL", "Deep RL", "Exploration"] ### Paper Content ABSTRACT

Exploration is critical for good results of deep reinforcement learning algorithms and has attracted much attention. However, existing multi-agent deep reinforcement learning algorithms still use mostly noise-based techniques. It was recognized recently that noise-based exploration is suboptimal in multi-agent settings, and exploration methods that consider agents' cooperation have been developed. However, existing methods suffer from a common challenge: agents struggle to identify states that are worth exploring, and don't coordinate their exploration efforts toward those states. To address this shortcoming, in this paper, we proposed coordinated multi-agent exploration (CMAE): agents share a common goal while exploring. The goal is selected by a normalized entropy-based technique from multiple projected state spaces. Then, agents are trained to reach the goal in a coordinated manner. We demonstrated that our approach needs only 1%-5% of the environment steps to achieve similar or better returns than state-of-the-art baselines on various sparse-reward tasks, including a sparse-reward version of the Starcraft multi-agent challenge (SMAC).

1 INTRODUCTION

Cooperative multi-agent reinforcement learning (MARL) is an increasingly important field. Indeed, many real-world problems are naturally modeled using MARL techniques. For instance, tasks from areas as diverse as robot fleet coordination (Swamy et al., 2020; Hüttenrauch et al., 2019) and autonomous traffic control (Bazzan, 2008; Sunehag et al., 2018) fit MARL formulations.

To address MARL problems, early work followed the independent single-agent reinforcement learning paradigm (Tampuu et al., 2015; Tan, 1993; Matignon et al., 2012).
However, more recently, specifically tailored techniques such as monotonic value function factorization (QMIX) (Rashid et al., 2018), multi-agent deep deterministic policy gradient (MADDPG) (Lowe et al., 2017), and counterfactual multi-agent policy gradients (COMA) (Foerster et al., 2018) have been developed. Those methods excel in a multi-agent setting because they address the non-stationarity issue of MARL and develop communication protocols between agents. Despite those advances and the resulting reported performance improvements, a common issue remained: all of the aforementioned methods use exploration techniques from classical algorithms. Specifically, these methods employ noise-based exploration, i.e., the exploration policy is a noisy version of the actor policy. For instance, Lowe et al. (2017) add Ornstein-Uhlenbeck (OU) (Uhlenbeck & Ornstein, 1930) noise or Gaussian noise to the actor policy. Foerster et al. (2016); Rashid et al. (2018); Yang et al. (2018); Foerster et al. (2017) use variants of ε-greedy exploration, where a random suboptimal action is selected with probability ε.

It was recognized recently that the use of classical exploration techniques is suboptimal in a multi-agent reinforcement learning setting. Specifically, Mahajan et al. (2019) show that QMIX with ε-greedy exploration results in slow exploration and sub-optimality. Mahajan et al. (2019) improve exploration by conditioning an agent's behavior on a shared latent variable controlled by a hierarchical policy. Even more recently, Wang et al. (2020) encourage coordinated exploration by considering the influence of one agent's behavior on other agents' behaviors.

While all of the aforementioned exploration techniques for multi-agent reinforcement learning significantly improve results, they suffer from a common challenge: agents struggle to identify states that are worth exploring, and don't coordinate their exploration efforts toward those states. To give an example, consider a push-box task, where two agents need to jointly push a heavy box to a specific location before observing a reward. In this situation, instead of exploring the environment independently, the agents need to coordinate pushing the box within the environment to find the specific location.

To address this issue, we propose coordinated multi-agent exploration (CMAE), where multiple agents share a common goal. We achieve this by first projecting the joint state space to multiple subspaces. We develop a normalized entropy (Cover & Thomas, 1991)-based technique to select a goal from the under-explored subspaces. Then, exploration policies are trained to reach the goals in a coordinated manner.

To show that CMAE improves results, we evaluate our approach on various sparse-reward environments from Wang et al. (2020), and on the sparse-reward version of the Starcraft multi-agent challenge (SMAC) (Samvelyan et al., 2019), which requires coordinated actions among agents over extended time steps before observing a reward. The experimental results show that our approach needs only 1%-5% of the environment steps to achieve similar or better average test episode returns than current state-of-the-art baselines.

2 PRELIMINARIES

In this section, we define the multi-agent Markov decision process (MDP) in Sec. 2.1 and introduce the multi-agent reinforcement learning setting in Sec. 2.2.

2.1 MULTI-AGENT MARKOV DECISION PROCESS

We model a cooperative multi-agent system as a multi-agent Markov decision process (MDP).
An $n$-agent MDP is defined by a tuple $G = (\mathcal{S}, \mathcal{A}, T, R, \mathcal{Z}, O, n, \gamma, H)$. $\mathcal{S}$ is the global state space of the environment. $\mathcal{A}$ is the action space. At each time step $t$, each agent's policy $\pi_i$, $i \in \{1,\dots,n\}$, selects an action $a_i^t \in \mathcal{A}$. All selected actions form a joint action $a^t \in \mathcal{A}^n$. The transition function $T$ maps the current state $s^t$ and the joint action $a^t$ to the next state $s^{t+1}$, i.e., $T: \mathcal{S} \times \mathcal{A}^n \to \mathcal{S}$. All agents receive a collective reward $r^t \in \mathbb{R}$ according to the reward function $R: \mathcal{S} \times \mathcal{A}^n \to \mathbb{R}$. The goal of all agents' policies is to maximize the collective expected return $\sum_{t=0}^{H} \gamma^t r^t$, where $\gamma \in [0,1]$ is the discount factor, $H$ is the horizon, and $r^t$ is the collective reward obtained at time step $t$. Each agent $i$ observes a local observation $o_i^t \in \mathcal{Z}$ according to the observation function $O: \mathcal{S} \to \mathcal{Z}$. Note, observations usually reveal partial information about the global state. For instance, suppose the global state contains the locations of all agents, while the local observation of an agent may only contain the locations of other agents within a limited distance. All agents' local observations form a joint observation, denoted by $o^t$.

The global state space $\mathcal{S}$ is the product of component spaces $V_i$, i.e., $\mathcal{S} = \prod_{i=1}^{M} V_i$, where $V_i \subseteq \mathbb{R}$ (Samvelyan et al., 2019; Lowe et al., 2017; Rashid et al., 2018; Foerster et al., 2018; Mahajan et al., 2019). We refer to $V_i$ as a 'state component.' The set of all component spaces of a product space is referred to as the component set. For instance, the component set of $\mathcal{S}$ is $\{V_i \mid i \in \{1,\dots,M\}\}$. Each entity, e.g., agents, objects, etc., in the environment is described by a set of state components. We refer to a set of state components that is associated with an entity in the environment as an 'entity set.' For instance, in a 2-agent push-box environment, where two agents can only collaboratively push a box to a goal location, we have the global state space $\mathcal{S} = \prod_{i=1}^{6} V_i$, where $\{V_1, V_2\}$, $\{V_3, V_4\}$, $\{V_5, V_6\}$ represent the locations of agent one, agent two, and the box, respectively. Consequently, $\{V_1, V_2\}$, $\{V_3, V_4\}$, $\{V_5, V_6\}$ are three entity sets.

2.2 MULTI-AGENT REINFORCEMENT LEARNING

In this paper, we follow the standard centralized training and decentralized execution (CTDE) paradigm (Lowe et al., 2017; Rashid et al., 2018; Foerster et al., 2018; Mahajan et al., 2019). That is, at training time, the learning algorithm has access to all agents' local observations, actions, and the global state. At execution time, i.e., at test time, each individual agent's policy only has access to its own local observation.

The proposed CMAE is applicable to off-policy MARL methods (e.g., Rashid et al., 2018; Lowe et al., 2017; Sunehag et al., 2018; Matignon et al., 2012). In off-policy MARL, exploration policies $\mu_i$, $i \in \{1,\dots,n\}$, are responsible for collecting data from the environment.

Algorithm 1: Training with Coordinated Multi-Agent Exploration (CMAE)
    Initialize exploration policies $\{\mu_i\}_{i=1}^n$, target policies $\{\pi_i\}_{i=1}^n$, counters $\{c_k\}_{k=1}^K$;
    Initialize the environment and replay buffer $D$;
    Initialize $\alpha = 1$;
    for episode = 1 ... M do
        Reset the environment, and observe global state $s^t$ and observations $o^t = (o_1^t, \dots, o_n^t)$;
        for t = 1 ... T do
            UpdateCounter($\{c_k\}_{k=1}^K$, $s^t$, $o^t$);
            Select actions $a^t$ using a mixture of exploration and target policies $\alpha\mu_i + (1-\alpha)\pi_i$, where $\alpha$ decreases linearly to 0;
            Apply $a^t$ to the environment;
            Observe rewards $r^t$, state $s^{t+1}$, and local observations $o^{t+1}$;
            Add transition tuple $\{s^t, o^t, a^t, s^{t+1}, o^{t+1}, r^t\}$ to $D$;
            TrainTarget($\{\pi_i\}_{i=1}^n$, $D$);
        end
        TrainExp($\{\mu_i\}_{i=1}^n$, $\{c_k\}_{k=1}^K$, $D$);
    end
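To make the action-selection step of Algorithm 1 concrete, here is a minimal Python sketch of the α-mixture of exploration and target policies. The callables `explore_policy` and `target_policy` and the linear annealing schedule length are illustrative assumptions, not part of the paper's released code.

```python
import random

def select_action(agent_id, obs, alpha, explore_policy, target_policy):
    """Sample from the mixture alpha * mu_i + (1 - alpha) * pi_i.

    With probability alpha the exploration policy mu_i picks the action,
    otherwise the target policy pi_i does. Both arguments are assumed to
    be callables mapping (agent_id, observation) to an action.
    """
    if random.random() < alpha:
        return explore_policy(agent_id, obs)
    return target_policy(agent_id, obs)

def linear_alpha(step, total_steps):
    # alpha decreases linearly from 1 to 0 over training, as in Algorithm 1.
    return max(0.0, 1.0 - step / total_steps)
```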
The data in the form of transition tuples $(s^t, o^t, a^t, s^{t+1}, o^{t+1})$ is stored in a replay memory $D$, i.e., $D = \{(s^t, o^t, a^t, s^{t+1}, o^{t+1})\}_t$. The target policies are trained using transition tuples from the replay memory.

3 COORDINATED MULTI-AGENT EXPLORATION (CMAE)

In the following we first present an overview of CMAE before we discuss the method more formally.

Overview: The goal is to train the target policies $\{\pi_i\}_{i \in \{1,\dots,n\}}$ of $n$ agents to maximize the environment episode return. Classical off-policy algorithms (Lowe et al., 2017; Rashid et al., 2018) typically use a noisy version of the target policies $\pi_i$ as the exploration policies $\mu_i$, i.e., to collect data, actions are selected based on exploration policies $\mu_i$. In contrast, in CMAE, we propose to train the exploration policies with a modified reward. Specifically, target policies are trained to maximize the usual external episode return. In contrast, exploration policies are trained to collect data from subspaces that haven't been well explored. We find this strategy to significantly improve the training of target policies in the multi-agent reinforcement learning setting, because this strategy can encourage multiple agents to jointly explore configurations of the state space.

Alg. 1 summarizes this approach. At each step, a mixture of the exploration policies $\{\mu_i\}_{i=1}^n$ and target policies $\{\pi_i\}_{i=1}^n$ is used to select actions. The resulting experience tuple is then stored in a replay memory $D$. The target policies are trained directly using the data within the replay memory $D$ at each step. Note that the exploration policies are only updated at the end of each episode, using a reshaped reward that encourages exploration policies to explore under-explored subspaces in a collaborative manner.

In the following we provide details about how we propose to train the exploration policies.

3.1 TRAINING OF EXPLORATION POLICIES

To train the exploration policies $\mu_i$, $i \in \{1,\dots,n\}$, we use a modified reward $\hat{r}$. This modified reward specifies the goal of the exploration. For example, in the two-agent push-box task, we specify a specific joint location of both agents and the box as a goal. Note, the agents will ignore all external rewards and only see a positive reward when the goal, i.e., the specified position, is reached. The reward for the goal is set to $b$ while the rewards for all other situations are zero.

To find the goal situation we use $K$ counters $c_k$, $k \in \{1,\dots,K\}$. A counter $c_k$ operates on a low-dimensional subspace $\mathcal{S}_k$ of the state space $\mathcal{S}$, i.e., $\mathcal{S}_k \subseteq \mathcal{S}$. The occurrence of every configuration $s^k \in \mathcal{S}_k$ within the low-dimensional subspace is recorded using the current replay buffer $D$.

Algorithm 2: Train Exploration Policies (TrainExp)
    Input: exploration policies $\{\mu_i\}_{i=1}^n$, counters $\{c_k\}_{k=1}^K$, replay buffer $D$;
    Initialize bonus $b$;
    Compute normalized entropy $\eta(k)$ of subspace $k$ based on associated counter $c_k$;
    $k^* = \arg\min_k \eta(k)$;
    Sample a batch $B = \{s_i\}_{i=1}^M$ from $D$;
    $g = \arg\min_{s \in B} c_{k^*}(\mathrm{proj}_{k^*}(s))$;
    for $\{s^t, o^t, a, s^{t+1}, o^{t+1}, r^t\} \in D$ do
        if $s^t == g$ then $r^t = b$; else $r^t = 0$;
        Update $\{\mu_i\}_{i=1}^n$ with $\{s^t, o^t, a, s^{t+1}, o^{t+1}, r^t\}$
    end

Let $\mathrm{proj}_k$ be the projection from the global state space to the subspace $k$. Formally, we obtain
$$c_k(s^k) = \sum_{s \in D} \mathbb{1}[\mathrm{proj}_k(s) = s^k],$$
where $\mathbb{1}[\cdot]$ is the indicator function (1 if the argument is true; 0 otherwise) and $\mathrm{proj}_k(s)$ denotes the restriction of state $s \in \mathcal{S}$ to subspace $\mathcal{S}_k$.
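As a minimal illustration of the counter $c_k$ defined above, the sketch below maintains visitation counts over a projected subspace with a Python dictionary. The projection is assumed to be a selection of state-component indices (`dims`); that indexing scheme is our assumption for the example, not a detail given in the paper.

```python
from collections import Counter

class SubspaceCounter:
    """Counts occurrences of projected states proj_k(s) seen in the replay buffer."""

    def __init__(self, dims):
        self.dims = dims          # indices of the state components forming subspace S_k
        self.counts = Counter()   # c_k(s^k): visitation count per projected configuration

    def project(self, state):
        # Restriction of the global state to the subspace (assumed index selection).
        return tuple(state[d] for d in self.dims)

    def update(self, state):
        # Counts are incremented as new states arrive, not recomputed from scratch.
        self.counts[self.project(state)] += 1
```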
Note that we are successively incrementing the counts, i.e., the counters $c_k$ are not recomputed from scratch every time we train the exploration policies.

We subsequently normalize the counters $c_k(s^k)$ into a probability distribution $p_k(s^k) = c_k(s^k) / \sum_{\hat{s}^k \in \mathcal{S}_k} c_k(\hat{s}^k)$, which is then used to compute a normalized entropy $\eta_k = H / H_{\max} = -\left(\sum_{s \in \mathcal{S}_k} p_k(s) \log p_k(s)\right) / \log(|\mathcal{S}_k|)$. We select the subspace $k^*$ with the smallest normalized entropy. From this subspace we choose the joint goal state $g$ by first sampling a batch of states $B$ from the replay buffer. From those states we select in a second step the state with the smallest count as the goal state $g$, i.e., $g = \arg\min_{s \in B} c_{k^*}(s)$. Sampling of states is performed in order to avoid selecting unreachable states as a goal, i.e., we encourage exploration of states that we have seen rarely but at least once.

Given the goal state $g$, we train the exploration policies $\mu_i$ using the replay buffer $D$ modified by a revised reward $\hat{r} = b$ if $s^{k^*} = g$. Note, $\hat{r} = 0$ otherwise. Consequently, the exploration policies $\mu_i$ focus exclusively on achieving the desired goal $g$. This strategy is summarized in Alg. 2.

As an alternative to the aforementioned subspace selection method, one could use probabilistic subspace selection, where the probability of a subspace being chosen is inversely proportional to its normalized entropy. The two different subspace selection approaches result in different exploration behaviors. Specifically, probabilistic subspace selection encourages the exploration policies to explore more subspaces, while the smallest-normalized-entropy method focuses on the most under-explored subspace.

3.2 SELECTING SUBSPACES

Which $K$ subspaces do we choose? As discussed in Sec. 2.1, we assume the global state space $\mathcal{S}$ to be composed of a set of $M$ component spaces $V_i$. The number of possible subspaces is equivalent to the size of the powerset, i.e., $2^M$. This is clearly intractable.

To address this, we select a subset of subspaces in levels. In each level $l$, we consider $l$ entities jointly. Recall that entities are agents, objects, etc., that are represented within the state space $\mathcal{S}$. Suppose the global state space has $N$ entity sets $A_1, \dots, A_N$. In level $l \leq N$, a subspace's component set is the union of $l$ distinct entity sets. Formally, let $D_E$ be a component set of a subspace in level $l$; we have
$$D_E = \bigcup_{i \in E} A_i, \quad \forall E \in \binom{\{1,\dots,N\}}{l},$$
where $\binom{\{1,\dots,N\}}{l}$ represents the set of all $l$-combinations of $\{1,\dots,N\}$.

Note that there are many equivalent component sets in a level if agents are homogeneous, i.e., if agents have identical action and observation spaces and are controlled by the same policy. To see this, consider the two-agent push-box task again. The state space $\mathcal{S}$ is composed of the component set $\{V_1, V_2, V_3, V_4, V_5, V_6\}$, with three entity sets $\{V_1, V_2\}$, $\{V_3, V_4\}$, $\{V_5, V_6\}$ representing the locations of agent one, agent two, and the box. Suppose the two agents are homogeneous. The component sets $\{V_1, V_2, V_5, V_6\}$ and $\{V_3, V_4, V_5, V_6\}$ are equivalent, because both of them consider the locations of one agent and the box jointly. Since the agents are homogeneous, it is irrelevant which agent is considered. Assigning different counters to equivalent subspaces would encourage an exploration policy to visit states that are visited by fellow homogeneous agents, which results in less efficient exploration. Therefore, equivalent subspaces share one common counter. A sketch of the entropy-based goal selection and the level-wise subspace enumeration follows below.
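The following is a minimal sketch, under our own assumptions about data layout, of the normalized-entropy subspace selection, the goal choice $g = \arg\min_{s\in B} c_{k^*}(\mathrm{proj}_{k^*}(s))$, and the level-$l$ enumeration of component sets via $l$-combinations of entity sets. It builds on the `SubspaceCounter` sketch above and is not the authors' released implementation.

```python
import math
import random
from itertools import combinations

def normalized_entropy(counter, subspace_size):
    """eta_k = H / H_max = -(sum_s p_k(s) log p_k(s)) / log(|S_k|)."""
    total = sum(counter.counts.values())
    entropy = -sum((c / total) * math.log(c / total)
                   for c in counter.counts.values())
    return entropy / math.log(subspace_size)

def select_goal(counters, subspace_sizes, replay_states, batch_size=64):
    # Pick the most under-explored subspace (smallest normalized entropy).
    k_star = min(range(len(counters)),
                 key=lambda k: normalized_entropy(counters[k], subspace_sizes[k]))
    # Sample a batch of visited states, then take the least-visited projection,
    # so the goal has been seen rarely but at least once.
    batch = random.sample(replay_states, min(batch_size, len(replay_states)))
    goal = min(batch,
               key=lambda s: counters[k_star].counts[counters[k_star].project(s)])
    return k_star, goal

def level_l_component_sets(entity_sets, l):
    """Component sets D_E: unions of l distinct entity sets, one per l-combination."""
    return [frozenset().union(*subset) for subset in combinations(entity_sets, l)]
```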
The subspace $\mathcal{S}_k$ that a counter $c_k$ is associated with is defined by $\mathcal{S}_k = \prod_{V_i \in D_{E_k}} V_i$, where $E_k$ is a component set of subspace $k$.

In addition, we also consider level 0, where the component set of each subspace has only one element. Empirically, we found that level-0 subspaces lead to very efficient exploration in some tasks. Note that this strategy of selecting subspaces is still relatively simple and does not scale well. We defer development of more complex selection strategies to future work. Here we are primarily interested in studying the efficacy of training an exploration strategy with such rewards, which we study in the next section.

4 EXPERIMENTAL RESULTS

We evaluate the proposed CMAE approach on two challenging environments: 1) the sparse-reward cooperative tasks from Wang et al. (2020); 2) the sparse-reward version of the Starcraft multi-agent challenge (SMAC) (Samvelyan et al., 2019). In both environments, agents need to coordinate their behavior over extended periods of time to obtain a reward.

Environments: We first consider the following four tasks on the sparse-reward environments provided by Wang et al. (2020):

- Pass: Two agents operate within two rooms of a 30×30 grid. There is one switch in each room, the rooms are separated by a door, and agents start in the same room. The door opens only when one of the switches is occupied. The agents see a collective positive reward, and the episode terminates, only when both agents have moved to the other room. The task is considered solved if both agents are in the right room.
- Secret-room: Secret-room extends Pass. There are two agents and four rooms: one large room on the left and three small rooms on the right. There is one door between each small room and the large room. The switch in the large room controls all three doors. The switch in each small room only controls that room's door. The agents need to navigate to one of the three small rooms, i.e., the target room, to receive a positive reward. The grid size is 25×25. The task is considered solved if both agents are in the target room.
- Push-box: There are two agents and one box in a 15×15 grid. Agents need to push the box to the wall to receive a positive reward. The box is heavy, so both agents need to push the box in the same direction at the same time to move it. The task is considered solved if the box is pushed to the wall.
- Island: Two agents, nine treasures, and a wolf operate in a 10×10 grid. Agents get a collective reward of 10 for collecting a treasure and a collective reward of 300 for crushing the wolf. The wolf and the agents have maximum energy of eight and five, respectively. The energy decreases by one when being attacked. Therefore, one agent cannot crush the wolf alone; the agents need to collaborate to complete the task. The task is considered solved if the wolf is killed.

We also consider four tasks in SMAC (Samvelyan et al., 2019):

- 3m-dense: There are three marines in each team. Agents need to collaboratively eliminate the three marines on the other team.
An agent sees a reward of +1 when it causes damage to an enemy's health. A reward of -1 is received when its health decreases. All the rewards are collective. A reward of +200 is obtained when all enemies are eliminated.
- 8m-dense: Similar to 3m-dense, but with eight marines on each team.
- 3m-sparse: Similar to 3m-dense, but the reward is sparse. Agents only see a reward of +1 when all enemies are crushed.
- 8m-sparse: Similar to 3m-sparse, but with eight marines on each team.

Experimental Setup: For the grid-world tasks, we combine CMAE with Q-learning. For Pass, Secret-room, and Push-box, the Q value function is represented via a table. For Island we use a DQN (Mnih et al., 2013; 2015). The Q-function is parameterized by a three-layer perceptron (MLP) with 64 hidden units per layer and ReLU activation function (a minimal sketch of this network is given after the evaluation protocol below). We compare CMAE with exploration via information-theoretic influence (EITI) and exploration via decision-theoretic influence (EDTI) (Wang et al., 2020), which are state-of-the-art algorithms on the four tasks. EITI and EDTI (Wang et al., 2020) results are obtained using the official code. For a more complete comparison, we also show the results of independent Q-learning (IDQ) with ε-greedy and independent Q-learning with popular single-agent exploration techniques, such as random network distillation (Burda et al., 2019).

For the SMAC tasks, we combine CMAE with the official code for QMIX (Rashid et al., 2018). We compare with MAVEN (Mahajan et al., 2019), QMIX (Rashid et al., 2018), QTRAN (Son et al., 2019), and VDN (Sunehag et al., 2018). All of the aforementioned methods reported impressive performance on the dense-reward version of SMAC. To our best knowledge, CMAE is the first to report results on the sparse-reward version of any SMAC tasks.

For all the experiments, we consider level-0 to level-3 subspaces. Please see the Appendix for more details.

Evaluation Protocol: To assess the efficacy of CMAE we use the following evaluation procedure: we test the target policies in an independent test environment every 400k environment steps during training. Each test consists of ten testing episodes. We repeat all experiments using three runs with different seeds.
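As a concrete illustration of the Q-network described in the experimental setup (a three-layer MLP with 64 hidden units per layer and ReLU activations), here is a minimal PyTorch sketch. The input/output dimensions and the class name are our assumptions; the paper only specifies the layer sizes and activation.

```python
import torch.nn as nn

class QNetwork(nn.Module):
    """Three-layer MLP Q-function: 64 hidden units per layer, ReLU activations."""

    def __init__(self, obs_dim, num_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_actions),  # one Q-value per discrete action
        )

    def forward(self, obs):
        return self.net(obs)
```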
Results: We first compare CMAE with EITI and EDTI on Pass, Secret Room, Push-Box, and Island. The results are summarized in Fig. 1, where test task success rate versus number of environment steps is shown.

[Figure 1: Test success rate vs. environment steps on the Pass, Secret room, Push-box, and Island environments, comparing Ours, IDQ+RND, IDQ+ε-greedy, EDTI, and EITI.]

| Method (target success rate) | Ours | EITI | EDTI | IDQ | IDQ + RND |
| --- | --- | --- | --- | --- | --- |
| Pass (80%) | 2.61M±0.10M | 384M±1.2M | 381M±2.8M | >500M | >500M |
| Secret-Room (80%) | 0.71M±0.05M | 448M±10.0M | 382M±9.4M | >500M | >500M |
| Push-Box (10%) | 0.52M±0.04M | 307M±2.3M | 160M±12.1M | >500M | >500M |
| Push-Box (80%) | 0.68M±0.02M | 307M±3.9M | 160M±8.2M | >500M | >500M |
| Island (20%) | 7.50M±0.12M | 480M±5.2M | 322M±1.4M | >500M | >500M |
| Island (50%) | 13.9M±0.21M | >500M | >500M | >500M | >500M |

Table 1: Environment steps required to achieve the indicated target success rate on the Pass, Secret Room, Push-Box, and Island environments.

We observe CMAE to achieve a 100% success rate on Pass, Secret Room, and Push-Box within 2M environment steps. In contrast, EITI and EDTI (Wang et al., 2020) need more than 300M steps to achieve an 80% success rate (Tab. 1). In Island, CMAE achieves a success rate (capture rate) above 50% within 20M environment steps (Fig. 1). In contrast, EITI and EDTI need more than 480M and 322M steps to achieve a 20% success rate (Tab. 1). The main reason that EITI and EDTI need many more environment steps is that they require a large number of samples to estimate the influence of one agent's behavior on other agents between each update. Specifically, EITI and EDTI need 64,000 environment steps between each update, which makes them less sample efficient. IDQ with ε-greedy and IDQ with RND do not achieve any success in those tasks.

On SMAC, we compare CMAE with MAVEN (Mahajan et al., 2019), QMIX (Rashid et al., 2018), QTRAN (Son et al., 2019), and VDN (Sunehag et al., 2018). The results are summarized in Fig. 2.

[Figure 2: Results on SMAC: 3m-sparse, 8m-sparse, 3m-dense, and 8m-dense environments, comparing Ours, QMIX, VDN, QTRAN, and MAVEN.]

In sparse-reward tasks, MAVEN, QMIX, QTRAN, and VDN have at most a 2% winning rate. In contrast, CMAE achieves a win rate higher than 80%. Recently, Taiga et al. (2020) pointed out that many existing exploration strategies excel in challenging sparse-reward tasks but fail in simple tasks that can be solved by classical methods such as ε-greedy. To ensure CMAE doesn't fail in simpler tasks, we run experiments on dense-reward SMAC tasks. As shown in Fig. 2, CMAE achieves similar performance to state-of-the-art baselines on the simpler dense-reward SMAC tasks.

To investigate which subspaces CMAE selects, we plot the normalized entropy of different subspaces in Fig. 3.

[Figure 3: Normalized entropy of different levels l on the Push-box, Pass, and Island environments.]

In the Push-Box task, CMAE mostly chooses the box's location to explore. In the Island task, CMAE mostly explores the health of the wolf. For the Pass task, instead of exploring subspaces, CMAE explores the full global space.

We also compare the exploration behavior of CMAE to ε-greedy using the Secret room environment. As shown in Fig. 4, early during training, both CMAE and ε-greedy explore only locations in the left room. However, after 1.5M steps, CMAE agents are able to frequently visit the right three rooms while ε-greedy still mostly visits the left room.

Following the reviewers' suggestions, we also consider a shared count-based bonus on the group observations as a baseline.
We study Q-learning with this shared count-based bonus on the group observations for the Secret-room and Push-box tasks. The shared count method achieves a 5.1%±1.3% and 2.2%±1.1% success rate on Secret-room and Push-box, respectively. In contrast, our approach can achieve a 100%±0.1% success rate in both tasks. The training curves are shown in Fig. 5. The count-based bonus method is sub-optimal because the group observation is not necessarily the most efficient subspace to explore. This demonstrates the effectiveness of our subspace selection mechanism.

[Figure 4: Visitation maps of Ours (CMAE) and the ε-greedy baseline (agents 1 and 2) on the Secret-Room environment, at 300K and 1.5M environment steps.]

[Figure 5: Results of our approach and the shared-count baseline on Secret-room and Push-box.]

In addition, to demonstrate that CMAE is applicable to a wide variety of challenging tasks, we conduct experiments on the SMAC 6hvs8z-dense (super hard) and 6hvs8z-sparse (super hard) tasks, where the opponent agents' AI is set to the 'super hard' level. In 6hvs8z-dense (super hard), an agent sees a reward of +1 when it causes damage to an enemy's health. A reward of -1 is received when its health decreases. In 6hvs8z-sparse (super hard), an agent only sees a non-zero reward when an opponent is eliminated or a teammate is eliminated. We compare our approach to MAVEN (Mahajan et al., 2019), which reports the state-of-the-art results on this task. All approaches are trained for 8M steps. In 6hvs8z-sparse (super hard), CMAE achieves a 45.6%±3.2% win rate while MAVEN achieves a 4.3%±0.9% win rate. In 6hvs8z-dense (super hard), CMAE and MAVEN achieve a 60.9%±1.3% and 61.2%±2.3% success rate, respectively. This illustrates that dense-reward environments tend to be easier than sparse ones. The training curves are shown in Fig. 6.

5 RELATED WORK

We discuss recently developed methods for exploration in reinforcement learning, multi-agent reinforcement learning and concurrent reinforcement learning subsequently.

Exploration for Reinforcement Learning: A wide variety of exploration techniques for deep reinforcement learning have been studied, deviating from classical noise-based methods. Generalizations of count-based approaches, which give near-optimal results in tabular reinforcement learning, to environments with continuous state spaces have been proposed. For instance, Bellemare et al. (2016) propose a density model to measure the agent's uncertainty. Pseudo-counts are derived from the density model, which give rise to an exploration bonus encouraging assessment of rarely visited states. Inspired by Bellemare et al. (2016), Ostrovski et al. (2017) discussed a neural density model to estimate the pseudo-count, and Tang et al. (2017) use a hash function to estimate the count.

Besides count-based approaches, meta-policy gradient (Xu et al., 2018) uses the actor policy's improvement as the reward to train an exploration policy. The resulting exploration policy differs from the actor policy, and enables more global exploration. Stadie et al. (2016) propose an exploration strategy based on assigning an exploration bonus from a concurrently learned environment model.
Lee et al. (2020) cast exploration as a state marginal matching (SMM) problem and aim to learn a policy for which the state marginal distribution matches a uniform distribution. Other related works on exploration include curiosity-driven exploration (Pathak et al., 2017), diversity-driven exploration (Hong et al., 2018), GEP-PG (Colas et al., 2018), EX2 (Fu et al., 2017), and bootstrapped DQN (Osband et al., 2016). In contrast to our approach, all the techniques mentioned above target single-agent deep reinforcement learning.

[Figure 6: Results of our approach and MAVEN on 6hvs8z-sparse (super hard) and 6hvs8z-dense (super hard).]

Multi-agent Reinforcement Learning: MADDPG (Lowe et al., 2017) uses a central critic that considers other agents' action policies to handle the non-stationary environment issues in the multi-agent setting. DIAL (Foerster et al., 2016) uses an end-to-end differentiable architecture that allows agents to learn to communicate. Jiang & Lu (2018) propose an attentional communication model that learns when communication is helpful in a cooperative setting. Foerster et al. (2017) add a 'fingerprint' to each transition tuple in the replay memory to track the age of the transition tuple and stabilize training. In 'Self-Other-Modeling' (SOM) (Raileanu et al., 2018), an agent uses its own policy to predict other agents' behavior and states.

While inter-agent communication (Lowe et al., 2017; Jiang & Lu, 2018; Foerster et al., 2016; Rashid et al., 2018; Omidshafiei et al., 2017; Jain et al., 2019) has been considered, for exploration, multi-agent approaches rely on classical noise-based exploration. As discussed in Sec. 1, a noise-based approach prevents the agents from sharing their understanding of the environment. A team of cooperative agents with a noise-based exploration policy can only explore local regions that are close to their individual actor policy, which contrasts with the proposed method.

Recently, approaches that consider coordinated exploration have been proposed. Multi-agent variational exploration (MAVEN) (Mahajan et al., 2019) introduces a latent space for hierarchical control. Agents condition their behavior on the latent variable to perform committed exploration. Influence-based exploration (Wang et al., 2020) captures the influence of one agent's behavior on others. Agents are encouraged to visit 'interaction points' that will change other agents' behaviour.

Concurrent Reinforcement Learning: Dimakopoulou & Roy (2018) study coordinated exploration in concurrent reinforcement learning, maintaining an environment model and extending posterior sampling such that agents explore in a coordinated fashion. Parisotto et al. (2019) proposed concurrent meta reinforcement learning (CMRL), which permits a set of parallel agents to communicate with each other and find efficient exploration strategies. The concurrent setting fundamentally differs from the multi-agent setting of our approach. In a concurrent setting, agents operate in different instances of an environment, i.e., one agent's action has no effect on the observations and rewards received by other agents. In contrast, in the multi-agent setting, agents use the same instance of an environment.
An agent's action changes the observations and rewards observed by other agents.

6 CONCLUSION

We propose coordinated multi-agent exploration (CMAE). It defines shared goals and learns coordinated exploration policies. We studied subspace selection, which helps to find a goal for efficient exploration. Empirically, we demonstrate that CMAE increases exploration efficiency significantly. Compared to state-of-the-art baselines, CMAE needs only 1-5% of the data to achieve similar or better results on various sparse-reward tasks. We hope this is a first step toward efficient coordinated MARL exploration. Going forward we will study more complicated subspace selection techniques and scale to more agents.<|im_end|> <|im_start|>assistant ### Review Title Proposed technique has limited applicability ### Review Text The paper proposes to improve the exponential sample complexity of finding a coordinated multi-agent strategy by learning an exploration policy for each agent that conditions on a shared goal. The exploration policy is mixed with the normal RL policy according to a parameter alpha, which is scaled down over time. The shared goal that agents pursue is selected by using an explicit counter mechanism over objects in the environment.

Strengths:
- The paper is well written.
- The reduction in sample complexity due to this technique is very large.
- Algorithms 1 and 2 are clear.

Weaknesses:
- The proposed counter mechanism relies on being able to manually identify entities in the environment, such as the box in the push-box environment. This has limited applicability to real-world problems with large-dimensional or visual state spaces, in which entities are not obvious a priori. Being able to explicitly count the number of times an agent has experienced an entity in a specific configuration is not a realistic expectation for interesting, real-world problems. Therefore, it is unclear how this method can be applied beyond simple tabular settings and video games.
- Similarly, it seems that deciding which subspaces are equivalent requires a significant amount of domain knowledge of each problem, and does not seem to be generally applicable.
- Why not benchmark against QMIX + RND, since both are tested independently?

Other suggestions:
- Typo on p. 3 "tuple is then store in a replay memory" -> stored
- Why was the number of steps between updates for EITI and EDTI held constant at 64,000? How many steps between updates were used for the proposed technique? ### Review Rating 5: Marginally below acceptance threshold ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
3IKKBxByalk
ICML.cc/2020/Workshop/SAS
2020
Adversarial representation learning for private speech generation
["David Ericsson", "Adam \u00d6stberg", "Edvin Listo Zec", "John Martinsson", "Olof Mogren"]
As more data is collected in various settings across organizations, companies, and countries, there has been an increase in the demand for user privacy. Developing privacy-preserving methods for data analytics is thus an important area of research. In this work we present a model based on generative adversarial networks (GANs) that learns to obfuscate specific sensitive attributes in speech data. We train a model that learns to hide sensitive information in the data, while preserving the meaning in the utterance. The model is trained in two steps: first to filter sensitive information in the spectrogram domain, and then to generate new and private information independent of the filtered one. The model is based on a U-Net CNN that takes mel-spectrograms as input. A MelGAN is used to invert the spectrograms back to raw audio waveforms. We show that it is possible to hide sensitive information such as gender by generating new data, trained adversarially to maintain utility and realism.
["adversarial networks", "representation learning", "speech", "audio", "privacy"]
Adversarial representation learning for private speech generation

David Ericsson*1,2, Adam Östberg*1,2, Edvin Listo Zec2, John Martinsson2, Olof Mogren2

*Equal contribution. 1Chalmers University of Technology, Gothenburg, Sweden. 2RISE Research Institutes of Sweden. Correspondence to: David Ericsson <daverics@chalmers.se>, Adam Östberg <adamostberg@hotmail.com>, Edvin Listo Zec <edvin.listo.zec@ri.se>. Published at the workshop on Self-supervision in Audio and Speech at the 37th International Conference on Machine Learning, Vienna, Austria. Copyright 2020 by the author(s).

Abstract

As more data is collected in various settings across organizations, companies, and countries, there has been an increase in the demand for user privacy. Developing privacy-preserving methods for data analytics is thus an important area of research. In this work we present a model based on generative adversarial networks (GANs) that learns to obfuscate specific sensitive attributes in speech data. We train a model that learns to hide sensitive information in the data, while preserving the meaning in the utterance. The model is trained in two steps: first to filter sensitive information in the spectrogram domain, and then to generate new and private information independent of the filtered one. The model is based on a U-Net CNN that takes mel-spectrograms as input. A MelGAN is used to invert the spectrograms back to raw audio waveforms. We show that it is possible to hide sensitive information such as gender by generating new data, trained adversarially to maintain utility and realism.

1. Introduction

With greater availability of computing power and large datasets, machine learning methods are increasingly being used to gain insights and make decisions based on data. While providing valuable insights, the methods may extract sensitive information which the provider of the data did not intend to disclose. An example of this is digital voice assistants. The user provides commands by speaking, and the speech is recorded through a microphone. A speech processing algorithm infers the spoken contents and executes the commands accordingly. However, it has been shown that such state-of-the-art methods may infer other sensitive attributes as well, such as intention, gender, emotional state, identity and many more (Srivastava et al., 2019). This raises the question of how to learn representations of data for such applications which are useful for the intended purpose while respecting the privacy of people.

Speakers' identities can often be inferred based on features such as timbre, pitch, and speaker style. Voice morphing techniques focus on making it difficult to infer information from these attributes by altering properties such as pitch and intensity. However, this often limits the utility of the signal by altering intonation or variability. Voice conversion approaches instead aim to mimic a specific speaker. In contrast, this paper aims at modelling a distribution over plausible speakers, given the current input signal, while hiding sensitive attributes.

In this paper, we approach the task of privacy-ensuring voice transformations using an adversarial learning set-up. Generative adversarial networks (GANs) were proposed as tractable generative models (Goodfellow et al., 2014), but have also been adapted to transform data and to provide privacy in the image domain (Huang et al., 2018). We build on these findings and propose PCMelGAN, a two-step GAN set-up similar to that of Martinsson et al. (2020), which works in the mel-spectrogram domain.
The set-up consists of a filter module which removes sensitive information, and a generator module which adds synthetic information in its place. The proposed method can successfully obfuscate sensitive attributes in speech data and generates realistic speech independent of the sensitive input attribute. Our results for censoring the gender attribute on the AudioMNIST dataset demonstrate that the method can maintain a high level of utility, i.e., retain qualities such as intonation and content, while obtaining strong privacy.

In our experiments, the filter module makes it difficult for an adversary to infer the gender of the speaker, and the generator module randomly assigns a synthetic value for the gender attribute which is used when generating the output. However, the proposed method is designed to be able to censor any attribute of a categorical nature. The proposed solution is agnostic to the downstream task, with the objective to make the data as private as possible given a distortion constraint.

2. Related work

Adversarial representation learning. Research within adversarial learning aims to train two or more models simultaneously with conflicting objective functions: one network which is trained on the main task, and one adversary network that is trained to identify the other network's output. Within the image domain, adversarial learning has had large success in a wide variety of tasks since the introduction of generative adversarial networks (GANs) (Goodfellow et al., 2014). Examples of such tasks are image-to-image transformations (Isola et al., 2017), and synthesis of facial expressions and human pose (Song et al., 2017; Tang et al., 2019).

Much less work with GANs has been done related to speech and audio. Pascual et al. (2017) introduce SEGAN (speech enhancement GAN) and thus seem to be the first to apply GANs to the task of speech generation and enhancement. The authors train a model end-to-end, working on the raw audio signal directly. Higuchi et al. (2017) and Qin & Jiang (2018) use adversarial learning to perform speech enhancement for automatic speech recognition (ASR). Donahue et al. (2018) study the benefit of GAN-based speech enhancement for ASR by extending SEGAN to operate on a time-frequency representation.

While these works apply GANs to tackle challenges within speech, they are limited to a supervised setting. The two most notable works in an unsupervised setting are Donahue et al. (2019) and Engel et al. (2019). Donahue et al. (2019) focus on learning representations in an adversarial manner in order to synthesize audio data both at the waveform and spectrogram level, but still show that it is a challenging task, concluding that most perceptually-informed spectrograms are non-invertible.

Intermediate speech representations. It is challenging to work on raw waveforms when modeling audio data, due to a high temporal resolution but also a complex relationship between short-term and long-term dependencies. This leads to most work being done in a lower-dimensional representation domain, usually a spectrogram. Two common intermediate speech representations are aligned linguistic features (Oord et al., 2016) and mel-spectrograms (Shen et al., 2018; Gibiansky et al., 2017). The mel scale is a nonlinear frequency scale that is linear in terms of human perception. It has the benefit of emphasizing differences in lower frequencies, which are important to humans. At the same time, it puts less weight on high-frequency details, which typically consist of different bursts of noise that do not need to be as distinguishable.
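To make the mel scale concrete, the following sketch converts between hertz and mel using the common HTK formula, m = 2595 log10(1 + f/700); this specific formula is a standard convention we assume here, not one stated in the paper.

```python
import math

def hz_to_mel(f_hz):
    """HTK-style hertz-to-mel conversion: m = 2595 * log10(1 + f / 700)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    """Inverse conversion back to hertz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# The mapping compresses high frequencies: equal mel steps cover
# ever-wider hertz ranges as frequency grows.
print(hz_to_mel(1000.0))             # ~1000 mel by construction of the scale
print(mel_to_hz(hz_to_mel(4000.0)))  # round-trip: 4000.0
```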
Engel et al. (2019) train a GAN to synthesize magnitude-phase spectrograms of note records for different musical instruments. Kumar et al. (2019) tackle the problem of non-invertible spectrograms by introducing MelGAN: a fully convolutional model designed to invert mel-spectrograms to raw waveforms.

Adversarial representation learning for privacy. Adversarial representation learning has also been studied as a method of preserving privacy. More specifically, it has been used with the goal of hiding sensitive attributes under some utility constraint. This work has mainly focused on images and/or videos, and some tasks related to text data (Zhang et al., 2018; Xie et al., 2017; Beutel et al., 2017; Raval et al., 2017).

To our knowledge, Srivastava et al. (2019) are the first to apply privacy-related adversarial representation learning to audio data. The authors study the problem of protecting the speaker identity of a person based on an encoded representation of their speech. The encoder is trained for an automatic speech recognition (ASR) task. While the authors manage to hide the speaker identity to some extent, their method also relies on knowing labels for the downstream task.

In the works of Edwards & Storkey (2016), Huang et al. (2018) and Martinsson et al. (2020), the authors apply adversarial representation learning to censor images, without using any downstream task labels.

Voice conversion. Voice conversion algorithms aim to learn a function that maps acoustic features from a source speaker X to a target speaker Y. Some notable works on this involving GANs are Hsu et al. (2017), Pasini (2019), Kameoka et al. (2018), and Kaneko et al. (2019). Similar to Kameoka et al. (2018), we do not require any parallel utterances, transcriptions, or time alignment for the speech generation part. Qian et al. (2018) and Aloufi et al. (2019) use voice conversion to study privacy in speech. However, these works differ from ours by having a target speaker to which they convert the voices of the input speakers.

3. Problem setting

3.1. Private conditional GAN

Private conditional GAN (PCGAN) (Martinsson et al., 2020) is a model that builds upon the generative adversarial privacy (GAP) framework described by Huang et al. (2017; 2018). Both works study adversarial representation learning for obfuscating sensitive attributes in images. The authors of PCGAN show that adding a generator to the filter model in the GAP framework strengthens privacy while maintaining utility. The filter network obfuscates the sensitive attribute $s$ in the image, and the objective of the generator is to take the filtered image $x'$ as input and generate a new synthetic instance of the sensitive attribute $s'$ in it, independent of the original $s$.

The filter and the generator networks are trained against their respective discriminators $D_F$ and $D_G$ in an adversarial set-up. The discriminator $D_F$ is trained to predict $s$ in the transformed image $x'$, while the filter $F$ is trained to transform images that fool the discriminator. The training objective of the filter can be described with the following minimax setup:
$$\min_{F} \max_{D_F} \; \mathbb{E}_{x, z_1}\left[-\ell_F\left(D_F(F(x, z_1)), s\right)\right] \quad \text{s.t.} \quad \mathbb{E}_{x, z_1}\left[d(F(x, z_1), x)\right] \leq \varepsilon_1 \tag{1}$$
where $\varepsilon_1 \geq 0$ denotes the allowed distortion in the transformation performed by the filter.
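Below is a minimal PyTorch sketch of how the filter's constrained objective (1) can be optimized in practice, using the quadratic-penalty relaxation the authors describe in Sec. 3.3 (adversarial loss plus a term of the form lambda * max(distortion - epsilon, 0)^2). The tensor shapes and the argument names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def filter_loss(filter_net, disc_f, x_mel, s, eps, lam):
    """Adversarial filter loss with a quadratic penalty on the L1 distortion.

    The filter maximizes the discriminator's cross-entropy on the sensitive
    attribute s (hence the minus sign), subject to a soft distortion budget eps.
    x_mel is assumed to be a batch of mel-spectrograms of shape (B, 1, F, T),
    and s a LongTensor of attribute labels.
    """
    z1 = torch.randn(x_mel.size(0), 1, x_mel.size(2), x_mel.size(3))  # noise input
    x_filtered = filter_net(torch.cat([x_mel, z1], dim=1))
    adv = -F.cross_entropy(disc_f(x_filtered), s)       # fool D_F about s
    distortion = (x_filtered - x_mel).abs().mean()      # L1 distortion d(x', x)
    penalty = lam * torch.clamp(distortion - eps, min=0.0) ** 2
    return adv + penalty
```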
The purpose of the generator $G$ is to generate a synthetic $s'$, independent of the original $s$. Its discriminator, $D_G$, takes as input a real image or an image generated by $G$, and is trained to predict $s$ in the first case and to predict "fake" in the second, as in the semi-supervised learning setup of Salimans et al. (2016). This setup is defined with the following minimax game:
$$\min_{G} \max_{D_G} \; \mathbb{E}_{x, s', z_1, z_2}\left[-\ell_G\left(D_G(G(F(x, z_1), s', z_2)), \text{fake}\right)\right] + \mathbb{E}_{x}\left[-\ell_G\left(D_G(x), s\right)\right] \tag{2}$$
$$\text{s.t.} \quad \mathbb{E}_{x, s', z_1, z_2}\left[d(G(F(x, z_1), s', z_2), x)\right] \leq \varepsilon_2$$
where $\varepsilon_2 \geq 0$ is the allowed distortion in the transformation performed by the generator.

3.2. MelGAN

MelGAN is a non-autoregressive feed-forward convolutional model which is trained to invert mel-spectrograms to raw waveforms (Kumar et al., 2019). The MelGAN generator consists of a stack of transposed convolutional layers, and the model uses three different discriminators which each operate at a different resolution of the raw audio. The discriminators are trained using a hinge-loss version (Lim & Ye, 2017) of the original GAN objective. The generator is trained using the original GAN objective combined with a feature matching loss (Larsen et al., 2015), which minimizes the L1 distance between the discriminator feature maps of real and synthetic audio.

For each layer $i$, let $D_k^{(i)}(\cdot)$ denote the output from the $k$th discriminator. The feature matching loss is computed as
$$\mathcal{L}_{FM}(G, D_k) = \mathbb{E}_{x, m}\left[\sum_{i} \frac{1}{N_i} \left\lVert D_k^{(i)}(x) - D_k^{(i)}(G(m)) \right\rVert_1\right]$$
where $N_i$ is the number of output units in layer $i$, $x$ is the raw audio signal and $m$ is its corresponding mel-spectrogram. The training objectives for the discriminators are then formulated as:
$$\min_{D_k} \; \mathbb{E}_{x}\left[\min(0, 1 - D_k(x))\right] + \mathbb{E}_{m, z}\left[\min(0, 1 + D_k(G(m, z)))\right] \tag{3}$$
The generator objective is:
$$\min_{G} \; \mathbb{E}_{m, z}\left[\sum_{k=1}^{3} -D_k(G(m, z))\right] + \lambda \sum_{k=1}^{3} \mathcal{L}_{FM}(G, D_k) \tag{4}$$
where $\lambda$ is a hyperparameter controlling the balance between the feature matching and fooling the discriminators.

3.3. Our contribution

Notation. Let $s \in \{0, 1\}$ be a binary sensitive attribute, and $s' \sim \mathcal{U}\{0, 1\}$. Let $z \in \mathcal{Z}$ be a noise vector, $x \in \mathcal{X}$ a raw waveform and $m \in \mathcal{M}$ a mel-spectrogram representation of $x$. Let $D$ be a discriminator, $F: \mathcal{M} \times \mathcal{Z}_1 \to \mathcal{M}'$ a filter network and $G: \mathcal{M}' \times \mathcal{Z}_2 \to \mathcal{M}''$ a generator. Let $\mathcal{X}'$ and $\mathcal{X}''$ denote the MelGAN-inverted sets of $\mathcal{M}'$ and $\mathcal{M}''$. Each $x$ is paired with a sensitive attribute: $(x_i, s_i)$. Each sample $(x_i, s_i)$ has a corresponding utility attribute $u_i$, used only for evaluation. In our case this is the spoken digit in the recording, i.e., $u_i \in \{0, \dots, 9\}$.

In this work we combine PCGAN and MelGAN to adversarially learn private representations of speech data, and name our model PCMelGAN. The whole pipeline is shown in Figure 1. The speech recording $x$ is mapped to a mel-spectrogram $m$. PCGAN, with its filter and generator modules $F$ and $G$, is trained to ensure privacy in the mel-spectrogram. We use a pre-trained MelGAN to invert the mel-spectrogram output of our model $m'' \in \mathcal{M}''$ to a raw waveform $x'' \in \mathcal{X}''$.

We implement $F$ and $G$ using a U-Net architecture similar to Martinsson et al. (2020). For $D_F$ and $D_G$ we use the AlexNet architecture (Krizhevsky et al., 2012), as used in Becker et al. (2018), for gender classification in the spectrogram domain. We use categorical cross entropy as the loss functions denoted by $\ell_F$ and $\ell_G$. The L1-norm is used as the distortion measure $d$. The constrained optimization problem is reformulated as an unconstrained one by relaxing it using the quadratic penalty method (Nocedal & Wright, 2006). The distortion constraint is denoted by $\varepsilon$ and the penalty parameter by $\lambda$. The parameters are updated using Adam (Kingma & Ba, 2014).

As a baseline comparison, we use PCMelGAN where the generator module is excluded. Thus we can directly see how much the generator module adds to the privacy task.

[Figure 1: Schematic diagram of our model, PCMelGAN: the raw waveform x is mapped by an STFT to a mel-spectrogram m, filtered by F (with noise z1) into m', transformed by G (with sampled attribute s' and noise z2) into m'', with discriminators D_F and D_G attached, and finally inverted by the MelGAN M to a waveform x''.]
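As an illustration of the feature-matching term from Sec. 3.2 above, here is a minimal PyTorch sketch. It assumes each discriminator can return its list of intermediate feature maps; how those lists are produced is our assumption, since the real MelGAN implementation organizes this differently.

```python
import torch

def feature_matching_loss(disc_feats_real, disc_feats_fake):
    """L_FM for one discriminator: per-layer mean L1 distance between the
    feature maps of real and generated audio, as in Sec. 3.2.

    disc_feats_real / disc_feats_fake: lists of tensors D_k^{(i)}(x), one per layer i.
    """
    loss = 0.0
    for feat_real, feat_fake in zip(disc_feats_real, disc_feats_fake):
        # The 1/N_i scaling is handled by the mean over all elements of the map.
        loss = loss + torch.mean(torch.abs(feat_real - feat_fake))
    return loss
```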
4. Experiments

4.1. Data

We use the AudioMNIST dataset to conduct our experiments (Becker et al., 2018). AudioMNIST consists of 30,000 audio recordings, approximately 9.5 hours, of spoken digits (0-9) in English. Each digit is repeated 50 times for each of the 60 different speakers. The audio files have a sampling frequency of 48 kHz and are saved in a 16-bit integer format. The audio recordings are also labeled with information such as the age, gender, origin and accent of the speakers.

In this paper, we use 10,000 samples as a training set and 2,000 samples as a test set. For the training set, we randomly sample speakers such that it consists of 10 female and 10 male speakers. Similarly, the test set consists of 2 female and 2 male speakers. We downsample the recordings to 8 kHz and use zero padding to get an equal length of 8192 samples for each recording.

4.2. Data-driven implementation

To encourage reproducibility, we make our code publicly available at https://github.com/daverics/pcmelgan. The model is trained end-to-end, with the hyperparameters $\alpha_{D_F} = \alpha_{D_G} = 0.0004$, $\alpha_F = \alpha_G = 0.0004$, $\lambda = 10^2$, $\varepsilon \in \{0.005, 0.01, 0.05, 0.1\}$ and $(\beta_1, \beta_2) = (0.5, 0.9)$. During training, $m$ is computed using the short-time Fourier transform with a window size of 1024, a hop length of 256 and 80 mel bins. We normalize and clip the spectrograms to $[-1, 1]$ as in Donahue et al. (2019), with the exception that the normalization is performed on the whole spectrogram as opposed to for each frequency bin.

4.3. Evaluation

For each configuration of hyperparameters, we train the model using five different random seeds for 1000 epochs on an NVIDIA V100 GPU. We evaluate the experiments both in the spectrogram and in the raw waveform domain. In each domain, we train digit and gender classifiers on the corresponding training sets, $X_{train}$ and $M_{train}$. The classifiers that predict gender are used as a privacy measure, and the classifiers that predict spoken digits are used as a utility measure. We evaluate the fixed classifiers on $M'_{test}$ and $M''_{test}$, to directly compare the benefit added by a generator module on top of the filter.

We also measure the quality of the generated audio using the Fréchet Inception Distance (FID) (Heusel et al., 2017). FID is frequently used to measure the quality of GAN-generated images. Since we are interested in measuring generated audio quality, we replace the commonly used Inception v3 network with an AudioNet (Becker et al., 2018) digit classifier, using the features from the last convolutional layer.
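A minimal sketch of the Fréchet distance computation underlying FID (Sec. 4.3): Gaussian statistics are fitted to embedding sets from real and generated audio, and the distance ||mu1 - mu2||^2 + Tr(Sigma1 + Sigma2 - 2(Sigma1 Sigma2)^{1/2}) is evaluated. Here the embeddings are assumed to come from the last convolutional layer of the AudioNet digit classifier, as described above.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """FID between two sets of classifier embeddings, each of shape (n_samples, dim)."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    covmean = covmean.real  # discard tiny imaginary parts from numerical error
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```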
Privacy Utility"Baseline PCMelGAN Baseline PCMelGAN0.005 49:92:2 48:72:484:12:8 81:13:70.01 55:04:7 50:91:479:94:3 78:87:80.05 61:310:2 51:00:780:98:2 54:723:80.1 48:91:0 49:80:529:17:5 15:15:40.005 52:23:6 49:11:636:84:0 49:49:80.01 53:23:2 51:31:634:38:5 49:28:60.05 61:58:1 51:20:728:015:8 31:310:30.1 51:01:3 49:60:411:41:7 15:82:3In Table 2, FID scores are shown for our model working inthe audio domain. In figure 3, a recording of a woman say-ing ”zero” is shown, together with the baseline (filter) andPCMelGAN generating a male and a female spectrogram.Table 2. The mean FID-score and standard deviation of the testsetsX0testandX00testfor different ". A lower value corresponds tomore realistic audio.Dist. FID Audio" Baseline PCMelgan0.005 20:174:0410:123:150.01 27:274:5010:022:270.05 29:595:7720:224:870.141:503:4922:325:20Qualitative results. We provide samples from the AudioM-NIST test set that were transformed by our model2. Theshared folder contains original sound clips and their corre-sponding transformed versions.2https://www.dropbox.com/sh/oangx84ibhzodhs/AAAfG-PBW4Ne8KwdipAmKFy1a?dl=0Adversarial representation learning for private speech generationFigure 2. Privacy vs utility trade-off for the baseline and PCMelGAN for varying ". Orange and blue points correspond to evaluating thefixed classifiers for digits and gender on the spectrogram datasets M0testandM00test(left), and raw waveform datasets X0testandX00test(right). Lower right corner is better.Figure 3. Spectrograms of saying ”zero”. The original recording ofa female (top left), transformed ones from the baseline (top right),and our model of a sampled male (bottom left) and a sampledfemale (bottom right).6. DiscussionTable 1 (top) and Figure 2 (left) demonstrate that the pro-posed method achieves strong privacy while working on themel-spectrogram domain, and retains a strong utility preser-vation. We notice in Table 1 (bottom left) and in Figure 2(right) that the proposed method is able to provide privacyin the audio domain, but to a loss of utility. However, whencomparing to the baseline, we see that generating a syntheticsboth increases utility and ensures privacy. In the spectro-gram domain, the filter model seems to be enough to obtainboth privacy and utility. In both the spectrogram domainand the audio domain, the proposed approach achieves highprivacy. We assume that the privacy will suffer from havinga stricter distortion budget ", but this was not observed inthe experiments. While a quick sanity check with "= 105resulted in the model learning the identity map (with noadditional privacy), more experiments need to be carriedout to detect when privacy starts to deteriorate with lower ".It is worth noting that for some "we have a large standarddeviation. We hypothesize that this could be improved byusing more diverse data, and future work should includeevaluating the proposed method on longer sentences.In Table 2 we noticed that our model obtains substantiallybetter FID scores than the baseline in the audio domain. Weconclude that adding the synthetic sample of the sensitiveattribute improves the realism and fidelity of the speechsignal. We observe this also from listening to the generatedsounds (see qualitative results above).7. ConclusionsIn this work we have proposed an adversarially trainedmodel that learns to make speech data private. We do this byfirst filtering a sensitive attribute, and then generating a new,independent sensitive attribute. 
We formulate this as an unconstrained optimization problem with a distortion budget. This is done in the spectrogram domain, and we use a pre-trained MelGAN to invert the generated mel-spectrogram back to a raw waveform. We compare our model with the baseline of just censoring the attribute, and show that we gain both privacy and utility by generating a new sensitive attribute in the audio domain.

References

Aloufi, R., Haddadi, H., and Boyle, D. Emotionless: Privacy-preserving speech analysis for voice assistants. arXiv preprint arXiv:1908.03632, 2019.
Becker, S., Ackermann, M., Lapuschkin, S., Müller, K.-R., and Samek, W. Interpreting and explaining deep neural networks for classification of audio signals. CoRR, abs/1807.03418, 2018.
Beutel, A., Chen, J., Zhao, Z., and Chi, E. H. Data decisions and theoretical implications when adversarially learning fair representations. arXiv preprint arXiv:1707.00075, 2017.
Donahue, C., Li, B., and Prabhavalkar, R. Exploring speech enhancement with generative adversarial networks for robust speech recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5024-5028. IEEE, 2018.
Donahue, C., McAuley, J., and Puckette, M. Adversarial audio synthesis. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=ByMVTsR5KQ.
Edwards, H. and Storkey, A. J. Censoring representations with an adversary. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016.
Engel, J., Agrawal, K. K., Chen, S., Gulrajani, I., Donahue, C., and Roberts, A. GANSynth: Adversarial neural audio synthesis. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=H1xQVn09FX.
Gibiansky, A., Arik, S., Diamos, G., Miller, J., Peng, K., Ping, W., Raiman, J., and Zhou, Y. Deep Voice 2: Multi-speaker neural text-to-speech. In Advances in Neural Information Processing Systems, pp. 2962-2970, 2017.
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial networks, 2014.
Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. GANs trained by a two time-scale update rule converge to a local Nash equilibrium, 2017.
Higuchi, T., Kinoshita, K., Delcroix, M., and Nakatani, T. Adversarial training for data-driven speech enhancement without parallel corpus. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pp. 40-47. IEEE, 2017.
Hsu, C.-C., Hwang, H.-T., Wu, Y.-C., Tsao, Y., and Wang, H.-M. Voice conversion from unaligned corpora using variational autoencoding Wasserstein generative adversarial networks. arXiv preprint arXiv:1704.00849, 2017.
Huang, C., Kairouz, P., Chen, X., Sankar, L., and Rajagopal, R. Context-aware generative adversarial privacy. Entropy, 19(12), 2017. ISSN 1099-4300. doi: 10.3390/e19120656. URL https://www.mdpi.com/1099-4300/19/12/656.
Huang, C., Kairouz, P., and Sankar, L. Generative adversarial privacy: A data-driven approach to information-theoretic privacy. In 2018 52nd Asilomar Conference on Signals, Systems, and Computers, pp. 2162-2166, Oct 2018. doi: 10.1109/ACSSC.2018.8645532.
Isola, P., Zhu, J., Zhou, T., and Efros, A. A. Image-to-image translation with conditional adversarial networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5967-5976, July 2017.
doi: 10.1109/CVPR.2017.632.
Kameoka, H., Kaneko, T., Tanaka, K., and Hojo, N. StarGAN-VC: Non-parallel many-to-many voice conversion using star generative adversarial networks. In 2018 IEEE Spoken Language Technology Workshop (SLT), pp. 266-273. IEEE, 2018.
Kaneko, T., Kameoka, H., Tanaka, K., and Hojo, N. StarGAN-VC2: Rethinking conditional methods for StarGAN-based voice conversion. arXiv preprint arXiv:1907.12279, 2019.
Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. ImageNet classification with deep convolutional neural networks. In Pereira, F., Burges, C. J. C., Bottou, L., and Weinberger, K. Q. (eds.), Advances in Neural Information Processing Systems 25, pp. 1097-1105. Curran Associates, Inc., 2012.
Kumar, K., Kumar, R., de Boissiere, T., Gestin, L., Teoh, W. Z., Sotelo, J., de Brébisson, A., Bengio, Y., and Courville, A. C. MelGAN: Generative adversarial networks for conditional waveform synthesis. In Advances in Neural Information Processing Systems, pp. 14881-14892, 2019.
Larsen, A. B. L., Sønderby, S. K., and Winther, O. Autoencoding beyond pixels using a learned similarity metric. CoRR, abs/1512.09300, 2015. URL http://arxiv.org/abs/1512.09300.
Lim, J. H. and Ye, J. C. Geometric GAN, 2017.
Martinsson, J., Listo Zec, E., Gillblad, D., and Mogren, O. Adversarial representation learning for synthetic replacement of sensitive data. CoRR, abs/2006.08039, 2020. URL https://arxiv.org/abs/2006.08039.
Nocedal, J. and Wright, S. J. Numerical Optimization. Springer, New York, NY, USA, second edition, 2006.
Oord, A. v. d., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., and Kavukcuoglu, K. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
Pascual, S., Bonafonte, A., and Serrà, J. SEGAN: Speech enhancement generative adversarial network. In Proc. Interspeech 2017, pp. 3642-3646, 2017. doi: 10.21437/Interspeech.2017-1428. URL http://dx.doi.org/10.21437/Interspeech.2017-1428.
Pasini, M. MelGAN-VC: Voice conversion and audio style transfer on arbitrarily long samples using spectrograms. arXiv preprint arXiv:1910.03713, 2019.
Qian, J., Du, H., Hou, J., Chen, L., Jung, T., and Li, X.-Y. Hidebehind: Enjoy voice input with voiceprint unclonability and anonymity. In Proceedings of the 16th ACM Conference on Embedded Networked Sensor Systems, pp. 82-94, 2018.
Qin, S. and Jiang, T. Improved Wasserstein conditional generative adversarial network speech enhancement. EURASIP Journal on Wireless Communications and Networking, 2018(1):181, 2018.
Raval, N., Machanavajjhala, A., and Cox, L. P. Protecting visual secrets using adversarial nets. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1329-1332. IEEE, 2017.
Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., Chen, X., and Chen, X. Improved techniques for training GANs. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 29, pp. 2234-2242. Curran Associates, Inc., 2016.
Shen, J., Pang, R., Weiss, R. J., Schuster, M., Jaitly, N., Yang, Z., Chen, Z., Zhang, Y., Wang, Y., Skerrv-Ryan, R., et al. Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4779-4783.
Supplementary

Algorithm 1 PCMelGAN
Input: dataset X_train, learning rate η, penalty λ, distortion constant ε
repeat
    Draw n samples uniformly at random from the dataset:
        (x_1, s_1), ..., (x_n, s_n) ∼ X_train
    Compute mel-spectrograms and normalize:
        m_i = STFT(x_i)  ∀ i = 1, ..., n
    Draw n samples from the noise distribution:
        z_1^(1), ..., z_n^(1) ∼ N(0, 1);  z_1^(2), ..., z_n^(2) ∼ N(0, 1)
    Draw n samples from the synthetic distribution:
        s'_1, ..., s'_n ∼ U{0, 1}
    Compute the censored and synthetic data:
        m'_i = F(m_i, z_i^(1); θ_F)  ∀ i = 1, ..., n
        m''_i = G(m'_i, s'_i, z_i^(2); θ_G)  ∀ i = 1, ..., n
    Compute the filter and generator losses:
        L_F(θ_F) = (1/n) Σ_i ℓ(D_F(m'_i; θ_{D_F}), s_i) + λ max((1/n) Σ_i d(m'_i, m_i) − ε, 0)²
        L_G(θ_G) = (1/n) Σ_i ℓ(D_G(m''_i; θ_{D_G}), s_i) + λ max((1/n) Σ_i d(m''_i, m_i) − ε, 0)²
    Update the filter and generator parameters:
        θ_F ← Adam(∇θ_F L_F, θ_F, β_1, β_2);  θ_G ← Adam(∇θ_G L_G, θ_G, β_1, β_2)
    Compute the discriminator losses:
        L_{D_F}(θ_{D_F}) = (1/n) Σ_i ℓ(D_F(m'_i; θ_{D_F}), s_i)
        L_{D_G}(θ_{D_G}) = (1/n) Σ_i ℓ(D_G(m''_i; θ_{D_G}), fake) + (1/n) Σ_i ℓ(D_G(m_i; θ_{D_G}), s_i)
    Update the discriminator parameters:
        θ_D ← Adam(∇θ_D L_D, θ_D, β_1, β_2)
until termination criterion is met
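For concreteness, one plausible PyTorch rendering of a single Algorithm 1 update is sketched below. This is an illustrative sketch only: the placeholder linear networks (the paper uses U-Nets for F and G and AlexNet-style discriminators), the tensor shapes, the constants, and the sign conventions on the adversarial terms are all assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
mel_dim, z_dim, lam, eps = 80, 16, 100.0, 0.01   # illustrative constants

# Placeholder linear "networks" standing in for the real architectures.
filt = nn.Linear(mel_dim + z_dim, mel_dim)
gen = nn.Linear(mel_dim + z_dim + 1, mel_dim)
d_f = nn.Linear(mel_dim, 2)          # predicts the sensitive attribute s
d_g = nn.Linear(mel_dim, 3)          # predicts s (0/1) or a "fake" class (2)

opt_fg = torch.optim.Adam([*filt.parameters(), *gen.parameters()],
                          lr=4e-4, betas=(0.5, 0.9))
opt_d = torch.optim.Adam([*d_f.parameters(), *d_g.parameters()],
                         lr=4e-4, betas=(0.5, 0.9))

def penalty(a, b):
    # Quadratic relaxation of the L1 distortion budget: lam * max(d - eps, 0)^2.
    return lam * torch.clamp((a - b).abs().mean() - eps, min=0) ** 2

m = torch.randn(8, mel_dim)                  # normalized mel-spectrogram frames
s = torch.randint(0, 2, (8,))                # true sensitive attribute
s_syn = torch.randint(0, 2, (8,))            # sampled synthetic attribute s'

# Filter/generator step: fool d_f about s, push d_g toward reading off s',
# and stay within the distortion budget.
m_f = filt(torch.cat([m, torch.randn(8, z_dim)], dim=1))
m_g = gen(torch.cat([m_f, torch.randn(8, z_dim), s_syn[:, None].float()], dim=1))
loss_fg = (-F.cross_entropy(d_f(m_f), s) + penalty(m_f, m)
           + F.cross_entropy(d_g(m_g), s_syn) + penalty(m_g, m))
opt_fg.zero_grad(); loss_fg.backward(); opt_fg.step()

# Discriminator step: d_f recovers s from filtered data; d_g labels generated
# data as "fake" and real data with its true attribute s.
loss_d = (F.cross_entropy(d_f(m_f.detach()), s)
          + F.cross_entropy(d_g(m_g.detach()), torch.full((8,), 2))
          + F.cross_entropy(d_g(m), s))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()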
W-ULi23RUQL
Privacy in speech, applied to generation
6: Marginally above acceptance threshold
Privacy in speech processing is a critical issue, both now and in the future. Because this paper sets out to do private speech generation, and manages to do so to some extent, I argue for acceptance despite the following critiques. Given the stated goal of privacy preservation, isn't operating at either a higher level (raw text with controllable "synthesis attributes", varied on each generation to obscure the target attribute) or a lower one (per audio frame, distorting and removing the pieces of the audio that are not necessary to the end task, such as speech recognition) more directly applicable to the stated aim of "We train a model that learns to hide sensitive information in the data, while preserving the meaning in the utterance"? I do not find FID an adequate measure for this task: privacy preservation with respect to the specified attribute could be tested directly by showing that a high-quality speaker or vocal-tract-length recognizer performs poorly on the transformed audio, alongside checking whether the resulting audio is still equivalent to the original when put through a high-quality speech recognition (ASR) system. Listening to the samples, I struggled to hear anything at all in a number of the sampled outputs, and many of the original audio files are so short that it is difficult to have any context about the attribute that will be "masked". AudioMNIST is such a small dataset that it barely serves as a proof of concept here. Generally I like the concept of this work, and the effort in tuning and gathering results tables is thorough. But the metrics used here do not seem sufficient for measuring either speech privacy preservation or the quality of the resulting audio. Further exploration along these lines should include directly automated recognition error rates, as well as human quality evaluations. However, achieving any amount of results in such a difficult application area is commendable, and I hope the authors will work further on this area.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
rJma2bZCW
ICLR.cc/2018/Conference
2018
Three factors influencing minima in SGD
["Stanis\u0142aw Jastrz\u0119bski", "Zac Kenton", "Devansh Arpit", "Nicolas Ballas", "Asja Fischer", "Amos Storkey", "Yoshua Bengio"]
We study the statistical properties of the endpoint of stochastic gradient descent (SGD). We approximate SGD as a stochastic differential equation (SDE) and consider its Boltzmann-Gibbs equilibrium distribution under the assumption of isotropic variance in loss gradients. Through this analysis, we find that three factors – learning rate, batch size and the variance of the loss gradients – control the trade-off between the depth and width of the minima found by SGD, with wider minima favoured by a higher ratio of learning rate to batch size. In the equilibrium distribution only the ratio of learning rate to batch size appears, implying that it is invariant under a simultaneous rescaling of each by the same amount. We experimentally show how learning rate and batch size affect SGD from two perspectives: the endpoint of SGD and the dynamics that lead up to it. For the endpoint, the experiments suggest the endpoint of SGD is similar under simultaneous rescaling of batch size and learning rate, and also that a higher ratio leads to flatter minima; both findings are consistent with our theoretical analysis. We note experimentally that the dynamics also seem to be similar under the same rescaling of learning rate and batch size, which we explore by showing that one can exchange batch size and learning rate in a cyclical learning rate schedule. Next, we illustrate how noise affects memorization, showing that high noise levels lead to better generalization. Finally, we find experimentally that the similarity under simultaneous rescaling of learning rate and batch size breaks down if the learning rate gets too large or the batch size gets too small.
["SGD", "Deep Learning", "Generalization"]
ABSTRACT

We study the statistical properties of the endpoint of stochastic gradient descent (SGD). We approximate SGD as a stochastic differential equation (SDE) and consider its Boltzmann-Gibbs equilibrium distribution under the assumption of isotropic variance in loss gradients. Through this analysis, we find that three factors – learning rate, batch size and the variance of the loss gradients – control the trade-off between the depth and width of the minima found by SGD, with wider minima favoured by a higher ratio of learning rate to batch size. In the equilibrium distribution only the ratio of learning rate to batch size appears, implying that it is invariant under a simultaneous rescaling of each by the same amount. We experimentally show how learning rate and batch size affect SGD from two perspectives: the endpoint of SGD and the dynamics that lead up to it. For the endpoint, the experiments suggest the endpoint of SGD is similar under simultaneous rescaling of batch size and learning rate, and also that a higher ratio leads to flatter minima; both findings are consistent with our theoretical analysis. We note experimentally that the dynamics also seem to be similar under the same rescaling of learning rate and batch size, which we explore by showing that one can exchange batch size and learning rate in a cyclical learning rate schedule. Next, we illustrate how noise affects memorization, showing that high noise levels lead to better generalization. Finally, we find experimentally that the similarity under simultaneous rescaling of learning rate and batch size breaks down if the learning rate gets too large or the batch size gets too small.

1 INTRODUCTION

Despite being massively over-parameterized (Zhang et al., 2016), deep neural networks (DNNs) have demonstrated good generalization ability and achieved state-of-the-art performances in many application domains such as image (He et al., 2016) and speech recognition (Amodei et al., 2016). The reason for this success has been a focus of research recently but still remains an open question. Our work provides new theoretical insights and useful suggestions for deep learning practitioners.

The standard way of training DNNs involves minimizing a loss function using SGD and its variants (Bottou, 1998). In SGD, parameters are updated by taking a small discrete step depending on the learning rate in the direction of the negative loss gradient, which is approximated based on a small subset of training examples (called a mini-batch). Since the loss functions of DNNs are highly non-convex functions of the parameters, with complex structure and potentially multiple minima and saddle points, SGD generally converges to different regions of parameter space depending on optimization hyper-parameters and initialization.

Recently, several works (Arpit et al., 2017; Advani & Saxe, 2017; Shirish Keskar et al., 2016) have investigated how SGD impacts generalization in DNNs. It has been argued that wide minima tend to generalize better than sharp minima (Hochreiter & Schmidhuber, 1997; Shirish Keskar et al., 2016). This is entirely compatible with a Bayesian viewpoint that emphasizes targeting the probability mass associated with a solution, rather than the density value at a solution (MacKay, 1992b). Specifically, (Shirish Keskar et al., 2016) find that larger batch sizes correlate with sharper minima. In contrast, we find that it is the ratio of learning rate to batch size which is correlated with sharpness of minima, not just batch size alone.
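As a concrete reference point for the update rule described above, here is a minimal NumPy sketch of mini-batch SGD. The quadratic toy problem and all names are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 5)), rng.normal(size=1000)   # toy regression data
theta = np.zeros(5)
eta, S = 0.1, 32    # learning rate and batch size; the theory says only eta/S matters

for step in range(500):
    idx = rng.choice(len(X), size=S, replace=False)    # sample a mini-batch B
    resid = X[idx] @ theta - y[idx]
    grad = X[idx].T @ resid / S                        # stochastic gradient g^(S)(theta)
    theta -= eta * grad                                # theta(t+1) = theta(t) - eta * g^(S)(theta)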
In this vein, while (Dinh et al., 2017) discuss the existence of sharp minima which behave similarly in terms of predictions compared with wide minima, we argue that SGD naturally tends to find wider minima at higher noise levels in gradients, and such wider minima seem to correlate with better generalization.

In order to achieve our goal, we approximate SGD as a continuous stochastic differential equation (Bottou, 1991; Mandt et al., 2017; Li et al., 2017). Assuming isotropic gradient noise, we derive the Boltzmann-Gibbs equilibrium distribution of this stochastic process, and further derive the relative probability of landing in one local minimum as compared to another in terms of their depth and width. Our main finding is that the ratio of learning rate to batch size, along with the gradient's covariance, influences the trade-off between the depth and sharpness of the final minima found by SGD, with a high ratio of learning rate to batch size favouring flatter minima. In addition, our analysis provides a theoretical justification for the empirical observation that scaling the learning rate linearly with batch size (up to a limit) leads to identical performance in DNNs (Krizhevsky, 2014; Goyal et al., 2017).

We verify our theoretical insights experimentally on different models and datasets. In particular, we demonstrate that a high learning rate to batch size ratio (due to either a high learning rate or a low batch size) leads to wider minima and correlates well with better validation performance. We also show that a high learning rate to batch size ratio helps prevent memorization. Furthermore, we observe that multiplying each of the learning rate and the batch size by the same scaling factor results in similar training dynamics. Extending this observation, we validate experimentally that one can exchange learning rate and batch size for the recently proposed cyclic learning rate (CLR) schedule (Smith, 2015), where the learning rate oscillates between two levels. Finally, we discuss the limitations of our theory in practice.

2 RELATED WORK

The relationship between SGD and sampling a posterior distribution via stochastic Langevin methods has been the subject of discussion in a number of papers (Chaudhari et al., 2017; Chen et al., 2014; Ding et al., 2014; Vollmer et al., 2015; Welling & Teh, 2011; Shang et al., 2015; Sato & Nakagawa, 2014). In particular, (Mandt et al., 2017) describe the dynamics of stochastic gradient descent (SGD) as a stochastic process that can be divided into three distinct phases. In the first phase, weights diffuse and move away from the initialization. In the second phase the gradient magnitude dominates the noise in the gradient estimate. In the final phase, the weights are near the optimum. (Shwartz-Ziv & Tishby, 2017) make related observations from an information theoretic point of view and suggest the diffusion behaviour of the parameters in the last phase leads to the minimization of mutual information between the input and hidden representation. We also relate the SGD dynamics to the stationary distribution of the stochastic differential equation. Our derivation bears similarity with (Mandt et al., 2017). However, while (Mandt et al., 2017) study SGD as an approximate Bayesian inference method in the final phase of optimization in a locally convex setting, our end goal is to analyze the stationary distribution over the entire parameter space reached by SGD.
Further, our analysis allows us to compare the probability of SGD ending up in one minimum over another (in terms of width and depth), which is novel in our case.

We discuss the Fokker-Planck equation, which has appeared before in the machine learning literature, though the exact form and solution we consider we believe is novel. For example, in the online setting (Heskes & Kappen, 1993) derive a Gibbs distribution from the Fokker-Planck equation, but the relation there does not give the temperature of the Gibbs distribution in terms of the learning rate, batch size and gradient covariance.

Our work is also closely related to the ongoing discussion about the role of large batch size and the sharpness of minima found in terms of generalization (Shirish Keskar et al., 2016). (Shirish Keskar et al., 2016) showed that SGD ends up in a sharp minimum when using a large batch size. (Goyal et al., 2017; Hoffer et al., 2017) empirically observed that scaling up the learning rate, and training for more epochs, leads to good generalization when using a large batch size. Our novelty is in explaining the importance of the ratio of learning rate to batch size. In particular, our theoretical and empirical results show that simultaneously rescaling the batch size and learning rate by the same amount leads SGD to minima having similar width despite using different batch sizes.

Concurrent with this work, (Smith & Le, 2017; Chaudhari & Soatto, 2017) have both analyzed SGD approximated as a continuous time stochastic process and stressed the importance of the learning rate to batch size ratio. (Smith & Le, 2017) focused on the training dynamics, while (Chaudhari & Soatto, 2017) explored the stationary non-equilibrium solution of the stochastic differential equation for non-isotropic gradient noise, but assuming other conditions on the covariance and loss to enforce the stationary distribution to be path-independent. Their solution does not have an explicit form in terms of the loss in this case. In contrast to other work, we strictly focus on the explicitly solvable case of the Boltzmann-Gibbs equilibrium distribution with isotropic noise. This focus allows us to relate the noise in SGD, controlled by the learning rate to batch size ratio, with the width of its endpoint. We empirically verify that the width and height of minima correlate with the learning rate to batch size ratio in practice.

Our work continues the line of research on the importance of noise in SGD (Bottou, 1998; Roux et al., 2008; Neelakantan et al., 2015; Mandt et al., 2017). Our novelty is in formalizing the impact of batch size and learning rate (i.e. noise level) on the width and depth of the final minima, and in empirical verifications of this.

3 INSIGHTS FROM FOKKER-PLANCK

Our focus in this section is on finding the relative probability with which we end optimization in a region near a minimum characterized by a certain loss value and Hessian determinant. We will find that the relative probability depends on the local geometry of the loss function at each minimum along with batch size, learning rate and the covariance of the loss gradients. To reach this result, we first derive the equilibrium distribution of SGD over the parameter space under a stochastic differential equation treatment. We make the assumption of isotropic covariance of the loss gradients, which allows us to write down an explicit closed-form analytic expression for the equilibrium distribution, which turns out to be a Boltzmann-Gibbs distribution.

3.1 SETUP

We follow a similar (though not identical) theoretical setup to (Mandt et al., 2017), approximating SGD with a continuous-time stochastic process, which we now outline.

Let us consider a model parameterized by θ = {θ_1, ..., θ_q}. For N training examples x_i, i ∈ {1, ..., N}, the loss function, L(θ), and the corresponding gradient, g(θ), are defined based on the sum over the loss values for all training examples. Stochastic gradients g^(S)(θ) arise when we consider a batch B of size S < N of random indices drawn uniformly from {1, ..., N} and form an (unbiased) estimate of the loss and gradient based on the corresponding subset of training examples:

L^(S)(θ) = (1/S) Σ_{n∈B} l(θ, x_n),    g^(S)(θ) = (∂/∂θ) L^(S)(θ).

We consider stochastic gradient descent (SGD) with learning rate η, as defined by the update rule

θ(t+1) = θ(t) − η g^(S)(θ).

We now make the following assumptions:

(1) By the central limit theorem (CLT), we assume the noise in the stochastic gradient is Gaussian with covariance matrix (1/S) C(θ):

g^(S)(θ) = g(θ) + (1/√S) Δg(θ),    where Δg(θ) ∼ N(0, C(θ)).

We note that the covariance is symmetric positive-semidefinite, and so can be decomposed into the product of two matrices, C(θ) = B(θ)B^T(θ).

(2) We assume the discrete process of SGD can be approximated by the continuous-time limit of the following stochastic differential equation (known as a Langevin equation)

dθ/dt = −g(θ) + √(η/S) B(θ) f(t),    (1)

where f(t) is a normalized Gaussian time-dependent stochastic term.

Note that the continuous-time approximation of SGD as a stochastic differential equation has been shown to hold in a weak approximation on the condition that the learning rate is small (Li et al., 2017).
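The 1/S scaling of the gradient noise in assumption (1) is easy to check numerically. The sketch below uses a toy least-squares model (all names and constants are illustrative, not from the paper) to estimate the variance of the mini-batch gradient at a fixed θ for two batch sizes and compare their ratio to the predicted factor.

import numpy as np

rng = np.random.default_rng(1)
X, y = rng.normal(size=(2000, 10)), rng.normal(size=2000)
theta = rng.normal(size=10)

def grad_estimate(S):
    # One draw of the stochastic gradient g^(S)(theta) on a batch of size S.
    idx = rng.choice(len(X), size=S, replace=False)
    return X[idx].T @ (X[idx] @ theta - y[idx]) / S

def noise_var(S, reps=2000):
    g = np.stack([grad_estimate(S) for _ in range(reps)])
    return g.var(axis=0).mean()   # average per-coordinate variance of g^(S)

# Assumption (1) predicts Cov[g^(S)] = C(theta)/S, so shrinking S by 4x
# should scale the gradient noise variance up by roughly 4x.
print(noise_var(25) / noise_var(100))   # expected to be close to 4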
We make the assumption of isotropic covariance of the loss gradients, whichallows us to write down an explicit closed-form analytic expression for the equilibrium distribution,which turns out to be a Boltzmann-Gibbs distribution.3.1 S ETUPWe follow a similar (though not identical) theoretical setup to (Mandt et al., 2017), approximatingSGD with a continuous-time stochastic process, which we now outline.Let us consider a model parameterized by =f1;:::;qg. ForNtraining examples xi;i2f1;:::;Ng, the loss function, L(), and the corresponding gradient g(), are defined based on thesum over the loss values for all training examples. Stochastic gradients g(S)()arise when weconsider a batchBof sizeS <N of random indices drawn uniformly from f1;:::;Ngand form an(unbiased) estimate of loss and gradient based on the corresponding subset of training examplesL(S)() =1SXn2Bl(;xn); g(S)() =@@L(S)():We consider stochastic gradient descent (SGD) with learning rate , as defined by the update rule(t+ 1) =(t)g(S)():We now make the following assumptions:(1) By the central limit theorem (CLT), we assume the noise in the stochastic gradient is Gaus-sian with covariance matrix1SC()g(S)() =g() +1pSg();where g()N(0;C()):We note that the covariance is symmetric positive-semidefinite, and so can be decomposedinto the product of two matrices C() =B()B>():(2) We assume the discrete process of SGD can be approximated by the continuous time limitof the following stochastic differential equation (known as a Langevin equation)ddt=g() +pSB()f(t) (1)where f(t)is a normalized Gaussian time-dependent stochastic term.Note that the continuous time approximation of SGD as a stochastic differential equation has beenshown to hold in a weak approximation on the condition that the learning rate is small (Li et al.,2017).3Under review as a conference paper at ICLR 2018Note that we have not made Assumption 4 of (Mandt et al., 2017), where they assume the loss canbe globally approximated by a quadratic. Instead, we allow for a general loss function, which canhave many local minima.3.2 T HREE FACTORS INFLUENCING EQUILIBRIUM DISTRIBUTIONThe Langevin equation is a stochastic differential equation, and we are interested in its equilib-rium distribution which gives insights into the behavior of SGD and the properties of the pointsit converges to. Assuming isotropic noise, the Langevin equation is well known to have a Gibbs-Boltzmann distribution as its equilibrium distribution. This equilibrium distribution can be derivedby finding the stationary solution of the Fokker-Planck equation, with detailed balance, which gov-erns the evolution of the probability density of the value of the parameters with time. The Fokker-Planck equation and its derivation is standard in the statistical physics literature. In Appendix A wegive the equation in the machine learning context in Eq. (5) and for completeness of presentation wealso give its derivation. In Appendix C we restate the standard proofs of the stationary distribution ofa Langevin system, and provide the resulting Gibbs-Boltzmann equilbirium distribution here, usingthe notation of this paper:Theorem 1 (Equilibrium Distribution) .Assume1that the gradient covariance is isotropic,i.e.C() =2I, where2is a constant. Then the equilibrium distribution of the stochastic dif-ferential equation 1 is given byP() =P0exp2L()n2; (2)wherenSandP0is a normalization constant, which is well defined for loss functions with L2regularization.Discussion : HereP()defines the density over the parameter space. 
Discussion: Here P(θ) defines the density over the parameter space. The above result says that if we run SGD for long enough (under the assumptions made regarding the SGD sufficiently matching the infinitesimal limit), then the probability of the parameters being in a particular state asymptotically follows the above density. Note that n ≡ η/S is a measure of the noise in the system set by the choice of learning rate η and batch size S. The fact that the loss is divided by n emphasizes that the higher the noise n, the less granular the loss surface appears to SGD. The gradient variance C(θ), on the other hand, is determined by the dataset and model priors (e.g. architecture, model parameterization, batch normalization, etc.). This reveals an important area of investigation, i.e., how different architectures and model parameterizations affect the gradient's covariance structure. We note that in the analysis above, the assumption of the gradient covariance C(θ) being fixed and isotropic in the parameter space is unrealistic. However, it is a simplification that enables straightforward insights regarding the relationship of the noise, batch size and learning rate in the Gibbs-Boltzmann equilibrium. We empirically show that various predictions based on this relationship hold in practice.

Returning to SGD as an optimization method, we can ask: given the probability density P(θ), can we derive the probability of ending at a given minimum, A, which we will denote by lowercase p_A = p̃_A · C, where C is a normalization constant which is the same for all minima (the unnormalized probability p̃_A is all we are interested in when estimating the relative probability of finishing in a given minimum compared to another one). This probability is derived in Appendix D and given in the following theorem, which is the core insight from our theory.

Theorem 2 (Probability of ending in a region near minimum A). Assume the loss has a series of separated local minima. Consider one such minimum, with Hessian H_A and loss L_A at a minimum A. Then the unnormalized probability of ending in a region near minimum A is

p̃_A = (1/√(det H_A)) exp( −2L_A / (nσ²) ),    (3)

where n = η/S is the noise used in the SGD process to reach A.

Discussion: For this analysis, we qualitatively categorize a minimum A by its loss L_A (depth) and the determinant of the Hessian, det H_A (a larger determinant implies a sharper minimum). The above result shows that the probability of landing in a specific minimum depends on three factors: learning rate, batch size and covariance of the gradients. The two factors that we directly control only appear in the ratio given by the noise n = η/S. Note that the proof of this result utilizes a Laplace approximation, in which the loss near a given minimum is approximated using a second-order Taylor series in order to evaluate an integral. We emphasize this is not the same as globally treating the loss as a quadratic.

To study which kind of minima are more likely if we were to reach equilibrium, it is instructive to consider the ratio of probabilities p_A and p_B at two distinct minima A and B respectively, given by

p_A / p_B = √(det H_B / det H_A) exp( (2/(nσ²)) (L_B − L_A) ).

To highlight that towards the equilibrium solution SGD favors wider rather than sharper minima, let us consider the special case when L_A = L_B, i.e., both minima have the same loss value. Then,

p_A / p_B = √(det H_B / det H_A).

This case highlights that in equilibrium, SGD favors the minimum with the lower determinant of the Hessian (i.e. the flatter minimum) when all other factors are identical.
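To make the trade-off concrete, the sketch below plugs two hypothetical minima into Eq. (3) and sweeps the noise n = η/S; the specific losses and Hessian determinants are made-up numbers for illustration, and the crossover noise level it computes anticipates the bound derived next in the text.

import numpy as np

sigma2 = 1.0
# Hypothetical minima: A is higher in loss but much flatter than B.
L_A, detH_A = 0.10, 1e2
L_B, detH_B = 0.05, 1e6

def log_p_tilde(L, detH, n):
    # log of Eq. (3): -0.5 * log(det H) - 2 L / (n sigma^2)
    return -0.5 * np.log(detH) - 2.0 * L / (n * sigma2)

for n in [1e-3, 1e-2, 1e-1]:
    ratio = np.exp(log_p_tilde(L_A, detH_A, n) - log_p_tilde(L_B, detH_B, n))
    print(f"n = eta/S = {n:g}: p_A/p_B = {ratio:.3g}")

# Crossover noise: A becomes favored over B once
# n > (L_A - L_B) / ((sigma2/2) * log(sqrt(detH_B / detH_A))).
n_min = (L_A - L_B) / ((sigma2 / 2) * np.log(np.sqrt(detH_B / detH_A)))
print("A favored over B for n >", n_min)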
On the flip side, it can be seen that if two minima have the same curvature ($\det H_A = \det H_B$), then SGD will favor the minimum with the lower loss. Finally, in the general case when $L_A \geq L_B$, it holds that $p_A \geq p_B$ if and only if
$$\frac{1}{n} \leq \frac{\sigma^2}{2} \cdot \frac{\log \sqrt{\det H_B / \det H_A}}{L_A - L_B} \equiv Y.$$
That is, there is an upper bound on the inverse of the noise for $A$ to be favored in the case that its loss is higher than at $B$, and this upper bound depends on the difference in the heights compared to the ratio of the widths. In particular, we can see that if $\det H_B < \det H_A$, then $Y < 0$, and so no amount of noise will result in $A$ being more probable than $B$. In words, if the minimum at $A$ is both higher and sharper than the minimum at $B$, it is never reached with higher probability than $B$, regardless of the amount of noise. However, if $\det H_B > \det H_A$ then $Y > 0$, and there is a lower bound on the noise,
$$n > \frac{L_A - L_B}{\frac{\sigma^2}{2} \log \sqrt{\det H_B / \det H_A}}, \qquad (4)$$
to make $A$ more probable than $B$. In words, if the minimum at $A$ is higher but flatter than the minimum at $B$, it is favored over $B$ as long as the noise is large enough, as defined by Eq. (4).

To summarize, the presented theory shows that the noise level in SGD (which is defined by the ratio of learning rate to batch size) controls the extent to which optimization favors wider over deeper minima. Increasing the noise by increasing the ratio of learning rate to batch size increases the probability of wider compared to deeper minima. For a discussion of the relative probabilities of critical points that are not strictly minima, see Appendix D.

4 EXPERIMENTS

4.1 IMPACT OF $\eta/S$ ON THE MINIMA SGD FINDS

In this section, we empirically study the impact of learning rate $\eta$ and batch size $S$ on the local minimum that SGD finds. We first focus on a 4-layer Batch Normalized ReLU MLP trained on Fashion-MNIST (Xiao et al., 2017). We study how the noise ratio $n = \eta/S$ leads to minima with different curvatures and validation accuracy. To measure the curvature at a minimum, we compute the norm of its Hessian (a higher norm implies higher sharpness of the minimum) using the finite difference method (Wu et al., 2017). In Figure 1a, we report the norm of the Hessian for local minima obtained by SGD for different $n = \eta/S$, where $\eta \in [5 \times 10^{-3}, 1 \times 10^{-1}]$ and $S \in [25, 1000]$. Each experiment is run for 200 epochs; most models reach approximately 100% accuracy on the training set.

[Figure 1: Impact on SGD of the ratio of learning rate to batch size, $\eta/S$, for a 4-layer ReLU MLP on Fashion-MNIST. (a) Correlation of $\eta/S$ with the logarithm of the norm of the Hessian. (b) Correlation of $\eta/S$ with validation accuracy.]

[Figure 2: Interpolation of Resnet56 networks trained with different learning rate to batch size ratios $\eta/S$; the x-axis corresponds to the interpolation coefficient $\alpha$. Panels: (a) left $\eta = 0.1, S = 128$, right $\eta = 0.1, S = 1024$; (b) left $\eta = 0.1, S = 128$, right $\eta = 0.01, S = 128$; (c) left $\eta = 0.1, S = 1024$, right $\eta = 0.01, S = 128$. As predicted by our theory, a lower $\eta/S$ ratio leads to sharper minima (as shown by the left and middle plots).]

As $n$ grows, we observe that the norm of the Hessian at the minima decreases, suggesting that a higher $\eta/S$ pushes the optimization towards flatter minima. This agrees with Theorem 2, Eq. (3): a higher $\eta/S$ favors flatter over sharper minima.
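The finite-difference curvature measurement can be sketched as follows. This is our minimal illustration, not the authors' code: it assumes a numpy gradient oracle `grad_fn`, and it reports the spectral norm (largest absolute Hessian eigenvalue) as one common choice of "norm of the Hessian":

```python
import numpy as np

def hessian_spectral_norm(grad_fn, theta, iters=50, eps=1e-3, seed=0):
    # Power iteration on Hessian-vector products, with Hv approximated by the
    # central finite difference (g(theta + eps*v) - g(theta - eps*v)) / (2*eps).
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(theta.shape)
    v /= np.linalg.norm(v)
    norm = 0.0
    for _ in range(iters):
        hv = (grad_fn(theta + eps * v) - grad_fn(theta - eps * v)) / (2.0 * eps)
        norm = np.linalg.norm(hv)
        v = hv / (norm + 1e-12)
    return norm  # approximates the largest absolute eigenvalue of the Hessian

# Toy check on the quadratic loss L = 2 * ||theta||^2, whose Hessian is 4I:
print(hessian_spectral_norm(lambda th: 4.0 * th, np.ones(10)))  # ~4.0
```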
Figure 1b shows the results of exploring the impact of $n = \eta/S$ on the final validation performance, which confirms that better generalization correlates with higher $n$. Taken together, Fig. 1a and Fig. 1b imply that wider minima correlate well with better generalization. As $n = \eta/S$ increases, SGD finds local minima that generalize better. In Appendix F.1, we report similar results for Resnet56 applied to CIFAR10 in Figure 8, for a 20-layer ReLU network with good initialization schemes in Figures 9a and 9c, and with bad initialization in Figure 9b.

To further illustrate the behavior of SGD with different noise levels, we train three Resnet56 models on CIFAR10 using SGD (without momentum) with different $\eta/S$. Our baseline model uses $\eta = 0.1$, $S = 128$. In comparison, we investigate a large-batch model with $\eta = 0.1$, $S = 1024$ and a small-learning-rate model with $\eta = 0.01$, $S = 128$, which have approximately the same $\eta/S$ ratio. We follow (Goodfellow et al., 2014) by investigating the loss on the line interpolating between the parameters of two models. More specifically, let $\theta_1$ and $\theta_2$ be the final parameters found by SGD using different $\eta/S$; we report the loss values $L((1 - \alpha)\theta_1 + \alpha\theta_2)$ for $\alpha \in [-1, 2]$. The results indicate that models with a large batch size (Fig. 2, left) or a low learning rate (Fig. 2, middle; both having a lower $\eta/S$ than the baseline) end up in a sharper minimum relative to the baseline model. These plots are consistent with our theoretical analysis that a higher $n = \eta/S$ gives preference to wider minima over sharper minima. On the other hand, Figure 2 (right) shows that models trained with roughly the same level of noise end up in minima of similar quality. The following experiment explores this aspect further.

We train VGG-11 models (Simonyan & Zisserman, 2014) on CIFAR-10 such that all the models are trained with the same noise level but with different values of learning rate and batch size. Specifically, we use $\eta/S = 0.1\beta / 50\beta$, where we set $\beta = 0.25, 1, 4$. We then interpolate between the model parameters found when training with $\beta = 1$ and $\beta = 4$ (Fig. 3, left), and $\beta = 1$ and $\beta = 0.25$ (Fig. 3, right).

[Figure 3: Interpolation between the parameters of models trained with the same learning rate ($\eta$) to batch size ($S$) ratio, $\eta/S = 0.1\beta / 50\beta$, but different $\eta$ and $S$ values determined by $\beta$. (a) $\beta = 1$ corresponds to the model at $\alpha = 0$ and $\beta = 4$ to the model at $\alpha = 1$; (b) $\beta = 1$ corresponds to the model at $\alpha = 0$ and $\beta = 0.25$ to the model at $\alpha = 1$. As predicted by our theory, the minima for models with identical noise levels should be qualitatively similar, as seen in these plots.]
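The interpolation protocol of (Goodfellow et al., 2014) used above amounts to evaluating the loss along the segment between two flattened parameter vectors. The sketch below is our illustration only; `loss_fn` and the toy vectors are placeholders, not the experimental setup:

```python
import numpy as np

def interpolate_loss(theta1, theta2, loss_fn, alphas=np.linspace(-1.0, 2.0, 31)):
    # Evaluate the loss along (1 - alpha) * theta1 + alpha * theta2;
    # alpha in [-1, 2] extends the line past both endpoints.
    return [(a, loss_fn((1.0 - a) * theta1 + a * theta2)) for a in alphas]

# Toy usage: two 'models' as flat parameter vectors and a quadratic surrogate loss.
theta1, theta2 = np.zeros(5), np.ones(5)
loss_fn = lambda th: float(np.mean((th - 0.5) ** 2))
for alpha, value in interpolate_loss(theta1, theta2, loss_fn)[:3]:
    print(f"alpha={alpha:+.2f}  loss={value:.4f}")
```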
[Figure 4: A learning rate schedule can be replaced by an equivalent batch size schedule. The ratio of learning rate to batch size is equal at all times for both the red and blue curves in each plot. The plots show train and test accuracy for experiments involving the VGG-11 architecture on the CIFAR10 dataset. Left: cyclic batch size schedule (blue) in the range 128 to 640, compared to a cyclic learning rate schedule (red) in the range 0.001 to 0.005. Right: constant batch size 128 and constant learning rate 0.001 (blue), compared to constant batch size 640 and constant learning rate 0.005 (red).]

The interpolation results indicate that all the minima have similar width and depth, qualitatively supporting our theoretical observation that for the same noise ratio SGD ends up in minima of similar quality.

4.2 $\eta/S$ RATIO INFLUENCES LEARNING DYNAMICS OF SGD

In this section we look at two experimental phenomena: first, the equilibrium endpoint of SGD, and second, the dynamical evolution of SGD. The former was theoretically analysed in the theory section; the latter is not directly addressed there, but we note that the two are related - the endpoint is the result of the intermediate dynamics.

We experimentally study both phenomena in the following four experiments involving the VGG-11 architecture on the CIFAR10 dataset, shown in Fig. 4. The left plot compares two experiments: a cyclic batch size schedule (blue) in the range 128 to 640, compared to a cyclic learning rate schedule (red) in the range 0.001 to 0.005. The right plot compares two other experiments: constant learning rate to batch size ratios of $\eta/S = 0.001/128$ (blue) and $\eta/S = 0.005/640$ (red).

Regarding the first phenomenon, the endpoint of SGD, the test accuracy when training with a cyclic batch size and a cyclic learning rate is 89.39% and 89.24%, respectively, and we emphasize that these are similar scores. For constant learning rate to batch size ratios of $\eta/S = 0.001/128$ and $\eta/S = 0.005/640$, the test accuracy is 87.25% and 86.92%, respectively, and again these two scores are similar to each other. That the endpoint test accuracies are similar in each of these experiments shows the exchangeability of learning rate for batch size at the endpoint, and is consistent with our theoretical calculation, which says that the characteristics of the minima found at the endpoint are determined by the ratio of learning rate to batch size, but not individually by the learning rate or the batch size. Additional results exploring cyclical learning rate and batch size schedules are reported in Appendix F.4.

Regarding the second phenomenon, the dynamical evolution, we note the similarity of the training and test accuracy curves for each pair of same-noise curves in each experiment. Our theoretical analysis does not explain this phenomenon, as it does not determine the dynamical distribution. Nonetheless, we report it here as an interesting observation, and point to Appendix B for some intuition from the Fokker-Planck equation on why this may occur. In Appendix F.2, Fig. 13, we show the loss curves in more detail. While the epoch-averaged loss curves match well when exchanging batch size for learning rate, the per-iteration loss is not invariant to switching batch size for learning rate. In particular, we note that each run with a smaller batch size has higher variance in the per-iteration loss than its same-noise pair. This is expected, since from one iteration to the next the examples will have higher variance for a smaller batch size.
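To make the schedule exchange in Fig. 4 concrete, the following sketch (ours; the constants mirror the ranges in Fig. 4, but the triangular cycle shape is an assumption) produces a cyclic learning rate schedule together with the batch size schedule that keeps the noise $\eta/S$ identical at every step:

```python
def equivalent_schedules(step, period=2000, eta_min=0.001, eta_max=0.005, S_base=128):
    # Triangle wave in [0, 1] over one period; the cycle shape is an assumption.
    phase = (step % period) / period
    tri = 2.0 * phase if phase < 0.5 else 2.0 * (1.0 - phase)
    eta = eta_min + (eta_max - eta_min) * tri
    lr_schedule = (eta, S_base)                             # cycle lr, fix batch size
    bs_schedule = (eta_max, round(S_base * eta_max / eta))  # fix lr, cycle batch size
    return lr_schedule, bs_schedule

for step in (0, 500, 1000):
    (e1, s1), (e2, s2) = equivalent_schedules(step)
    print(step, e1 / s1, e2 / s2)   # noise eta/S matches up to batch-size rounding
```

Note that the batch size runs inversely to the learning rate: the largest batch (640) coincides with the smallest learning rate of the paired schedule, which matches the ranges quoted for Fig. 4.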
The take-away message from this section is that the endpoint and dynamics of SGD are approximately invariant if the batch size and learning rate are simultaneously rescaled by the same amount. This is in contrast to a commonly used heuristic consisting of scaling the learning rate with the square root of the batch size, i.e. of keeping the ratio $\eta/\sqrt{S}$ constant. This is used, for example, by (Hoffer et al., 2017) as a way of keeping the covariance matrix of the parameter update step the same for any batch size. However, our theory and experiments suggest changing the learning rate and batch size in a way that keeps the ratio $n = \eta/S$ constant instead, since this results in the same equilibrium distribution.

4.3 IMPACT OF SGD ON MEMORIZATION

To generalize well, a model must identify the underlying pattern in the data instead of simply perfectly memorizing each training example. An empirical approach to testing for memorization is to analyze how well a DNN can fit a training set when the true labels are partly replaced by random labels (Zhang et al., 2016; Arpit et al., 2017). The experiments described in this section highlight that SGD with a sufficient amount of noise improves generalization at a given level of memorization.

Experiments are performed on the MNIST dataset with an MLP similar to the one used by (Arpit et al., 2017), but with 256 hidden units. We train the MLP with different amounts of random labels in the training set. For each level of label noise, we evaluate the impact of $\eta/S$ on the generalization performance. Specifically, we run experiments with $\eta/S$ taking values in a grid with batch size in {50, 100, 200, 500, 1000}, learning rate in {0.005, 0.01, 0.02, 0.05, 0.07, 0.1}, and momentum in {0.0, 0.9}. Models are trained for 300 epochs.

[Figure 5: Impact of $\eta/S$ on memorization of MNIST when 25% and 50% of the labels in the training set are replaced with random labels, using no momentum (on the right) or momentum with parameter 0.9 (on the left). We observe that for a specific level of memorization, a high $\eta/S$ leads to better generalization. Red has a higher value of $\eta/S$ than blue.]

Fig. 5 reports the MLP's performance on both the noisy training set and the validation set. The results show that larger noise in SGD (regardless of whether it is induced by using a smaller batch size or a larger learning rate) leads to solutions which generalize better for the same amount of random labels memorized on the training set.
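The label-corruption protocol behind these experiments can be sketched as follows. This is our illustration of the standard recipe from (Zhang et al., 2016); the function and variable names are placeholders:

```python
import numpy as np

def corrupt_labels(y, frac, num_classes=10, seed=0):
    # Replace a fraction `frac` of the labels with uniformly random classes.
    rng = np.random.default_rng(seed)
    y = y.copy()
    idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
    y[idx] = rng.integers(0, num_classes, size=len(idx))
    return y

y_train = np.arange(20) % 10            # stand-in labels, for illustration only
print(corrupt_labels(y_train, 0.5))     # half of the labels replaced at random
```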
[Figure 6: Breaking-point analysis. Our theory suggests the final performance should be similar when the SGD noise level $\eta/S$ is kept the same; here we study its breaking point in terms of too large a learning rate or too small a batch size. (a, b, c): Validation accuracy for training dataset sizes 12000, 22500 and 45000, and different $\beta$ values, for a VGG-11 architecture trained on CIFAR10. In each experiment, we multiply the learning rate ($\eta$) and batch size ($S$) by $\beta$ such that the ratio $(\beta \cdot 0.1)/(\beta \cdot 50)$ is fixed. We observe that for the same ratio, increasing the learning rate and batch size yields similar performance up to a certain $\beta$ value, beyond which the performance drops significantly. (d): Breaking-point analysis when half the noise level, $(\beta \cdot 0.05)/(\beta \cdot 50)$, is used; the breaking point happens at a much larger $\beta$ when using smaller noise. All experiments are repeated 5 times with different random seeds. The graphs show the mean validation accuracies, and the numbers in brackets give the mean and standard deviation of the maximum validation accuracy across runs. The * denotes that at least one seed led to divergence.]

Thus, our analysis highlights that SGD with low noise $n = \eta/S$ steers the endpoint of optimization towards a minimum with low generalization ability.

While Fig. 5 reports the generalization at the endpoint, we observe that SGD with larger noise continuously steers away from sharp solutions throughout the dynamics. We also reproduce the observation reported by (Arpit et al., 2017): that memorization roughly starts after reaching maximum generalization. For runs with momentum we exclude learning rates higher than 0.02, as they lead to divergence. Full learning curves are reported in Fig. 14, included in Appendix F.3.

4.4 BREAKING POINT OF THE THEORY IN PRACTICE

Our analysis relies on the assumption that the gradient step is sufficiently small to guarantee that the first-order approximation of a Taylor expansion is a good estimate of the loss function. In the case where the learning rate becomes too high, this approximation is no longer suitable, and the continuous limit of the discrete SGD update equation is no longer valid. In this case the stochastic differential equation does not hold, and hence neither does the Fokker-Planck equation, so we do not expect our theory to be valid. In particular, we do not expect to arrive at the same stationary distribution indicated by a fixed ratio $\eta/S$ if the learning rate gets too high.

This is exemplified by the empirical results reported in Fig. 6, where similar learning dynamics and final performance can be observed when simultaneously multiplying the learning rate and batch size by a factor $\beta$, up to a certain limit. This is done for different training set sizes to investigate whether the breaking point depends on this factor (Fig. 6 a, b, c). The plots suggest that the breaking point happens at smaller $\beta$ values if the dataset size is smaller. We also investigate the influence of $\beta$ when half the noise level is used, due to halving the learning rate (Fig. 6 d). These experiments strongly suggest that the reason behind the breaking point is the use of a high learning rate, because the performance drops at a much higher $\beta$ when the base learning rate is halved. A similar experiment is performed on Resnets (for results see Fig. 7 in the appendix). We highlight other limitations of our theory in Appendix E.
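The $\beta$ sweep in this experiment amounts to jointly rescaling $\eta$ and $S$. A minimal sketch (ours, with the base values from the Fig. 6 caption) makes explicit that the noise $n = \eta/S$ is unchanged across the sweep:

```python
def rescaled_hyperparams(beta, eta_base=0.1, S_base=50):
    # Scale learning rate and batch size together; the noise n = eta/S is unchanged.
    eta, S = beta * eta_base, round(beta * S_base)
    return eta, S, eta / S

for beta in (1, 3, 5, 7):
    print(rescaled_hyperparams(beta))   # eta and S grow, eta/S stays at 0.002
```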
5 DISCUSSION

In the theoretical section of this work we treat the learning rate as fixed throughout training. However, in practical applications the learning rate is annealed to a lower value, either gradually or in discrete jumps. When viewed within our framework, at the beginning, with high noise, SGD favors the width over the depth of a region; then, as the noise decreases, SGD prioritizes the depth more strongly - this can be seen from Theorem 3 and the comments that follow it.

In the theoretical section we made the additional assumption that the covariance of the gradients is isotropic, in order to be able to derive a closed-form solution for the equilibrium distribution. We do not expect this assumption to hold in practice, but we speculate that there may be mechanisms which drive the covariance towards isotropy; for example, one may be able to tune learning rates on a per-parameter basis in such a way that the combination of learning rate and covariance matrix is approximately isotropic - this may lead to improvements in optimization. Perhaps some existing mechanisms, such as batch normalization or careful initialization, give rise to more equalized covariance - we leave the study of this for future work.

We note further that our theoretical analysis considered an equilibrium distribution, which is independent of the intermediate dynamics. However, this may not be the case in practice. Without the isotropic covariance, the system of partial differential equations in the late-time limit will in general have a solution which depends on the path through which optimization occurs, unless other restrictive assumptions are made to force this path dependence to disappear (Chaudhari & Soatto, 2017). Despite this simplifying assumption, our empirical results are consistent with the developed theory. We leave the study of path dependence and dynamics to future work.

In the experiments investigating memorization we explored how the noise level changes the preference for wide minima over sharp ones. (Arpit et al., 2017) argue that SGD first learns the true labels, before focusing on the random ones. Our insight is that in the second phase the high level of noise maintains generalization. This illustrates the trade-off between the width and depth of minima in practice. When the noise level is lower, DNNs are more likely to fit random labels better, at the expense of generalizing less well on the true ones.

6 CONCLUSIONS

We shed light on the role of noise in SGD optimization of DNNs and argue that three factors (batch size, learning rate and gradient variance) strongly influence the properties (loss and width) of the final minima at which SGD converges. The learning rate and batch size of SGD can be viewed as one effective hyperparameter acting as a noise factor $n = \eta/S$. This, together with the gradient covariance, influences the trade-off between the loss and width of the final minima. Specifically, higher noise favors wider minima, which in turn correlate with better generalization.

Further, we experimentally verify that the noise $n = \eta/S$ determines the width and height of the minima towards which SGD converges. We also show the impact of this noise on the memorization phenomenon. We discuss the limitations of the theory in practice, exemplified by the case when the learning rate gets too large. We also experimentally verify that $\eta$ and $S$ can be simultaneously rescaled as long as the noise $\eta/S$ remains the same.
BkC-HgcxG
Interesting paper, but the theoretical results are not convincing
3: Clear rejection
In this paper, the authors present an analysis of SGD within an SDE framework. The ideas and the presented results are interesting and are clearly of interest to the deep learning community. The paper is well-written overall. However, the paper has important problems. 1) The analysis is largely based on the recent paper by Mandt et al. While that is an interesting work on its own, the assumptions made in it are very strict and not very realistic. For instance, the assumption that the stochastic gradient noise is Gaussian is very restrictive, and trying to justify it just by the usual CLT is not convincing, especially when the parameter space is extremely large - the setting considered in the paper. 2) There is a mistake in the proof of Theorem 1. Even with the assumption that the gradient of sigma is bounded, Eq. 20 cannot be justified, and the equality can only be "approximately equal to". The result will only hold if sigma does not depend on theta. However, letting sigma depend on theta is the only difference from Mandt et al. On the other hand, with constant sigma the result is very trivial and can be found in any textbook on SDEs (showing the Gibbs distribution). Therefore, presenting it as a new result is misleading. 3) Even if sigma is taken constant and Theorem 1 is corrected, I don't think Theorem 2 is conclusive. Theorem 2 basically assumes that the distribution is locally a proper Gaussian (it is stated as locally convex, but it is taken as quadratic), and the result just boils down to computing a probability under a Gaussian distribution, which is still quite trivial. Apart from this assumption not being very realistic, the result does not justify the claims about "the probability of ending in a certain minimum" -- which is, on the other hand, a vague statement. First of all, "ending in" a certain area depends on many different factors, such as the structure of the distribution, the initial point, the distance between the modes, etc. Also, it is not very surprising that the inverse image of a wider Gaussian density is larger than that of a pointy one. This again does not justify the claims. For instance, consider a GMM with two components, where the means of the individual components are close to each other, but one component has a very large variance and a smaller weight, and the other has a lower variance and a higher weight. By the authors' claim, the algorithm should spend more time on the wider one; however, it is evident that this will not be the case. 4) There is a conceptual mistake in that the authors assume SGD will attain the exact stationary distribution even when the SDE is simulated with a fixed-step-size Euler integrator. As soon as one uses eta > 0, the algorithm will never attain the stationary distribution of the continuous-time process, but will attain a stationary distribution that is close to the ideal one (of course, under several smoothness and growth assumptions). The error between the ideal distribution and the empirical distribution will usually be O(eta), depending on the assumptions, and therefore changing eta will result in a different distribution than the ideal one. With this in mind, the stationary distributions for (eta/S) and (2eta/2S) will clearly be different. The experiments are very interesting and I do not underestimate their value. However, the current analysis unfortunately does not properly explain the rather strong claims of the authors, which is supposed to be the main contribution of this paper.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Three factors influencing minima in SGD ### Paper Abstract We study the statistical properties of the endpoint of stochastic gradient descent (SGD). We approximate SGD as a stochastic differential equation (SDE) and consider its Boltzmann Gibbs equilibrium distribution under the assumption of isotropic variance in loss gradients.. Through this analysis, we find that three factors – learning rate, batch size and the variance of the loss gradients – control the trade-off between the depth and width of the minima found by SGD, with wider minima favoured by a higher ratio of learning rate to batch size. In the equilibrium distribution only the ratio of learning rate to batch size appears, implying that it’s invariant under a simultaneous rescaling of each by the same amount. We experimentally show how learning rate and batch size affect SGD from two perspectives: the endpoint of SGD and the dynamics that lead up to it. For the endpoint, the experiments suggest the endpoint of SGD is similar under simultaneous rescaling of batch size and learning rate, and also that a higher ratio leads to flatter minima, both findings are consistent with our theoretical analysis. We note experimentally that the dynamics also seem to be similar under the same rescaling of learning rate and batch size, which we explore showing that one can exchange batch size and learning rate in a cyclical learning rate schedule. Next, we illustrate how noise affects memorization, showing that high noise levels lead to better generalization. Finally, we find experimentally that the similarity under simultaneous rescaling of learning rate and batch size breaks down if the learning rate gets too large or the batch size gets too small. ### Paper Keywords ["SGD", "Deep Learning", "Generalization"] ### Paper Content ABSTRACTWe study the statistical properties of the endpoint of stochastic gradient descent(SGD). We approximate SGD as a stochastic differential equation (SDE) andconsider its Boltzmann Gibbs equilibrium distribution under the assumption ofisotropic variance in loss gradients.. Through this analysis, we find that three fac-tors – learning rate, batch size and the variance of the loss gradients – control thetrade-off between the depth and width of the minima found by SGD, with widerminima favoured by a higher ratio of learning rate to batch size. In the equilibriumdistribution only the ratio of learning rate to batch size appears, implying that it’sinvariant under a simultaneous rescaling of each by the same amount. We experi-mentally show how learning rate and batch size affect SGD from two perspectives:the endpoint of SGD and the dynamics that lead up to it. For the endpoint, the ex-periments suggest the endpoint of SGD is similar under simultaneous rescaling ofbatch size and learning rate, and also that a higher ratio leads to flatter minima,both findings are consistent with our theoretical analysis. We note experimentallythat the dynamics also seem to be similar under the same rescaling of learning rateand batch size, which we explore showing that one can exchange batch size andlearning rate in a cyclical learning rate schedule. 
Next, we illustrate how noiseaffects memorization, showing that high noise levels lead to better generalization.Finally, we find experimentally that the similarity under simultaneous rescaling oflearning rate and batch size breaks down if the learning rate gets too large or thebatch size gets too small.1 I NTRODUCTIONDespite being massively over-parameterized (Zhang et al., 2016), deep neural networks (DNNs)have demonstrated good generalization ability and achieved state-of-the-art performances in manyapplication domains such as image (He et al., 2016) and speech recognition (Amodei et al., 2016).The reason for this success has been a focus of research recently but still remains an open question.Our work provides new theoretical insights and useful suggestions for deep learning practitioners.The standard way of training DNNs involves minimizing a loss function using SGD and its vari-ants (Bottou, 1998). In SGD, parameters are updated by taking a small discrete step depending onthe learning rate in the direction of the negative loss gradient, which is approximated based on asmall subset of training examples (called a mini-batch). Since the loss functions of DNNs are highlynon-convex functions of the parameters, with complex structure and potentially multiple minimaand saddle points, SGD generally converges to different regions of parameter space depending onoptimization hyper-parameters and initialization.Recently, several works (Arpit et al., 2017; Advani & Saxe, 2017; Shirish Keskar et al., 2016) haveinvestigated how SGD impacts generalization in DNNs. It has been argued that wide minima tend togeneralize better than sharp minima (Hochreiter & Schmidhuber, 1997; Shirish Keskar et al., 2016).This is entirely compatible with a Bayesian viewpoint that emphasizes targeting the probability massassociated with a solution, rather than the density value at a solution (MacKay, 1992b). Specifically,(Shirish Keskar et al., 2016) find that larger batch sizes correlate with sharper minima. In contrast,we find that it is the ratio of learning rate to batch size which is correlated with sharpness of minima,not just batch size alone. In this vein, while (Dinh et al., 2017) discuss the existence of sharpminima which behave similarly in terms of predictions compared with wide minima, we argue thatSGD naturally tends to find wider minima at higher noise levels in gradients, and such wider minimaseem to correlate with better generalization.1Under review as a conference paper at ICLR 2018In order to achieve our goal, we approximate SGD as a continuous stochastic differential equation(Bottou, 1991; Mandt et al., 2017; Li et al., 2017). Assuming isotropic gradient noise, we derive theBoltzmann-Gibbs equilibrium distribution of this stochastic process, and further derive the relativeprobability of landing in one local minima as compared to another in terms of their depth and width.Our main finding is that the ratio of learning rate to batch-size along with the gradient’s covariancesinfluence the trade-off between the depth and sharpness of the final minima found by SGD, with ahigh ratio of learning rate to batch size favouring flatter minima. In addition, our analysis providesa theoretical justification for the empirical observation that scaling the learning rate linearly withbatch size (up to a limit) leads to identical performance in DNNs (Krizhevsky, 2014; Goyal et al.,2017).We verify our theoretical insights experimentally on different models and datasets. 
In particular, wedemonstrate that high learning rate to batch size ratio (due to either high learning rate or low batch-size) leads to wider minima and correlates well with better validation performance. We also showthat a high learning rate to batch size ratio helps prevent memorization. Furthermore, we observe thatmultiplying each of the learning rate and the batch size by the same scaling factor results in similartraining dynamics. Extending this observation, we validate experimentally that one can exchangelearning rate and batch size for the recently proposed cyclic learning rate (CLR) schedule (Smith,2015), where the learning rate oscillates between two levels. Finally, we discuss the limitations ofour theory in practice.2 R ELATED WORKThe relationship between SGD and sampling a posterior distribution via stochastic Langevin meth-ods has been the subject of discussion in a number of papers (Chaudhari et al., 2017; Chen et al.,2014; Ding et al., 2014; V ollmer et al., 2015; Welling & Teh, 2011; Shang et al., 2015; Sato &Nakagawa, 2014). In particular, (Mandt et al., 2017) describe the dynamics of stochastic gradientdescent (SGD) as a stochastic process that can be divided into three distinct phases. In the firstphase, weights diffuse and move away from the initialization. In the second phase the gradientmagnitude dominates the noise in the gradient estimate. In the final phase, the weights are near theoptimum. (Shwartz-Ziv & Tishby, 2017) make related observations from an information theoreticpoint of view and suggest the diffusion behaviour of the parameters in the last phase leads to theminimization of mutual information between the input and hidden representation. We also relate theSGD dynamics to the stationary distribution of the stochastic differential equation. Our derivationbears similarity with (Mandt et al., 2017). However, while (Mandt et al., 2017) study SGD as anapproximate Bayesian inference method in the final phase of optimization in a locally convex set-ting, our end goal is to analyze the stationary distribution over the entire parameter space reached bySGD. Further, our analysis allows us to compare the probability of SGD ending up in one minimaover another (in terms of width and depth), which is novel in our case.We discuss the Fokker-Planck equation which has appeared before in the machine learning literaturethough the exact form and solution we consider we believe is novel. For example, in the onlinesetting (Heskes & Kappen, 1993) derive a Gibbs distribution from the Fokker-Planck equation, butthe relation there does not give the temperature of the Gibbs distribution in terms of the learningrate, batch size and gradient covariance.Our work is also closely related to the ongoing discussion about the role of large batch size and thesharpness of minima found in terms of generalization (Shirish Keskar et al., 2016). (Shirish Keskaret al., 2016) showed that SGD ends up in sharp minimum when using large batch size. (Goyal et al.,2017; Hoffer et al., 2017) empirically observed that scaling up the learning rate, and training formore epochs, leads to good generalization when using large batch size. Our novelty is in explainingthe importance of the ratio of learning rate to batch size. 
In particular, our theoretical and empiricalresults show that simultaneously rescaling the batch size and learning rate by the same amount leadsSGD to minima having similar width despite using different batch sizes.Concurrent with this work, (Smith & Le, 2017; Chaudhari & Soatto, 2017) have both analyzed SGDapproximated as a continuous time stochastic process and stressed the importance of the learningrate to batch size ratio. (Smith & Le, 2017) focused on the training dynamics while (Chaudhari &Soatto, 2017) explored the stationary non-equilibrium solution for the stochastic differential equa-tion for non-isotropic gradient noise, but assuming other conditions on the covariance and loss to2Under review as a conference paper at ICLR 2018enforce the stationary distribution to be path-independent. Their solution does not have an explicitsolution in terms of the loss in this case. In contrast to other work, we strictly focus on the explic-itly solvable case of the Boltzmann-Gibbs equilibrium distribution with isotropic noise. This focusallows us to relate the noise in SGD, controlled by the learning rate to batch size ratio, with thewidth of its endpoint. We empirically verify that the width and height of minima correlates with thelearning rate to batch size ratio in practice.Our work continues the line of research on the importance of noise in SGD (Bottou, 1998; Rouxet al., 2008; Neelakantan et al., 2015; Mandt et al., 2017). Our novelty is in formalizing the impactof batch size and learning rate (i.e. noise level) on the width and depth of the final minima, andempirical verifications of this.3 I NSIGHTS FROM FOKKER -PLANCKOur focus in this section is on finding the relative probability with which we end optimization in aregion near a minimum characterized by a certain loss value and Hessian determinant. We will findthat the relative probability depends on the local geometry of the loss function at each minimumalong with batch size, learning rate and the covariance of the loss gradients. To reach this result, wefirst derive the equilibrium distribution of SGD over the parameter space under a stochastic differen-tial equation treatment. We make the assumption of isotropic covariance of the loss gradients, whichallows us to write down an explicit closed-form analytic expression for the equilibrium distribution,which turns out to be a Boltzmann-Gibbs distribution.3.1 S ETUPWe follow a similar (though not identical) theoretical setup to (Mandt et al., 2017), approximatingSGD with a continuous-time stochastic process, which we now outline.Let us consider a model parameterized by =f1;:::;qg. ForNtraining examples xi;i2f1;:::;Ng, the loss function, L(), and the corresponding gradient g(), are defined based on thesum over the loss values for all training examples. 
Stochastic gradients g(S)()arise when weconsider a batchBof sizeS <N of random indices drawn uniformly from f1;:::;Ngand form an(unbiased) estimate of loss and gradient based on the corresponding subset of training examplesL(S)() =1SXn2Bl(;xn); g(S)() =@@L(S)():We consider stochastic gradient descent (SGD) with learning rate , as defined by the update rule(t+ 1) =(t)g(S)():We now make the following assumptions:(1) By the central limit theorem (CLT), we assume the noise in the stochastic gradient is Gaus-sian with covariance matrix1SC()g(S)() =g() +1pSg();where g()N(0;C()):We note that the covariance is symmetric positive-semidefinite, and so can be decomposedinto the product of two matrices C() =B()B>():(2) We assume the discrete process of SGD can be approximated by the continuous time limitof the following stochastic differential equation (known as a Langevin equation)ddt=g() +pSB()f(t) (1)where f(t)is a normalized Gaussian time-dependent stochastic term.Note that the continuous time approximation of SGD as a stochastic differential equation has beenshown to hold in a weak approximation on the condition that the learning rate is small (Li et al.,2017).3Under review as a conference paper at ICLR 2018Note that we have not made Assumption 4 of (Mandt et al., 2017), where they assume the loss canbe globally approximated by a quadratic. Instead, we allow for a general loss function, which canhave many local minima.3.2 T HREE FACTORS INFLUENCING EQUILIBRIUM DISTRIBUTIONThe Langevin equation is a stochastic differential equation, and we are interested in its equilib-rium distribution which gives insights into the behavior of SGD and the properties of the pointsit converges to. Assuming isotropic noise, the Langevin equation is well known to have a Gibbs-Boltzmann distribution as its equilibrium distribution. This equilibrium distribution can be derivedby finding the stationary solution of the Fokker-Planck equation, with detailed balance, which gov-erns the evolution of the probability density of the value of the parameters with time. The Fokker-Planck equation and its derivation is standard in the statistical physics literature. In Appendix A wegive the equation in the machine learning context in Eq. (5) and for completeness of presentation wealso give its derivation. In Appendix C we restate the standard proofs of the stationary distribution ofa Langevin system, and provide the resulting Gibbs-Boltzmann equilbirium distribution here, usingthe notation of this paper:Theorem 1 (Equilibrium Distribution) .Assume1that the gradient covariance is isotropic,i.e.C() =2I, where2is a constant. Then the equilibrium distribution of the stochastic dif-ferential equation 1 is given byP() =P0exp2L()n2; (2)wherenSandP0is a normalization constant, which is well defined for loss functions with L2regularization.Discussion : HereP()defines the density over the parameter space. The above result says that ifwe run SGD for long enough (under the assumptions made regarding the SGD sufficiently matchingthe infinitesimal limit), then the probability of the parameters being in a particular state asymptoti-cally follows the above density. Note, that nSis a measure of the noise in the system set by thechoice of learning rate and batch size S. The fact that the loss is divided by nemphasizes that thehigher the noise n, the less granular the loss surface appears to SGD. The gradient variance C()on the other hand is determined by the dataset and model priors (e.g. 
architecture, model parameter-ization, batch normalization etc.). This reveals an important area of investigation, i.e., how differentarchitectures and model parameterizations affect the gradient’s covariance structure. We note that inthe analysis above, the assumption of the gradient covariance C()being fixed and isotropic in theparameter space is unrealistic. However it is a simplification that enables straightforward insightsregarding the relationship of the noise, batch size and learning rate in the Gibbs-Boltzmann equilib-rium. We empirically show that various predictions based on this relationship hold in practice.Returning to SGD as an optimization method, we can ask, given the probability density P()canwe derive the probability of ending at a given minimum, A, which we will denote by lowercasepA= ~pAC, whereCis a normalization constant which is the same for all minima (the unnormalizedprobability ~pAis all we are interested in when estimating the relative probability of finishing in agiven minimum compared to another one). This probability is derived in Appendix D, and given inthe following theorem, which is the core insight from our theory.Theorem 2 (Probability of ending in region near minima A).Assume the loss has a series ofseparated local minima. Consider one such minima, with Hessian HAand lossLAat a minimumA. Then the unnormalized probability of ending in a region near minima Ais~pA=1pdetHAexp2LAn2(3)wheren=Sis the noise used in the SGD process to reach A.Discussion : For this analysis, we qualitatively categorize a minima Aby its lossLA(depth) andthe determinant of the Hessian detHA(a larger determinant implies a sharper minima). The above1Here we also assume a weak regularity condition that the loss L()includes the regularization term kk22for some > 0.4Under review as a conference paper at ICLR 2018result shows that the probability of landing in a specific minimum depends on three factors - learningrate, batch-size and covariance of the gradients. The two factors that we directly control only appearin the ratio given by the noise n==S. Note that the proof of this result utilizes a LaplaceApproximation in which the loss near a given minimum can be approximated using a second orderTaylor series in order to evaluate an integral. We emphasize this is not the same as globally treatingthe loss as a quadratic.To study which kind of minima are more likely if we were to reach equilibrium, it is instructive toconsider the ratio of probabilities pAandpBat two distinct minima AandBrespectively givenbypApB=rdetHBdetHAexp2n2(LBLA):To highlight that towards the equilibrium solution SGD favors wider rather than sharper minima,let’s consider the special case when LA=LB, i.e., both minima have the same loss value. Then,pApB=rdetHBdetHA:This case highlights that in equilibrium, SGD favors the minimum with lower determinant of theHessian (i.e. the flatter minima) when all other factors are identical. On the flip side, it can be seenthat if two minima have the same curvature ( detHA= det HB), then SGD will favor the minimawith lower loss. Finally in the general case when LALB, it holds that pApBif and only if1n22logqdetHBdetHA(LALB)Y :That is, there is an upper bound on the inverse of the noise for Ato be favored in the case that itsloss is higher than at B, and this upper bound depends on the difference in the heights comparedto the ratio of the widths. In particular we can see that if detHB<detHA, thenY < 0, and sono amount of noise will result in Abeing more probable than B. 
In words, if the minimum at Ais both higher and sharper than the minimum at B, it is never reached with higher probability thanB, regardless of the amount of noise. However, if detHB>detHAthenY > 0, and there is alower bound on the noisen>(LALB)22logqdetHBdetHA (4)to makeAmore probable than B. In words, if the minimum at Ais higher but flatter than theminimum atB, it is favored over B, as long as the noise is large enough, as defined by eq. (4).To summarize, the presented theory shows that the noise level in SGD (which is defined by the ratioof learning rate to batch size) controls the extent to which optimization favors wider over deeperminima. Increasing the noise by increasing the ratio of learning rate to batch size increases theprobability of wider compared to deeper minima. For a discussion on the relative probabilities ofcritical points that are not strictly minima, see appendix D.4 E XPERIMENTS4.1 I MPACT OFSON THE MINIMA SGD FINDSIn this section, we empirically study the impact of learning rate and batch size Son the localminimum that SGD finds. We first focus on a 4-layer Batch Normalized ReLU MLP trained onFashion-MNIST (Xiao et al., 2017). We study how the noise ratio n=Sleads to minima withdifferent curvatures and validation accuracy. To measure the curvature at a minimum, we computethe norm of its Hessian (a higher norm implies higher sharpness of the minimum) using the finitedifference method (Wu et al., 2017). In Figure 1a, we report the norm of the Hessian for localminima obtained by SGD for different n=S, where2[5e3;1e1]andS2[25;1000] .Each experiment is run for 200epochs; most models reach approximately 100% accuracy on train5Under review as a conference paper at ICLR 20180E+00 2E-03 4E-03/S0E+002E-014E-01Log Hessian norm(a) Correlation ofSwith logarithm ofnorm of Hessian.0E+00 1E-03 2E-03 3E-03 4E-03/S87.5%88.0%88.5%89.0%89.5%90.0%90.5%Validation performance(b) Correlation ofSwith validationaccuracy.Figure 1: Impact on SGD with ratio of learning rate and batch size Sfor 4 layer ReLU MLP onFashionMNIST.1 0 1 2020406080100Accuracy (and scaled cross-entropy)train accuracyval accuracytrain lossval loss (a) left=0:1S=128, right=0:1S=10241 0 1 2020406080100Accuracy (and scaled cross-entropy)train accuracyval accuracytrain lossval loss (b) left=0:1S=128, right=0:01S=1281 0 1 2020406080100Accuracy (and scaled cross-entropy)train accuracyval accuracytrain lossval loss (c) left=0:1S=1024, right=0:01S=128Figure 2: Interpolation of Resnet56 networks trained with different learning rate to batch size ratio,S.(x-axis) corresponds to the interpolation coefficient. As predicted by our theory, lowerSratioleads to sharper minima (as shown by the left and middle plot).set. Asngrows, we observe that the norm of the Hessian at the minima also decreases, suggestingthat higherSpushes the optimization towards flatter minima. This agrees with Theorem 2, Eq. (3),that higherSfavors flatter over sharper minima.Figure 1b shows the results from exploring the impact of n=Son the final validation performance,which confirms that better generalization correlates with higher n. Taken together, Fig. 1a andFig. 1b imply wider minima correlate well with better generalization. As n=Sincreases, SGDfinds local minima that generalize better. 
In Appendix F.1, we report similar results for Resnet56applied on CIFAR10 in Figure 8, for a 20 layer ReLU network with good initialization schemes inFigures 9a and 9c, and with bad initilization in Figure 9b.To further illustrate the behavior of SGD with different noise levels, we train three Resnet56 modelson CIFAR10 using SGD (without momentum) with differentS. Our baseline model uses=0:1S=128.In comparision, we investigate a large batch model with=0:1S=1024and a small learning rate modelwith=0:01S=128, which have approximately the sameSratio. We follow (Goodfellow et al., 2014)by investigating the loss on the line interpolating between the parameters of two models. Morespecifically, let 1and2be the final parameters found by SGD using differentS, we report theloss valuesL((1)1+2)for2[1;2]. Results indicate that models with large batch size(Fig. 2-left) or low learning rate (Fig. 2-middle; both having a lowerSthan the baseline) end up ina sharper minimum relative to the baseline model. These plots are consistent with our theoreticalanalysis that higher n==S gives preference to wider minima over sharper minima. On the otherhand, figure 2 (right) shows that models trained with roughly the same level of noise end up inminima of similar quality. The following experiment explores this aspect further.We train VGG-11 models (Simonyan & Zisserman, 2014) on CIFAR-10, such that all the mod-els are trained with the same noise level but with different values of learning rate and batch size.Specifically, we use=0:1S=50, where we set = 0:25;1;4. We then interpolate between the modelparameters found when training with = 1 and= 4 (Fig. 3-left), and = 1 and= 0:256Under review as a conference paper at ICLR 20181.00.50.00.51.01.52.0020406080100Accuracy (and scaled cross-entropy)CIFAR10 (VGG11): =1, =4train accuracyval accuracytrain lossval loss (a)= 1 corresponds to model at = 0 and= 4corresponds to model at = 11.00.50.00.51.01.52.0020406080100Accuracy (and scaled cross-entropy)CIFAR10 (VGG11): =1, =0.25train accuracyval accuracytrain lossval loss (b)= 1 corresponds to model at = 0 and= 0:25corresponds to model at = 1Figure 3: Interpolation between parameters of models trained with the same learning rate ( ) tobatch-size (S) ratio:=0:1S=50, but different andSvalues determined by . As predicted by ourtheory, the minima for models with identical noise levels should be qualitatively similar as can beseen by these plots.0 100 200 300Epoch20406080100AccuracyCyclic BS TrainCyclic LR TrainCyclic BS TestCyclic LR Test0 100 200 300Epoch20406080100Accuracybs=128, lr=0.001 trainbs=640, lr0.005 trainbs=128, lr=0.001 testbs=640, lr=0.005 testFigure 4: Learning rate schedule can be replaced by an equivalent batch size schedule. The ratio oflearning rate to batch size is equal at all times for both red and blue curves in each plot. Above plotsshow train and test accuracy for experiments involving VGG-11 architecture on CIFAR10 dataset.Left: cyclic batch size schedule (blue) in range 128 to 640, compared to cyclic learning rate schedule(red) in range 0.001 to 0.005. Right: constant batch size 128 and constant learning rate 0.001 (blue),compared to constant batch size 640 and constant learning rate 0.005 (red).(Fig. 3-right). 
The interpolation results indicate that all the minima have similar width and depth,qualitatively supporting our theoretical observation that for the same noise ratio SGD ends up inminima of similar quality.4.2SRATIO INFLUENCES LEARNING DYNAMICS OF SGDIn this section we look at two experimental phenomena: firstly, the equilibrium endpoint of SGDand secondly the dynamical evolution of SGD. The former, was theoretically analysed in the theorysection, while the latter is not directly addressed in the theory section, but we note that the two arerelated - the endpoint is the result of the intermediate dynamics.We experimentally study both phenomena in the following four experiments involving the VGG-11 architecture on the CIFAR10 dataset, shown in Fig 4. The left plot compares two experiments:cyclic batch size schedule (blue) in range 128 to 640, compared to cyclic learning rate schedule (red)in range 0.001 to 0.005. The right plot compares two other experiments: constant learning rate tobatch-size ratio ofS=0:001128(blue) andS=0:005640(red).Regarding the first phenomena, of the endpoint of SGD, the test accuracy when training with a cyclicbatch size and cyclic learning rate is 89.39% and 89.24%, respectively, and we emphasize that theseare similar scores. For a constant learning rate to batch-size ratio ofS=0:001128andS=0:005640, the7Under review as a conference paper at ICLR 20180% 20% 40% 60% 80% 100%Random label accuracy60%62%63%65%68%70%72%74%76%Validation accuracy25.0% random labels0% 20% 40% 60% 80% 100%Random label accuracy30%35%40%45%50%55%50.0% random labels0% 20% 40% 60% 80% 100%Random label accuracy60%62%63%65%68%70%72%74%76%Validation accuracy25.0% random labels0% 20% 40% 60% 80% 100%Random label accuracy30%35%40%45%50%55%50.0% random labelsFigure 5: Impact ofSon memorization of MNIST when 25% and50% of labels in the training setare replaced with random labels, using no momentum (on the right) or a momentum with param-eter0:9(on the left). We observe that for a specific level of memorization, highSleads to bettergeneralization. Red has higher value ofSthan blue.test accuracy is 87.25% and 86.92%, respectively, and we again emphasize that these two scores aresimilar to each other. That in each of these experiments the endpoint test accuracies are similar showsexchangability of learning rate for batch size for the endpoint, and is consistent with our theoreticalcalculation which says characteristics of the minima found at the endpoint are determined by theratio of learning rate to batch-size, but not individually on learning rate or batch size. Additionalresults exploring cyclical learning rate and batch size schedule are reported in Appendix F.4.Regarding the second phenomena of the dynamical evolution, we note the similarity of the train-ing and test accuracy curves for each pair of same-noise curves in each experiment. Our theoreti-cal analysis does not explain this phenomena, as it does not determine the dynamical distribution.Nonetheless, we report it here as an interesting observation, and point to Appendix B for some in-tuition on why this may occur from the Fokker-Planck equation. In Appendix F.2, Fig. 13 we showin more detail the loss curves. While the epoch-averaged loss curves match well when exchangingbatch size for learning rate, the per-iteration loss is not invariant to switching batch size for learningrate. In particular, we note that each run with smaller batch-size has higher variance in per-iterationloss than it’s same-noise pair. 
4.3 IMPACT OF SGD ON MEMORIZATION

To generalize well, a model must identify the underlying pattern in the data instead of simply perfectly memorizing each training example. An empirical approach to test for memorization is to analyze how well a DNN can fit a training set when the true labels are partly replaced by random labels (Zhang et al., 2016; Arpit et al., 2017). The experiments described in this section highlight that SGD with a sufficient amount of noise improves generalization at a given level of memorization.

Experiments are performed on the MNIST dataset with an MLP similar to the one used by Arpit et al. (2017), but with 256 hidden units. We train the MLP with different amounts of random labels in the training set. For each level of label noise, we evaluate the impact of η/S on the generalization performance. Specifically, we run experiments with η/S taking values in a grid with batch size in {50, 100, 200, 500, 1000}, learning rate in {0.005, 0.01, 0.02, 0.05, 0.07, 0.1}, and momentum in {0.0, 0.9}. Models are trained for 300 epochs. Fig. 5 reports the MLPs' performance on both the noisy training set and the validation set. The results show that larger noise in SGD (regardless of whether it is induced by using a smaller batch size or a larger learning rate) leads to solutions which generalize better for the same amount of random labels memorized on the training set. Thus, our analysis highlights that SGD with low noise n = η/S steers the endpoint of optimization towards a minimum with low generalization ability.

Figure 5: Impact of η/S on memorization of MNIST when 25% and 50% of labels in the training set are replaced with random labels, using no momentum (on the right) or a momentum with parameter 0.9 (on the left). We observe that for a specific level of memorization, high η/S leads to better generalization. Red has a higher value of η/S than blue. (Panels plot validation accuracy over random-label accuracy for the 25% and 50% random-label settings.)

While Fig. 5 reports the generalization at the endpoint, we observe that SGD with larger noise continuously steers away from sharp solutions throughout the dynamics. We also reproduce the observation reported by Arpit et al. (2017): that memorization roughly starts after reaching maximum generalization. For runs with momentum we exclude learning rates higher than 0.02, as they lead to divergence. Full learning curves are reported in Fig. 14, included in Appendix F.3.

Figure 6: Breaking point analysis. Our theory suggests the final performance should be similar when the SGD noise level η/S is kept the same. Here we study its breaking point in terms of too large a learning rate or too small a batch size. (a, b, c) Validation accuracy over epochs for different training set sizes (12000, 22500, and 45000) and different β values for a VGG-11 architecture trained on CIFAR10. In each experiment, we multiply the learning rate (η) and batch size (S) by β such that the ratio (η = β·0.1)/(S = β·50) is fixed. We observe that for the same ratio, increasing the learning rate and batch size yields similar performance up to a certain β value, for which the performance drops significantly. (d) Breaking point analysis when half the noise level, (η = β·0.05)/(S = β·50), is used; the breaking point happens at a much larger β when using the smaller noise. All experiments are repeated 5 times with different random seeds. The curves denote the mean validation accuracies, and the numbers in brackets denote the mean and standard deviation of the maximum validation accuracy across different runs. The * denotes that at least one seed led to divergence. (Panels sweep β from 1 to 15 in panels (a, b, c) and from 1 to 29 in panel (d).)

4.4 BREAKING POINT OF THE THEORY IN PRACTICE

Our analysis relies on the assumption that the gradient step is sufficiently small to guarantee that the first-order approximation of a Taylor expansion is a good estimate of the loss function. In the case where the learning rate becomes too high, this approximation is no longer suitable, and the continuous limit of the discrete SGD update equation is no longer valid. In this case, the stochastic differential equation does not hold, and hence neither does the Fokker-Planck equation, so we do not expect our theory to be valid. In particular, we do not expect to arrive at the same stationary distribution indicated by a fixed ratio η/S if the learning rate gets too high.

This is exemplified by the empirical results reported in Fig. 6, where similar learning dynamics and final performance can be observed when simultaneously multiplying the learning rate and batch size by a factor, up to a certain limit. This is done for different training set sizes to investigate whether the breaking point depends on this factor (Fig. 6 a, b, c). The plots suggest that the breaking point occurs at smaller β values if the dataset size is smaller. We also investigate the influence of β when half the noise level is used, due to halving the learning rate (Fig. 6d). These experiments strongly suggest that the reason behind the breaking point is the use of a high learning rate, because the performance drops at a much higher β when the base learning rate is halved. A similar experiment is performed on ResNets (for results see Fig. 7 in the appendix). We highlight other limitations of our theory in Appendix E.
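The role of a too-large learning rate can be isolated in a noise-free toy setting. For a quadratic loss with curvature λ, the discrete gradient-descent map is w ← (1 - ηλ)w, which contracts only while ηλ < 2, whereas the continuous-time gradient flow it approximates converges for every η. The following sketch is our own toy illustration of such a discretization breaking point, not one of the experiments above:

```python
def gd_on_quadratic(lr, curvature=10.0, w0=1.0, n_steps=50):
    """Plain gradient descent on L(w) = 0.5 * curvature * w**2. The discrete
    update is w <- (1 - lr * curvature) * w: a contraction iff lr * curvature < 2."""
    w = w0
    for _ in range(n_steps):
        w -= lr * curvature * w
    return w

for lr in (0.05, 0.15, 0.19, 0.21):
    print(f"lr={lr:.2f}  lr*curvature={lr * 10:.1f}  "
          f"final |w|={abs(gd_on_quadratic(lr)):.2e}")
# lr*curvature = 0.5 and 1.5 converge, 1.9 decays slowly while oscillating,
# and 2.1 diverges: the continuous-time description breaks down at lr*curvature = 2.
```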
5 DISCUSSION

In the theoretical section of this work we treat the learning rate as fixed throughout training. However, in practical applications, the learning rate is annealed to a lower value, either gradually or in discrete jumps. When viewed within our framework, at the beginning, with high noise, SGD favors width over depth of a region; then, as the noise decreases, SGD prioritizes depth more strongly. This can be seen from Theorem 3 and the comments that follow it.

In the theoretical section we made the additional assumption that the covariance of the gradients is isotropic, in order to be able to derive a closed-form solution for the equilibrium distribution. We do not expect this assumption to hold in practice, but we speculate that there may be mechanisms which drive the covariance towards isotropy. For example, one may be able to tune learning rates on a per-parameter basis in such a way that the combination of learning rate and covariance matrix is approximately isotropic; this may lead to improvements in optimization. Perhaps some existing mechanisms such as batch normalization or careful initialization give rise to more equalized covariance; we leave the study of this for future work.

We note further that our theoretical analysis considered an equilibrium distribution, which was independent of the intermediate dynamics. However, this may not be the case in practice. Without the isotropic covariance, the system of partial differential equations in the late-time limit will in general have a solution which depends on the path through which optimization occurs, unless other restrictive assumptions are made to force this path dependence to disappear (Chaudhari & Soatto, 2017). Despite this simplifying assumption, our empirical results are consistent with the developed theory. We leave the study of path dependence and dynamics to future work.

In experiments investigating memorization, we explored how the noise level changes the preference for wide minima over sharp ones. Arpit et al. (2017) argue that SGD first learns true labels, before focusing on random labels. Our insight is that in the second phase the high level of noise maintains generalization. This illustrates the trade-off between width and depth of minima in practice. When the noise level is lower, DNNs are more likely to fit random labels better, at the expense of generalizing less well on the true ones.

6 CONCLUSIONS

We shed light on the role of noise in SGD optimization of DNNs and argue that three factors (batch size, learning rate, and gradient variance) strongly influence the properties (loss and width) of the final minima at which SGD converges. The learning rate and batch size of SGD can be viewed as one effective hyper-parameter acting as a noise factor n = η/S. This, together with the gradient covariance, influences the trade-off between the loss and width of the final minima. Specifically, higher noise favors wider minima, which in turn correlates with better generalization.

Further, we experimentally verify that the noise n = η/S determines the width and height of the minima towards which SGD converges. We also show the impact of this noise on the memorization phenomenon. We discuss the limitations of the theory in practice, exemplified by the case where the learning rate gets too large. We also experimentally verify that η and S can be simultaneously rescaled as long as the noise η/S remains the same.<|im_end|> <|im_start|>assistant ### Review Title Interesting paper, however not convincing theoretical results ### Review Text In this paper, the authors present an analysis of SGD within an SDE framework. The ideas and the presented results are interesting and are clearly of interest to the deep learning community. The paper is well-written overall.
However, the paper has important problems. 1) The analysis is widely based on the recent paper by Mandt et al. While that is an interesting work on its own, the assumptions made in that paper are very strict and not very realistic. For instance, the assumption that the stochastic gradient noise is Gaussian is very restrictive, and trying to justify it just by the usual CLT is not convincing, especially when the parameter space is extremely large, which is the setting considered in the paper. 2) There is a mistake in the proof of Theorem 1. Even with the assumption that the gradient of sigma is bounded, eq 20 cannot be justified, and the equality can only be "approximately equal to". The result will only hold if sigma does not depend on theta. However, letting sigma depend on theta is the only difference from Mandt et al. On the other hand, with constant sigma the result is very trivial and can be found in any textbook on SDEs (showing the Gibbs distribution). Therefore, presenting it as a new result is misleading. 3) Even if sigma is taken constant and Theorem 1 is corrected, I don't think Theorem 2 is conclusive. Theorem 2 basically assumes that the distribution is locally a proper Gaussian (it is stated as locally convex; however, it is taken as quadratic), and the result just boils down to computing some probability under a Gaussian distribution, which is still quite trivial. Apart from this assumption not being very realistic, the result does not justify the claims on "the probability of ending in a certain minimum" -- which is, on the other hand, a vague statement. First of all, "ending in" a certain area depends on many different factors, such as the structure of the distribution, the initial point, the distance between the modes, etc. Also it is not very surprising that the inverse image of a wider Gaussian density is larger than that of a pointy one. This again does not justify the claims. For instance, consider a GMM with two components, where the means of the individual components are close to each other, but one component has a very large variance and a smaller weight, and the other one has a lower variance and a higher weight. By the authors' claim, the algorithm should spend more time on the wider one; however, it is evident that this will not be the case. 4) There is a conceptual mistake in that the authors assume that SGD will attain the exact stationary distribution even when the SDE is simulated by the fixed step-size Euler integrator. As soon as one uses eta > 0, the algorithm will never attain the stationary distribution of the continuous-time process, but will attain a stationary distribution that is close to the ideal one (of course, with several smoothness and growth assumptions). The error between the ideal distribution and the empirical distribution will usually be O(eta), depending on the assumptions, and therefore changing eta will result in a different distribution than the ideal one. With this in mind, the stationary distributions for (eta/S) and (2eta/2S) will be clearly different. The experiments are very interesting and I do not underestimate their value. However, the current analysis unfortunately does not properly explain the rather strong claims of the authors, which is supposed to be the main contribution of this paper. ### Review Rating 3: Clear rejection ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
SkfNU2e0Z
ICLR.cc/2018/Conference
2018
Statestream: A toolbox to explore layerwise-parallel deep neural networks
["Volker Fischer"]
Building deep neural networks to control autonomous agents which have to interact in real-time with the physical world, such as robots or automotive vehicles, requires a seamless integration of time into a network's architecture. The central question of this work is how the temporal nature of reality should be reflected in the execution of a deep neural network and its components. Most artificial deep neural networks are partitioned into a directed graph of connected modules or layers, and the layers themselves consist of elemental building blocks, such as single units. For most deep neural networks, all units of a layer are processed synchronously and in parallel, but layers themselves are processed in a sequential manner. In contrast, all elements of a biological neural network are processed in parallel. In this paper, we define a class of networks between these two extreme cases. These networks are executed in a streaming or synchronous layerwise-parallel manner, unlocking the layers of such networks for parallel processing. Compared to the standard layerwise-sequential deep networks, these new layerwise-parallel networks show a fundamentally different temporal behavior and flow of information, especially for networks with skip or recurrent connections. We argue that layerwise-parallel deep networks are better suited for future challenges of deep neural network design, such as large functional modularized and/or recurrent architectures as well as networks allocating different network capacities dependent on current stimulus and/or task complexity. We lay out basic properties and discuss major challenges for layerwise-parallel networks. Additionally, we provide a toolbox to design, train, evaluate, and online-interact with layerwise-parallel networks.
["model-parallel", "parallelization", "software platform"]
ABSTRACT

Building deep neural networks to control autonomous agents which have to interact in real-time with the physical world, such as robots or automotive vehicles, requires a seamless integration of time into a network's architecture. The central question of this work is how the temporal nature of reality should be reflected in the execution of a deep neural network and its components. Most artificial deep neural networks are partitioned into a directed graph of connected modules or layers, and the layers themselves consist of elemental building blocks, such as single units. For most deep neural networks, all units of a layer are processed synchronously and in parallel, but layers themselves are processed in a sequential manner. In contrast, all elements of a biological neural network are processed in parallel. In this paper, we define a class of networks between these two extreme cases. These networks are executed in a streaming or synchronous layerwise-parallel manner, unlocking the layers of such networks for parallel processing. Compared to the standard layerwise-sequential deep networks, these new layerwise-parallel networks show a fundamentally different temporal behavior and flow of information, especially for networks with skip or recurrent connections. We argue that layerwise-parallel deep networks are better suited for future challenges of deep neural network design, such as large functional modularized and/or recurrent architectures as well as networks allocating different network capacities dependent on current stimulus and/or task complexity. We lay out basic properties and discuss major challenges for layerwise-parallel networks. Additionally, we provide a toolbox to design, train, evaluate, and online-interact with layerwise-parallel networks.

1 INTRODUCTION

Over the last years, the combination of newly available large datasets, parallel computing power, and new techniques to design, implement, and train deep neural networks has led to significant improvements and numerous newly enabled applications in various fields, including vision, speech, and reinforcement learning. Considering applications for which a neural network controls a system that interacts in real-time with the physical world, ranging from robots and autonomous vehicles to chat-bots and networks playing computer games, renders it essential to integrate time into the network's design.

In recent deep learning literature, enabling networks to learn and represent temporal features has gained interest. Methods were presented leveraging short-term dynamic features to build temporally consistent network responses (e.g. Ilg et al. (2017), Luc et al. (2017)), as well as networks learning to store and utilize information over longer time periods (e.g. Neil et al. (2016), Graves et al. (2016)).

Two major aspects of the role of time in neural networks can be distinguished. First, the way neural networks and their components, such as layers or single units, are implemented: for example, network components could operate sequentially or in parallel, and in the case of parallel evaluation, synchronous and asynchronous implementations can be distinguished. Second, the extent to which the network, through its architecture, can form representations of temporal features: for example, if the network has no mechanisms to integrate information over time, such as recurrent connections, the network will not be able to represent temporal features, such as optic-flow.
In this work, we focus on the implementation aspect, but we highly emphasise that our approach fundamentally influences the network's temporal behavior and the way information is integrated over time.

Whereas biological neural networks and some realizations of neural networks in silicon (reviewed in Indiveri et al. (2011), comparison in Farabet et al. (2012)) can operate on a continuous temporal dimension, we will assume a discrete (frame-based) temporal domain throughout this paper.

1.1 LAYERWISE-PARALLEL DEEP NEURAL NETWORKS

Considering sequential and parallel realizations of artificial neural networks: at one end of the spectrum, biologically inspired models of spiking neural networks have been studied for a long time (e.g. Liu et al. (2015)) and, in most cases, are simulated in a way that states of all neurons in a network are updated in parallel and in a synchronous frame-based manner. In contrast to this parallel processing of all neurons, modern deep neural networks are constructed using collections of neurons, sometimes called layers, modules, or nodes, and while all neurons of the same layer are computed in parallel, the layers themselves are computed sequentially.

Figure 1: Three simple network examples, a pure feed-forward network F), a skip network S), and a recurrent network R) (leftmost column), illustrating the difference between layerwise-sequential (middle column) and layerwise-parallel (right column) network execution. For both network types, inference on four succeeding (from left to right) time frames, each marked by a stimulus pictogram, is drawn. Encircled nodes indicate currently updated / computed / inferred layers, and grey-underlayed areas indicate already computed network parts. The same pictograms inside layers indicate information from the corresponding stimulus in a layer at a specific time frame. To increase clarity for the layerwise-parallel case, we omitted information from previous stimuli still memorized by the network. For the layerwise-sequential recurrent network (bottom left network), we used a 1-step rollout window. Local frames for layerwise-sequential networks differ between architectures (3 frames for F) and S), and 4 frames for R)).

In this work, we argue for a network type between these two ends of the spectrum, which we call layerwise-parallel deep neural networks. The difference to the widely used layerwise-sequential networks in deep learning is that layers have a memory of their previous state and compute their next state on the basis of the (temporally) previous state of all (topologically) previous layers. A schematic illustration of layerwise-sequential and layerwise-parallel networks is shown in Fig. 1. We will call the part of a layer holding its current state a neuron-pool (nodes in Fig. 1), parts of the network holding information about transformations between neuron-pools synapse-pools (edges in Fig. 1), and functions yielding updates for some network parameters plasticities (not shown in Fig. 1). In our definition, all responses of synapse-pools targeting the same neuron-pool are simply added up. More thorough definitions are given in Section 2.

In Fig. 1, network inference is illustrated for the two network types over four succeeding time frames. In this work, time frame always refers to the global time frame in which stimuli are presented to the network.
For layerwise-sequential networks, we have another implicit type of local frames due to their sequential nature, and the number of local frames depends on the network architecture (compare Fig. 1: 3 local frames for F) and S), 4 local frames for R)). In contrast, for layerwise-parallel networks all layers are updated in parallel, leading to two local frames (current and updated). Information contained in the stimuli (squares) and neuron-pool states (circles) is encoded by the three stimulus pictograms. Simple forward architectures (example in the first row) without skip or recurrent connections lead to similar temporal behavior for the two network types. Introducing skip or recurrent connections (examples S) and R) in Fig. 1) leads to potentially different temporal behavior for layerwise-sequential and layerwise-parallel networks. For example, in Fig. 1, network responses differ between layerwise-sequential and layerwise-parallel networks at the 2nd frame for the S) and R) networks (marked "different!" in Fig. 1). This difference becomes more drastic for larger, more complex network architectures, for which information from different time frames is integrated. Considering a layerwise-parallel network at a certain point in time, information from different previous time frames is distributed across the network. The distribution pattern is directly defined by the network's architecture. The biological counterpart as well as recent deep learning literature suggest using gating mechanisms to guide content- and time-dependent information flow.

One aspect of layerwise-parallel networks is the synchronization of the parallel updated layers. This is important especially for neuron-pools representing temporal features, because these, by definition, depend on temporal differences. In the case of asynchronous parallelization of network parts, one solution would be to provide time itself as a network input. We focused on the synchronized approach, because otherwise networks would have to learn to use the additionally provided input of time, and temporal features would also be harder to interpret.

Another property of layerwise-parallel networks is that networks are parallelized independently of their architecture. A layerwise-parallel network is designed using constrained shallow elements, and the network is parallelized across all elements. For example, we prohibit using an arbitrary deep network as a synapse-pool connecting two neuron-pools. A detailed definition of neuron- and synapse-pool operations is given in Section 2. With respect to this architecture-independent parallelization, layerwise-parallel networks also differ from other model-parallel approaches like synthetic gradients (Jaderberg et al., 2017) or direct feedback alignment (Nøkland, 2016).

1.2 RELATION TO STATE-OF-THE-ART

One major advantage of layerwise-parallel over layerwise-sequential networks is that network elements such as neuron-pools and plasticities can be computed in parallel. As stated in Jaderberg et al. (2017), the sequential nature of current deep networks results in computation locks, hindering efficient execution. Several approaches were proposed to circumvent the backward lock, which locks parameter updates due to the training procedure, by providing auxiliary error signals for intermediate layers (e.g. Lillicrap et al. (2016), Jaderberg et al. (2017), Nøkland (2016)).
While these methods, to some extent, solve some drawbacks of the most widely used and effective technique for neural network training, namely backpropagation, they do not address the more fundamental difference between parallel and sequential network evaluation in biological and artificial neural networks: the network's integration of temporal information during inference stays the same as for layerwise-sequential networks. Further, these approaches are not directly applicable to layerwise-parallel networks and would have to take the temporal displacement between layers into account.

Integration of temporal information in deep networks is often achieved using recurrent neural networks (RNNs). For inference and training, these RNNs are normally converted into feed-forward networks using network rollout and weight sharing over time (e.g. Hochreiter & Schmidhuber (1997)), especially to use existing methods to train feed-forward networks. Similar to the previously mentioned methods which target problems of backpropagation, the idea of network rollout only tackles a symptom arising from layerwise-sequential networks, while creating new challenges like network initialization at the beginning of the rollout or scalability over time, which is not the case for our approach, at least during inference.

Beside recurrent connections, skip connections are also widely used in deep networks, such as ResNets (He et al., 2016). Especially the used identity skip connections can be interpreted as a local network rollout acting as a local filtering, which could also be achieved through recurrent self-connections (Greff et al., 2017). Hence it seems that currently used skip connections are primarily employed to mitigate problems of backpropagation rather than to form early, temporally shallow representations in abstract layers on the basis of layers with lower abstraction, which would be biologically plausible (Bullier, 2001).

The concept of layerwise-parallel networks is also strongly related to ideas like BranchyNet (Teerapittayanon et al., 2016), which uses intermediate features for early classification and hence enables stimulus-complexity-dependent allocation of network resources. This is natively achieved with layerwise-parallel networks using skip connections, which, for layerwise-parallel networks, introduce temporal shortcuts. Hence, in general, the network has shorter response times, using short (in time and network architecture) pathways, for simple stimuli, and longer response times, using deeper pathways, for complex stimuli. A simple example for this is given in Section 3.

Figure 2: Visualization example of a simple classification network using the provided toolbox (best viewed in color). The network is shown as a graph (green nodes are neuron-pools, blue nodes are synapse-pools) together with information about the network.

The already mentioned different integration of information over time between layerwise-sequential and layerwise-parallel networks also causes differences in network evaluation. For layerwise-parallel networks, the response to a given stimulus is delayed in time dependent on the network architecture. Hence, on the one hand, existing performance measures for layerwise-sequential networks, for example accuracies or confusion matrices, must be adapted to take time into account, while on the other hand, measures not available for layerwise-sequential networks, such as reaction times, can now be employed.

Further, network training has to be adapted for layerwise-parallel networks.
For layerwise-sequential networks, normally an error is formulated for network outputs and back-propagated through the network, providing a signal to update parameters (Rumelhart et al. (1986)). To some extent, existing training mechanisms for layerwise-sequential networks can be applied to layerwise-parallel networks by using local losses and rolling out local parts of the network. Also, biologically inspired training rules, such as Hebbian learning (Hebb, 1949), can be used. We provide more details on solutions for evaluation and training of layerwise-parallel networks in Sections 3.1 and 3.2.

In the previous paragraphs, we laid out some motivation for layerwise-parallel networks and compared them to layerwise-sequential networks. And while we are not the first to point out certain drawbacks of layerwise-sequential networks (e.g. Jaderberg et al. (2017), Farabet et al. (2012)), it is our understanding that the underrepresentation of layerwise-parallel networks in deep learning literature is due to a lack of tools to explore this class of networks.

One of the main contributions of this work is to provide an open source toolbox (available at http://bit.ly/2yNfroI) to design, train, evaluate, and interact with layerwise-parallel deep networks. An example screenshot of the provided graphical user interface is shown in Fig. 2.

2 LAYERWISE-PARALLEL DEEP NETWORKS

In this section, we give a formal definition of layerwise-parallel networks and their elements.

2.1 NETWORK MODEL AND INFERENCE

We describe a neural network as a graph $(V, E)$, with $V = \{v_i \mid i = 1, \ldots, N_V\}$ being a set of vertices and $E = \{e_j \mid j = 1, \ldots, N_E\}$ being a set of directed (hyper-)edges with potentially multiple source vertices $\mathrm{src}_j \subset V$ and a single target vertex $\mathrm{tgt}_j \in V$. Each vertex $v_i$ has a fixed dimensionality $D_i = (F_i, W_i, H_i) \in \mathbb{N}^3$, a state $x_i^t \in \mathbb{R}^{D_i} = \mathbb{R}^{F_i} \times \mathbb{R}^{W_i} \times \mathbb{R}^{H_i}$ at time $t \in \mathbb{N}$, and a parameterized mapping $\phi^i_{\vartheta_i^t} : \mathbb{R}^{D_i} \to \mathbb{R}^{D_i}$ with some parameters $\vartheta_i^t$. Each edge $e_j$ has a parameterized mapping

$$f^j_{\theta_j^t} : \prod_{v_i \in \mathrm{src}_j} \mathbb{R}^{D_i} \to \mathbb{R}^{D_{\mathrm{tgt}_j}}$$

with some parameters $\theta_j^t$. For a vertex $v_i$, let $\mathrm{input}_i \subset E$ denote the set of all edges targeting $v_i$. We define a one-step temporal propagation for every vertex:

$$x_i^{t+1} = F_i(x^t, \theta^t, \vartheta^t) = \phi^i_{\vartheta_i^t}\Big( \sum_{e_j \in \mathrm{input}_i} f^j_{\theta_j^t}\big( \{ x_k^t \}_{k \in \mathrm{src}_j} \big) \Big)$$

As stated earlier, we also refer to the vertices of the network graph as neuron-pools and to the edges as synapse-pools.

Using this one-step temporal propagation, all network states $x_i$ can be updated independently of each other and in parallel from time step to time step.

Although the above definition is general, the provided toolbox introduces some restrictions on what update functions can be specified. An explicit specification of the internal structure of the update functions was chosen to include common elements of deep neural networks, such as convolutional layers, inception mechanisms, and gated connections. Please see Appendix 6.1 for more details.

We emphasize again that for layerwise-parallel networks, parallelization is independent of the network architecture. The network is designed using certain elements, and these are always parallelized. Hence, these elements have to be flexible to a certain extent, to enable users to design various architectures.
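To make the one-step propagation concrete, the following minimal NumPy sketch updates all neuron-pool states of a small skip network strictly from the previous frame's states, so every pool update could run in parallel. The class names, the linear synapse-pool mapping, and the tanh state mapping are illustrative choices of ours, not the toolbox's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
dims = {"input": 8, "hidden": 16, "output": 4}

class SynapsePool:
    """Directed (hyper-)edge: maps the states of its source pools to the
    dimensionality of its target pool; here a single linear map applied
    to the concatenated source states."""
    def __init__(self, sources, target):
        self.sources, self.target = sources, target
        d_in = sum(dims[s] for s in sources)
        self.W = rng.normal(0.0, 0.1, size=(dims[target], d_in))

    def __call__(self, states):
        return self.W @ np.concatenate([states[s] for s in self.sources])

pools = {name: np.zeros(d) for name, d in dims.items()}   # states x_i^t
edges = [SynapsePool(["input"], "hidden"),
         SynapsePool(["hidden"], "output"),
         SynapsePool(["input"], "output")]                 # temporal skip path

def step(states, stimulus):
    """One global frame: each pool reads only the previous states, so all
    pool updates are independent; edges sharing a target are summed, and
    the pool's own mapping (here tanh) is applied on top."""
    new = {"input": stimulus}                              # sensory interface
    for target in ("hidden", "output"):
        pre = sum(e(states) for e in edges if e.target == target)
        new[target] = np.tanh(pre)
    return new

for t in range(4):
    pools = step(pools, rng.normal(size=dims["input"]))
    print(t, np.round(pools["output"][:2], 3))
```

Through the skip edge, information from a stimulus reaches the output pool one frame earlier than through the hidden pool, which is exactly the temporal shortcut discussed above.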
2.2 NETWORK TRAINING

Let $\Theta^t = (\theta^t, \vartheta^t)$ denote all current parameters. We refer to a mapping $p$ that, on the basis of current and previous states and parameters, produces a parameter update $\Delta\Omega_p$ for some parameters $\Omega_p \subset \Theta$ as a plasticity:

$$\Delta\Omega_p(t) = p(x^t, \Theta^t)$$

For a given set of plasticities $P = \{p_i \mid i = 1, \ldots, N_P\}$, we define a one-step parameter update function for every single parameter $w \in \Theta$:

$$w^{t+1} = w^t + \sum_{i \,:\, w \in \Omega_{p_i}} \Delta w_{p_i}$$

Note that all plasticities can be computed independently of each other, only on the basis of previous states of neuron-pools and synapse-pools, and hence in parallel to the update functions for network inference.

Beside this abstract definition, some more explicit examples of plasticities, which are provided with the toolbox, are given below.
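A plasticity in this formalism consumes the previous frame's states and parameters and emits update estimates; updates from different plasticities targeting the same parameter are summed. The sketch below illustrates this aggregation with two hypothetical plasticities (a Hebbian estimate, anticipating Section 3.1, and an L2 regularizer); the function names and the dictionary-based parameter store are our own illustrative choices:

```python
import numpy as np

def hebbian_plasticity(x_prev, x_next, params):
    # Estimate Delta w_ij = x_i^t * x_j^{t+1} for one edge, as an outer
    # product of post-synaptic (t+1) and pre-synaptic (t) states.
    return {"W_in_hidden": np.outer(x_next["hidden"], x_prev["input"])}

def l2_regularizer(x_prev, x_next, params, strength=1e-4):
    return {"W_in_hidden": -strength * params["W_in_hidden"]}

def apply_plasticities(params, plasticities, x_prev, x_next, lr=1e-2):
    """One-step update w^{t+1} = w^t + sum of the update estimates of all
    plasticities whose parameter set contains w."""
    total = {}
    for p in plasticities:
        for name, delta in p(x_prev, x_next, params).items():
            total[name] = total.get(name, 0.0) + delta
    return {name: w + lr * total.get(name, 0.0) for name, w in params.items()}

params = {"W_in_hidden": np.zeros((16, 8))}
x_prev = {"input": np.ones(8), "hidden": np.zeros(16)}
x_next = {"input": np.ones(8), "hidden": np.full(16, 0.5)}
params = apply_plasticities(params, [hebbian_plasticity, l2_regularizer],
                            x_prev, x_next)
print(params["W_in_hidden"][0, 0])   # 0.005
```

Since every plasticity reads only states that are already available, each one could run as its own process alongside inference, which is the parallelization property claimed above.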
3 CHALLENGES

Most challenges in working with layerwise-parallel networks are caused by the fact that, at a given point in time, information from different previous time frames is distributed across the network. The distribution pattern is directly given by the network's architecture and can be conveniently visualized using a rolled out version of the network. In general, gating neuron-pools could guide the information flow through this pattern, for example dependent on changes in the input stimuli.

A small example layerwise-parallel network for MNIST classification is illustrated in Fig. 3, showing the network architecture in Fig. 3a and the rolled out network in Fig. 4 to visualize information flow. Similar to the idea of BranchyNet (Teerapittayanon et al., 2016), the network uses two paths of different depths for classification. We use this small network to illustrate some important mechanisms of layerwise-parallel networks.

Figure 3: Illustration of a 2-path classification network. (a) The graph of the network. The network consists of two stacked convolutions, for which the neuron-pools conv1 and conv2 ([rf, rf, features_out, stride] = [5, 5, 32, 2] and [5, 5, 64, 2]) hold the resulting feature maps, and fully connected classification on top of both of them, with pred1 and pred2 storing the intermediate classification results. These intermediate results are aggregated through summation in the final result (prediction). For training, a one-step copy of the current ground truth, which is stored in the neuron-pools label and label copy, is needed. (b) Classification accuracy on the MNIST test dataset, relative to the temporal offset in frames between stimulus (image and label) onset and network response (prediction).

3.1 PLASTICITIES

As stated above, plasticities operate in parallel to neuron-pools, and while all neuron-pools are computing the one-step update $t \to t+1$, all plasticities compute current parameter updates on the basis of the current time step $t$. For most plasticities, the network is rolled out locally, considering only a subset of all neuron-pools, and initialized with the neuron-pool states at time $t$. This is illustrated in Fig. 4, where local rollouts are shown for the two plasticities used to train the example network in Fig. 3a. After plasticities have computed parameter updates, these updates are aggregated by the synapse-pools, and in the next step the plasticity operates on the states of the now updated neuron-pool states and parameters from time $t+1$.

Figure 4: Rolled out network for maximal path length 4, and the two sub-networks used to compute the loss for the plasticities. All continuous lines represent synapse-pools which are initialized randomly and trained by the plasticities. All dotted lines are initialized as identity and are not trained.

To increase transparency and re-usability, we separate plasticities into two parts: an update part, which, state- and memory-free, computes a temporally local estimate of parameter updates, and an optimizer part, which transforms the update estimates, for example through temporal smoothing, into the current update that is used for the actual update of parameters.

The toolbox provides some of the most widely used optimizers, such as stochastic gradient descent and ADAM (Kingma & Ba, 2015), and can easily be extended with new ones. Additionally, three types of update estimators are provided to specify plasticities:

Loss based update: To leverage the large amount of existing techniques for training layerwise-sequential deep networks, a loss can be defined on the basis of one or two neuron-pool states at a certain temporal offset $\Delta t$ from the current time step $t = 0$. For example, considering the network from Fig. 4, two loss-based plasticities are used to train the network:

$$L_{\text{class}}\big(x^{t=1}_{\text{pred1}}, x^{t=0}_{\text{label copy}}\big) = \text{categorical-crossentropy}\big(x^{t=1}_{\text{pred1}}, x^{t=0}_{\text{label copy}}\big)$$

$$L_{\text{deep-class}}\big(x^{t=3}_{\text{pred2}}, x^{t=0}_{\text{label}}\big) = \text{categorical-crossentropy}\big(x^{t=3}_{\text{pred2}}, x^{t=0}_{\text{label}}\big)$$

Here, all losses are based on two neuron-pools. To compute the loss, the network is rolled back locally from the neuron-pools until now ($t = 0$), being transformed into a feed-forward network, as can be seen on the right side of Fig. 4. Training is done as usual for layerwise-sequential networks. Note that the validity and temporal properties of what we train highly depend on the chosen neuron-pools and temporal offsets. Concerning validity, for example for $L_{\text{deep-class}}$ we could not have chosen the neuron-pool states $x^{t=4}_{\text{pred2}}$ and $x^{t=1}_{\text{label}}$, because then the rolled back network would have needed an input image and label from the future time step $t = 1$, which are not available now ($t = 0$). Concerning temporal properties, for example, if we defined the loss $L_{\text{class}}$ on $x^{t=1}_{\text{pred1}}$ and $x^{t=0}_{\text{label}}$, we would have introduced a temporal offset of 1 between prediction and ground truth, leading to unintended behavior, especially when the input changes.

In general, loss based plasticities are expensive in the sense that, potentially for large parts of the network, the inference (forward) step is done twice: once in the neuron-pools and potentially more than once due to the rollout in the plasticity. Hence, local plasticities are preferred, which only operate on a small set of neuron-pools and do not need a deep rollout. To achieve this, we suggest functional modularisation of the network, which also increases network transparency and trainability.
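A loss-based plasticity can be sketched as follows: starting from states stored at the current frame $t = 0$, a local chain of synapse-pools is rolled out as an ordinary feed-forward network, and the activation emerging at rollout depth $\Delta t$ is compared against a reference that is already available now, mirroring the pairing of $x^{t=\Delta t}_{\text{pred}}$ with $x^{t=0}_{\text{label}}$. The weights, the depth, and the single-chain topology below are simplifying assumptions of ours:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def local_rollout_loss(states_now, label_now, weights, depth):
    """Roll `depth` synapse-pools forward from the stored frame-0 state and
    evaluate a categorical cross-entropy against the label available at
    frame 0; both endpoints live in the present, so no future inputs are
    needed (the validity condition discussed above)."""
    x = states_now["image"]
    for W in weights[:depth]:       # local feed-forward rollout
        x = np.tanh(W @ x)
    return -np.log(softmax(x)[label_now])

rng = np.random.default_rng(0)
weights = [rng.normal(0.0, 0.3, size=(10, 10)) for _ in range(3)]
states = {"image": rng.normal(size=10)}
print(local_rollout_loss(states, label_now=3, weights=weights, depth=3))
# For training, this scalar would be differentiated w.r.t. the rolled-out
# weights, e.g. by an autodiff backend, exactly as in a sequential network.
```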
Hebbian based update: With some restrictions, we also provide the biologically more plausible, well-known Hebbian learning rule (Hebb, 1949) and some of its variants as an update estimator. For example, for a synapse $w_{ij}$ connecting some neuron $x_i$ with some neuron $x_j$:

$$\Delta w^t_{ij} = x^t_i \, x^{t+1}_j$$

Note that this is used as an estimate and could be used as input for some optimizer, such as ADAM. Plasticities based on this estimator always use an internal one-step rollout of the target neuron-pool and hence provide rather local plasticities compared to loss based plasticities.

Parameter regularization update: Parameter update estimators can also be based directly on parameters rather than states, which is the case, for example, when using $L_1$ or $L_2$ regularization on certain parameters.

3.2 NETWORK EVALUATION

Considering performance measures for layerwise-parallel networks, we follow general ideas from the analysis of spiking neural networks and from experimental psychology (e.g. Diehl et al. (2015), Woods et al. (2015)).

Let $m(x, y)$ denote any performance measure for a layerwise-sequential deep network, for example a confusion matrix or an accuracy value, where $y$ is the network's response for a stimulus $x$. This can be converted into a performance measure for layerwise-parallel networks:

$$m^{\Delta t}(x^{t_{\text{onset}}}, y^{t_{\text{onset}} + \Delta t}) = m(x^{t_{\text{onset}}}, y^{t_{\text{onset}} + \Delta t})$$

where $t_{\text{onset}}$ denotes the time of a stimulus onset (the first frame a stimulus is presented). We measure current performance dependent on the temporal offset $\Delta t$ between the network's current response and previous stimuli. On the basis of this, a concept of reaction or response times can be defined, e.g. measuring the mean offset after which a certain performance measure reaches a given threshold.

An example of a time-dependent accuracy evaluation for the 2-path network from Fig. 3a is given in Fig. 3b. Due to the network's architecture, performance is at chance level for the first three time steps. Then, information about the stimulus reaches the prediction neuron-pool through the short path before, after one additional time step, the longer path also becomes active, from which point on the network reaches its highest accuracy. Stimuli were always presented for 12 consecutive frames.
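The offset-dependent measure can be computed directly from recorded traces. The sketch below evaluates accuracy as a function of the offset $\Delta t$ between stimulus onset and readout frame, on a synthetic trace in which a fake network needs three frames before its prediction reflects a new stimulus; the trace itself is invented for illustration:

```python
import numpy as np

def offset_accuracy(onsets, labels, predictions, max_offset=10):
    """m^{dt}: accuracy of the prediction read out dt frames after each
    stimulus onset, for dt = 0, ..., max_offset - 1."""
    return [np.mean([predictions[t + dt] == labels[t]
                     for t in onsets if t + dt < len(predictions)])
            for dt in range(max_offset)]

# Synthetic trace: a new stimulus every 12 frames, 10 classes, and a
# readout that lags the stimulus by 3 frames.
onsets = list(range(0, 120, 12))
labels = {t: (t // 12) % 10 for t in onsets}
predictions = [(max(0, t - 3) // 12) % 10 for t in range(130)]
print([round(a, 2) for a in offset_accuracy(onsets, labels, predictions)])
# -> [0.1, 0.1, 0.1, 1.0, ...]: chance level for dt < 3, then the readout
#    reflects the stimulus, analogous to Fig. 3b.
```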
4 THE STATESTREAM TOOLBOX

To explore layerwise-parallel deep networks, we provide an open source toolbox enabling design, training, evaluation, and interaction with this kind of network. Networks are specified in a text file, and a core process distributes the network elements onto separate processes and/or GPUs. Elements are executed with alternating read and write phases, synchronized via the core process, and operate on a shared representation of the network. The toolbox is written in Python and uses the Theano (Theano Development Team, 2016) backend. The shared representation enables parallelization of operations across multiple processes and GPUs on one machine, and enables online interaction.

An additional motivation for intuitive, direct, and adjustable interaction with networks is that current deep learning literature (e.g. Vertens et al. (2017), Gupta et al. (2017), Marblestone et al. (2016)) suggests that network architectures will become more complex and heterogeneous. These functional modularized architectures increase network understanding, through transparent auxiliary neural interfaces such as occupancy grids or optic flows, and trainability, using local losses to train sub-networks. Understanding these networks' internal dynamics is important for debugging and optimizing architectures, as well as for safety aspects, and to guide the design of future architectures.

The chosen implementation of layerwise-parallel networks favors certain network architectures. For example, the overall frame rate of the network primarily depends on the slowest network element (neuron-pool or plasticity) rather than on the overall number of elements, as long as sufficient computational resources are available. With this toolbox, we did not intend to compete with existing deep learning frameworks with respect to memory consumption or training speed, but rather to provide the software infrastructure to explore layerwise-parallel deep networks, which, to our knowledge, other deep learning software does not.

5 CONCLUSION

In this paper, we defined and discussed layerwise-parallel deep neural networks, by which layerwise model-parallelism is realized for deep networks independently of their architecture. We argued that layerwise-parallel networks are beneficial for future trends in deep network design, such as large functional modularized or recurrent architectures, as well as for networks allocating different network capacities dependent on stimulus and/or task complexity. Due to their biologically inspired increased parallelizability, layerwise-parallel networks can be distributed across several processes or GPUs natively, without the need to explicitly specify the network parts which should be parallelized. Finally, we presented an open source toolbox to explore layerwise-parallel networks, providing design, training, evaluation, and interaction mechanisms.

We would like to think of this work as a step towards native model-parallel deep networks, connecting the network's architecture directly to the temporal domain. For this, major challenges remain for the future, such as a more general formulation of neuron- and synapse-pools than the one used in the provided toolbox, the design of new local plasticities, or designing more adequate tasks which take the temporal domain into account.
B1KY-MqgG
Review of "STATESTREAM: A TOOLBOX TO EXPLORE LAYERWISE-PARALLEL DEEP NEURAL NETWORKS"
5: Marginally below acceptance threshold
In this paper, the authors present an open-source toolbox to explore layerwise-parallel deep neural networks. They offer an interesting and detailed comparison of the temporal progression of layerwise-parallel and layerwise-sequential networks, and differences that can emerge in the results of these two computation strategies. While the open-source toolbox introduced in this paper can be an excellent resource for the community interested in exploring these networks, the present submission offers relatively few results actually using these networks in practice. In order to make a more compelling case for these networks, the present submission could include more detailed investigations, perhaps demonstrating that they learn differently or better than other implementations on standard training sets.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Statestream: A toolbox to explore layerwise-parallel deep neural networks ### Paper Abstract Building deep neural networks to control autonomous agents which have to interact in real-time with the physical world, such as robots or automotive vehicles, requires a seamless integration of time into a network’s architecture. The central question of this work is, how the temporal nature of reality should be reflected in the execution of a deep neural network and its components. Most artificial deep neural networks are partitioned into a directed graph of connected modules or layers and the layers themselves consist of elemental building blocks, such as single units. For most deep neural networks, all units of a layer are processed synchronously and in parallel, but layers themselves are processed in a sequential manner. In contrast, all elements of a biological neural network are processed in parallel. In this paper, we define a class of networks between these two extreme cases. These networks are executed in a streaming or synchronous layerwise-parallel manner, unlocking the layers of such networks for parallel processing. Compared to the standard layerwise-sequential deep networks, these new layerwise-parallel networks show a fundamentally different temporal behavior and flow of information, especially for networks with skip or recurrent connections. We argue that layerwise-parallel deep networks are better suited for future challenges of deep neural network design, such as large functional modularized and/or recurrent architectures as well as networks allocating different network capacities dependent on current stimulus and/or task complexity. We layout basic properties and discuss major challenges for layerwise-parallel networks. Additionally, we provide a toolbox to design, train, evaluate, and online-interact with layerwise-parallel networks. ### Paper Keywords ["model-parallel", "parallelization", "software platform"] ### Paper Content ABSTRACTBuilding deep neural networks to control autonomous agents which have to in-teract in real-time with the physical world, such as robots or automotive vehi-cles, requires a seamless integration of time into a network’s architecture. Thecentral question of this work is, how the temporal nature of reality should bereflected in the execution of a deep neural network and its components. Mostartificial deep neural networks are partitioned into a directed graph of connectedmodules or layers and the layers themselves consist of elemental building blocks,such as single units. For most deep neural networks, all units of a layer are pro-cessed synchronously and in parallel, but layers themselves are processed in asequential manner. In contrast, all elements of a biological neural network areprocessed in parallel. In this paper, we define a class of networks between thesetwo extreme cases. These networks are executed in a streaming or synchronouslayerwise-parallel manner, unlocking the layers of such networks for parallel pro-cessing. 
Compared to the standard layerwise-sequential deep networks, these newlayerwise-parallel networks show a fundamentally different temporal behavior andflow of information, especially for networks with skip or recurrent connections.We argue that layerwise-parallel deep networks are better suited for future chal-lenges of deep neural network design, such as large functional modularized and/orrecurrent architectures as well as networks allocating different network capacitiesdependent on current stimulus and/or task complexity. We layout basic propertiesand discuss major challenges for layerwise-parallel networks. Additionally, weprovide a toolbox to design, train, evaluate, and online-interact with layerwise-parallel networks.1 I NTRODUCTIONOver the last years, the combination of newly available large datasets, parallel computing power,and new techniques to design, implement, and train deep neural networks has led to significantimprovements and numerous newly enabled applications in various fields including vision, speech,and reinforcement learning. Considering applications for which a neural network controls a systemthat interacts in real-time with the physical world, ranging from robots and autonomous vehiclesto chat-bots and networks playing computer games, renders it essential to integrate time into thenetwork’s design.In recent deep learning literature, enabling networks to learn and represent temporal features hasgained interest. Methods were presented leveraging short-term dynamic features to build temporalconsistent network responses (e.g. Ilg et al. (2017), Luc et al. (2017)) as well as networks learning tostore and utilize information over longer time periods (e.g. Neil et al. (2016), Graves et al. (2016)).Two major aspects considering the role of time in neural networks can be distinguished: First, theway neural networks and their components such as layers or single units, are implemented. Forexample, network components could operate sequentially or in parallel, and in case of parallel eval-uation, synchronous and asynchronous implementations can be distinguished. Second, the extentto which the network through its architecture can form representations of temporal features. Forexample, if the network has no mechanisms to integrate information over time, such as recurrentconnections, the network will not be able to represent temporal features, such as optic-flow. In this1Under review as a conference paper at ICLR 2018work, we focus on the implementation aspect but highly emphasise that our approach fundamentallyinfluences the network’s temporal behavior and the way information is integrated over time.Whereas, biological neural networks and some realizations of neural networks in silicon (reviewedin Indiveri et al. (2011), comparison in Farabet et al. (2012)) can operate on a continuous temporaldimension, we will assume a discrete (frame-based) temporal domain throughout this paper.1.1 L AYERWISE -PARALLEL DEEP NEURAL NETWORKSConsidering sequential and parallel realizations of artificial neural networks, at one end of the spec-trum, biologically inspired models of spiking neural networks have been studied for a long time (e.g.Liu et al. (2015)) and, in most cases, are simulated in a way that states of all neurons in a networkare updated in parallel and in a synchronous frame-based manner. 
In contrast to this parallel pro-cessing of all neurons, modern deep neural networks are constructed using collections of neurons,sometimes called layers, modules, or nodes, and while all neurons of the same layer are computedin parallel, the layers themselves are computed sequentially.different!stimulusglobal framelocal frameF)S)R)layerwise-sequential0 1 2 3012012012012012301230123012301230123layerwise-parallel0 1 2 301010101Figure 1: Three simple network examples, a pure feed forward F), a skip S), and a recurrent networkR), are shown (left most column), illustrating the difference between sequential (middle column)and layerwise-parallel (right column) network execution. For both network types, inference on foursucceeding (from left to right) time frames (pictograms: empty -=-n-empty ) is drawn. Encirclednodes indicate currently updated / computed / inferred layers and grey underlayed areas indicatealready computed network parts. Pictograms ( empty ,=,n) inside layers indicate information fromthis stimulus in a layer at a specific time frame. To increase clarity for the layerwise-parallel case,we omitted information from previous stimuli still memorized by the network. For the layerwise-sequential recurrent network (bottom left network), we used a 1-step rollout window. Local framesfor layerwise-sequential networks differ between architectures ( 3frames for F) and S), and 4framesfor R).2Under review as a conference paper at ICLR 2018In this work, we argue for a network type between these two ends of the spectrum which we calllayerwise-parallel deep neural networks. The difference to the widely used layerwise-sequentialnetworks in deep learning is that layers have a memory of their previous state and compute theirnext state on the basis of the (temporally) previous state of all (topologically) previous layers. Aschematic illustration of layerwise-sequential and layerwise-parallel networks is shown in Fig. 1.We will call the part of a layer holding its current state neuron-pool (nodes in Fig. 1), parts of thenetwork holding information about transformations between neuron-pools synapse-pools (edges inFig. 1), and functions yielding updates for some network parameters also plasticities (not shown inFig. 1). In our definition, all responses of synapse-pools targeting the same neuron-pool are simplyadded up. More thorough definitions are given in Section 2.In Fig. 1, network inference is illustrated for the two network types over four succeeding timeframes. In this work, time frame refers always to the global time frame in which stimuli are pre-sented to the network. For layerwise-sequential networks, we have another implicit type of localframes due to their sequential nature, and the number of local frames depends on network archi-tecture (compare Fig. 1: 3local frames for F) and S), 4local frames for R)). In contrast, forlayerwise-parallel networks all layers are updated in parallel, leading to two local frames (currentand updated). Information contained in the stimuli (squares) and neuron-pool states (circles) isencoded as empty ,=, orn. Simple forward architectures (example in first row) without skip or re-current connections lead to similar temporal behavior for the two network types. Introducing skipor recurrent connections (S) and R) example in Fig. 1) leads to potentially different temporal be-havior for layerwise-sequential and layerwise-parallel networks. For example in Fig. 
1, networkresponses differ between layerwise-sequential and layerwise-parallel networks at the 2:frame forthe S) and R) networks (see different! in Fig. 1). This difference becomes more drastic consideringlarger, more complex network architectures, for which information from different time frames isintegrated. Considering a layerwise-parallel network at a certain point in time, information fromdifferent previous time frames is distributed across the network. The distribution pattern is directlydefined by the network’s architecture. The biological counterpart as well as recent deep learningliterature suggest to use gating mechanisms to guide content and time dependent information flow.One aspect of layerwise-parallel networks is the synchronization of the parallel updated layers. Thisis important especially considering neuron-pools representing temporal features, because these, bydefinition, depend on temporal differences. In case of asynchronous parallelization of network parts,one solution would be to provide time itself as a network input. We focused on the synchronizedapproach, because otherwise networks would have to learn to use additionally provided input of timeand also temporal features would be harder to interpret.Another property of layerwise-parallel networks is, that networks are parallelized independently oftheir architecture. A layerwise-parallel network is designed using constrained shallow elements andthe network is parallelized across all elements. For example, we prohibit using an arbitrary deep net-work as a synapse-pool connecting two neuron-pools. A detailed definition of neuron and synapse-pool operations is given in Section 2. With respect to this architecture-independent parallelization,layerwise-parallel networks also differ from other model-parallel approaches like synthetic gradients(Jaderberg et al., 2017) or direct feedback alignment (Nøkland, 2016).1.2 R ELATION TO STATE -OF-THE-ARTOne major advantage of layerwise-parallel over layerwise-sequential networks is that network ele-ments such as neuron-pools and plasticities can be computed in parallel. As stated in Jaderberg et al.(2017), the sequential nature of current deep networks results in computation locks, hindering effi-cient execution. Several approaches were proposed to circumvent the backward lock, which locksparameter updates due to the training procedure, providing auxiliary error signals for intermediatelayers (e.g. Lillicrap et al. (2016), Jaderberg et al. (2017), Nøkland (2016)). While these methods,to some extent, solve some drawbacks of the most widely used and effective technique for neuralnetwork training, namely backpropagation, they do not address the more fundamental difference be-tween parallel and sequential network evaluation between biological and artificial neural networks:the network’s integration of temporal information during inference stays the same as for layerwise-sequential networks. Further, these approaches are not directly applicable for layerwise-parallelnetworks and would have to take the temporal displacement between layers into account.3Under review as a conference paper at ICLR 2018Integration of temporal information in deep networks often is achieved using recurrent neural net-works (RNN). For inference and training, these RNNs are normally converted into feed forward net-works using network rollout and weight sharing over time (e.g. Hochreiter & Schmidhuber (1997)),especially to use existing methods to train feed forward networks. 
Similar to previously mentionedmethods which target problems of backpropagation, also the idea of network rollout only tackles asymptom arising from layerwise-sequential networks, while creating new challenges like networkinitialization at beginning of rollout or scalability over time, which is not the case for our approach,at least during inference.Beside recurrent connections, also skip connections are widely used in deep networks, such asResNets (He et al., 2016). Especially used identity skip connections can be interpreted as a lo-cal network rollout acting as a local filtering which could also be achieved through recurrent selfconnections (Greff et al., 2017). Hence it seems, currently used skip connections are primarily usedto mitigate problems of backpropagation rather than to form early, temporally shallow representa-tions in abstract layers on the basis of layers with lower abstraction, which would be biologicallyplausible (Bullier, 2001).The concept of layerwise-parallel networks is also strongly related to ideas like BranchyNet (Teer-apittayanon et al., 2016), which use intermediate features for early classification and hence en-able stimulus complexity dependent allocation of network resources. This is natively achieved withlayerwise-parallel networks using skip connections, which, for layerwise-parallel networks, intro-duce temporal shortcuts. Hence in general, the network has shorter response times, using short(in time and network architecture) pathways, for simple, and longer response times, using deeperpathways, for complex stimuli. A simple example for this is given in Section 3.Figure 2: Visualization example of a simple classification network using the provided toolbox (bestviewed in color). The network is shown as graph (green nodes are neuron-pools, blue nodes aresynapse-pools) together with information about the network.The already mentioned different integration of information over time between layerwise-sequentialand layerwise-parallel networks causes also differences in network evaluation. For layerwise-parallel networks, the response on a given stimulus is delayed in time dependent on the networkarchitecture. Hence on the one hand, existing performance measures for layerwise-sequential net-works, for example, accuracies or confusion-matrices, must be adapted to take time into account,while on the other hand, measures not available for layerwise-sequential networks, such as reactiontimes, can now be employed.Further, network training has to be adapted for layerwise-parallel networks. For layerwise-sequentialnetworks, normally an error is formulated for network outputs and back-propagated through the net-work providing a signal to update parameters (Rumelhart et al. (1986)). To some extent, existingtraining mechanisms for layerwise-sequential networks can be applied to layerwise-parallel net-works by using local losses, and rolling out local parts of the network. Also biologically inspired4Under review as a conference paper at ICLR 2018training rules, such as Hebbian learning (Hebb, 1949), can be used. We provide more details onsolutions for evaluation and training of layerwise-parallel networks in sections 3.1 and 3.2.In the previous paragraphs, we laid out some motivation for layerwise-parallel networks and com-pared it to layerwise-sequential networks. And while we are not the first ones pointing out certaindrawbacks of layerwise-sequential networks (e.g. Jaderberg et al. (2017), Farabet et al. 
(2012)), it is our understanding that the underrepresentation of layerwise-parallel networks in the deep learning literature is due to a lack of tools to explore this class of networks.

One of the main contributions of this work is to provide an open source toolbox (available at http://bit.ly/2yNfroI ) to design, train, evaluate, and interact with layerwise-parallel deep networks. An example screenshot of the provided graphical user interface is shown in Fig. 2.

2 LAYERWISE-PARALLEL DEEP NETWORKS

In this section, we give a formal definition of layerwise-parallel networks and their elements.

2.1 NETWORK MODEL AND INFERENCE

We describe a neural network as a graph $(V, E)$, with $V = \{v_i \mid i = 1, \dots, N_V\}$ being a set of vertices and $E = \{e_j \mid j = 1, \dots, N_E\}$ being a set of directed (hyper-)edges with potentially multiple source vertices $\mathrm{src}_j \subseteq V$ and a single target vertex $\mathrm{tgt}_j \in V$. Each vertex $v_i$ has a fixed dimensionality $D_i = (F_i, W_i, H_i) \in \mathbb{N}^3$, a state $x_i^t \in \mathbb{R}^{D_i} = \mathbb{R}^{F_i} \times \mathbb{R}^{W_i} \times \mathbb{R}^{H_i}$ at time $t \in \mathbb{N}$, and a parameterized mapping $\phi_i^{\vartheta_i^t} : \mathbb{R}^{D_i} \to \mathbb{R}^{D_i}$ with some parameters $\vartheta_i^t$. Each edge $e_j$ has a parameterized mapping

$f_j^{\theta_j^t} : \Big( \prod_{v_i \in \mathrm{src}_j} \mathbb{R}^{D_i} \Big) \to \mathbb{R}^{D_{\mathrm{tgt}_j}}$

with some parameters $\theta_j^t$. For a vertex $v_i$, let $\mathrm{input}_i \subseteq E$ denote the set of all edges targeting $v_i$. We define a one-step temporal propagation for every vertex:

$x_i^{t+1} = F_i(x^t, \theta^t, \vartheta^t) = \phi_i^{\vartheta_i^t} \Big( \sum_{e_j \in \mathrm{input}_i} f_j^{\theta_j^t}\big(\{x_k^t\}_{k \in \mathrm{src}_j}\big) \Big)$

As stated earlier, we also refer to the vertices of the network graph as neuron-pools and to the edges as synapse-pools.

Using this one-step temporal propagation, all network states $x_i$ can be updated independently of each other and in parallel from time step to time step.

Although the above definition is general, the provided toolbox introduces some restrictions on which update functions can be specified. An explicit specification of the internal structure of the update functions was chosen to include common elements of deep neural networks, such as convolutional layers, inception mechanisms, and gated connections. Please see Appendix 6.1 for more details.

We emphasize again that for layerwise-parallel networks, parallelization is independent of the network architecture. The network is designed using certain elements and these are always parallelized. Hence, these elements have to be flexible to a certain extent, to enable users to design various architectures.

2.2 NETWORK TRAINING

Let $\Theta^t = (\theta^t, \vartheta^t)$ denote all current parameters. We refer to a mapping $p$ that, on the basis of current and previous states and parameters, produces a parameter update for some parameters $\omega_p \subseteq \Theta$ as a plasticity:

$\Delta\omega_p(t) = p(x^t, \Theta^t)$

For a given set of plasticities $P = \{p_i \mid i = 1, \dots, N_P\}$, we define a one-step parameter update function for every single parameter $w \in \Theta$:

$w^{t+1} = w^t + \sum_{i \,:\, w \in \omega_{p_i}} \Delta^w p_i$

Note that all plasticities can be computed independently of each other, only on the basis of previous states of neuron-pools and synapse-pools, and hence in parallel to the update functions for network inference.

Besides this abstract definition, some more explicit examples of plasticities, which are provided with the toolbox, are given below.
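To make the one-step propagation of Section 2.1 concrete, here is a minimal numpy sketch; the data layout (dictionaries keyed by pool and edge names) and all function names are our own illustration, not the toolbox's interface.

import numpy as np

def network_step(states, phi, f, edges, inputs):
    # One-step temporal propagation: each neuron-pool v_i computes
    # x_i^{t+1} = phi_i( sum over incoming synapse-pools e_j of
    # f_j({x_k^t : k in src_j}) ), reading only time-t states, so
    # every pool can be updated independently and in parallel.
    new_states = {}
    for i, x in states.items():
        total = np.zeros_like(x)
        for j in inputs[i]:                 # edges e_j with tgt_j = v_i
            src = edges[j]                  # source pools of synapse-pool e_j
            total += f[j]([states[k] for k in src])
        new_states[i] = phi[i](total)
    return new_states                       # becomes the states at time t+1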
3 CHALLENGES

Most challenges in working with layerwise-parallel networks are caused by the fact that at a given point in time, information from different previous time frames is distributed across the network. The distribution pattern is directly given by the network's architecture and can be conveniently visualized using a rolled-out version of the network. In general, gating neuron-pools could guide the information flow through this pattern, for example dependent on changes in the input stimuli.

A small example layerwise-parallel network for MNIST classification is illustrated in Fig. 3, showing the network architecture in Fig. 3a and the rolled-out network in Fig. 4 to visualize information flow. Similar to the idea of BranchyNet (Teerapittayanon et al., 2016), the network uses two paths of different depths for classification. We use this small network to illustrate some important mechanisms of layerwise-parallel networks.

Figure 3: Illustration of a 2-path classification network. (a) The graph of the network. The network consists of two stacked convolutions, for which the neuron-pools conv1 and conv2 ([rf, rf, features_out, stride] of [5, 5, 32, 2] and [5, 5, 64, 2]) hold the resulting feature maps, and fully connected classification on top of both of them, with pred1 and pred2 storing the intermediate classification results. These intermediate results are aggregated through summation in the final result (prediction). For training, a one-step copy of the current ground truth, which is stored in the neuron-pools label and label copy, is needed. (b) Classification accuracies on the MNIST test dataset (y-axis), relative to the temporal offset in frames (x-axis) between stimulus (image and label) onset and network response (prediction).

3.1 PLASTICITIES

As stated above, plasticities operate in parallel to neuron-pools, and while all neuron-pools are computing the one-step update $t \to t+1$, all plasticities compute current parameter updates on the basis of the current time step $t$. For most plasticities, the network is rolled out locally, considering only a subset of all neuron-pools, and initialized with the neuron-pool states at time $t$. This is illustrated in Fig. 4, where local rollouts are shown for the two plasticities used to train the example network in Fig. 3a. After the plasticities have computed parameter updates, these updates are aggregated by the synapse-pools, and in the next step each plasticity operates on the states of the now updated neuron-pools and parameters from time $t+1$.

Figure 4: Rolled-out network for maximal path length 4, and the two sub-networks used to compute the losses for the plasticities: $L_{\text{class}}$ over a rollout of length 1 on $x^{t=1}_{\text{pred1}}$ and $x^{t=0}_{\text{label copy}}$, and $L_{\text{deep-class}}$ over a rollout of length 3 on $x^{t=3}_{\text{pred2}}$ and $x^{t=0}_{\text{label}}$. All continuous lines represent synapse-pools which are initialized randomly and trained by the plasticities. All dotted lines are initialized as identity and are not trained.

To increase transparency and re-usability, we separate plasticities into two parts: an update part, which, state- and memory-free, computes a temporally local estimate of parameter updates, and an optimizer part, which transforms the update estimates, for example through temporal smoothing, into a current update that is used for the actual update of parameters.

The toolbox provides some of the most widely used optimizers, such as stochastic gradient descent and ADAM (Kingma & Ba, 2015), and can easily be extended with new ones.
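The split into a state-free update part and an optimizer part, together with the parameter update rule of Section 2.2, could be sketched as follows; the class and attribute names are hypothetical, not the toolbox interface.

import numpy as np

class SGDOptimizer:
    # Optimizer part: turns a raw, temporally local update estimate into
    # the step that is actually applied (ADAM or temporal smoothing would
    # slot in here instead). Estimates are assumed to already point in
    # the update direction.
    def __init__(self, lr=0.01):
        self.lr = lr
    def transform(self, estimate):
        return self.lr * estimate

def one_step_parameter_update(params, plasticities):
    # w^{t+1} = w^t + sum over all plasticities p_i with w in omega_{p_i}.
    # Every plasticity reads only time-t states and parameters, so all of
    # them can run in parallel with network inference.
    updates = {w: np.zeros_like(v) for w, v in params.items()}
    for plast in plasticities:
        estimates = plast.estimate()        # update part: state/memory free
        for w, delta in estimates.items():
            updates[w] += plast.optimizer.transform(delta)
    return {w: params[w] + updates[w] for w in params}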
Additionally, three types of update estimators are provided to specify plasticities:

Loss based update: To leverage the large amount of existing techniques for training layerwise-sequential deep networks, a loss can be defined on the basis of one or two neuron-pool states at a certain temporal offset $\Delta t$ from the current time step $t = 0$. For example, considering the network from Fig. 4, two loss-based plasticities are used to train the network:

$L_{\text{class}}\big(x^{t=1}_{\text{pred1}}, x^{t=0}_{\text{label copy}}\big) = \text{categorical-crossentropy}\big(x^{t=1}_{\text{pred1}}, x^{t=0}_{\text{label copy}}\big)$

$L_{\text{deep-class}}\big(x^{t=3}_{\text{pred2}}, x^{t=0}_{\text{label}}\big) = \text{categorical-crossentropy}\big(x^{t=3}_{\text{pred2}}, x^{t=0}_{\text{label}}\big)$

Here, all losses are based on two neuron-pools. To compute the loss, the network is rolled back locally from these neuron-pools until now ($t = 0$), being transformed into a feed-forward network, as can be seen on the right side of Fig. 4. Training is then done as usual for layerwise-sequential networks.

Note that the validity and the temporal properties of what we train highly depend on the chosen neuron-pools and temporal offsets. Concerning validity, for example, for $L_{\text{deep-class}}$ we could not have chosen the neuron-pool states $x^{t=4}_{\text{pred2}}$ and $x^{t=1}_{\text{label}}$, because then the rolled-back network would have needed an input image and label from the future time step $t = 1$, which are not available now ($t = 0$). Concerning temporal properties, for example, if we defined the loss $L_{\text{class}}$ on $x^{t=1}_{\text{pred1}}$ and $x^{t=0}_{\text{label}}$, we would have introduced a temporal offset of 1 between prediction and ground truth, leading to unintended behavior, especially when the input changes.

In general, loss-based plasticities are expensive in the sense that, potentially for large parts of the network, the inference (forward) step is done more than once: once in the neuron-pools and potentially again due to the rollout in the plasticity. Hence, local plasticities are preferred, which only operate on a small set of neuron-pools and do not need a deep rollout. To achieve this, we suggest functional modularisation of the network, which also increases network transparency and trainability.

Hebbian based update: With some restrictions, we also provide the biologically more plausible and well-known Hebbian learning rule (Hebb, 1949) and some of its variants as update estimators. For example, for a synapse $w_{ij}$ connecting some neuron $x_i$ with some neuron $x_j$:

$\Delta w^t_{ij} = x^t_i \cdot x^{t+1}_j$

Note that this is used as an estimate and could be fed into some optimizer, such as ADAM. Plasticities based on this estimator always use an internal one-step rollout of the target neuron-pool and hence provide rather local plasticities compared to loss-based plasticities.

Parameter regularization update: Parameter update estimators can also be based directly on parameters rather than states, which is the case, for example, when using $L_1$ or $L_2$ regularization on certain parameters.

3.2 NETWORK EVALUATION

Considering performance measures for layerwise-parallel networks, we follow general ideas from the analysis of spiking neural networks and from experimental psychology (e.g. Diehl et al. (2015), Woods et al. (2015)).

Let $m(x, y)$ denote any performance measure for a layerwise-sequential deep network, for example a confusion matrix or an accuracy value, where $y$ is the network's response to a stimulus $x$. This can be converted into a performance measure for layerwise-parallel networks:

$m^{\Delta t}\big(x^{t_{\text{onset}}}, y^{t_{\text{onset}} + \Delta t}\big) = m\big(x^{t_{\text{onset}}}, y^{t_{\text{onset}} + \Delta t}\big)$

where $t_{\text{onset}}$ denotes the time of a stimulus onset (the first frame a stimulus is presented). We measure the current performance dependent on the temporal offset $\Delta t$ between the network's current response and previous stimuli. On this basis, a concept of reaction or response times can be defined, e.g. by measuring the mean offset after which a certain performance measure reaches a given threshold.
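As a sketch, the time-shifted measure $m^{\Delta t}$ and a reaction-time readout derived from it could look as follows, assuming per-frame recordings of ground truth and network response (the data layout is our assumption):

import numpy as np

def delayed_measure(m, labels, responses, onsets, dt):
    # m^{dt}: evaluate an ordinary measure m on the stimulus label at
    # onset and the network response dt frames later.
    pairs = [(labels[t], responses[t + dt])
             for t in onsets if t + dt < len(responses)]
    return m(pairs)

def accuracy(pairs):
    return float(np.mean([y == y_hat for y, y_hat in pairs]))

def reaction_time(labels, responses, onsets, threshold=0.9, max_dt=12):
    # Smallest temporal offset at which delayed accuracy reaches a threshold.
    for dt in range(max_dt):
        if delayed_measure(accuracy, labels, responses, onsets, dt) >= threshold:
            return dt
    return None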
An example of a time-dependent accuracy evaluation for the 2-path network from Fig. 3a is given in Fig. 3b. Due to the network's architecture, performance is at chance level for the first three time steps. Then information about the stimulus reaches the prediction neuron-pool through the short path before, after one additional time step, the longer path also becomes active, from which point on the network reaches its highest accuracy. Stimuli were always presented for 12 consecutive frames.

4 THE STATESTREAM TOOLBOX

To explore layerwise-parallel deep networks, we provide an open source toolbox enabling design, training, evaluation of, and interaction with this kind of networks. Networks are specified in a text file, and a core process distributes the network elements onto separate processes and/or GPUs. Elements are executed with alternating read and write phases, synchronized via the core process, and operate on a shared representation of the network. The toolbox is written in Python and uses the Theano (Theano Development Team, 2016) backend. The shared representation enables parallelization of operations across multiple processes and GPUs on one machine and allows online interaction.

An additional motivation for intuitive, direct, and adjustable interaction with networks is that the current deep learning literature (e.g. Vertens et al. (2017), Gupta et al. (2017), Marblestone et al. (2016)) suggests that network architectures will become more complex and heterogeneous. These functionally modularized architectures increase network understanding, through transparent auxiliary neural interfaces such as occupancy grids or optic flows, and trainability, by using local losses to train sub-networks. Understanding these networks' internal dynamics is important for debugging and optimizing architectures as well as for safety aspects, and to guide the design of future architectures.

The chosen implementation of layerwise-parallel networks favors certain network architectures. For example, the overall frame rate of the network primarily depends on the slowest network element (neuron-pool or plasticity) rather than on the overall number of elements, as long as sufficient computation resources are available. With this toolbox, we did not intend to compete with existing deep learning frameworks with respect to memory consumption or training speed, but rather to provide the software infrastructure to explore layerwise-parallel deep networks, which, to our knowledge, other deep learning software does not.
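The alternating, synchronized read and write phases on the shared representation described above could be organized roughly like this; this is an illustration of the execution scheme only, not statestream's actual process code:

import multiprocessing as mp

def element_process(element, shared_state, barrier):
    # Each element (neuron-pool or plasticity) runs in its own process
    # and alternates barrier-synchronized phases on the shared network
    # representation.
    while True:
        local = element.read(shared_state)   # read phase: states at time t
        barrier.wait()                       # wait until everyone has read
        element.write(shared_state, local)   # write phase: states at t+1
        barrier.wait()                       # wait until everyone has written

# The core process would create one barrier shared by all elements, e.g.:
# barrier = mp.Barrier(parties=number_of_elements)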
5 CONCLUSION

In this paper, we defined and discussed layerwise-parallel deep neural networks, by which layerwise model-parallelism is realized for deep networks independently of their architecture. We argued that layerwise-parallel networks are beneficial for future trends in deep network design, such as large functionally modularized or recurrent architectures, as well as for networks allocating different network capacities dependent on stimulus and/or task complexity. Due to their biologically inspired increased parallelizability, layerwise-parallel networks can be distributed across several processes or GPUs natively, without the need to explicitly specify the network parts that should be parallelized. Finally, we presented an open source toolbox to explore layerwise-parallel networks, providing design, training, evaluation, and interaction mechanisms.

We would like to think of this work as a step towards native model-parallel deep networks, connecting the network's architecture directly to the temporal domain. For this, major challenges remain for the future, such as a more general formulation of neuron- and synapse-pools than the one used in the provided toolbox, the design of new local plasticities, or the design of more adequate tasks that take the temporal domain into account.<|im_end|> <|im_start|>assistant ### Review Title Review of "STATESTREAM: A TOOLBOX TO EXPLORE LAYERWISE-PARALLEL DEEP NEURAL NETWORKS" ### Review Text In this paper, the authors present an open-source toolbox to explore layerwise-parallel deep neural networks. They offer an interesting and detailed comparison of the temporal progression of layerwise-parallel and layerwise-sequential networks, and of the differences that can emerge in the results of these two computation strategies. While the open-source toolbox introduced in this paper can be an excellent resource for the community interested in exploring these networks, the present submission offers relatively few results actually using these networks in practice. In order to make a more compelling case for these networks, the present submission could include more detailed investigations, perhaps demonstrating that they learn differently or better than other implementations on standard training sets. ### Review Rating 5: Marginally below acceptance threshold ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
48goXfYCVFX
ICLR.cc/2021/Conference
2021
Interpretable Relational Representations for Food Ingredient Recommendation Systems
["Kana Maruyama", "Michael Spranger"]
Supporting chefs with ingredient recommender systems to create new recipes is challenging, as good ingredient combinations depend on many factors like taste, smell, cuisine style, and texture, among others. There have been few attempts to address these issues using machine learning. Useful models obviously need to be accurate but, especially for food professionals, also interpretable. In order to address these issues, we propose the Interpretable Relational Representation Model (IRRM). The main component of the model is a key-value memory network to represent relationships of ingredients. We propose and test two variants of the model. One can learn latent relational representations over a trainable memory network (Implicit model), and the other can learn explainable relational representations over a pre-trained memory network that integrates an external knowledge base (Explicit model). The relational representations resulting from the model are interpretable: they allow one to inspect why certain ingredient pairings have been suggested. The Explicit model additionally allows the integration of any number of manually specified constraints. We conduct experiments on two recipe datasets, CulinaryDB with 45,772 recipes and Flavornet with 55,001 recipes. The experimental results show that our models are both predictive and informative.
["Metric Learning", "Gastronomy", "Memory Network", "Knowledge Graph", "Interpretable"]
ABSTRACT

Supporting chefs with ingredient recommender systems to create new recipes is challenging, as good ingredient combinations depend on many factors like taste, smell, cuisine style, and texture, among others. There have been few attempts to address these issues using machine learning. Useful machine learning models obviously need to be accurate but, especially for food professionals, also interpretable. In order to address these issues, we propose the Interpretable Relational Representation Model (IRRM). The main component of the model is a key-value memory network to represent relationships of ingredients. We propose and test two variants of the model. One can learn latent relational representations over a trainable memory network (Implicit model), and the other can learn explainable relational representations over a pre-trained memory network that integrates an external knowledge base (Explicit model). The relational representations resulting from the model are interpretable: they allow one to inspect why certain ingredient pairings have been suggested. The Explicit model additionally allows the integration of any number of manually specified constraints. We conduct experiments on two recipe datasets, CulinaryDB with 45,772 recipes and Flavornet with 55,001 recipes. The experimental results show that our models are both predictive and informative.

1 INTRODUCTION

Data mining and machine learning methods play an increasingly prominent role in food preference modeling, food ingredient pairing discovery, and new recipe generation. Solving these tasks is non-trivial, since the goodness of ingredient combinations depends on many factors like taste, smell, cuisine, texture, and culture. Ahn et al. (2011) detected that the number of shared flavor molecules between ingredients is one of the important factors for food pairing. They found that Western cuisines show a tendency to use ingredient pairs that share many flavor compounds, while East Asian cuisines tend to avoid compound-sharing ingredients. Using this idea, Garg et al. (2017) developed a rule-based food pairing system which ranks ingredients based on the number of shared flavor molecules. Recently, Park et al. (2019) suggested a neural network approach based on flavor molecules and the co-occurrence of ingredients in recipes. These approaches focus on one-to-one food pairing. There is also research related to many-to-one pairing: De Clercq et al. (2016) proposed the Recipe Completion Task, which tries to identify matching ingredients for a partial list of ingredients (the recipe) using a matrix factorization based recommender system. Although efforts have been made to detect good ingredient combinations, there is no current machine learning method in this field that allows to interpret why suggested pairs are good.

Our work targets interpretable recommendation systems for food pairing and recipe completion. Given a set of ingredients (cardinality 1 or more) pre-selected by a user, the recommender suggests the top-N ingredients from a set of candidates. For example, suppose a user selects apple and chocolate as the pre-selected ingredients; our recommender then suggests some well-paired ingredients (e.g. cinnamon) and also identifies reasons (e.g. cinnamon is good for apple and chocolate in terms of their flavor affinity).

For this, we propose the Interpretable Relational Representations Model (IRRM) in two variants to address the food pairing and recipe completion tasks. The model features a key-value memory network (Sukhbaatar et al.
(2015), Miller et al. (2016)) to represent relationships of ingredients. One variant of the model is trained to learn latent relational representations over a trainable memory network (Implicit model). The other can learn explainable relational representations over a pre-trained memory network integrating an external knowledge base (Explicit model). The relational representations are interpretable and can be queried for the reasons why certain ingredients have been suggested. The Explicit model can integrate any number of constraints, which can be decided manually based on the characteristics of the desired recommender system. Our contributions are as follows:

1. We model ingredient pairing as a general recommendation task with implicit feedback.

2. We introduce the Interpretable Relational Representations Model and its two variants, Implicit and Explicit, both of which can learn pair-specific relational representations (vectors) for one-to-one (i.e. ingredient-to-ingredient) and many-to-one (ingredient-set-to-ingredient) food pairing tasks. The relational vectors are also interpretable.

3. We propose a training procedure to learn one-to-one and many-to-one relationships effectively using recipes.

4. We evaluate our proposed models on the Recipe Completion Task and the Artificial Food Pairing Task on the CulinaryDB and Flavornet datasets. Our proposed approaches demonstrate competitive results on all datasets, outperforming many other baselines.

5. We perform a qualitative analysis. The results show that our proposed Explicit model is capable of unraveling hidden ingredient structures within recipes.

2 RELATED WORK

There are two related streams of work on recommender systems that are important for this paper: the session-based setting and knowledge-aware systems.

In the session-based setting, a user profile can be constructed from past user behavior. A natural solution to this problem is the item-to-item recommendation approach, and a variety of methods exist for it. For example, Quadrana et al. (2017) model the item sequence using RNNs, Kang & McAuley (2018) use self-attention layers, and Wu et al. (2020) use Transformer layers. While these methods mainly focus on how to encode item click-sequence interactions, we target good ingredient pairing using only ingredient attributes and the relationship between an ingredient set and an ingredient based on co-occurrence in recipes. For this, we develop a new architecture integrating set encoders and relational memory with novel loss and score functions.

There are also increasingly many methods for integrating knowledge into recommenders. Zhang et al. (2016) and Cheng et al. (2016) directly incorporate user and item features as a user profile into neural network models. Huang et al. (2018) and Wang & Cai (2020) integrate them using a pre-trained knowledge graph. These methods try to represent user context using an external knowledge base; therefore, the knowledge embeddings are usually integrated into the user embeddings. In this work, we incorporate knowledge specifically to detect relationships between an ingredient set and an ingredient, both for interpretation and to improve recommendation performance.

3 PROBLEM DEFINITION

We first introduce the notation used throughout this paper. We model recipe completion as a recommendation scenario with implicit feedback (Huang et al., 2018; Tay et al., 2018).
In such scenarios, a user has interacted with an item, and the system infers the item that the user will interact with next based on the interaction records of the user. We apply this to the food domain by using recipes as interaction records.

Let $I$ denote a set of ingredients and $\{i_1, \dots, i_M\}$ denote a pre-selected ingredient set, where $i \in I$ is an ingredient and $M$ is the number of ingredients. We call $\{i_1, \dots, i_M\}$ the pre-selected ingredient set in this paper. Next, let $I_{\text{candidate}}$ denote a set of candidate ingredients. $I_{\text{candidate}}$ depends on each pre-selected ingredient set, that is, $I_{\text{candidate}} = I \setminus \{i_1, \dots, i_M\}$. In addition, we assume that a knowledge base (KB) of ingredients is also available, and that the KB contains factors related to why some ingredients are good combinations. A KB is defined as a set of triplets over an entity set $E$ and a relationship set $L$. A KB triplet $\langle e_i, l, e_a \rangle$ is composed of two entities $e_i, e_a \in E$ and a relationship $l \in L$, where $e_i$ is an ingredient (i.e. $e_i \in I$), $l$ is an attribute, and $e_a$ is the attribute value. For instance, $\langle$apple, flavorMolecule, (-)-Epicatechin$\rangle$ denotes that apple contains the (-)-Epicatechin flavor molecule.

Figure 1: IRRM architectures. (a) Implicit model. (b) Explicit model.

Based on these preliminaries, we define the food ingredient recommendation task. Given a pre-selected ingredient set $\{i_1, \dots, i_M\}$ and candidate ingredients $I_{\text{candidate}}$, we would like to infer the top-N ingredients from $I_{\text{candidate}}$.
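In code, the candidate set and a KB triplet are straightforward; the concrete values below are only for illustration.

# Hypothetical example data, not from the paper.
ingredients = {"apple", "chocolate", "cinnamon", "miso", "egg"}   # I
preselected = {"apple", "chocolate"}                              # {i_1, ..., i_M}
candidates = ingredients - preselected    # I_candidate = I \ {i_1, ..., i_M}

# A KB triplet <e_i, l, e_a>: ingredient entity, attribute type, attribute value.
kb = [
    ("apple", "flavorMolecule", "(-)-Epicatechin"),
    ("apple", "foodCategory", "Fruit"),
]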
4 RECOMMENDATIONS WITH MEMORY NETWORKS

In this section, we introduce the IRRM architectures. We start with the Implicit model, which consists of a trainable key-value memory network. We then augment the Implicit model with a key-value memory network which integrates pre-trained entity and relationship vectors with ingredient attributes from the KB; we call this extension the Explicit model. The overall architecture is depicted in Figure 1. The inputs of our architecture are a pre-selected ingredient set and a candidate ingredient $i_{\text{candidate}} \in I_{\text{candidate}}$. The output is a score. At inference time, our recommender uses these scores to rank $I_{\text{candidate}}$.

4.1 INGREDIENT EMBEDDING LAYER AND INGREDIENT SET ENCODER

Ingredients are represented as one-hot encoded vectors (corresponding to a unique index key belonging to each ingredient). At the embedding layer, this one-hot encoded vector is converted into a low-dimensional real-valued dense vector representation using the embedding matrix $Q \in \mathbb{R}^{d \times |I|}$, which stores the ingredient embeddings. $d$ is the dimensionality of the ingredient embeddings, while $|I|$ is the total number of ingredients. $i_{\text{candidate}}$ is converted to $q$ using this embedding layer. The pre-selected ingredients $\{i_1, \dots, i_M\}$, on the other hand, are encoded by the Ingredient Set Encoder (Figure 6). First, each ingredient $i_j$ is converted to a vector using the Ingredient Embedding Layer (same as for $i_{\text{candidate}}$). As a result, vectors $i_j \in \mathbb{R}^d$ are generated. The sum of these vectors is converted to the ingredient set vector $p$ using a feed-forward network with a single hidden layer, followed by Layer Normalization.

4.2 RELATION ENCODER

Tay et al. (2018) introduced LRAM (Latent Relational Attentive Memory) in order to generate latent relational vectors between user-item interactions. We extend this module by adding a residual connection, followed by Layer Normalization. This idea is inspired by Vaswani et al. (2017).

Given the pair of a pre-selected ingredient set vector and a candidate ingredient vector, $\langle p, q \rangle$, the Relation Encoder first applies $s = p + q$ to generate the joint embedding of $p$ and $q$. The generated vector $s \in \mathbb{R}^d$ has the same dimension as $p$ and $q$. Note that we also tried other transfer functions here, such as element-wise multiplication or just a multi-layer perceptron $\mathrm{MLP}(p, q)$; however, we found that addition performs best. This joint embedding $s$ is used as the input to the memory network. The attention vector $a \in \mathbb{R}^N$ is a vector of importance weights over the keys, which are represented as the key matrix $K = [k_1, \dots, k_N]^T \in \mathbb{R}^{N \times d}$, where $N$ is the number of key-value pairs in the memory network and $k_j \in \mathbb{R}^d$ is a key vector. Each element of the attention vector $a$ is defined as $a_j = s^T k_j$, with $a_j \in \mathbb{R}$. In order to normalize the attention vector $a$ to a probability distribution, we use the Softmax function: $\mathrm{Softmax}(a_j) = \exp(a_j) / \sum_{n=1}^{N} \exp(a_n)$. We generate the vector $m = \sum_{n=1}^{N} \mathrm{Softmax}(a_n)\, v_n$ as the sum of the weighted value vectors, which are represented as the value matrix $V = [v_1, \dots, v_N]^T \in \mathbb{R}^{N \times d}$. Finally, in order to generate the relational vector $r$, $m$ is added to the joint embedding $s$ and Layer Normalization is applied: $r = \mathrm{LayerNorm}(s + m)$.
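A minimal numpy sketch of this Relation Encoder (joint embedding, key-value attention, residual connection, Layer Normalization); the shapes follow the text, and the helper names are ours:

import numpy as np

def layer_norm(x, eps=1e-5):
    return (x - x.mean()) / (x.std() + eps)

def relation_encoder(p, q, K, V):
    # p, q: [d]; K, V: [N, d]
    s = p + q                    # joint embedding s = p + q
    a = K @ s                    # attention logits a_j = s^T k_j, shape [N]
    w = np.exp(a - a.max())
    w = w / w.sum()              # Softmax over the N keys
    m = w @ V                    # m = sum_n Softmax(a_n) v_n, shape [d]
    return layer_norm(s + m)     # r = LayerNorm(s + m)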
Therefore, the entity vector eattshould not be an ingredient specific vector and we useei+lattinstead of using eatt.4.3 SCORE FUNCTIONFinally, we define our score function as the relationship between the pre-selected ingredient setvector p, the candidate ingredient vector q, and the relational vector r:s(p;q;r) =CosSim (p;q) +CosSim (p+q;r) (4)whereCosSim is the cosine similarity, and good pairs will have high scores etc.Figure 2 shows the geometric differences between our score function and other possible ones. Fig-ure2(a) tries to place the ingredient set and each ingredient into the same spot in vector space,Figure2(b) learns to fit the ingredient set and each ingredient with the relational vector r. This scorefunction is often used in the domain of collaborative filtering. The score functions a and b are sug-gested in previous work. Tay et al. (2018) reported jjp+rqjjcould achieve better performancethanjjpqjjin the domain of collaborative filtering, which is the user-item based recommender.However, in the food pairing task, the result was the opposite. It seems if we use jjp+rqjj,because of r,pandqcan not be trained properly. As rapproaches~0, the performance is improved,so the representation of rcannot be learned. This makes sense, since the ingredient set is a mix-ture rather than a list of ingredient and ingredient sets can be seen as a single processed ingredient.4Under review as a conference paper at ICLR 2021rIng.Ing. SetIng.Ing. SetIng.Ing. Setr(a) -||p–q|| (b) -||p+ r–q||(c) Ours: CosSim (p, q) + CosSim (p+q, r)pqqqppFigure 2: Geometric comparisons of score func-tions.Recipe A•ing_a1•ing_a2•ing_a3Recipe B•ing_b1•ing_b2•ing_b3•ing_b4Recipe C•ing_c1•ing_c2randomizerandomizeb1 b3 b4 b2 a2 a3 a1 c1 c2iter=1{ing_b1}ing_b3e.g. batch size = 3iter=2{ing_a2}ing_a3{ing_b1, ing_b3}ing_b4{ing_b1, ing_b3, ing_b4}ing_b2{ing_a2, ing_a3}ing_a1{ing_c1}ing_c2ingredient setingredientingredient setingredientingredient setingredient(1)(2)(3)Figure 3: How to generate pairs for mini-batches.Therefore, we propose to use a score function (Figure2(c)) that tries to place the ingredient set pandeach ingredient qinto the same location in vector space and rotate the relationship between pandqcloser to the same place as a relational vector rwhich is represented using attributes from KB. Inaddition, since our network structure is symmetric with respect to pandq, we require a symmetricbilinear score function .4.4 O BJECTIVE FUNCTIONOur objective function is defined as:L=BatchXx=1PosXy=1log[exp(s(px;qy;rxy))exp(s(px;qy;rxy)) +PNegz=1exp(s(px;qzy;rxy))] (5)whereis the margin that separates the golden pairs and corrupted pairs, is a temperature pa-rameter,Batch is the mini-batch size, Pos is the number of positive examples, Neg is the numberof negative examples for each positive example, and the score function for negative examples takethe same relational vector as their positive example. In order, to define this objective function, wecombine the Batch All loss (Hermans et al. (2017)) with the effective triplet loss in Metric Learningand the additive margin in the softmax loss (Wang et al. (2018)). 
Note that while a hinge loss is also possible, we found that the softmax loss has better performance and is more stable.

4.5 TRAINING

Using pre-processed recipes (we removed all ingredients without attributes and all empty recipes), we train our models in the following steps (Figure 3): First, we randomize the order of the recipes and of their ingredients (Figure 3(1)), and then generate a sequence of ingredients from the recipes (Figure 3(2)). After that, we generate pairs of a pre-selected ingredient set and a candidate ingredient from the sequence (see Figure 3(3)), and loosely make pairs from the same recipe belong to the same mini-batch. Next, we sample additional positive examples as necessary, which are taken randomly from other recipes. Finally, we sample negative examples randomly. In the end, mini-batches are generated, as sketched below. Note that we also trained our models without the recipe restriction; however, performance was worse for both one-to-one and many-to-one pairs.
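A sketch of this pair-generation procedure (steps (1)-(3) of Figure 3), with negative sampling left to the caller; the function names are our own:

import random

def recipe_pairs(recipes):
    # Shuffle recipes and their ingredients, then grow a pre-selected set
    # one ingredient at a time within each recipe; every yielded
    # (set, ingredient) pair is a positive example, and pairs from one
    # recipe come out adjacent, i.e. loosely in the same mini-batch.
    recipes = list(recipes)
    random.shuffle(recipes)
    for recipe in recipes:
        ings = list(recipe)
        random.shuffle(ings)
        for m in range(1, len(ings)):
            yield set(ings[:m]), ings[m]

def sample_negatives(all_ingredients, preselected, positive, k):
    # Random negatives from I_candidate, excluding the positive example.
    pool = list(all_ingredients - set(preselected) - {positive})
    return random.sample(pool, k)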
5 EVALUATION

We evaluated the Implicit model and the Explicit model on two tasks against standard baselines, and we analyzed the relational representations in terms of interpretability. We propose two tasks for evaluation.

For evaluation, we first use the same ranking procedure as De Clercq et al. (2016), namely the Recipe Completion Task. In the Recipe Completion Task, the recommender ranks all ingredients to predict one randomly eliminated ingredient, and the remaining ingredients of each recipe are used as input to the model, i.e. as the pre-selected ingredient set in our definition. We adopt three evaluation metrics for this task (the same as in the previous work): Mean Rank, the mean position of the eliminated ingredient in the ranked ingredients; Hit Ratio (HR)@10, the proportion of correct ingredients ranked in the top 10; and ROC-AUC (AUC), the mean area under the ROC curve. This task is useful as it allows us to measure the basic performance of our model in the same setting as existing baselines (many-to-one pairing) and against ground-truth data.

In order to understand the model performance further, we test on a second task, called the Artificial Food Pairing Task. In this task, we generate pairs from existing recipes based on ingredient co-occurrence. Given some ingredients as a pre-selected ingredient set and a candidate ingredient, pairs that occur in any of the recipes are used as positive examples, all others as negative candidate examples. The Artificial Food Pairing Task consists of positive pairs and negative pairs, where the same pre-selected ingredient set is always used for both positive and negative pairs, but positive and negative examples are randomly taken from its candidates with a pre-specified number and ratio. In this task, the recommender predicts whether pairs are positive or negative examples. We use this task for more detailed analysis. We measure MAP@10, the mean average precision in the top 10, and AUC as metrics.

We evaluate on two datasets. The first dataset is taken from CulinaryDB (Bagler (2017)), and the second dataset is from Flavornet (Ahn et al. (2011)). Both datasets contain a recipe set, consisting of lists of ingredients and cuisine categories (e.g. Chinese, Japanese), and an ingredient set, consisting of names, food categories (e.g. Meat, Fruit), and the flavor molecules each ingredient contains. The statistics of the datasets are shown in Table 4. Before training the models, the recipe set is divided into a train, a validation, and a test set. From the whole ingredient set, on the other hand, we generate triplets (Figure 7) in order to train TransE: 172,207 triplets for CulinaryDB and 40,952 triplets for Flavornet.

5.1 BASELINES

We compare our Implicit/Explicit models to the following baselines.

FREQ: Frequency predictor that always recommends the most frequently used ingredients of the training recipes. Despite its simplicity, it is often a strong baseline in certain domains.

PMI: Recommender based on the PMI score (see the sketch after this list).

TFIDF: Recommender based on the TF-IDF score.

NMF (De Clercq et al. (2016)): Non-negative matrix factorization based recommender. The model is trained using the train and validation recipes. It is implemented by ourselves.

NeuMF (He et al. (2017)): Neural matrix factorization model based on the neural collaborative filtering framework. We use our Ingredient Embedding Layer and Ingredient Set Encoder (Section 4.1) as the embedding layers of this model, and the training process is the same as ours (Section 4.5). More details of the implementation can be found in Section A.5.

WIDE&DEEP (Cheng et al. (2016)): Wide & Deep model based recommender. We use a pre-selected ingredient set, a candidate ingredient, and attributes (Table 4) as inputs of this model, and the training process is the same as ours (Section 4.5). More details of the implementation can be found in Section A.5.
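The paper does not spell out its PMI formulation; one standard realization over recipe co-occurrence, which we assume here, is:

import math
from collections import Counter
from itertools import combinations

def pmi_table(recipes):
    # PMI(a, b) = log[ P(a, b) / (P(a) P(b)) ], with the probabilities
    # estimated from ingredient (co-)occurrence counts over recipes.
    n = len(recipes)
    occ, co = Counter(), Counter()
    for r in recipes:
        s = set(r)
        occ.update(s)
        co.update(frozenset(pair) for pair in combinations(sorted(s), 2))
    table = {}
    for pair, c_ab in co.items():
        a, b = tuple(pair)
        table[pair] = math.log((c_ab / n) / ((occ[a] / n) * (occ[b] / n)))
    return table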
5.2 RESULTS AND DISCUSSION

Predictive performance. Table 1 shows the results on all datasets for all compared baselines on the Recipe Completion Task. Both our Implicit and Explicit models outperform all counterparts on all metrics, usually with a clear margin, while the Explicit model has approximately the same performance as the Implicit model. NMF is the best baseline; its performance is good since the Recipe Completion Task requires finding only one missing ingredient, even though there would be multiple correct ingredients in a real situation. It appears that our approaches are effective for many-to-one pairs which are constrained by recipes.

Table 1: Experimental results on the Recipe Completion Task. Columns: Mean Rank, HR@10, HR@20, AUC for CulinaryDB, then the same metrics for Flavornet.
FREQ: 473.5, 0.119, 0.140, 0.605 | 257.9, 0.131, 0.153, 0.621
PMI: 612.5, 0.055, 0.056, 0.527 | 346.9, 0.066, 0.069, 0.531
TFIDF: 478.5, 0.055, 0.055, 0.598 | 261.3, 0.055, 0.089, 0.613
NMF: 57.0, 0.435, 0.559, 0.900 | 36.3, 0.479, 0.599, 0.896
NeuMF: 37.5, 0.400, 0.530, 0.943 | 35.7, 0.332, 0.509, 0.898
WIDE&DEEP: 43.5, 0.350, 0.478, 0.929 | 37.7, 0.351, 0.493, 0.896
IRRM (Implicit): 35.9, 0.391, 0.567, 0.943 | 28.1, 0.485, 0.631, 0.926
IRRM (Explicit): 35.4, 0.397, 0.575, 0.944 | 28.3, 0.480, 0.629, 0.925

Table 2: Experimental results on the Artificial Food Pairing Task (CulinaryDB). Columns: MAP@10, AUC for one-to-one, two-to-one, and three-to-one pairs.
NMF: 0.638, 0.702 | 0.604, 0.786 | 0.601, 0.794
NeuMF: 0.841, 0.905 | 0.745, 0.923 | 0.747, 0.919
WIDE&DEEP: 0.787, 0.919 | 0.705, 0.930 | 0.726, 0.926
W/O-RELENC: 0.574, 0.736 | 0.494, 0.731 | 0.499, 0.746
IRRM (Implicit): 0.839, 0.904 | 0.755, 0.921 | 0.792, 0.925
IRRM (Explicit): 0.842, 0.906 | 0.759, 0.922 | 0.790, 0.926

In order to evaluate pairing skill from one-to-one to many-to-one, we use the Artificial Food Pairing Task. We report the results for one-to-one, two-to-one, and three-to-one pairs. We sample positive and negative examples according to three ratio settings, $(Pos, Neg) \in \{(20, 80), (50, 50), (80, 20)\}$. Beforehand, we classify all ingredients into two categories, $\{Low, High\}$, depending on the number of ingredient occurrences in recipes. Then, pre-selected ingredient sets and positive examples are randomly sampled from each category set so as to have the same ratio, and negative examples are randomly sampled from all candidates. We adopt this process since, generally speaking, low-frequency ingredients are more difficult to handle than high-frequency ingredients. In the end, we prepare 4,536 tasks per pre-selected ingredient set size, where each task consists of 100 pairs.

As can be seen in Table 2, our models outperform NMF on the one-to-one, two-to-one, and three-to-one test sets. The proposed models show stable performance as the number of pre-selected ingredients increases.

Here, we only report the results of the Explicit model, since the Implicit model shows the same tendency. As expected, L-L seems to be the most difficult pair type for our models, followed by H-L. This result is good since it shows that our model does not overfit to low-frequency ingredients.

Since one of the main contributions of the paper is the Relation Encoder module, we performed an ablation study and evaluated a version of our model without it (the W/O-RELENC model). W/O-RELENC has performance similar to NMF on the one-to-one Artificial Food Pairing Task, but fails on the two-to-one and three-to-one test sets in MAP@10. This clearly shows that the Relation Encoder module plays an important role in our models. For a more detailed analysis, we compare ROC curves of our models by ingredient-frequency-based pair type (Figure 4), consisting of High Frequency-to-High Frequency (H-H), High Frequency-to-Low Frequency (H-L), Low Frequency-to-High Frequency (L-H), and Low Frequency-to-Low Frequency (L-L); e.g. H-L describes pairs in which the pre-selected ingredient is used frequently in various recipes, while the paired ingredient is used in only a few recipes.

Interpretability. We first analyzed the attention weights in the Explicit model for some specific food pairs around chocolate (see Figure 5). The data shows that egg is paired with chocolate because of correlations in food category; miso, on the other hand, has considerable flavor-molecule-related affinity to chocolate. The interpretation for eggs is consistent with the results reported by De Clercq et al. (2016). Next, we focus on the attention weights for flavor molecules in our trained Explicit model for a quantitative analysis. We compare the average Flavor Molecule attention weights with correlation coefficients calculated on the full recipe set. Details are given in the caption of Table 3.

Figure 4: Comparison of ROC curves by ingredient-frequency-based pair type, using the Explicit model on CulinaryDB, one-to-one pairs. L denotes Low frequency, H denotes High frequency.

Figure 5: Visualizations of attention weights of our Explicit model on CulinaryDB over the keys Cuisine, Food Category, Flavor Profile, and Flavor Molecule: (a) chocolate vs egg, (b) chocolate vs miso.

Table 3: Comparison of food ingredients sorted in descending order of scores. Left: ranked based on the average of the Flavor Molecule attention weights in our Explicit model. Right: ranked based on the correlation coefficients between the number of food ingredient co-occurrences in recipes and the number of shared flavor molecules between two food ingredients. Both scores are calculated on co-occurring ingredient pairs in recipes.
Note that while the range of scores for our attention weights is $[0, 1]$, the range of scores for the correlation coefficients is $[-1, 1]$; Freq is the number of ingredient occurrences in the whole recipe set.

Rank | Attention weights: Name, Score, Freq | Correlation coefficients: Name, Score, Freq
1 | soy yogurt, 1.000, 2 | true frog, 0.730, 4
2 | wheaten bread, 1.000, 4 | florida pompano, 0.577, 3
3 | sandalwood, 1.000, 2 | shellfish, 0.510, 7
4 | common dab, 1.000, 2 | oil palm, 0.488, 3
5 | blackberry brandy, 1.000, 3 | abalone, 0.408, 4
6 | miso, 0.773, 95 | orange oil, 0.403, 8
7 | wine, 0.618, 299 | multigrain bread, 0.358, 8
8 | sesame, 0.603, 1394 | potato bread, 0.356, 7
9 | gelatin, 0.583, 238 | waffle, 0.352, 13
10 | sherry, 0.578, 475 | fruit juice, 0.344, 21

In the ranking of attention weights, the top 5 are high-score ingredients that do not seem to be flavor-molecule correlated. It would appear that, since their numbers of occurrences are very low, their relational representations are not learned properly. The entries from rank 6 downward, on the other hand, look like fairly good results. In the ranking of correlation coefficients, there are some ingredients, like the breads and waffle, that seem to have little to do with flavor molecules. In consequence, the model is able to find hidden ingredient structures within recipes that differ from simple correlation coefficients.

6 CONCLUSION

We proposed interpretable models for food ingredient recommendation systems that rank ingredients based on the co-occurrence of ingredients in cooking recipes and on learned relational representations. In our experiments, we found that explicitly and implicitly integrated factors can improve predictions, and that our trained relational representations detect interesting correlations between ingredients and factors such as molecular structure. Future work will focus on controlling ranking scores based on manual modifications of relational representations by domain experts (e.g. chefs and food professionals).
nnMaIcPjDZ
The paper is built upon modifications to prior work related to memory network-based recommendations. The main contribution of the paper is to apply this work in the completely new domain of food science. Overall, the problem the authors are trying to solve is well defined and their approach is solid.
7: Good paper, accept
Overall, I vote for accepting. The authors propose a novel approach to support chefs with creating/experimenting with new recipes, to overcome the challenging combinations of taste/texture etc. that can result from the addition of new ingredients. Their idea of adding interpretability to their results via a food knowledge base has a lot of appeal, especially to end-users (chefs in this case). The authors' proposed solution is well supported by the robust and detailed evaluation results. The concerns detailed in the cons section are mainly related to the readability and clarity of the paper. ###################################################################### Pros: (1) The model architecture for both implicit and explicit use cases in the paper has strong appeal with a solid foundation. (2) The objective function is well defined, and the reasoning for changing the score function from the original paper's LRML is well argued and convincing. (3) The proposed model's efficacy is well supported by empirical experimentation on two large food datasets and good comparisons with established baseline methods. (4) The motivation for adding a KB based on ingredient attributes is well reasoned and gives more ability to understand which ingredient attribute contributed to the prediction result. Cons: (1) The readability of the paper can be improved. The authors have used a lot of prior work from different authors as a basis for many critical components of their architecture, but a clear explanation of, and motivation for, the related work they build on is lacking. The authors could have given a brief summary of the previous work they use as critical components of their solution. (2) The experiments trying to measure interpretability require domain knowledge and an understanding of food science. For example, the Table 3 results, which are used to illustrate how the explicit model can help with interpretability, require an understanding of the training dataset beyond general statistics. Why are the ingredients ranked on attention weights in Table 3 better related to flavor molecules? They would probably really appeal to someone who has knowledge pertaining to them.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Interpretable Relational Representations for Food Ingredient Recommendation Systems ### Paper Abstract Supporting chefs with ingredient recommender systems to create new recipes is challenging, as good ingredient combinations depend on many factors like taste, smell, cuisine style, texture among others. There have been few attempts to address these issues using machine learning. Importantly, useful models do obviously need to be accurate but importantly -- especially for food professionals -- interpretable. In order to address these issues, we propose the Interpretable Relational Representation Model (IRRM). The main component of the model is a key-value memory network to represent relationships of ingredients. We propose and test two variants of the model. One can learn latent relational representations over a trainable memory network (Implicit model), and the other can learn explainable relational representations over a pre-trained memory network that integrates an external knowledge base (Explicit model). The relational representations resulting from the model are interpretable -- they allow to inspect why certain ingredient pairings have been suggested. The Explicit model additionally allows to integrate any number of manually specified constraints. We conduct experiments on two recipe datasets, including CulinaryDB with 45,772 recipes and Flavornet with 55,001 recipes, respectively. The experimental results show that our models are both predictive and informative. ### Paper Keywords ["Metric Learning", "Gastronomy", "Memory Network", "Knowledge Graph", "Interpretable"] ### Paper Content ABSTRACTSupporting chefs with ingredient recommender systems to create new recipes ischallenging, as good ingredient combinations depend on many factors like taste,smell, cuisine style, texture among others. There have been few attempts to ad-dress these issues using machine learning. Useful Machine Learning models doobviously need to be accurate but importantly – especially for food profession-als – interpretable. In order to address these issues, we propose the InterpretableRelational Representation Model (IRRM). The main component of the model is akey-value memory network to represent relationships of ingredients. We proposeand test two variants of the model. One can learn latent relational representationsover a trainable memory network (Implicit model), and the other can learn ex-plainable relational representations over a pre-trained memory network that inte-grates an external knowledge base (Explicit model). The relational representationsresulting from the model are interpretable – they allow to inspect why certain in-gredient pairings have been suggested. The Explicit model additionally allows tointegrate any number of manually specified constraints. We conduct experimentson two recipe datasets, including CulinaryDB with 45,772 recipes and Flavornetwith 55,001 recipes, respectively. The experimental results show that our modelsare both predictive and informative.1 I NTRODUCTIONData mining and machine learning methods play an increasingly prominent role in food preferencemodeling, food ingredient pairing discovery andnew recipe generation . Solving these tasks is non-trivial, since the goodness of ingredient combinations depends on many factors like taste, smell,cuisine, texture, and culture. Ahn et al. 
(2011) detected that the number of shared flavor moleculesbetween ingredients is one of important factors for food pairing. They found Western cuisines showa tendency to use ingredient pairs that share many flavor compounds, while East Asian cuisines tendto avoid compound sharing ingredients. Using this idea, Garg et al. (2017) developed a rule-basedfood pairing system which ranks ingredients based on the number of shares of flavor molecules.Recently, Park et al. (2019) suggested a neural network approach based on flavor molecules andco-occurrence of ingredients in recipes. These approaches focus on one-to-one food pairing. Thereis also research related to many-to-one pairing. De Clercq et al. (2016) proposed the Recipe Com-pletion Task which tries to identify matching ingredients for a partial list of ingredients (the recipe)using a Matrix Factorization based recommender system. Although efforts have been made to detectgood ingredient combinations, there is no current Machine Learning method in this field that allowsto interpret why suggested pairs are good.Our work is targeted at interpretable recommendation systems for food pairing and recipe comple-tion. Given a set of pre-selected ingredients (cardinality 1 or more) by a user, the recommendersuggests top-N ingredients from a set of candidates. For example, suppose a user selects apple andchocolate as the pre-selected ingredients, our recommender suggests some good paired ingredients(e.g. cinnamon ) and also identifies reasons (e.g. cinnamon is good for apple andchocolate in termsof their flavor affinity).For this, we propose the Interpretable Relational Representations Model (IRRM) in two variants toaddress food pairing and recipe completion tasks. The model features a key-value memory network(Sukhbaatar et al. (2015), Miller et al. (2016)) to represent relationships of ingredients. One variant1Under review as a conference paper at ICLR 2021of the model is trained to learn latent relational representations over a trainable memory network(Implicit Model). The other model can learn explainable relational representations over the pre-trained memory network integrating an external knowledge base (Explicit Model). The relationalrepresentations are interpretable and can be queried as to the reasons why the ingredients havebeen suggested. The Explicit model can integrate any number of constraints which can be decidedmanually based on the characteristics of the desired recommender system. Our contributions are asfollows:1. We model ingredient pairing as a general recommendation task with implicit feedback.2. We introduce the Interpretable Relational Representations Model and it’s two variants: Im-plicit and Explicit. Both of which can learn pair specific relational representations (vectors)for one-to-one (i.e. ingredient to ingredient) and many-to-one (ingredient-set to ingredient)food pairing tasks. The relational vectors are also interpretable.3. We propose a training procedure to learn one-to-one and many-to-one relationships effec-tively using recipes.4. We evaluate our proposed models in the Recipe Completion Task and the Artificial Foodpairing Task on the CulinaryDB and the Flavornet datasets. Our proposed approachesdemonstrate competitive results on all datasets, outperforming many other baselines.5. We perform qualitative analysis. 
The results presents our proposed Explicit model is capa-ble of unraveling hidden ingredients structures within recipes.2 R ELATED WORKThere are two related streams of work in recommender systems that are important for this paper: thesession-based setting and the knowledge-aware systems .In the session-based setting, user profile can be constructed from past user behavior. A naturalsolution to this problem is the item-to-item recommendation approach.A variety of methods existfor this problem. For example, Quadrana et al. (2017) models the item sequence using RNNs, Kang& McAuley (2018) uses Self-Attention layers, and Wu et al. (2020) uses Transformer layers. Whilethese methods mainly focus on how to encode item click-sequence interactions, we target goodingredient pairing using only ingredient attributes and the relationship between a ingredient set andan ingredient based on co-occurrence in recipes. For this we develop a new architecture integratingset encoders and relational memory with novel loss and score functions.There are also increasingly methods for integrating knowledge into recommenders. Zhang et al.(2016) and Cheng et al. (2016) directly incorporate user and item features as user profile into neu-ral network models. Huang et al. (2018) and Wang & Cai (2020) integrate them using a pre-trainedknowledge graph. These methods try to represent user context using external knowledge base, there-fore, usually these knowledge embeddings are integrated to user embeddings. In this work, we in-corporate knowledge specifically to detect relationships between an ingredient set and an ingredientfor interpretation to improve recommendation performance.3 P ROBLEM DEFINITIONWe first introduce the notations used throughout this paper. We model recipe completion as a recom-mendation scenario with implicit feedback (Huang et al., 2018, Tay et al., 2018. In such scenarios,a user has interacted with an item and the system infers the item that user will interact next based onthe interaction records of the user. We apply this to the food domain by using recipes as interactionrecords.LetIdenote a set of ingredients and fi1;:::;iMgdenote a pre-selected ingredient set, where i2Iis the ingredient and Mis the number of ingredients. We call fi1;:::;iMgpre-selected ingredientset in this paper. Next, let Icandidate denotes a set of candidate ingredients. Icandidate depends oneach pre-selected ingredient set, that is, Icandidate =Ifi1;:::;iMg. In addition, we assumethat a knowledge base (KB) of ingredients is also available and the KB contains factors which arerelated to why some ingredients are good combinations. A KB is defined as a set of triplets over a2Under review as a conference paper at ICLR 2021ingredient setingredient0000100:○○○○○○○○○○○○○○○○○○k1k2k3k4v1v2v3v4pScorings(p, q’, r)Scorings(p, q, r)qNegativeexampleLoss0000100:0000100:Ingredient Set Encoder○○○○○○rAdd&Norm+ SumIngredient EmbeddingLayerSoftmax○○○○○○q’Residual connection(a) Implicit model0000100:0000100:Ingredient EmbeddingLayer○○○○○○○○○○○○k1k2k3k4v1v2v3v4○○○○○○Scorings(p, q’, r)Scorings(p, q, r)qringredient set○○○○○○pei1:M○○○○○TransEpretrained○○○○○○○○○○ latt 1:NWriterWriteringredient0000100:Ingredient Set EncoderAdd&Normeicandidate+ Sumwrite writewrite○○○○○○q’NegativeexampleSoftmaxLossResidual connection (b) Explicit modelFigure 1: IRRM architecturesentity setEand a relationship set L. A KB triplethei;l;eaicomposed of two entities ei;ea2Eand a relationship l2L, whereeiis an ingredient (e.i. ei2I) andlis an attribute and eais theattribute value. 
For instance, ⟨apple, flavorMolecule, (-)-Epicatechin⟩ denotes that apple contains the (-)-Epicatechin flavor molecule.

Based on these preliminaries, we define the food ingredient recommendation task: given a pre-selected ingredient set {i_1, ..., i_M} and candidate ingredients I_candidate, we would like to infer the top-N ingredients from I_candidate.

4 RECOMMENDATIONS WITH MEMORY NETWORKS

In this section, we introduce the IRRM architectures. We start with the Implicit model, which consists of a trainable key-value memory network. We then augment the Implicit model with a key-value memory network that integrates pre-trained entity and relationship vectors with ingredient attributes in the KBs -- we call this extension the Explicit model. The overall architecture is depicted in Figure 1. The inputs of our architecture are a pre-selected ingredient set and a candidate ingredient i_candidate ∈ I_candidate. The output is a score. At inference time, our recommender uses these scores to rank I_candidate.

4.1 INGREDIENT EMBEDDING LAYER AND INGREDIENT SET ENCODER

Ingredients are represented as one-hot encoded vectors (corresponding to a unique index key belonging to each ingredient). At the embedding layer, this one-hot encoded vector is converted into a low-dimensional real-valued dense vector representation by multiplying it with the embedding matrix Q ∈ R^{d×|I|}, which stores the ingredient embeddings; d is the dimensionality of the ingredient embeddings and |I| is the total number of ingredients. i_candidate is converted to q using this embedding layer. The pre-selected ingredients {i_1, ..., i_M}, on the other hand, are encoded by the Ingredient Set Encoder (Figure 6). First, each ingredient i_j is converted to a vector using the Ingredient Embedding Layer (as for i_candidate), yielding vectors i_j ∈ R^d. The sum of these vectors is converted into the ingredient set vector p by a feed-forward network with a single hidden layer, followed by Layer Normalization.

4.2 RELATION ENCODER

Tay et al. (2018) introduced LRAM (Latent Relational Attentive Memory) in order to generate latent relational vectors between user-item interactions. We extend this module by adding a residual connection followed by Layer Normalization, an idea inspired by Vaswani et al. (2017).

Given the pair of a pre-selected ingredient set vector and a candidate ingredient vector, ⟨p, q⟩, the Relation Encoder first applies s = p + q to generate the joint embedding of p and q. The generated vector s ∈ R^d has the same dimension as p and q. Note that we also tried other transfer functions here, such as element-wise multiplication or a multi-layered perceptron MLP(p, q); however, we found that addition performs best. This joint embedding s is used as the input of the memory network. The attention vector a ∈ R^N is a vector of importance weights over keys, which are represented as the key matrix K = [k_1, ..., k_N]^T ∈ R^{N×d}, where N is the number of key-value pairs in the memory network and k_j ∈ R^d is a key vector. Each element of the attention vector a is defined as a_j = s^T k_j, where a_j ∈ R. To normalize the attention vector a into a probability distribution, we use the Softmax function: Softmax(a_j) = exp(a_j) / Σ_{n=1}^{N} exp(a_n). We then generate the vector m = Σ_{n=1}^{N} Softmax(a_n) v_n as the sum of weighted value vectors, which are represented as the value matrix V = [v_1, ..., v_N]^T ∈ R^{N×d}. Finally, to generate the relational vector r, m is added to the joint embedding s and Layer Normalization is applied: r = LayerNorm(s + m).
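To make this module concrete, the following is a minimal PyTorch sketch of the key-value attention with the residual connection and Layer Normalization described above. It is our illustrative reading of the text, not the authors' released code, and all names and dimensions are placeholders.

```python
import torch
import torch.nn as nn


class RelationEncoder(nn.Module):
    """Minimal LRAM-style relation encoder: key-value attention over a
    trainable memory, followed by a residual connection and LayerNorm."""

    def __init__(self, d: int, num_slots: int):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, d))    # K = [k_1, ..., k_N]^T
        self.values = nn.Parameter(torch.randn(num_slots, d))  # V = [v_1, ..., v_N]^T
        self.norm = nn.LayerNorm(d)

    def forward(self, p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
        s = p + q                                        # joint embedding, (batch, d)
        attn = torch.softmax(s @ self.keys.t(), dim=-1)  # a_j = s^T k_j, softmax-normalized
        m = attn @ self.values                           # m = sum_j Softmax(a_j) * v_j
        return self.norm(s + m)                          # r = LayerNorm(s + m)


# Example: a batch of 8 pairs, d = 64, N = 4 memory slots.
enc = RelationEncoder(d=64, num_slots=4)
r = enc(torch.randn(8, 64), torch.randn(8, 64))          # relational vectors, (8, 64)
```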
4.2.1 THE EXPLICIT MODEL

To improve interpretability and predictive performance, we incorporate ingredient attribute information from a given KB into the memory network. Inspired by recent work that integrates a memory network with external memories (Huang et al., 2018), we propose the Explicit Relational Encoder. Instead of the trainable key matrix K and value matrix V, we pre-train vectors over a given KB and then freeze the key and value matrices while training the Explicit model. Given a pair of a pre-selected ingredient set {i_1, ..., i_M} and a candidate ingredient i_candidate, {i_1, ..., i_M, i_candidate} is converted into entity vectors using the KB embeddings, which provide the entity vectors e ∈ R^{d_KB} and the relationship vectors l ∈ R^{d_KB}. Note that in case d_KB ≠ d, we convert the joint embedding s ∈ R^d into s' ∈ R^{d_KB} and the relational vector r ∈ R^{d_KB} into r' ∈ R^d with linear projections. We use TransE (Bordes et al., 2013) for the KB embeddings. The reason for this choice is that, given a triplet ⟨e_i, l_att, e_{i_att}⟩, TransE learns entity vectors and relationship vectors such that e_{i_att} = e_i + l_att. KB relationships usually correspond to attribute types of entities, so we use the notation l_att for the attribute type and e_{i_att} for its value. Hence, we set the key matrix as follows:

K = [l_{att_1}, ..., l_{att_N}]^T    (1)

where N depends on the number of attribute types one wants to integrate, and K is constant throughout training. The value matrix is initialized as follows:

v_{att_j} = Σ_{i ∈ {i_1, ..., i_M, i_candidate}} e_{i_{att_j}} = Σ_{i ∈ {i_1, ..., i_M, i_candidate}} (e_i + l_{att_j})    (2)

V = [v_{att_1}, ..., v_{att_N}]^T    (3)

There can be many one-to-multiple relations in the KB; for instance, an apple has multiple flavor molecules. Therefore, the entity vector e_att should not be an ingredient-specific vector, and we use e_i + l_att instead of e_att.
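As a sketch of how the frozen key and value matrices of Eqs. (1)-(3) could be assembled from pre-trained TransE vectors, consider the snippet below. The lookup tables e_emb and l_emb, and the function name, are our hypothetical stand-ins for the paper's pre-trained embeddings, not an API from the authors' code.

```python
import torch


def build_explicit_memory(e_emb: torch.Tensor,    # (num_entities, d_KB) TransE entity vectors
                          l_emb: torch.Tensor,    # (num_attr_types, d_KB) relationship vectors
                          ingredient_ids: list):  # indices of {i_1, ..., i_M, i_candidate}
    """Build the frozen key/value matrices of the Explicit model.
    K stacks one relationship vector per attribute type (Eq. 1); each value
    v_att_j sums e_i + l_att_j over the involved ingredients (Eqs. 2-3)."""
    K = l_emb                                    # K = [l_att_1, ..., l_att_N]^T, kept frozen
    e_sum = e_emb[ingredient_ids].sum(dim=0)     # sum_i e_i over the ingredient set
    n_ing = len(ingredient_ids)
    V = e_sum.unsqueeze(0) + n_ing * l_emb       # row j equals sum_i (e_i + l_att_j)
    return K, V                                  # both of shape (N, d_KB)
```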
4.3 SCORE FUNCTION

Finally, we define our score function through the relationship between the pre-selected ingredient set vector p, the candidate ingredient vector q, and the relational vector r:

s(p, q, r) = CosSim(p, q) + CosSim(p + q, r)    (4)

where CosSim is the cosine similarity, so good pairs receive high scores.

Figure 2 shows the geometric differences between our score function and other possible ones: (a) −||p − q|| tries to place the ingredient set and each ingredient into the same spot in vector space; (b) −||p + r − q|| learns to fit the ingredient set and each ingredient with the relational vector r, a score function often used in the domain of collaborative filtering. Score functions (a) and (b) are suggested in previous work. Tay et al. (2018) reported that −||p + r − q|| could achieve better performance than −||p − q|| in the domain of collaborative filtering, i.e., user-item based recommendation. In the food pairing task, however, the result was the opposite. It seems that with −||p + r − q||, p and q cannot be trained properly because of r: as r approaches 0 the performance improves, so the representation of r cannot be learned. This makes sense, since an ingredient set is a mixture rather than a list of ingredients, and ingredient sets can be seen as a single processed ingredient.

Figure 2: Geometric comparisons of score functions: (a) −||p − q||; (b) −||p + r − q||; (c) ours, CosSim(p, q) + CosSim(p+q, r).

Therefore, we propose to use a score function (Figure 2(c)) that tries to place the ingredient set p and each ingredient q into the same location in vector space, and rotates the relationship between p and q toward the relational vector r, which is represented using attributes from the KB. In addition, since our network structure is symmetric with respect to p and q, we require a symmetric bilinear score function.

4.4 OBJECTIVE FUNCTION

Our objective function is defined as:

L = − Σ_{x=1}^{Batch} Σ_{y=1}^{Pos} log [ exp(τ(s(p_x, q_y, r_{xy}) − μ)) / ( exp(τ(s(p_x, q_y, r_{xy}) − μ)) + Σ_{z=1}^{Neg} exp(τ s(p_x, q_{z_y}, r_{xy})) ) ]    (5)

where μ is the margin that separates the golden pairs from corrupted pairs, τ is a temperature parameter, Batch is the mini-batch size, Pos is the number of positive examples, Neg is the number of negative examples per positive example, and the score function for negative examples takes the same relational vector as their positive example. To define this objective function, we combine the Batch All loss (Hermans et al., 2017) with the effective triplet loss in Metric Learning and the additive margin in the softmax loss (Wang et al., 2018). Note that while a hinge loss is also possible, we found that the softmax loss has better performance and is more stable.
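A compact sketch of the score of Eq. (4) and the loss of Eq. (5) follows. The placement of the margin and temperature follows our additive-margin-softmax reading of Eq. (5) above (the original symbols were garbled in extraction), and the shapes and default values are illustrative only.

```python
import torch
import torch.nn.functional as F


def score(p, q, r):
    """Eq. (4): s(p, q, r) = CosSim(p, q) + CosSim(p + q, r)."""
    return F.cosine_similarity(p, q, dim=-1) + F.cosine_similarity(p + q, r, dim=-1)


def pairing_loss(p, q_pos, q_neg, r, margin=0.3, tau=10.0):
    """Eq. (5) for one positive per row with its Neg negatives; all negatives
    share the relational vector r of the positive pair. margin/tau are
    placeholder values, not the paper's tuned settings."""
    s_pos = tau * (score(p, q_pos, r) - margin)                  # (batch,)
    s_neg = tau * score(p.unsqueeze(1), q_neg, r.unsqueeze(1))   # (batch, Neg)
    logits = torch.cat([s_pos.unsqueeze(1), s_neg], dim=1)       # positive in column 0
    target = torch.zeros(len(p), dtype=torch.long)
    return F.cross_entropy(logits, target)                       # -log softmax of column 0


# Example shapes: batch of 8, d = 64, Neg = 5 negatives per positive.
loss = pairing_loss(torch.randn(8, 64), torch.randn(8, 64),
                    torch.randn(8, 5, 64), torch.randn(8, 64))
```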
4.5 TRAINING

Using pre-processed recipes (with all ingredients lacking attributes and all empty recipes removed), we train our models in the following steps (Figure 3): first, we randomize the order of recipes and their ingredients (Figure 3(1)), and then generate the sequence of ingredients from the recipes (Figure 3(2)). After that, we generate pairs between a pre-selected ingredient set and a candidate ingredient from the sequence (Figure 3(3)), and loosely make pairs from the same recipes belong to the same mini-batch. Next, we sample additional positive examples as necessary, taken randomly from other recipes. Finally, we sample negative examples randomly. In the end, mini-batches are generated. Note that we also trained our models without the recipe restriction; however, performance was worse for both one-to-one and many-to-one pairs.

Figure 3: How to generate pairs for mini-batches.

5 EVALUATION

We evaluated the Implicit model and the Explicit model on two tasks against standard baselines, and we analyzed the relational representations in terms of interpretability. We propose two tasks for evaluation.

For evaluation, we first use the same ranking procedure as in De Clercq et al. (2016), namely the Recipe Completion Task. In the Recipe Completion Task, the recommender ranks all ingredients to predict one randomly eliminated ingredient, and the remaining ingredients of each recipe are used as input to the model -- which is the pre-selected ingredient set in our definition. We adopt three evaluation metrics for this task (the same as the previous work): Mean Rank, the mean position of the eliminated ingredient in the ranked ingredients; Hit Ratio (HR)@10, the proportion of correct ingredients ranked in the top 10; and ROC-AUC (AUC), the mean area under the ROC curve. This task is useful as it allows us to measure the basic performance of our model in the same setting as existing baselines (many-to-one pairing) and against ground-truth data.

In order to understand the model performance further, we tested on a second task called the Artificial Food Pairing Task. In this task, we generate pairs from existing recipes based on ingredient co-occurrences. Given some ingredients as a pre-selected ingredient set and a candidate ingredient, pairs that occur in any recipe are used as positive examples, and otherwise as negative candidate examples. The Artificial Food Pairing Task consists of positive pairs and negative pairs, where the same pre-selected ingredient set is always used for both positive and negative pairs, but positive and negative examples are randomly taken from its candidates with a pre-specified number and ratio. In this task, the recommender predicts whether pairs are positive or negative examples. We use this task for more detailed analysis. We measure MAP@10, the mean average precision in the top 10, and AUC as metrics.

We evaluate on two datasets. The first dataset is taken from CulinaryDB (Bagler, 2017), and the second dataset is from Flavornet (Ahn et al., 2011). Both datasets contain a recipe set, consisting of lists of ingredients and cuisine categories (e.g., Chinese, Japanese), and an ingredient set, consisting of the names, the food categories (e.g., Meat, Fruit), and the flavor molecules each ingredient has. The statistics of the datasets are shown in Table 4. Before training the models, the recipe set is divided into a train, a validation, and a test set. In addition, we generate triplets (Figure 7) from the whole ingredient set in order to train TransE: 172,207 triplets for CulinaryDB and 40,952 triplets for Flavornet.

5.1 BASELINES

We compare our Implicit/Explicit models to the following baselines.

- FREQ: a frequency predictor that always recommends the most frequently used ingredients of the training recipes. Despite its simplicity, it is often a strong baseline in certain domains.
- PMI: a recommender based on the PMI score.
- TFIDF: a recommender based on the TF-IDF score.
- NMF (De Clercq et al., 2016): a non-negative matrix factorization based recommender. The model is trained using the train and validation recipes; we implemented it ourselves.
- NeuMF (He et al., 2017): a neural matrix factorization model based on the Neural Collaborative Filtering framework. We use our Ingredient Embedding Layer and Ingredient Set Encoder (Section 4.1) as the embedding layers of this model, and the training process is the same as ours (Section 4.5). More implementation details can be found in Section A.5.
- WIDE&DEEP (Cheng et al., 2016): a Wide & Deep model based recommender. We use a pre-selected ingredient set, a candidate ingredient, and attributes (Table 4) as inputs of this model, and the training process is the same as ours (Section 4.5). More implementation details can be found in Section A.5.

5.2 RESULTS AND DISCUSSION

Predictive performance. Table 1 shows the results on all datasets for all compared baselines on the Recipe Completion Task. Both our Implicit and Explicit models outperform all counterparts on all metrics, usually with a clear margin, while our Explicit model achieves approximately the same performance as the Implicit model. On the other hand, NMF is the best baseline.
Performance on this task is good since the Recipe Completion Task requires finding only one missing ingredient, even though there would be multiple correct ingredients in a real situation. It appears that our approaches are effective for many-to-one pairs, which are constrained by recipes.

Table 1: Experimental results on the Recipe Completion Task.

                 |           CulinaryDB            |            Flavornet
Method           | Mean Rank  HR@10  HR@20  AUC    | Mean Rank  HR@10  HR@20  AUC
FREQ             | 473.5      0.119  0.140  0.605  | 257.9      0.131  0.153  0.621
PMI              | 612.5      0.055  0.056  0.527  | 346.9      0.066  0.069  0.531
TFIDF            | 478.5      0.055  0.055  0.598  | 261.3      0.055  0.089  0.613
NMF              | 57.0       0.435  0.559  0.900  | 36.3       0.479  0.599  0.896
NeuMF            | 37.5       0.400  0.530  0.943  | 35.7       0.332  0.509  0.898
WIDE&DEEP        | 43.5       0.350  0.478  0.929  | 37.7       0.351  0.493  0.896
IRRM (Implicit)  | 35.9       0.391  0.567  0.943  | 28.1       0.485  0.631  0.926
IRRM (Explicit)  | 35.4       0.397  0.575  0.944  | 28.3       0.480  0.629  0.925

Table 2: Experimental results on the Artificial Food Pairing Task (CulinaryDB).

                 |  one-to-one    |  two-to-one    |  three-to-one
Method           | MAP@10  AUC    | MAP@10  AUC    | MAP@10  AUC
NMF              | 0.638   0.702  | 0.604   0.786  | 0.601   0.794
NeuMF            | 0.841   0.905  | 0.745   0.923  | 0.747   0.919
WIDE&DEEP        | 0.787   0.919  | 0.705   0.930  | 0.726   0.926
W/O-RELENC       | 0.574   0.736  | 0.494   0.731  | 0.499   0.746
IRRM (Implicit)  | 0.839   0.904  | 0.755   0.921  | 0.792   0.925
IRRM (Explicit)  | 0.842   0.906  | 0.759   0.922  | 0.790   0.926

In order to evaluate pairing skills from one-to-one to many-to-one, we use the Artificial Food Pairing Task and report the results for one-to-one, two-to-one, and three-to-one pairs. We sample positive and negative examples according to three ratio settings, (Pos, Neg) ∈ {(20, 80), (50, 50), (80, 20)}. Beforehand, we classify all ingredients into two categories, {Low, High}, depending on the number of ingredient occurrences in recipes. Pre-selected ingredient sets and positive examples are then randomly sampled from each category set so as to have the same ratio, and negative examples are randomly sampled from all candidates. We adopt this process since, generally speaking, low-frequency ingredients are more difficult to handle than high-frequency ingredients. In the end, we prepare 4,536 tasks per pre-selected ingredient set size, where each task consists of 100 pairs.

As can be seen in Table 2, our models outperform NMF on the one-to-one, two-to-one, and three-to-one test sets. The proposed models show stable performance as the number of pre-selected ingredients increases.

Here, we only report the results of the Explicit model, since the Implicit model shows the same tendency. As expected, L-L appears to be the most difficult task for our models, followed by H-L. This result is encouraging, as it shows that our model does not overfit to low-frequency ingredients.

Since one of the main contributions of the paper is the Relation Encoder module, we performed an ablation study and evaluated a version of our model without it (the W/O-RELENC model). W/O-RELENC has performance similar to NMF on the one-to-one Artificial Food Pairing task, but fails on the two-to-one and three-to-one test sets in MAP@10. This clearly shows that the Relation Encoder module plays an important role in our models. For a more detailed analysis, we compare ROC curves of our models by ingredient frequency-based pair type (Figure 4), consisting of High Frequency-to-High Frequency (H-H), High Frequency-to-Low Frequency (H-L), Low Frequency-to-High Frequency (L-H), and Low Frequency-to-Low Frequency (L-L);
e.g., H-L describes pairs where the pre-selected ingredient is used frequently across various recipes, while the paired ingredient is used in only a few recipes.

Figure 4: Comparison of ROC curves by ingredient frequency-based pair type, using the Explicit model on CulinaryDB, one-to-one pairs. L denotes Low frequency, H denotes High frequency.

Interpretability. We first analyzed attention weights in the Explicit model for some specific food pairs around chocolate (see Figure 5). The data show that egg is paired with chocolate because of correlations in food category; miso, on the other hand, has considerable flavor-molecule-related affinity to chocolate. This interpretation for eggs is consistent with the results reported by De Clercq et al. (2016). Next, we focus on the attention weights for flavor molecules in our trained Explicit model for a quantitative analysis. We compare the average Flavor Molecule attention weights with correlation coefficients calculated on full recipes; details are in the caption of Table 3.

Figure 5: Visualizations of attention weights of our Explicit model on CulinaryDB: (a) chocolate vs. egg; (b) chocolate vs. miso (attention over Cuisine, Food Category, Flavor Profile, and Flavor Molecule).

Table 3: Comparison of food ingredients sorted in descending order of scores. Left: ranked by the average of Flavor Molecule attention weights in our Explicit model. Right: ranked by correlation coefficients between the number of food ingredient co-occurrences in recipes and the number of shared flavor molecules between two food ingredients. Both scores are calculated on co-occurring ingredient pairs in recipes. Note that while the range of scores for our attention weights is [0, 1], the range of scores for correlation coefficients is [−1, 1]; Freq is the number of ingredient occurrences in the whole recipe set.

     |   Our attention weights           |   Correlation coefficients
Rank | Name               Score  Freq   | Name              Score  Freq
1    | soy yogurt         1.000  2      | true frog         0.730  4
2    | wheaten bread      1.000  4      | florida pompano   0.577  3
3    | sandalwood         1.000  2      | shellfish         0.510  7
4    | common dab         1.000  2      | oil palm          0.488  3
5    | blackberry brandy  1.000  3      | abalone           0.408  4
6    | miso               0.773  95     | orange oil        0.403  8
7    | wine               0.618  299    | multigrain bread  0.358  8
8    | sesame             0.603  1394   | potato bread      0.356  7
9    | gelatin            0.583  238    | waffle            0.352  13
10   | sherry             0.578  475    | fruit juice       0.344  21

In the ranking of attention weights, the top 5 are high-score ingredients that do not appear to be flavor-molecule correlated. It would appear that, since their numbers of occurrences are very low, the relational representations are not learned properly. Ranks 6 and below, on the other hand, look like fairly good results. In the ranking of correlation coefficients, there are some ingredients, such as breads and waffle, that seem to have little to do with flavor molecules. Consequently, the model is able to find hidden ingredient structures within recipes that differ from simple correlation coefficients.

6 CONCLUSION

We proposed interpretable models for food ingredient recommendation that rank ingredients based on the co-occurrence of ingredients in cooking recipes and learned relational representations. In our experiments, we found that explicitly and implicitly integrated factors can improve predictions, and that our trained relational representations detect interesting correlations between ingredients and factors such as molecular structure.
Future work will focus on controlling ranking scores based on manual modifications of relational representations by domain experts (e.g., chefs and food professionals).<|im_end|> <|im_start|>assistant ### Review Title The paper is built upon modifications to prior work related to memory network-based recommendations. The main contribution of the paper is to apply this work to a completely new domain: food science. Overall, the problem the authors are trying to solve is well defined and their approach is solid. ### Review Text Overall, I vote for accepting. The authors propose a novel approach to support chefs with creating and experimenting with new recipes, to overcome the challenging combinations of taste/texture etc. that can result from the addition of new ingredients. Their idea of adding interpretability to their results from a Food Knowledge Base has a lot of appeal, especially to end users (chefs in this case). The authors' proposed solution is well supported by the robust and detailed evaluation results. The concerns detailed in the cons section are mainly related to the readability and clarity of the paper. ###################################################################### Pros: (1) The model architecture for both implicit and explicit use cases in the paper has strong appeal with a solid foundation. (2) The objective function is well defined, and the reasoning for changing the score function from the original paper's LRML is well reasoned and convincing. (3) The proposed model's efficacy is well supported by empirical experimentation on two large food datasets and good comparisons with established baseline methods. (4) The motivation for adding a KB based on ingredient attributes is well reasoned and gives more ability to understand which ingredient attribute contributed to the prediction result. Cons: (1) The readability of the paper can be improved. The authors have used a lot of prior work from different authors as a basis for many critical components of their architecture, but the explainability and the clear motivation behind using the related work are lacking. The authors could have given a brief summary of the previous work they use as critical components of their solution. (2) The experiments trying to measure interpretability require domain knowledge and understanding of food science. For example, the Table 3 results, which are used to illustrate how the explicit model can help with interpretability, require understanding of the training dataset beyond general statistics. Why are the ingredients ranked on attention weights in Table 3 better related to flavor molecules? It would probably really appeal to someone who has knowledge pertaining to them. ### Review Rating 7: Good paper, accept ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
9az9VKjOx00
ICLR.cc/2021/Conference
2021
TopoTER: Unsupervised Learning of Topology Transformation Equivariant Representations
["Xiang Gao", "Wei Hu", "Guo-Jun Qi"]
We present the Topology Transformation Equivariant Representation (TopoTER) learning, a general paradigm of unsupervised learning of node representations of graph data for the wide applicability to Graph Convolutional Neural Networks (GCNNs). We formalize the TopoTER from an information-theoretic perspective, by maximizing the mutual information between topology transformations and node representations before and after the transformations. We derive that maximizing such mutual information can be relaxed to minimizing the cross entropy between the applied topology transformation and its estimation from node representations. In particular, we seek to sample a subset of node pairs from the original graph and flip the edge connectivity between each pair to transform the graph topology. Then, we self-train a representation encoder to learn node representations by reconstructing the topology transformations from the feature representations of the original and transformed graphs. In experiments, we apply the TopoTER to the downstream node and graph classification tasks, and results show that the TopoTER outperforms the state-of-the-art unsupervised approaches.
["Unsupervised learning", "node representations", "mutual information"]
ABSTRACT

We present the Topology Transformation Equivariant Representation (TopoTER) learning, a general paradigm of unsupervised learning of node representations of graph data for the wide applicability to Graph Convolutional Neural Networks (GCNNs). We formalize the TopoTER from an information-theoretic perspective, by maximizing the mutual information between topology transformations and node representations before and after the transformations. We derive that maximizing such mutual information can be relaxed to minimizing the cross entropy between the applied topology transformation and its estimation from node representations. In particular, we seek to sample a subset of node pairs from the original graph and flip the edge connectivity between each pair to transform the graph topology. Then, we self-train a representation encoder to learn node representations by reconstructing the topology transformations from the feature representations of the original and transformed graphs. In experiments, we apply the TopoTER to the downstream node and graph classification tasks, and results show that the TopoTER outperforms the state-of-the-art unsupervised approaches.

1 INTRODUCTION

Graphs provide a natural and efficient representation for non-Euclidean data, such as brain networks, social networks, citation networks, and 3D point clouds. Graph Convolutional Neural Networks (GCNNs) (Bronstein et al., 2017) have been proposed to generalize CNNs to learn representations from non-Euclidean data, which has led to significant advances in various applications such as node classification (Kipf & Welling, 2017; Veličković et al., 2018; Xu et al., 2019a) and graph classification (Xu et al., 2019b). However, most existing GCNNs are trained in a supervised fashion, requiring a large amount of labeled data for network training. This limits the applications of GCNNs, since it is often costly to collect adequately labeled data, especially on large-scale graphs. Hence, this motivates the proposed research to learn graph feature representations in an unsupervised fashion, which enables the discovery of intrinsic graph structures and thus adapts to various downstream tasks.

Auto-Encoders (AEs) and Generative Adversarial Networks (GANs) are two of the most representative unsupervised learning methods. Based on AEs and GANs, many approaches have sought to learn transformation equivariant representations (TERs) to further improve the quality of unsupervised representation learning. It is assumed that representations equivarying to transformations are able to encode the intrinsic structures of data such that the transformations can be reconstructed from the representations before and after transformations (Qi et al., 2019b). Learning TERs traces back to Hinton's seminal work on learning transformation capsules (Hinton et al., 2011), and embodies a variety of methods developed for Euclidean data (Kivinen & Williams, 2011; Sohn & Lee, 2012; Schmidt & Roth, 2012; Skibbe, 2013; Lenc & Vedaldi, 2015; Gens & Domingos, 2014; Dieleman et al., 2015; 2016; Zhang et al., 2019; Qi et al., 2019a). Further, Gao et al. (2020) extend transformation equivariant representation learning to the non-Euclidean domain, formalizing Graph Transformation Equivariant Representation (GraphTER) learning by auto-encoding node-wise transformations in an unsupervised fashion. Nevertheless, only transformations on node features are explored, while the underlying graph may vary implicitly.
The graph topology has not been fully explored yet, which is, however, crucial in unsupervised graph representation learning.

To this end, we propose the Topology Transformation Equivariant Representation (TopoTER) learning to infer unsupervised graph feature representations by estimating topology transformations. Instead of transforming node features as in the GraphTER, the proposed TopoTER studies transformation equivariant representation learning by transforming the graph topology, i.e., adding or removing edges to perturb the graph structure. The same input signals are then attached to the resulting graph topologies, leading to different graph representations. This provides an insight into how the same input signals associated with different graph topologies lead to equivariant representations, enabling the fusion of node features and graph topology in GCNNs. Formally, we propose the TopoTER from an information-theoretic perspective, aiming to maximize the mutual information between topology transformations and feature representations with respect to the original and transformed graphs. We derive that maximizing such mutual information can be relaxed to minimizing the cross entropy between the applied topology transformations and their estimation from the learned representations of graph data under the topological transformations.

Specifically, given an input graph and its associated node features, we first sample a subset of node pairs from the graph and flip the edge connectivity between each pair at a perturbation rate, leading to a transformed graph with attached node features. Then, we design a graph-convolutional auto-encoder architecture, where the encoder learns the node-wise representations over the original and transformed graphs respectively, and the decoder predicts the topology transformations of edge connectivity from both representations by minimizing the cross entropy between the applied and estimated transformations. Experimental results demonstrate that the proposed TopoTER model outperforms the state-of-the-art unsupervised models, and at times even achieves results comparable to (semi-)supervised approaches on node classification and graph classification tasks.

Our main contributions are summarized as follows.

- We propose the Topology Transformation Equivariant Representation (TopoTER) learning to infer expressive node feature representations in an unsupervised fashion, which can characterize the intrinsic structures of graphs and the associated features by exploring graph transformations of the connectivity topology.
- We formulate the TopoTER from an information-theoretic perspective, by maximizing the mutual information between feature representations and topology transformations, which can be relaxed to cross entropy minimization between the applied transformations and their prediction in an end-to-end graph-convolutional auto-encoder architecture.
- Experiments demonstrate that the proposed TopoTER model outperforms state-of-the-art unsupervised methods in both node classification and graph classification.

2 RELATED WORK

Graph Auto-Encoders. Graph Auto-Encoders (GAEs) are among the most representative unsupervised methods. GAEs encode graph data into a feature space via an encoder and reconstruct the input graph data from the encoded feature representations via a decoder. GAEs are often used to learn network embeddings and graph generative distributions (Wu et al., 2020).
For network embedding learning, GAEs learn the feature representations of each node by reconstructing graph structural information, such as the graph adjacency matrix (Kipf & Welling, 2016) and the positive pointwise mutual information (PPMI) matrix (Cao et al., 2016; Wang et al., 2016). For graph generation, some methods generate the nodes and edges of a graph alternately (You et al., 2018), while other methods output an entire graph (Simonovsky & Komodakis, 2018; Ma et al., 2018; De Cao & Kipf, 2018).

Graph Contrastive Learning. An important paradigm called contrastive learning aims to train an encoder to be contrastive between the representations of positive samples and negative samples. Recent contrastive learning frameworks can be divided into two categories (Liu et al., 2020): context-instance contrast and context-context contrast. Context-instance contrast focuses on modeling the relationships between the local features of a sample and its global context representation. Deep InfoMax (DIM) (Hjelm et al., 2018) first proposes to maximize the mutual information between a local patch and its global context through a contrastive learning task. Deep Graph InfoMax (DGI) (Velickovic et al., 2019) proposes to learn node-level feature representations by extending DIM to graph-structured data, while InfoGraph (Sun et al., 2020a) uses mutual information maximization for unsupervised representation learning on entire graphs. Peng et al. (2020) propose a Graphical Mutual Information (GMI) approach to maximize the mutual information of both features and edges between inputs and outputs. In contrast to context-instance methods, context-context contrast studies the relationships between the global representations of different samples. M3S (Sun et al., 2020b) adopts a self-supervised pre-training paradigm as in DeepCluster (Caron et al., 2018) for better semi-supervised prediction in GCNNs.

Figure 1: An example of graphs before and after topology transformations (original vs. perturbed graph, with unchanged, added, and removed edges encoded by the topology perturbation matrix ΔA).

Graph Contrastive Coding (GCC) (Qiu et al., 2020) designs the pre-training task as subgraph instance discrimination in and across networks to empower graph neural networks to learn intrinsic structural representations.

Transformation Equivariant Representation Learning. Many approaches have sought to learn transformation equivariant representations, which has been advocated in Hinton's seminal work on learning transformation capsules. Following this, a variety of approaches have been proposed to learn transformation equivariant representations (Gens & Domingos, 2014; Dieleman et al., 2015; 2016; Cohen & Welling, 2016; Lenssen et al., 2018). To generalize to generic transformations, Zhang et al. (2019) propose to learn unsupervised feature representations via Auto-Encoding Transformations (AET) by estimating transformations from the learned feature representations of both the original and transformed images, while Qi et al. (2019a) extend AET from an information-theoretic perspective by maximizing the lower bound of the mutual information between transformations and representations. Wang et al. (2020) extend AET to Generative Adversarial Networks (GANs) for unsupervised image synthesis and representation learning. Gao et al.
(2020) introduce the GraphTER model that extends AET to graph-structured data, formalized by auto-encoding node-wise transformations in an unsupervised manner. de Haan et al. (2020) propose Gauge Equivariant Mesh CNNs, which generalize GCNNs to apply anisotropic gauge equivariant kernels. Fuchs et al. (2020) introduce a self-attention mechanism specifically for 3D point cloud data, which adheres to equivariance constraints, improving robustness to nuisance transformations.

3 METHOD

3.1 PRELIMINARY

We consider an undirected graph G = {V, E, A} composed of a node set V of cardinality |V| = N and an edge set E connecting nodes, of cardinality |E| = M. A is a real symmetric N × N matrix that encodes the graph structure, where a_{i,j} = 1 if there exists an edge (i, j) between nodes i and j, and a_{i,j} = 0 otherwise. A graph signal refers to data that reside on the nodes of a graph G, denoted by X ∈ R^{N×C}, with the i-th row representing the C-dimensional graph signal on the i-th node of V.

3.2 TOPOLOGY TRANSFORMATION

We define the topology transformation t as adding or removing edges from the original edge set E in graph G. This can be done by sampling, i.i.d., a switch parameter σ_{i,j} as in (Velickovic et al., 2019), which determines whether to modify edge (i, j) in the adjacency matrix. Assuming a Bernoulli distribution B(p), where p denotes the probability of each edge being modified, we draw a random matrix Σ = {σ_{i,j}}_{N×N} from B(p), i.e., Σ ∼ B(p). We then acquire the perturbed adjacency matrix as

Ã = A ⊕ Σ,    (1)

where ⊕ is the exclusive OR (XOR) operation. This strategy produces a transformed graph through the topology transformation t, i.e., Ã = t(A). Here, an edge perturbation probability of p = 0 corresponds to a non-transformed adjacency matrix, which is the special case of an identity transformation on A.

The transformed adjacency matrix Ã can also be written as the sum of the original adjacency matrix A and a topology perturbation matrix ΔA:

Ã = A + ΔA,    (2)

where ΔA = {Δa_{i,j}}_{N×N} encodes the perturbation of edges, with Δa_{i,j} ∈ {−1, 0, 1}. As shown in Fig. 1, when Δa_{i,j} = 0, the edge between node i and node j is kept unchanged (black solid lines); when Δa_{i,j} = −1 or 1, the edge between node i and node j is removed (orange dotted lines) or added (blue solid lines), respectively.
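A minimal sketch of this transformation in Eqs. (1)-(2) follows, drawing a symmetric Bernoulli mask and flipping the selected entries with XOR. The function name is ours, and the sketch assumes A is given as a dense float tensor of 0/1 entries.

```python
import torch


def transform_topology(A: torch.Tensor, p: float):
    """Eqs. (1)-(2): flip each edge slot of a symmetric 0/1 adjacency matrix
    with probability p; return the perturbed matrix together with Delta A."""
    N = A.shape[0]
    mask = torch.bernoulli(torch.full((N, N), p))  # sigma_{i,j} ~ B(p)
    mask = torch.triu(mask, diagonal=1)            # sample each pair (i, j) once
    mask = mask + mask.t()                         # keep the graph undirected
    A_tilde = torch.logical_xor(A.bool(), mask.bool()).float()  # A XOR Sigma
    delta_A = A_tilde - A                          # entries in {-1, 0, +1}
    return A_tilde, delta_A
```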
3.3 THE FORMULATION OF TOPOTER

Definition 1. Given a pair of graph signal and adjacency matrix (X, A), and a pair of graph signal and transformed adjacency matrix (X, Ã) obtained by a topology transformation t(·), a function E(·) is transformation equivariant if it satisfies

E(X, Ã) = E(X, t(A)) = ρ(t)[E(X, A)],    (3)

where ρ(t)[·] is a homomorphism of transformation t in the representation space.

Let us denote H = E(X, A) and H̃ = E(X, Ã). We seek to learn an encoder E: (X, A) → H, (X, Ã) → H̃ that maps both the original and transformed samples to representations {H, H̃} equivariant to the sampled transformation t, whose information can thus be inferred from the representations via a decoder D: (H̃, H) → ΔÂ as much as possible. From an information-theoretic perspective, this requires that (H, ΔA) jointly contain all necessary information about H̃.

A natural choice to formalize topology transformation equivariance is then the mutual information I(H, ΔA; H̃) between (H, ΔA) and H̃. The larger the mutual information is, the more knowledge about ΔA can be inferred from the representations {H, H̃}. Hence, we propose to maximize the mutual information to learn the topology transformation equivariant representations:

max_θ I(H, ΔA; H̃),    (4)

where θ denotes the parameters of the auto-encoder network.

Nevertheless, it is difficult to compute the mutual information directly. Instead, we derive that maximizing the mutual information can be relaxed to minimizing a cross entropy, as described in the following theorem.

Theorem 1. The maximization of the mutual information I(H, ΔA; H̃) can be relaxed to the minimization of the cross entropy H(p ∥ q) between the probability distributions p(ΔA, H̃, H) and q(ΔÂ | H̃, H):

min_θ H( p(ΔA, H̃, H) ∥ q(ΔÂ | H̃, H) ) ≜ −E_{p(ΔA, H̃, H)} log q(ΔÂ | H̃, H).    (5)

Proof. By the chain rule of mutual information, we have

I(H, ΔA; H̃) = I(ΔA; H̃ | H) + I(H; H̃) ≥ I(ΔA; H̃ | H).

Thus the mutual information I(ΔA; H̃ | H) is a lower bound of the mutual information I(H, ΔA; H̃), attained when I(H; H̃) = 0.

Therefore, we relax the objective to maximizing the lower bound I(ΔA; H̃ | H) between the transformed representation H̃ and the topology transformation ΔA:

I(ΔA; H̃ | H) = H(ΔA | H) − H(ΔA | H̃, H),

where H(·) denotes the (conditional) entropy. Since ΔA and H are independent, we have H(ΔA | H) = H(ΔA). Hence, maximizing I(ΔA; H̃ | H) becomes

min_θ H(ΔA | H̃, H).    (6)

According to the chain rule of conditional entropy, we have

H(ΔA | H̃, H) = H(ΔA, H̃, H) − H(H̃, H) ≤ H(ΔA, H̃, H),

where the conditional entropy H(ΔA | H̃, H) is upper bounded by the joint entropy H(ΔA, H̃, H). Thus, the minimization problem in Eq. (6) becomes

min_θ H(ΔA, H̃, H).    (7)

We next introduce a conditional probability distribution q(ΔÂ | H̃, H) to approximate the intractable posterior q̃(ΔA | H̃, H) with an estimated ΔÂ. According to the definition of the Kullback-Leibler divergence, we have

H(ΔA, H̃, H) = H(p) = H(p ∥ q) − D_KL(p ∥ q) ≤ H(p ∥ q),

where D_KL(p ∥ q) denotes the Kullback-Leibler divergence of p and q, which is non-negative, and H(p ∥ q) is the cross entropy between p and q. Thus, Eq. (6) is converted to minimizing the cross entropy as an upper bound:

min_θ H( p(ΔA, H̃, H) ∥ q(ΔÂ | H̃, H) ) ≜ −E_{p(ΔA, H̃, H)} log q(ΔÂ | H̃, H).

Hence, we relax the maximization problem in Eq. (4) to the optimization in Eq. (5). ∎

Based on Theorem 1, we train the decoder D to learn the distribution q(ΔÂ | H̃, H) so as to estimate the topology transformation ΔÂ from the encoded {H̃, H}, where the input pairs of original and transformed graph representations {H̃, H}, as well as the ground-truth target ΔA, can be sampled tractably from the factorization p(ΔA, H̃, H) ≜ p(ΔA) p(H) p(H̃ | ΔA, H). This allows us to minimize the cross entropy between p(ΔA, H̃, H) and q(ΔÂ | H̃, H) as in (5), with the training triplets (H̃, H, ΔA) drawn from this tractable factorization. Hence, we formulate the TopoTER as the joint optimization of the representation encoder E and the transformation decoder D.

3.4 THE ALGORITHM

Figure 2: The architecture of the proposed TopoTER. (The encoder E applies GCNNs with shared weights to (X, A) and (X, Ã) to obtain H and H̃; the decoder D subtracts them to form ΔH, constructs edge representations, and predicts ΔA through a linear layer.)

We design a graph-convolutional auto-encoder network for the TopoTER learning, as illustrated in Fig. 2.
Given a graph signal X associated with a graph G = {V, E, A}, the proposed unsupervised learning algorithm for the TopoTER consists of three steps: 1) topology transformation, which samples and perturbs some edges from E to acquire a transformed adjacency matrix Ã; 2) representation encoding, which extracts the feature representations of the graph signal before and after the topology transformation; 3) transformation decoding, which estimates the topology transformation parameters from the learned feature representations. We elaborate on the three steps as follows.

Topology Transformation. We randomly sample a subset of edges from E for topology perturbation -- adding or removing edges -- which not only enables characterizing local graph structures at various scales, but also reduces the number of edge transformation parameters to estimate, for computational efficiency. In practice, in each iteration of training, we take all node pairs with connected edges as S_1 and randomly sample a subset of disconnected node pairs as S_0, i.e.,

S_0 = {(i, j) | a_{i,j} = 0},  S_1 = {(i, j) | a_{i,j} = 1},    (8)

where |S_0| = |S_1| = M. Next, we randomly split S_0 and S_1 into two disjoint sets each, i.e.,

S_i = { S_i^(1), S_i^(2) | S_i^(1) ∩ S_i^(2) = ∅, S_i^(1) ∪ S_i^(2) = S_i, |S_i^(1)| = r |S_i| },  i ∈ {0, 1},    (9)

where r is the edge perturbation rate. Then, for each node pair (i, j) in S_0^(1) and S_1^(1), we flip the corresponding entry in the original graph adjacency matrix: if a_{i,j} = 0, we set ã_{i,j} = 1; otherwise, we set ã_{i,j} = 0. For each node pair (i, j) in S_0^(2) and S_1^(2), we keep the original connectivity unchanged, i.e., ã_{i,j} = a_{i,j}.

This leads to the transformed adjacency matrix Ã, as well as the sampled transformation parameters obtained by accessing ΔA at positions (i, j) from S_0 and S_1. We can categorize the sampled topology transformation parameters into four types:

1. add an edge to a disconnected node pair, i.e., {t : a_{i,j} = 0 → ã_{i,j} = 1, (i, j) ∈ S_0^(1)};
2. delete the edge between a connected node pair, i.e., {t : a_{i,j} = 1 → ã_{i,j} = 0, (i, j) ∈ S_1^(1)};
3. keep the disconnection between node pairs in S_0^(2), i.e., {t : a_{i,j} = 0 → ã_{i,j} = 0, (i, j) ∈ S_0^(2)};
4. keep the connection between node pairs in S_1^(2), i.e., {t : a_{i,j} = 1 → ã_{i,j} = 1, (i, j) ∈ S_1^(2)}.

Thus, we cast the problem of estimating the transformation parameters in ΔA from (H̃, H) as the problem of classifying the transformation parameter types. The percentage of these four types is r : r : (1−r) : (1−r).
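The sampling and labeling step of Eqs. (8)-(9) and the four classes can be illustrated as follows. This is our illustrative reading, not the authors' code; it assumes the graph is sparse enough that at least |S_1| disconnected pairs exist, and it mutates the adjacency list in place.

```python
import random


def sample_pairs_and_labels(A, r):
    """Sample S1 (all connected pairs) and S0 (equally many disconnected
    pairs), flip an r-fraction of each, and emit the 4-way labels:
    0 add, 1 delete, 2 keep-disconnected, 3 keep-connected."""
    N = len(A)
    S1 = [(i, j) for i in range(N) for j in range(i + 1, N) if A[i][j] == 1]
    disconnected = [(i, j) for i in range(N) for j in range(i + 1, N) if A[i][j] == 0]
    S0 = random.sample(disconnected, len(S1))        # |S0| = |S1| = M
    pairs, labels = [], []
    for S, flip_label, keep_label in ((S0, 0, 2), (S1, 1, 3)):
        random.shuffle(S)
        k = int(r * len(S))                          # |S^(1)| = r |S|
        for idx, (i, j) in enumerate(S):
            if idx < k:
                A[i][j] = A[j][i] = 1 - A[i][j]      # flip edge connectivity
                labels.append(flip_label)
            else:
                labels.append(keep_label)            # connectivity kept as-is
            pairs.append((i, j))
    return pairs, labels                             # A is now the transformed graph


# Example on a toy 4-node path graph given as a nested list:
A = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
pairs, labels = sample_pairs_and_labels(A, r=0.5)
```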
We first take the difference between the extracted feature representationsbefore and after transformations along the feature channel,H=eHH= [h1;:::;hN]>2RNF: (12)Thus, we can predict the topology transformation between node iand nodejthrough the node-wisefeature difference Hby constructing the edge representation asei;j=expf(hihj)(hihj)gkexpf(hihj)(hihj)gk12RF;8(i;j)2S0[S1; (13)wheredenotes the Hadamard product of two vectors to capture the feature representation, andkk 1is the`1-norm of a vector for normalization. The edge representation ei;jof nodeiandjisthen fed into several linear layers for the prediction of the topology transformation,byi;j= softmax (linear( ei;j));8(i;j)2S0[S1; (14)where softmax()is an activation function.According to Eq. (5), the entire auto-encoder network is trained by minimizing the cross entropyL= E(i;j)2S0[S13Xf=0y(f)i;jlogby(f)i;j; (15)wherefdenotes the transformation type ( f2f0;1;2;3g), andyis the ground-truth binary indicator(0or1) for each transformation parameter type.6Under review as a conference paper at ICLR 2021Table 1: Node classification accuracies (with standard deviation) in percentage on three datasets.X;A;Ydenote the input data, adjacency matrix and labels respectively.Method Training Data Cora Citeseer PubmedSemi-Supervised MethodsGCN (Kipf & Welling, 2017) X;A;Y 81:5 70 :3 79 :0MoNet (Monti et al., 2017) X;A;Y 81:70:5 - 78:80:3GAT (Veli ˇckovi ́c et al., 2018) X;A;Y 83:00:7 72:50:7 79:00:3SGC (Wu et al., 2019) X;A;Y 81:00:0 71:90:1 78:90:0GWNN (Xu et al., 2019a) X;A;Y 82:8 71 :7 79 :1MixHop (Abu-El-Haija et al., 2019) X;A;Y 81:90:4 71:40:8 80:80:6DFNet (Wijesinghe & Wang, 2019) X;A;Y 85:20:5 74:20:3 84:30:4Unsupervised MethodsRaw Features (Velickovic et al., 2019) X 47:90:4 49:30:2 69:10:3DeepWalk (Perozzi et al., 2014) A 67:2 43 :2 65 :3DeepWalk + Features (Velickovic et al., 2019) X;A 70:70:6 51:40:5 74:30:9GAE (Kipf & Welling, 2016) X;A 80:90:4 66:70:4 77:10:7VGAE (Kipf & Welling, 2016) X;A 80:00:2 64:10:2 76:90:1DGI (Velickovic et al., 2019) X;A 81:10:1 71:40:2 77:00:2GMI (Peng et al., 2020) X;A 82:20:2 71:40:5 78:50:1TopoTER X;A 83 :70:3 71:70:5 79:10:1Table 2: Model size comparison of DGI, GMI, and the proposed TopoTER.Model DGI GMI TopoTERNo. of Parameters 996;354 1;730;052 736;2604 E XPERIMENTS4.1 N ODE CLASSIFICATIONDatasets. We adopt three citation networks to evaluate our model: Cora, Citeseer, and Pubmed (Senet al., 2008), where nodes correspond to documents and edges represent citations. We follow thestandard train/test split in (Kipf & Welling, 2017) to conduct the experiments.Implementation Details. In this task, the auto-encoder network is trained via Adam optimizer, andthe learning rate is set to 104. We use the same early stopping strategy as DGI (Velickovic et al.,2019) on the observed training loss, with a patience of 20epochs. We deploy one Simple GraphConvolution (SGC) layer (Wu et al., 2019) as our encoder, and the order of the adjacency matrixis set to 2, while we will study the order of the adjacency matrix in Appendix A. The LeakyReLUactivation function with a negative slope of 0:1is employed after the SGC layer. Similar to DGI(Velickovic et al., 2019), we set the output channel F= 512 for Cora and Citeseer dataset, and 256for Pubmed dataset due to memory limitations. After the encoder, we use one linear layer to classifythe transformation types. We set the edge perturbation rate in Eq. (9) as r=f0:7;0:4;0:7gfor Cora,Citeseer, and Pubmed, respectively. 
The analysis of the edge perturbation rate will be presented inAppendix B.During the training procedure of the classifier, the SGC layer in the encoder is used to extract graphfeature representations with the weights frozen. After the SGC layer, we apply one linear layer tomap the features to the classification scores.Experimental Results. We compare the proposed method with five unsupervised methods, includ-ing one node embedding method DeepWalk, two graph auto-encoders GAE and VGAE (Kipf &Welling, 2016), and two contrastive learning methods DGI (Velickovic et al., 2019) and GMI (Penget al., 2020). Additionally, we report the results of Raw Features and DeepWalk+Features (Perozziet al., 2014) under the same settings. For fair comparison, the results of all other unsupervised meth-ods are reproduced by using the same encoder architecture of the TopoTER except DeepWalk andRaw Features. We report the mean classification accuracy (with standard deviation) on the test nodesfor all methods after 50runs of training. As reported in Tab. 1, the TopoTER outperforms all othercompeting unsupervised methods on three datasets. Further, the proposed unsupervised method alsoachieves comparable performance with semi-supervised results. This significantly closes the gapbetween unsupervised approaches and the semi-supervised methods.Moreover, we compare the proposed TopoTER with two contrastive learning methods DGI and GMIin terms of the model complexity, as reported in Tab. 2. The number of parameters in our modelis less than that of DGI and even less than half of that of GMI, which further shows the TopoTERmodel is lightweight.7Under review as a conference paper at ICLR 2021Table 3: Graph classification accuracies (with standard deviation) in percentage on 6datasets. “>1Day” represents that the computation exceeds 24 hours. “OOM” is out of memory error.Dataset MUTAG PTC-MR RDT-B RDT-M5K IMDB-B IMDB-M(No. Graphs) 188 344 2000 4999 1000 1500(No. Classes) 2 2 2 5 2 3Graph Kernel MethodsRW 83:721:50 57:851:30 OOM OOM 50:680:26 34:650:19SP 85:222:43 58:242:44 64:110:14 39:550:22 55:600:22 37:990:30GK 81:662:11 57:261:41 77:340:18 41:010:17 65:870:98 43:890:38WL 80:723:00 57:970:49 68:820:41 46:060:21 72:303:44 46:950:46DGK 87:442:72 60:082:55 78:040:39 41:270:18 66:960:56 44:550:52MLG 87:941:61 63:261:48>1 Day >1 Day 66:550:25 41:170:03Supervised MethodsGCN 85:65:8 64:24:3 50:00:0 20:00:0 74:03:0 51:93:8GraphSAGE 85:17:6 63:97:7 - - 72:35:3 50:92:2GIN-0 89:45:6 64:67:0 92:42:5 57:51:5 75:15:1 52:32:8GIN- 89:06:0 63:78:2 92:22:3 57:01:7 74:35:1 52:13:6Unsupervised Methodsnode2vec 72:6310:20 58:588:00 - - - -sub2vec 61:0515:80 59:996:38 71:480:41 36:680:42 55:261:54 36:670:83graph2vec 83:159:25 60:176:86 75:781:03 47:860:26 71:100:54 50:440:87InfoGraph 89:011:13 61:651:43 82:501:42 53:461:03 73:030:87 49:690:53TopoTER 89:250:81 64:591:26 84:930:18 55:520:20 73:460:38 49:680:314.2 G RAPH CLASSIFICATIONDatasets. We conduct graph classification experiments on six well-known graph benchmarkdatasets (Yanardag & Vishwanathan, 2015): MUTAG, PTC, REDDIT-BINARY , REDDIT-MULTI-5K, IMDB-BINARY , and IMDB-MULTI.Implementation Details. In this task, the entire network is trained via Adam optimizer with a batchsize of 64, and the learning rate is set to 103. For the encoder architecture, we follow the sameencoder settings in the released code of InfoGraph (Sun et al., 2020a), i.e., three Graph IsomorphismNetwork (GIN) layers (Xu et al., 2019b) with batch normalization. 
We also use one linear layer toclassify the transformation types. We set the sampling rate r= 0:5for all datasets.During the evaluation stage, the entire encoder will be frozen to extract node-level feature repre-sentations, which will go through a global add pooling layer to acquire global features. We thenuse LIBSVM to classify these global features to classification scores. We adopt the same procedureof previous works (Sun et al., 2020a) to make a fair comparison and use 10-fold cross validationaccuracy to report the classification performance, and the experiments are repeated five times.Experimental Results. We take six graph kernel approaches for comparison: Random Walk (RW)(G ̈artner et al., 2003), Shortest Path Kernel (SP) (Borgwardt & Kriegel, 2005), Graphlet Kernel(GK) (Shervashidze et al., 2009), Weisfeiler-Lehman Sub-tree Kernel (WL) (Shervashidze et al.,2011), Deep Graph Kernels (DGK) (Yanardag & Vishwanathan, 2015), and Multi-Scale LaplacianKernel (MLG) (Kondor & Pan, 2016). Aside from graph kernel methods, we also compare withthree unsupervised graph-level representation learning methods: node2vec (Grover & Leskovec,2016), sub2vec (Adhikari et al., 2018), and graph2vec (Narayanan et al., 2017), and one contrastivelearning method: InfoGraph (Sun et al., 2020a). The experimental results of unsupervised graphclassification are preseted in Tab. 3. The proposed TopoTER outperforms all unsupervised baselinemethods on the first five datasets, and achieves comparable results on the other dataset. Also, theproposed approach reaches the performance of supervised methods at times, thus validating theeffectiveness of the TopoTER model.5 C ONCLUSIONWe propose Topology Transformation Equivariant Representation (TopoTER) for learning unsu-pervised representations on graph data. By maximizing the mutual information between topologytransformations and feature representations before and after transformations, the TopoTER enforcesthe encoder to learn intrinsic graph feature representations that contain sufficient information aboutstructures under applied topology transformations. We apply the TopoTER model to node classifi-cation and graph classification tasks, and results demonstrate that the TopoTER outperforms state-of-the-art unsupervised approaches and reaches the performance of supervised methods at times.8Under review as a conference paper at ICLR 2021
0AsbYbsOHe8
A method for self-training GNNs
7: Good paper, accept
The paper proposes an unsupervised method for self-training of graph neural networks (GNNs). The authors provide an information-theoretic justification of their method, using maximization of a lower bound of the mutual information in their objective. Their approach is based on maximizing the mutual information between a perturbed graph topology and its node representation. Strong points: - very good results (some of the results are comparable to supervised methods, and the improvement over the other unsupervised methods is significant) - simple approach with theoretical justification - the paper is nicely written and easy to follow
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title TopoTER: Unsupervised Learning of Topology Transformation Equivariant Representations ### Paper Abstract We present the Topology Transformation Equivariant Representation (TopoTER) learning, a general paradigm of unsupervised learning of node representations of graph data for the wide applicability to Graph Convolutional Neural Networks (GCNNs). We formalize the TopoTER from an information-theoretic perspective, by maximizing the mutual information between topology transformations and node representations before and after the transformations. We derive that maximizing such mutual information can be relaxed to minimizing the cross entropy between the applied topology transformation and its estimation from node representations. In particular, we seek to sample a subset of node pairs from the original graph and flip the edge connectivity between each pair to transform the graph topology. Then, we self-train a representation encoder to learn node representations by reconstructing the topology transformations from the feature representations of the original and transformed graphs. In experiments, we apply the TopoTER to the downstream node and graph classification tasks, and results show that the TopoTER outperforms the state-of-the-art unsupervised approaches. ### Paper Keywords ["Unsupervised learning", "node representations", "mutual information"] ### Paper Content ABSTRACT

We present the Topology Transformation Equivariant Representation (TopoTER) learning, a general paradigm of unsupervised learning of node representations of graph data for the wide applicability to Graph Convolutional Neural Networks (GCNNs). We formalize the TopoTER from an information-theoretic perspective, by maximizing the mutual information between topology transformations and node representations before and after the transformations. We derive that maximizing such mutual information can be relaxed to minimizing the cross entropy between the applied topology transformation and its estimation from node representations. In particular, we seek to sample a subset of node pairs from the original graph and flip the edge connectivity between each pair to transform the graph topology. Then, we self-train a representation encoder to learn node representations by reconstructing the topology transformations from the feature representations of the original and transformed graphs. In experiments, we apply the TopoTER to the downstream node and graph classification tasks, and results show that the TopoTER outperforms the state-of-the-art unsupervised approaches.

1 INTRODUCTION

Graphs provide a natural and efficient representation for non-Euclidean data, such as brain networks, social networks, citation networks, and 3D point clouds. Graph Convolutional Neural Networks (GCNNs) (Bronstein et al., 2017) have been proposed to generalize CNNs to learn representations from non-Euclidean data, which has led to significant advances in various applications such as node classification (Kipf & Welling, 2017; Veličković et al., 2018; Xu et al., 2019a) and graph classification (Xu et al., 2019b). However, most existing GCNNs are trained in a supervised fashion, requiring a large amount of labeled data for network training.
This limits the applications of theGCNNs since it is often costly to collect adequately labeled data, especially on large-scale graphs.Hence, this motivates the proposed research to learn graph feature representations in an unsuper-vised fashion, which enables the discovery of intrinsic graph structures and thus adapts to variousdownstream tasks.Auto-Encoders (AEs) and Generative Adversarial Networks (GANs) are two most representative un-supervised learning methods. Based on the AEs and GANs, many approaches have sought to learntransformation equivariant representations (TERs) to further improve the quality of unsupervisedrepresentation learning. It assumes that the learned representations equivarying to transformationsare able to encode the intrinsic structures of data such that the transformations can be reconstructedfrom the representations before and after transformations (Qi et al., 2019b). Learning TERs tracesback to Hinton’s seminal work on learning transformation capsules (Hinton et al., 2011), and em-bodies a variety of methods developed for Euclidean data (Kivinen & Williams, 2011; Sohn &Lee, 2012; Schmidt & Roth, 2012; Skibbe, 2013; Lenc & Vedaldi, 2015; Gens & Domingos, 2014;Dieleman et al., 2015; 2016; Zhang et al., 2019; Qi et al., 2019a). Further, Gao et al. (2020) ex-tend transformation equivariant representation learning to non-Euclidean domain, which formalizesGraph Transformation Equivariant Representation (GraphTER) learning by auto-encoding node-wise transformations in an unsupervised fashion. Nevertheless, only transformations on node fea-tures are explored, while the underlying graph may vary implicitly. The graph topology has not beenfully explored yet, which however is crucial in unsupervised graph representation learning.To this end, we propose the Topology Transformation Equivariant Representation (TopoTER) learn-ing to infer unsupervised graph feature representations by estimating topology transformations. In-1Under review as a conference paper at ICLR 2021stead of transforming node features as in the GraphTER, the proposed TopoTER studies the trans-formation equivariant representation learning by transforming the graph topology, i.e., adding orremoving edges to perturb the graph structure. Then the same input signals are attached to the re-sultant graph topologies, resulting in different graph representations. This provides an insight intohow the same input signals associated with different graph topologies would lead to equivariantrepresentations enabling the fusion of node feature and graph topology in GCNNs. Formally, wepropose the TopoTER from an information-theoretic perspective, aiming to maximize the mutual in-formation between topology transformations and feature representations with respect to the originaland transformed graphs. We derive that maximizing such mutual information can be relaxed to thecross entropy minimization between the applied topology transformations and the estimation fromthe learned representations of graph data under the topological transformations.Specifically, given an input graph and its associated node features, we first sample a subset of nodepairs from the graph and flip the edge connectivity between each pair at a perturbation rate, lead-ing to a transformed graph with attached node features. 
Then, we design a graph-convolutionalauto-encoder architecture, where the encoder learns the node-wise representations over the origi-nal and transformed graphs respectively, and the decoder predicts the topology transformations ofedge connectivity from both representations by minimizing the cross entropy between the appliedand estimated transformations. Experimental results demonstrate that the proposed TopoTER modeloutperforms the state-of-the-art unsupervised models, and even achieves comparable results to the(semi-)supervised approaches in node classification and graph classification tasks at times.Our main contributions are summarized as follows.We propose the Topology Transformation Equivariant Representation (TopoTER) learning to in-fer expressive node feature representations in an unsupervised fashion, which can characterize theintrinsic structures of graphs and the associated features by exploring the graph transformationsof connectivity topology.We formulate the TopoTER from an information-theoretic perspective, by maximizing the mutualinformation between feature representations and topology transformations, which can be relaxedto the cross entropy minimization between the applied transformations and the prediction in anend-to-end graph-convolutional auto-encoder architecture.Experiments demonstrate that the proposed TopoTER model outperforms the state-of-the-art un-supervised methods in both node classification and graph classification.2 R ELATED WORKGraph Auto-Encoders. Graph Auto-Encoders (GAEs) are the most representative unsupervisedmethods. GAEs encode graph data into feature space via an encoder and reconstruct the inputgraph data from the encoded feature representations via a decoder. GAEs are often used to learnnetwork embeddings and graph generative distributions (Wu et al., 2020). For network embeddinglearning, GAEs learn the feature representations of each node by reconstructing graph structuralinformation, such as the graph adjacency matrix (Kipf & Welling, 2016) and the positive pointwisemutual information (PPMI) matrix (Cao et al., 2016; Wang et al., 2016). For graph generation, somemethods generate nodes and edges of a graph alternately (You et al., 2018), while other methodsoutput an entire graph (Simonovsky & Komodakis, 2018; Ma et al., 2018; De Cao & Kipf, 2018).Graph Contrastive Learning. An important paradigm called contrastive learning aims to trainan encoder to be contrastive between the representations of positive samples and negative sam-ples. Recent contrastive learning frameworks can be divided into two categories (Liu et al., 2020):context-instance contrast and context-context contrast. Context-instance contrast focuses on mod-eling the relationships between the local feature of a sample and its global context representation.Deep InfoMax (DIM) (Hjelm et al., 2018) first proposes to maximize the mutual information be-tween a local patch and its global context through a contrastive learning task. Deep Graph InfoMax(DGI) (Velickovic et al., 2019) proposes to learn node-level feature representation by extendingDIM to graph-structured data, while InfoGraph (Sun et al., 2020a) aims to use mutual informationmaximization for unsupervised representation learning on entire graphs. Peng et al. (2020) pro-pose a Graphical Mutual Information (GMI) approach to maximize the mutual information of bothfeatures and edges between inputs and outputs. 
In contrast to context-instance methods, context-context contrast studies the relationships between the global representations of different samples.M3S (Sun et al., 2020b) adopts a self-supervised pre-training paradigm as in DeepCluster (Caronet al., 2018) for better semi-supervised prediction in GCNNs. Graph Contrastive Coding (GCC)2Under review as a conference paper at ICLR 2021Original Graph02514 36Perturbed Graph02514 36unchanged edge added edge removed edgeTopology TransformationΔA=0 1 0 −1 0 0 01 0 0 0 0 0 00 0 0 −1 0 0 0−1 0 −1 0 0 0 00 0 0 0 0 0 −10 0 0 0 0 0 10 0 0 0 −1 1 0Figure 1: An example of graphs before and after topology transformations.(Qiu et al., 2020) designs the pre-training task as subgraph instance discrimination in and acrossnetworks to empower graph neural networks to learn the intrinsic structural representations.Transformation Equivariant Representation Learning. Many approaches have sought to learntransformation equivariant representations. Learning transformation equivariant representations hasbeen advocated in Hinton’s seminal work on learning transformation capsules. Following this, avariety of approaches have been proposed to learn transformation equivariant representations (Gens& Domingos, 2014; Dieleman et al., 2015; 2016; Cohen & Welling, 2016; Lenssen et al., 2018).To generalize to generic transformations, Zhang et al. (2019) propose to learn unsupervised featurerepresentations via Auto-Encoding Transformations (AET) by estimating transformations from thelearned feature representations of both the original and transformed images, while Qi et al. (2019a)extend AET from an information-theoretic perspective by maximizing the lower bound of mutual in-formation between transformations and representations. Wang et al. (2020) extend the AET to Gen-erative Adversarial Networks (GANs) for unsupervised image synthesis and representation learning.Gao et al. (2020) introduce the GraphTER model that extends AET to graph-structured data, whichis formalized by auto-encoding node-wise transformations in an unsupervised manner. de Haanet al. (2020) propose Gauge Equivariant Mesh CNNs which generalize GCNNs to apply anisotropicgauge equivariant kernels. Fuchs et al. (2020) introduce a self-attention mechanism specifically for3D point cloud data, which adheres to equivariance constraints, improving robustness to nuisancetransformations.3 M ETHOD3.1 P RELIMINARYWe consider an undirected graph G=fV;E;Agcomposed of a node set Vof cardinalityjVj=N,an edge setEconnecting nodes of cardinality jEj=M.Ais a real symmetric NNmatrix thatencodes the graph structure, where ai;j= 1if there exists an edge (i;j)between nodes iandj, andai;j= 0 otherwise. Graph signal refers to data that reside on the nodes of a graph G, denoted byX2RNCwith thei-th row representing the C-dimensional graph signal on the i-th node ofV.3.2 T OPOLOGY TRANSFORMATIONWe define the topology transformation tas adding or removing edges from the original edge set EingraphG. This can be done by sampling, i.i.d., a switch parameteri;jas in (Velickovic et al., 2019),which determines whether to modify edge (i;j)in the adjacency matrix. Assuming a BernoullidistributionB(p), wherepdenotes the probability of each edge being modified, we draw a randommatrix =fi;jgNNfromB(p),i.e.,B(p). We then acquire the perturbed adjacency matrixaseA=A; (1)whereis the exclusive OR (XOR) operation. This strategy produces a transformed graph throughthe topology transformation t,i.e.,eA=t(A). 
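To make the edge flipping of Eq. (1) and Eq. (2) concrete, here is a minimal NumPy sketch over a dense adjacency matrix; the function name and the dense representation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def perturb_topology(A, p, rng=None):
    """Sample a symmetric switch matrix Delta ~ Bernoulli(p) and flip the
    corresponding entries of the adjacency matrix, A~ = A xor Delta (Eq. (1)).
    Also returns dA = A~ - A (Eq. (2)), whose entries are -1 (removed edge),
    0 (unchanged), or +1 (added edge)."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    upper = np.triu(rng.random((n, n)) < p, k=1)  # i.i.d. draws, no self-loops
    delta = (upper | upper.T).astype(int)         # keep Delta symmetric
    A_t = np.bitwise_xor(A.astype(int), delta)
    return A_t, A_t - A.astype(int)
```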
Here, the edge perturbation probability of p= 0corresponds to a non-transformed adjacency matrix, which is a special case of an identity transfor-mation to A.The transformed adjacency matrix eAcan also be written as the sum of the original adjacency matrixAand a topology perturbation matrix A:eA=A+ A; (2)3Under review as a conference paper at ICLR 2021where A=fai;jgNNencodes the perturbation of edges, with ai;j2f 1;0;1g. As shownin Fig. 1, when ai;j= 0, the edge between node iand nodejkeeps unchanged ( i.e., black solidlines); when ai;j=1or1, it means removing ( i.e., orange dotted lines) or adding ( i.e., blue solidlines) the edge between node iand nodej, respectively.3.3 T HEFORMULATION OF TOPOTERDefinition 1 Given a pair of graph signal and adjacency matrix (X;A), and a pair of graph signaland transformed adjacency matrix (X;eA)by a topology transformation t(), a function E()istransformation equivariant if it satisfiesE(X;eA) =E(X;t(A)) =(t) [E(X;A)]; (3)where(t)[]is a homomorphism of transformation tin the representation space.Let us denote H=E(X;A);andeH=E(X;eA). We seek to learn an encoder E: (X;A)7!H; (X;eA)7!eHthat maps both the original and transformed sample to representations fH;eHgequivariant to the sampled transformation t, whose information can thus be inferred from the rep-resentations via a decoder D: (eH;H)7!dAas much as possible. From an information-theoreticperspective, this requires (H;A)should jointly contain all necessary information about eH.Then a natural choice to formalize the topology transformation equivariance is the mutual infor-mationI(H;A;eH)between (H;A)andeH. The larger the mutual information is, the moreknowledge about Acan be inferred from the representations fH;eHg. Hence, we propose to max-imize the mutual information to learn the topology transformation equivariant representations asfollows:maxI(H;A;eH); (4)wheredenotes the parameters of the auto-encoder network.Nevertheless, it is difficult to compute the mutual information directly. Instead, we derive thatmaximizing the mutual information can be relaxed to minimizing the cross entropy, as described inthe following theorem.Theorem 1 The maximization of the mutual information I(H;A;eH)can be relaxed to the min-imization of the cross entropy H(pkq)between the probability distributions p(A;eH;H)andq(dAjeH;H):minHp(A;eH;H)kq(dAjeH;H),Ep(A;eH;H)logq(dAjeH;H): (5)Proof By using the chain rule of mutual information, we haveI(H;A;eH) =I(A;eHjH) +I(H;eH)I(A;eHjH):Thus the mutual information I(A;eHjH)is the lower bound of the mutual informationI(H;A;eH)that attains its minimum value when I(H;eH) = 0 .Therefore, we relax the objective to maximizing the lower bound mutual information I(A;eHjH)between the transformed representation eHand the topology transformation A:I(A;eHjH) =H(AjH)H(AjeH;H);whereH()denotes the conditional entropy. Since AandHare independent, we haveH(AjH) =H(A). Hence, maximizing I(A;eHjH)becomesminH(AjeH;H): (6)According to the chain rule of conditional entropy, we haveH(AjeH;H) =H(A;eH;H)H(eH;H)H(A;eH;H);4Under review as a conference paper at ICLR 2021Encoder E⋅GCNNGCNNshared weightsX,AX,A෩H∈ Rே×ிH෩∈ Rே×ிDecoder D⋅Minus ΔH∈ Rே×ிConstruct Edge RepresentationLinear ΔAFigure 2: The architecture of the proposed TopoTER.where the conditional entropy H(AjeH;H)is upper bounded by the joint entropy H(A;eH;H).Thus, the minimization problem in Eq. 
(6) becomesminH(A;eH;H): (7)We next introduce a conditional probability distribution q(dAjeH;H)to approximate the intractableposterior ~q(AjeH;H)with an estimated dA. According to the definition of the Kullback-Leiblerdivergence, we haveH(A;eH;H) =H(p) =H(pkq)DKL(pkq)H(pkq);whereDKL(pkq)denotes the Kullback-Leibler divergence of pandqthat is non-negative, andH(pkq)is the cross entropy between pandq. Thus, Eq. (6) is converted to minimizing the crossentropy as the upper bound:minHp(A;eH;H)kq(dAjeH;H),Ep(A;eH;H)logq(dAjeH;H):Hence, we relax the maximization problem in Eq. (4) to the optimization in Eq. (5). Based on Theorem 1 , we train the decoder Dto learn the distribution q(dAjeH;H)so as to esti-mate the topology transformation dAfrom the encodedfeH;Hg, where the input pairs of originaland transformed graph representations feH;Hgas well as the ground truth target Acan be sam-pled tractably from the factorization of p(A;eH;H),p(A)p(H)p(eHjA;H). This allows usto minimize the cross entropy betweenp(A;eH;H)andq(dAjeH;H)as in (5) with the trainingtriplets (eH;H; A)drawn from the tractable factorization of p(A;eH;H). Hence, we formu-late the TopoTER as the joint optimization of the representation encoder Eand the transformationdecoderD.3.4 T HEALGORITHMWe design a graph-convolutional auto-encoder network for the TopoTER learning, as illustratedin Fig. 2. Given a graph signal Xassociated with a graph G=fV;E;Ag, the proposed unsuper-vised learning algorithm for the TopoTER consists of three steps: 1) topology transformation, whichsamples and perturbs some edges from Eto acquire a transformed adjacency matrix eA; 2) repre-sentation encoding, which extracts the feature representations of graph signals before and after thetopology transformation; 3) transformation decoding, which estimates the topology transformationparameters from the learned feature representations. We elaborate on the three steps as follows.Topology Transformation. We randomly sample a subset of edges from Efor topologyperturbation—adding or removing edges, which not only enables to characterize local graph struc-tures at various scales, but also reduces the number of edge transformation parameters to estimatefor computational efficiency. In practice, in each iteration of training, we sample allthe node pairswith connected edges S1, and randomly sample a subset of disconnected node pairs S0,i.e.,S0=(i;j)ai;j= 0;S1=(i;j)ai;j= 1; (8)wherejS0j=jS1j=M. Next, we randomly split S0andS1into two disjoint sets, respectively, i.e.,Si=nS(1)i;S(2)iS(1)i\S(2)i=?;S(1)i[S(2)i=Si;jS(1)ij=rjSijo;i2f0;1g; (9)5Under review as a conference paper at ICLR 2021whereris the edge perturbation rate . Then, for each node pair (i;j)inS(1)0andS(1)1, we flipthe corresponding entry in the original graph adjacency matrix. That is, if ai;j= 0, then we set~ai;j= 1; otherwise, we set ~ai;j= 0. For each node pair (i;j)inS(2)0andS(2)1, we keep the originalconnectivities unchanged, i.e.,~ai;j=ai;j.This leads to the transformed adjacency matrix eA, as well as the sampled transformation parametersby accessing Aat position (i;j)fromS0andS1. Also, we can category the sampled topologytransformation parameters into four types:1. add an edge to a disconnected node pair, i.e.,ft:ai;j= 07!~ai;j= 1;(i;j)2S(1)0g;2. delete the edge between a connected node pair, i.e.,ft:ai;j= 17!~ai;j= 0;(i;j)2S(1)1g;3. keep the disconnection between node pairs in S(2)0,i.e.,ft:ai;j= 07!~ai;j= 0;(i;j)2S(2)0g;4. 
keep the connection between node pairs in S(2)1,i.e.,ft:ai;j= 17!~ai;j= 1;(i;j)2S(2)1g.Thus, we cast the problem of estimating transformation parameters in Afrom (eH;H)as theclassification problem of the transformation parameter types. The percentage of these four types isr:r: (1r) : (1r).Representation Encoder. We train an encoder E: (X;A)7!E(X;A)to encode the featurerepresentations of each node in the graph. As demonstrated in Fig. 2, we leverage GCNNs withshared weights to extract feature representations of each node in the graph signal. Taking the GCN(Kipf & Welling, 2017) as an example, the graph convolution in the GCN is defined asH=E(X;A) =D12(A+I)D12XW; (10)where Dis the degree matrix of A+I,W2RCFis a learnable parameter matrix, and H=[h1;:::;hN]>2RNFdenotes the node-wise feature matrix with Foutput channels. Similarly, thenode feature of the transformed counterpart is as follows with the shared weights W.eH=E(X;eA) =eD12(eA+I)eD12XW=eD12(A+I)eD12XW +eD12AeD12XW:(11)We thus acquire the feature representations HandeHof graph signals before and after topologytransformations.Transformation Decoder. Comparing Eq. (10) and Eq. (11), the prominent difference betweeneHandHlies in the second term of Eq. (11) featuring A. This enables us to train a decoderD: (eH;H)7!dAto estimate the topology transformation from the joint representations beforeand after transformation. We first take the difference between the extracted feature representationsbefore and after transformations along the feature channel,H=eHH= [h1;:::;hN]>2RNF: (12)Thus, we can predict the topology transformation between node iand nodejthrough the node-wisefeature difference Hby constructing the edge representation asei;j=expf(hihj)(hihj)gkexpf(hihj)(hihj)gk12RF;8(i;j)2S0[S1; (13)wheredenotes the Hadamard product of two vectors to capture the feature representation, andkk 1is the`1-norm of a vector for normalization. The edge representation ei;jof nodeiandjisthen fed into several linear layers for the prediction of the topology transformation,byi;j= softmax (linear( ei;j));8(i;j)2S0[S1; (14)where softmax()is an activation function.According to Eq. 
(5), the entire auto-encoder network is trained by minimizing the cross entropyL= E(i;j)2S0[S13Xf=0y(f)i;jlogby(f)i;j; (15)wherefdenotes the transformation type ( f2f0;1;2;3g), andyis the ground-truth binary indicator(0or1) for each transformation parameter type.6Under review as a conference paper at ICLR 2021Table 1: Node classification accuracies (with standard deviation) in percentage on three datasets.X;A;Ydenote the input data, adjacency matrix and labels respectively.Method Training Data Cora Citeseer PubmedSemi-Supervised MethodsGCN (Kipf & Welling, 2017) X;A;Y 81:5 70 :3 79 :0MoNet (Monti et al., 2017) X;A;Y 81:70:5 - 78:80:3GAT (Veli ˇckovi ́c et al., 2018) X;A;Y 83:00:7 72:50:7 79:00:3SGC (Wu et al., 2019) X;A;Y 81:00:0 71:90:1 78:90:0GWNN (Xu et al., 2019a) X;A;Y 82:8 71 :7 79 :1MixHop (Abu-El-Haija et al., 2019) X;A;Y 81:90:4 71:40:8 80:80:6DFNet (Wijesinghe & Wang, 2019) X;A;Y 85:20:5 74:20:3 84:30:4Unsupervised MethodsRaw Features (Velickovic et al., 2019) X 47:90:4 49:30:2 69:10:3DeepWalk (Perozzi et al., 2014) A 67:2 43 :2 65 :3DeepWalk + Features (Velickovic et al., 2019) X;A 70:70:6 51:40:5 74:30:9GAE (Kipf & Welling, 2016) X;A 80:90:4 66:70:4 77:10:7VGAE (Kipf & Welling, 2016) X;A 80:00:2 64:10:2 76:90:1DGI (Velickovic et al., 2019) X;A 81:10:1 71:40:2 77:00:2GMI (Peng et al., 2020) X;A 82:20:2 71:40:5 78:50:1TopoTER X;A 83 :70:3 71:70:5 79:10:1Table 2: Model size comparison of DGI, GMI, and the proposed TopoTER.Model DGI GMI TopoTERNo. of Parameters 996;354 1;730;052 736;2604 E XPERIMENTS4.1 N ODE CLASSIFICATIONDatasets. We adopt three citation networks to evaluate our model: Cora, Citeseer, and Pubmed (Senet al., 2008), where nodes correspond to documents and edges represent citations. We follow thestandard train/test split in (Kipf & Welling, 2017) to conduct the experiments.Implementation Details. In this task, the auto-encoder network is trained via Adam optimizer, andthe learning rate is set to 104. We use the same early stopping strategy as DGI (Velickovic et al.,2019) on the observed training loss, with a patience of 20epochs. We deploy one Simple GraphConvolution (SGC) layer (Wu et al., 2019) as our encoder, and the order of the adjacency matrixis set to 2, while we will study the order of the adjacency matrix in Appendix A. The LeakyReLUactivation function with a negative slope of 0:1is employed after the SGC layer. Similar to DGI(Velickovic et al., 2019), we set the output channel F= 512 for Cora and Citeseer dataset, and 256for Pubmed dataset due to memory limitations. After the encoder, we use one linear layer to classifythe transformation types. We set the edge perturbation rate in Eq. (9) as r=f0:7;0:4;0:7gfor Cora,Citeseer, and Pubmed, respectively. The analysis of the edge perturbation rate will be presented inAppendix B.During the training procedure of the classifier, the SGC layer in the encoder is used to extract graphfeature representations with the weights frozen. After the SGC layer, we apply one linear layer tomap the features to the classification scores.Experimental Results. We compare the proposed method with five unsupervised methods, includ-ing one node embedding method DeepWalk, two graph auto-encoders GAE and VGAE (Kipf &Welling, 2016), and two contrastive learning methods DGI (Velickovic et al., 2019) and GMI (Penget al., 2020). Additionally, we report the results of Raw Features and DeepWalk+Features (Perozziet al., 2014) under the same settings. 
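The Implementation Details above describe the encoder as a single SGC layer of order 2 followed by a LeakyReLU with negative slope 0.1; purely as a reference, the following minimal NumPy sketch shows such an encoder (the function name, dense-matrix representation, and default hyperparameters are assumptions for illustration, not the authors' code).

```python
import numpy as np

def sgc_encoder(X, A, W, k=2, neg_slope=0.1):
    """Order-k Simple Graph Convolution: H = LeakyReLU(S^k X W), where S is
    the symmetrically normalized adjacency with self-loops."""
    n = A.shape[0]
    A_hat = A + np.eye(n)
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    S = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    H = X
    for _ in range(k):
        H = S @ H          # k rounds of feature propagation
    H = H @ W              # a single linear map, no per-hop weights
    return np.where(H > 0, H, neg_slope * H)   # LeakyReLU(0.1)
```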
For fair comparison, the results of all other unsupervised meth-ods are reproduced by using the same encoder architecture of the TopoTER except DeepWalk andRaw Features. We report the mean classification accuracy (with standard deviation) on the test nodesfor all methods after 50runs of training. As reported in Tab. 1, the TopoTER outperforms all othercompeting unsupervised methods on three datasets. Further, the proposed unsupervised method alsoachieves comparable performance with semi-supervised results. This significantly closes the gapbetween unsupervised approaches and the semi-supervised methods.Moreover, we compare the proposed TopoTER with two contrastive learning methods DGI and GMIin terms of the model complexity, as reported in Tab. 2. The number of parameters in our modelis less than that of DGI and even less than half of that of GMI, which further shows the TopoTERmodel is lightweight.7Under review as a conference paper at ICLR 2021Table 3: Graph classification accuracies (with standard deviation) in percentage on 6datasets. “>1Day” represents that the computation exceeds 24 hours. “OOM” is out of memory error.Dataset MUTAG PTC-MR RDT-B RDT-M5K IMDB-B IMDB-M(No. Graphs) 188 344 2000 4999 1000 1500(No. Classes) 2 2 2 5 2 3Graph Kernel MethodsRW 83:721:50 57:851:30 OOM OOM 50:680:26 34:650:19SP 85:222:43 58:242:44 64:110:14 39:550:22 55:600:22 37:990:30GK 81:662:11 57:261:41 77:340:18 41:010:17 65:870:98 43:890:38WL 80:723:00 57:970:49 68:820:41 46:060:21 72:303:44 46:950:46DGK 87:442:72 60:082:55 78:040:39 41:270:18 66:960:56 44:550:52MLG 87:941:61 63:261:48>1 Day >1 Day 66:550:25 41:170:03Supervised MethodsGCN 85:65:8 64:24:3 50:00:0 20:00:0 74:03:0 51:93:8GraphSAGE 85:17:6 63:97:7 - - 72:35:3 50:92:2GIN-0 89:45:6 64:67:0 92:42:5 57:51:5 75:15:1 52:32:8GIN- 89:06:0 63:78:2 92:22:3 57:01:7 74:35:1 52:13:6Unsupervised Methodsnode2vec 72:6310:20 58:588:00 - - - -sub2vec 61:0515:80 59:996:38 71:480:41 36:680:42 55:261:54 36:670:83graph2vec 83:159:25 60:176:86 75:781:03 47:860:26 71:100:54 50:440:87InfoGraph 89:011:13 61:651:43 82:501:42 53:461:03 73:030:87 49:690:53TopoTER 89:250:81 64:591:26 84:930:18 55:520:20 73:460:38 49:680:314.2 G RAPH CLASSIFICATIONDatasets. We conduct graph classification experiments on six well-known graph benchmarkdatasets (Yanardag & Vishwanathan, 2015): MUTAG, PTC, REDDIT-BINARY , REDDIT-MULTI-5K, IMDB-BINARY , and IMDB-MULTI.Implementation Details. In this task, the entire network is trained via Adam optimizer with a batchsize of 64, and the learning rate is set to 103. For the encoder architecture, we follow the sameencoder settings in the released code of InfoGraph (Sun et al., 2020a), i.e., three Graph IsomorphismNetwork (GIN) layers (Xu et al., 2019b) with batch normalization. We also use one linear layer toclassify the transformation types. We set the sampling rate r= 0:5for all datasets.During the evaluation stage, the entire encoder will be frozen to extract node-level feature repre-sentations, which will go through a global add pooling layer to acquire global features. We thenuse LIBSVM to classify these global features to classification scores. We adopt the same procedureof previous works (Sun et al., 2020a) to make a fair comparison and use 10-fold cross validationaccuracy to report the classification performance, and the experiments are repeated five times.Experimental Results. 
We take six graph kernel approaches for comparison: Random Walk (RW)(G ̈artner et al., 2003), Shortest Path Kernel (SP) (Borgwardt & Kriegel, 2005), Graphlet Kernel(GK) (Shervashidze et al., 2009), Weisfeiler-Lehman Sub-tree Kernel (WL) (Shervashidze et al.,2011), Deep Graph Kernels (DGK) (Yanardag & Vishwanathan, 2015), and Multi-Scale LaplacianKernel (MLG) (Kondor & Pan, 2016). Aside from graph kernel methods, we also compare withthree unsupervised graph-level representation learning methods: node2vec (Grover & Leskovec,2016), sub2vec (Adhikari et al., 2018), and graph2vec (Narayanan et al., 2017), and one contrastivelearning method: InfoGraph (Sun et al., 2020a). The experimental results of unsupervised graphclassification are preseted in Tab. 3. The proposed TopoTER outperforms all unsupervised baselinemethods on the first five datasets, and achieves comparable results on the other dataset. Also, theproposed approach reaches the performance of supervised methods at times, thus validating theeffectiveness of the TopoTER model.5 C ONCLUSIONWe propose Topology Transformation Equivariant Representation (TopoTER) for learning unsu-pervised representations on graph data. By maximizing the mutual information between topologytransformations and feature representations before and after transformations, the TopoTER enforcesthe encoder to learn intrinsic graph feature representations that contain sufficient information aboutstructures under applied topology transformations. We apply the TopoTER model to node classifi-cation and graph classification tasks, and results demonstrate that the TopoTER outperforms state-of-the-art unsupervised approaches and reaches the performance of supervised methods at times.8Under review as a conference paper at ICLR 2021<|im_end|> <|im_start|>assistant ### Review Title A method for for self-training GNN ### Review Text The paper propose an unsupervised method for self-training of graph-neural-networks (GNNs). The authors provide information-theoretic justification to their method using maximization of the lower bound of the mutual information on their objective. Their approach is based on maximizing the mutual information between a perturbed graph topology and its node representation. Strong points: - very good results (some of the results are comparable to supervised method and the improvement achieved on the other unsupervised methods is significant) - simple approach with theoretical justification - paper is nicely written and easy to follow ### Review Rating 7: Good paper, accept ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
BJlBNZDaP4
icaps-conference.org/ICAPS/2019/Workshop/HSDIP
2019
A∗ Search and Bound-Sensitive Heuristics for Oversubscription Planning
["Anonymous"]
Oversubscription planning (OSP) is the problem of finding plans that maximize the utility value of their end state while staying within a specified cost bound. Recently, it has been shown that OSP problems can be reformulated as classical planning problems with multiple cost functions but no utilities. Here we take advantage of this reformulation to show that OSP problems can be solved optimally using the A* search algorithm, in contrast to previous approaches that have used variations on branch-and-bound search. This allows many powerful techniques developed for classical planning to be applied to OSP problems. We also introduce novel bound-sensitive heuristics, which are able to reason about the primary cost of a solution while taking into account secondary cost functions and bounds, to provide superior guidance compared to heuristics that do not take these bounds into account. We implement two such bound-sensitive variants of existing classical planning heuristics, and show experimentally that the resulting search is significantly more informed than comparable heuristics that do not consider bounds.
["oversubscription planning", "heuristic search", "A*", "bound-sensitive heuristics"]
A* Search and Bound-Sensitive Heuristics for Oversubscription Planning
Michael Katz (IBM Research, Yorktown Heights, NY, USA) and Emil Keyder (Invitae Corporation, San Francisco, CA, USA)
michael.katz1@ibm.com, emilkeyder@gmail.com

Abstract
Oversubscription planning (OSP) is the problem of finding plans that maximize the utility value of their end state while staying within a specified cost bound. Recently, it has been shown that OSP problems can be reformulated as classical planning problems with multiple cost functions but no utilities. Here we take advantage of this reformulation to show that OSP problems can be solved optimally using the A* search algorithm, in contrast to previous approaches that have used variations on branch-and-bound search. This allows many powerful techniques developed for classical planning to be applied to OSP problems. We also introduce novel bound-sensitive heuristics, which are able to reason about the primary cost of a solution while taking into account secondary cost functions and bounds, to provide superior guidance compared to heuristics that do not take these bounds into account. We implement two such bound-sensitive variants of existing classical planning heuristics, and show experimentally that the resulting search is significantly more informed than comparable heuristics that do not consider bounds.

Introduction
Oversubscription planning (OSP) problems are a family of deterministic planning problems. In contrast to classical planning, where a set of hard goals is specified and the planner searches for a minimal (or low) cost plan that reaches a state in which all of the goals are made true, oversubscription planning specifies a utility function that describes the benefit associated with achieving different possible states, and asks for a plan whose cost does not exceed a set bound and achieves as high a utility as possible [Smith, 2004].
While domain-independent classical planning approaches have increasingly standardized around variations on A* search and heuristics that are automatically extracted from the problem description [Bonet and Geffner, 2001; Keyder and Geffner, 2008; Haslum and Geffner, 2000; Edelkamp, 2001; Helmert et al., 2014; Helmert and Domshlak, 2009], OSP has generally been solved with branch-and-bound algorithms and heuristics that compute an admissible (in this context, non-underestimating) estimate of the utility achievable from a state. In order to obtain these estimates, recent approaches often adapt classical planning techniques such as landmarks [Mirkis and Domshlak, 2014; Muller and Karpas, 2018] or abstractions [Mirkis and Domshlak, 2013], and enhance them with reasoning that is specific to the context of OSP, such as the knowledge that there always exists an optimal plan that ends with a utility-increasing action, or that the cost bound for the problem can be reduced under specific conditions to aid the search algorithm in detecting that improving over the currently achieved utility is impossible.
In contrast to these approaches, our aim here is to show that general methods from classical planning, including A* search, can be used in the OSP setting nearly as is. This previously turned out to be the case for the related net-benefit planning problem, where classical planners solving a compilation were shown to outperform planners designed specifically for that task [Keyder and Geffner, 2009]. Here, we use a similar, recently proposed compilation that converts OSP problems into classical planning problems with multiple cost functions but no utilities [Katz et al., 2019a].
In addition, we demonstrate that existing classical planning heuristics can be used to guide the search for optimal plans. While these heuristics are typically uninformative out-of-the-box, they require only minor modifications (and no specific reasoning about utilities) to render them sensitive to the secondary cost functions and bounds that are introduced by the compilation. Our experiments with A* and the newly introduced estimators that we refer to as bound-sensitive heuristics show that they lead to informed searches that are competitive with, and in some cases outperform, the state of the art for optimal OSP.
One related area of research in the classical setting is that of bounded-cost planning, where the planner looks for any plan with (primary) cost below a given bound, similar to the treatment of the secondary cost in the OSP setting. Approaches proposed for this setting include dedicated search algorithms [Stern et al., 2011] and heuristics that take into account accumulated cost and plan length at the current search node [Thayer and Ruml, 2011; Haslum, 2013; Dobson and Haslum, 2017]. These approaches work by preferentially expanding nodes in areas of the search space that are likely to have a solution under the cost bound. Optimal OSP, however, requires expanding all nodes that potentially lie on a path to a state with maximal utility. Furthermore, it cannot be assumed that solutions necessarily achieve all soft goals. Heuristics that are able to take into account bounds on secondary cost functions have also been investigated in the stochastic shortest path setting, where they were used as additional constraints in an LP-based heuristic to consider limitations on fuel or time resources [Trevizan et al., 2017].
We now briefly review the various flavors of planning that we consider in this work, and introduce the formalisms by which we describe them.

Background
We describe planning problems in terms of extensions to the SAS+ formalism [Bäckström and Nebel, 1995]. A classical planning task Π = ⟨V, O, s_I, G, C⟩ is given by a set of variables V, with each variable v ∈ V having a finite domain dom(v); a set of actions O, with each action o ∈ O described by a pair ⟨pre(o), e(o)⟩ of partial assignments to V, called the precondition and effect of o, respectively; an initial state s_I and a goal condition G, which are full and partial assignments to V, respectively; and a cost function C : O → R0+. A state s is given by a full assignment to V. An action o is said to be applicable in a state s if pre(o) ⊆ s, and s⟦o⟧ denotes the result of applying o in s, where the value of each v ∈ V is given by e(o)[v] if defined, and by s[v] otherwise. An operator sequence π = ⟨o1, ..., ok⟩ is applicable in s if there exist states s0, ..., sk such that (i) s0 = s, and (ii) for each 1 ≤ i ≤ k, oi is applicable in s_{i-1} and si = s_{i-1}⟦oi⟧. We refer to the state sk by s⟦π⟧ and call it the end state of π. An operator sequence π is a plan for a classical planning problem if it is applicable in s_I and G ⊆ s_I⟦π⟧. The cost of a plan π is given by C(π) = Σ_{o ∈ π} C(o); the goal of optimal classical planning is to find a plan with minimal cost. We refer to a pair of variable v and its value ϑ ∈ dom(v) as a fact and denote it by ⟨v, ϑ⟩. We sometimes abuse notation and treat partial assignments as sets of facts.
An oversubscription planning (OSP) problem is given by Π_OSP = ⟨V, O, s_I, C, u, B⟩, where V, O, s_I, and C are as in classical planning, u : ⟨v, ϑ⟩ → R0+ is a non-negative valued utility function over variable assignments (facts), and B is a cost bound for the plan, imposing the additional requirement that only plans π such that C(π) ≤ B are valid. The utility of a plan π is given by Σ_{⟨v,ϑ⟩ ∈ s_I⟦π⟧} u(⟨v, ϑ⟩); the objective of OSP problems is to find valid plans with maximal utility. This semantics is illustrated in the sketch below.
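The following minimal Python sketch checks validity of a plan against the bound B and computes its end-state utility; the dictionary-based task encoding is an illustrative assumption made here for concreteness.

```python
def osp_plan_utility(plan, O, s_I, C, u, B):
    """Return the end-state utility of `plan` if it is applicable in s_I
    and C(plan) <= B, and None otherwise. States and partial assignments
    are dicts var -> value; O maps action name -> (pre, eff)."""
    state, cost = dict(s_I), 0.0
    for name in plan:
        pre, eff = O[name]
        if any(state.get(v) != d for v, d in pre.items()):
            return None                    # action not applicable
        state.update(eff)
        cost += C[name]
    if cost > B:
        return None                        # plan exceeds the cost bound
    return sum(u.get((v, d), 0.0) for v, d in state.items())
```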
A multiple cost function (MCF) problem is given by Π_MCF = ⟨V, O, s_I, G, C0, C⟩, where V, O, s_I, and G are as in classical planning, C0 is the primary cost function, and C = {⟨Ci, Bi⟩ | 1 ≤ i ≤ n} is a set of secondary cost functions Ci : O → R0+ and bounds Bi, both non-negative. Valid plans for MCF planning problems fulfill the condition Ci(π) ≤ Bi for all secondary cost functions, and optimal plans for MCF planning have minimal primary cost C0(π). In this paper we only consider MCF problems with a single secondary cost function, i.e., n = 1.

Reformulating OSP Problems
It has recently been shown that an OSP problem can be compiled into an MCF planning problem with a single secondary cost function that corresponds to the cost function C of the original problem, and is constrained to not exceed the specified bound B [Katz et al., 2019a]. The primary cost function for the problem, or the cost function to be optimized, results from compiling the utilities from the original problem into costs. Two different compilations have been proposed for this task: (i) the soft goals compilation, which adds, for each variable v that has some value ϑ ∈ dom(v) for which a utility is specified, a hard goal, along with actions that are able to achieve this hard goal at different costs, and (ii) the state delta compilation, which encodes in the cost of each action the change in state utility that results from applying it. Here we consider only (i), as (ii) introduces negative action costs that A* and existing classical planning heuristics are not designed to handle. Note, however, that our methods do not depend on the specific choice of compilation, as long as it removes utilities from the problem and does not introduce negative action costs.
The soft goals compilation was originally introduced in the context of net-benefit planning, which is similar to oversubscription planning but does not specify a bound on plan cost, having instead as an objective the minimization of the difference between the achieved utility and the cost of the plan [Keyder and Geffner, 2009]. It can be applied in the OSP setting to result in an MCF planning task as follows:

Definition 1 Let Π_OSP = ⟨V, O, s_I, C, u, B⟩ be an oversubscription planning task. The soft goals reformulation Π_sg_MCF = ⟨V′, O′, s_I, G′, C0, {⟨C1, B⟩}⟩ of Π_OSP is an MCF planning task, where
V′ = {v′ | v ∈ V}, with dom(v′) = dom(v) ∪ {g_v} if umax(v) > 0, and dom(v′) = dom(v) otherwise;
O′ = O ∪ {o_{v,ϑ} = ⟨{⟨v, ϑ⟩}, {⟨v, g_v⟩}⟩ | ϑ ∈ dom(v), v ∈ V, umax(v) > 0};
G′ = {⟨v, g_v⟩ | v ∈ V, umax(v) > 0};
C0(o) = 0 if o ∈ O, and C0(o) = umax(v) − u(⟨v, ϑ⟩) if o = o_{v,ϑ};
C1(o) = C(o) if o ∈ O, and C1(o) = 0 otherwise;
with umax(v) := max_{ϑ ∈ dom(v)} u(⟨v, ϑ⟩) denoting the maximum utility over the values of the variable v.

In the reformulated problem, only the o_{v,ϑ} actions for which ϑ is not the maximum utility value of v have positive primary costs. These actions make explicit that a particular utility will not be achieved, and that the plan has instead chosen to achieve the associated g_v by accepting the associated cost penalty. The primary cost of a plan π for the reformulated problem is then given by Σ_{v ∈ V} umax(v) − Σ_{f ∈ s⟦π⟧} u(f).
Note that this compilation assumes that utilities are defined for single facts. The more general case, in which utilities are instead defined for logical formulae φ, can be handled as in the soft goals compilation by introducing a new variable v_φ, and two actions that achieve its goal value with cost 0 and precondition φ, and cost u(φ) and precondition ∅, respectively [Keyder and Geffner, 2009]. The single-fact construction is sketched in code below.
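As a concrete reference for Definition 1, here is a minimal sketch of the reformulation over the same dictionary-based encoding used above; the names collect_* and g_v, and the returned tuple layout, are illustrative assumptions.

```python
def soft_goals_reformulation(V, O, s_I, C, u, B):
    """Definition 1: compile an OSP task into an MCF task.
    V: dict var -> list of values; O: dict action -> (pre, eff) dicts;
    C: dict action -> cost; u: dict (var, val) -> utility; B: bound.
    Primary cost C0 encodes forgone utility; secondary cost C1 is the
    original cost function, bounded by B."""
    u_max = {v: max((u.get((v, d), 0.0) for d in V[v]), default=0.0)
             for v in V}
    soft_vars = [v for v in V if u_max[v] > 0]

    V2 = {v: list(vals) + ([f"g_{v}"] if v in soft_vars else [])
          for v, vals in V.items()}
    G2 = {v: f"g_{v}" for v in soft_vars}

    O2, C0, C1 = dict(O), {}, {}
    for o in O:
        C0[o] = 0.0          # original actions carry no primary cost
        C1[o] = C[o]         # ... and keep their cost as secondary cost
    for v in soft_vars:
        for d in V[v]:
            name = f"collect_{v}_{d}"
            O2[name] = ({v: d}, {v: f"g_{v}"})        # pre / eff
            C0[name] = u_max[v] - u.get((v, d), 0.0)  # forgone utility
            C1[name] = 0.0
    return V2, O2, s_I, G2, C0, C1, B
```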
Since we consider only single-fact utilities here, we do not discuss this more general case in detail.
While this compilation is sound as stated, two further optimizations can be made to reduce the state space of the resulting compiled problem. First, an arbitrary ordering can be introduced over V to ensure that the g_v values are achieved in a fixed sequence, to avoid searching over different orderings. Second, a new precondition fact that is deleted by the o_{v,ϑ} actions can be added to the original domain actions, to ensure that o_{v,ϑ} actions happen only at the end of the plan and are not interleaved with the original domain actions. We make use of both of these optimizations here.

A* for MCF Planning Problems
The A* algorithm extends blind search techniques such as Dijkstra's algorithm by allowing the incorporation of admissible (non-overestimating) heuristics [Hart et al., 1968]. In each iteration of its main loop, A* picks a node n to expand with minimal f(n) = g(n) + h(n) value, where g(n) is the cost of the path to n, and h(n) is an admissible estimate of the remaining cost to the goal. An optimal solution to the problem is found when a node n with minimal f(n) value is a goal node.
To adapt A* to the MCF planning setting, we store at each node n a set of accumulated path costs g_i(n) resulting from each of the secondary cost functions C1, ..., Cn, in addition to the accumulated primary cost g_0(n). When a node is taken from the priority queue and expanded, generated successor nodes for which any g_i(n) > B_i can be immediately pruned, as all C_i are assumed to be non-negative, and such nodes cannot constitute valid prefixes for solution paths.
One key optimization used in modern A* implementations in the classical setting is duplicate detection, which allows states that are rediscovered during search to be discarded, if the new g value exceeds the cost of the path to the state that was previously found, or to be updated with a new parent, if the cost of the new path is less. In the MCF setting, care must be taken to ensure that newly discovered nodes are discarded (or replace existing nodes) only when they are dominated by (or dominate) the existing node in all cost dimensions. While the only necessary property of the open list from a correctness perspective is that it order nodes by increasing primary f(n) value, the choice of a secondary ordering heuristic plays a role here: an ordering that causes a dominating node to be generated first and enables subsequently generated nodes to be immediately discarded as dominated results in superior performance. In our implementation of the algorithm, we therefore use an open list that orders nodes by increasing g_i(n) value when their primary f(n) values are the same. This search loop is sketched below.
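Below is a minimal, self-contained sketch of this search loop for a single secondary cost function (n = 1), the case considered here; the task interface (successors, is_goal, and a bound-sensitive heuristic h) is an assumed abstraction, not the Fast Downward implementation used in the paper.

```python
import heapq

def astar_mcf(s0, successors, is_goal, h, bound):
    """A* over states with primary cost g0 (to minimize) and one secondary
    cost g1 constrained by g1 <= bound. successors(s) yields (s', c0, c1);
    h(s, b) is an admissible (bound-sensitive) estimate of the remaining
    primary cost with residual budget b."""
    open_list = [(h(s0, bound), 0.0, 0.0, 0, s0)]  # (f, g1, g0, tie, state)
    best = {}      # state -> list of non-dominated (g0, g1) pairs
    tie = 0
    while open_list:
        f, g1, g0, _, s = heapq.heappop(open_list)
        if is_goal(s):
            return g0, s                     # minimal primary cost
        for s2, c0, c1 in successors(s):
            n0, n1 = g0 + c0, g1 + c1
            if n1 > bound:                   # secondary bound pruning
                continue
            # duplicate detection with dominance in both cost dimensions
            pairs = best.setdefault(s2, [])
            if any(p0 <= n0 and p1 <= n1 for p0, p1 in pairs):
                continue                     # dominated: discard
            pairs[:] = [(p0, p1) for p0, p1 in pairs
                        if not (n0 <= p0 and n1 <= p1)]
            pairs.append((n0, n1))
            tie += 1
            heapq.heappush(open_list,
                           (n0 + h(s2, bound - n1), n1, n0, tie, s2))
    return None
```

Note that the open list breaks ties on the primary f value by increasing g1, matching the secondary ordering discussed above.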
[Figure 1: a chain of three locations l0, l1, l2, with actions move(l0, l1) and move(l1, l2) of cost 1 each, and utilities u(visited(l1)) = 10 and u(visited(l2)) = 10.]
Figure 1: An OSP problem based on the VISITALL domain.

Bound-Sensitive Heuristics
While any admissible heuristic can be used to guide search in MCF planning, classical planning heuristics that ignore bounds entirely are typically extremely uninformative. Consider the problem shown in Figure 1: the agent is initially at l0, and can obtain a utility of 10 by visiting each of the locations l1 and l2. The costs of the actions move(l0, l1) and move(l1, l2) are both 1. In the compiled MCF version of this problem, an optimal but naive heuristic that ignores the bound will give an estimate for the primary cost of 0, as both visited(l1) and visited(l2) can be made true, and the associated 0-primary-cost o_visited(l) actions applied to reach the newly introduced hard goals corresponding to each utility. If, however, B = 1, the optimal C0 cost at l0 is 10, since l2 cannot be reached within the bound B and the agent must use the o_not-visited(l2) action to achieve the associated hard goal with a cost of 10. Similarly, if B = 0, the C0 cost of the optimal plan is 20, since the value of C1 for all available actions exceeds the bound B. In practice, it turns out that the OSP versions of many classical planning problems have similar behavior: their state spaces are strongly connected, so any variable assignment can be achieved from any state, and classical planning heuristics that ignore bounds are no more informed than blind search.
In order to obtain estimates that take secondary cost bounds into account and can guide heuristic search towards feasible solutions, we therefore introduce bound-sensitive heuristics. In the following, we use b to denote a budget vector of non-negative reals that indicates the unused component of each of the secondary cost bounds B_i at a given search node.

Definition 2 (Optimal bound-sensitive heuristic) Given an MCF planning problem Π_MCF = ⟨V, O, s_I, G, C0, C⟩, the optimal bound-sensitive heuristic h*(s, b) for a state s and budget vector b is given by the minimal primary cost C0(π) of a plan π for s such that C_i(π) ≤ b_i for i = 1, ..., n.

By analogy with standard admissible heuristics, an admissible bound-sensitive heuristic is a non-overestimating bound-sensitive heuristic:

Definition 3 (Admissible bound-sensitive heuristic) Given an MCF planning problem Π_MCF = ⟨V, O, s_I, G, C0, C⟩, an admissible bound-sensitive heuristic h(s, b) for a state s and budget vector b is a heuristic h such that h(s, b) ≤ h*(s, b) for all s, b.

Any classical planning heuristic that completely ignores C_i and B_i can be thought of as an admissible bound-sensitive heuristic that assumes b = ∞. As the value of b decreases, the value of h*(s, b) can only increase. In general, it is useful to keep in mind the following property:

Theorem 1 Given a state s and budget vectors b, b′ such that b ≤ b′ (where ≤ is interpreted as a pairwise comparison), h*(s, b) ≥ h*(s, b′).

Proof sketch: This follows from the fact that any plan π for s such that C_i(π) ≤ b_i also has the property that C_i(π) ≤ b′_i for i = 1, ..., n, since b ≤ b′, yet the opposite is not the case.

Theorem 1 applied to MCF planning problems obtained as the soft goals compilations of OSP problems states that for any s, decreasing b increases h*(s, b) and decreases the achievable utility, since the primary cost here indicates the utility that the plan must declare unachievable through o_{v,ϑ} actions with C0(o_{v,ϑ}) ≥ 0.

Bound-Sensitive hmax
The admissible classical heuristic hmax estimates the cost of a set of facts F as the cost of the most expensive fact f ∈ F, and applies this approximation recursively to action preconditions in order to obtain the cost of the goal [Bonet and Geffner, 2001]:

hmax_C(F, s) = max_{f ∈ F} hmax_C(f, s)
hmax_C(f, s) = 0 if f ∈ s, and min_{o ∈ achievers(f, s)} hmax_C(o, s) otherwise
hmax_C(o, s) = C(o) + hmax_C(pre(o), s)

where hmax_C denotes the value of hmax computed with a cost function C, and achievers(f, s) denotes the set of actions o for which f ∈ e(o). Note that the hmax cost of a fact f that is not present in s is computed by choosing an action o from this set that achieves it with minimum possible cost. Given a set of secondary cost functions and bounds C = {⟨C1, B1⟩, ..., ⟨Cn, Bn⟩}, a bound-sensitive version of hmax can easily be obtained by replacing the set of achievers used to compute hmax_{C0} with

achievers(f, s)_{C0} = {o | f ∈ e(o) ∧ hmax_{Ci}(o, s) ≤ B_i for i = 1, ..., n},

where actions o for which any estimate hmax_{Ci}(o, s) exceeds B_i are not considered. A sketch of this computation follows.
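A minimal sketch of this construction, assuming a set-based STRIPS-like encoding with facts as hashable tokens; the fixpoint computation and the data layout are illustrative choices rather than the paper's implementation.

```python
INF = float("inf")

def hmax_table(state, actions, cost, allowed=None):
    """Fixpoint computation of hmax fact costs from `state`.
    actions: list of (pre, eff) with pre/eff as frozensets of facts;
    cost[i] is the cost of action i; `allowed` optionally restricts
    which actions may serve as achievers."""
    h = {f: 0.0 for f in state}
    changed = True
    while changed:
        changed = False
        for i, (pre, eff) in enumerate(actions):
            if allowed is not None and i not in allowed:
                continue
            c = max((h.get(f, INF) for f in pre), default=0.0)
            if c == INF:
                continue
            for f in eff:
                if cost[i] + c < h.get(f, INF):
                    h[f] = cost[i] + c
                    changed = True
    return h

def bound_sensitive_hmax(state, goal, actions, c0, secondary):
    """Bound-sensitive hmax: drop from the achiever sets every action o
    whose hmax_{Ci}(o, s) = Ci(o) + hmax_{Ci}(pre(o), s) exceeds Bi.
    secondary: list of (ci_costs, Bi) pairs."""
    allowed = set(range(len(actions)))
    for ci, Bi in secondary:
        hi = hmax_table(state, actions, ci)      # unfiltered hmax_{Ci}
        for i, (pre, _) in enumerate(actions):
            hp = max((hi.get(f, INF) for f in pre), default=0.0)
            if ci[i] + hp > Bi:
                allowed.discard(i)
    h0 = hmax_table(state, actions, c0, allowed)
    return max((h0.get(f, INF) for f in goal), default=0.0)
```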
Note that due to the admissibility of hmax, this restriction of the set of achievers is sound but not complete: it is guaranteed that any action removed from the set of achievers cannot be used in a valid plan, but there may be additional actions that cannot be achievers yet are not pruned by the heuristic. In general, any admissible estimate of hmax_{Ci}(o, s) could be used to compute achievers(f, s)_{C0}, but we have chosen hmax here for simplicity.

Theorem 2 Bound-sensitive hmax_{C0} is an admissible bound-sensitive heuristic.

Proof sketch: This follows from the admissibility of the heuristic used to compute achievers(f, s)_{C0}.

Bound-Sensitive Merge-and-Shrink
Merge-and-shrink heuristics are a family of abstraction heuristics that incrementally build a representation of the full state space of a problem [Helmert et al., 2014]. The construction process begins with the set of transition systems induced over each state variable; at each step, two transition systems are selected to be merged and replaced with their synchronized product. Since the transition systems need to be represented explicitly in memory, before the merge a shrinking step is performed on the two selected transition systems to enforce a user-specified threshold on the size of the synchronized product. This step is performed by abstracting multiple states in the current representation into a single state (and thereby losing optimality). The final output of the algorithm consists of a single abstract transition system in which multiple states and actions from the original task are mapped to a single state or transition, respectively. hMS(s) is then given by the cost of a shortest path from the abstract state representing s to the closest abstract goal state in the final transition system. This estimate is admissible by definition.
To adapt merge-and-shrink to the MCF setting, we maintain for each transition in the abstract state space the minimum C_i cost for i = 1, ..., n among all of the transitions from the original task represented by that transition. The distance C_i between any two abstract states s, s′ then represents a non-overestimate of the secondary cost of reaching s′ from s. A bound-sensitive heuristic value for a state s can be computed as the minimum C0 cost of a path from s to an abstract goal state s_g whose C_i cost in the abstract state space does not exceed B_i, for any i. The C0 cost of such a path can be computed with a modified version of Dijkstra's algorithm that stores secondary cost information for each node and discards nodes for which C_i > B_i for any i.

Theorem 3 Bound-sensitive hMS is an admissible bound-sensitive heuristic.

Proof sketch: This follows from the fact that the secondary costs used in the abstract state space are the minimums of the secondary costs C_i of the represented transitions in the original problem, and from the proof of admissibility of standard hMS.

While the msb heuristic (bound-sensitive hMS) can be implemented by running Dijkstra's algorithm in the abstract state space for each heuristic computation, an important optimization when a single secondary cost function is present (which is the case in the compiled OSP problems that we consider) is to run Dijkstra only once during preprocessing, and compute the primary cost in the presence of different bounds on the secondary cost. This information can then be stored as a sequence of pairs ⟨⟨b0, c0⟩, ..., ⟨bn, cn⟩⟩, where b0, ..., bn is strictly increasing and c0, ..., cn is strictly decreasing (recall Theorem 1). hMS(s, b) is then given by the first c_i such that b_i ≤ b, i.e., the entry with the largest budget not exceeding b. A sketch of this preprocessing and lookup follows.
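A minimal sketch of the one-time backward sweep and the per-lookup scan, assuming a single secondary cost function; the label-based bi-objective Dijkstra and the bisect-based lookup are illustrative choices.

```python
import bisect
import heapq

def pareto_sweep(states, goal_states, transitions):
    """One backward bi-objective Dijkstra over the abstract transition
    system. transitions: list of (src, dst, c0, c1), where c1 is the
    minimum secondary cost among the represented original transitions.
    Returns, per abstract state, pairs (b_i, c_i): minimal primary cost
    c_i to a goal using secondary cost at most b_i, with b increasing
    and c decreasing (cf. Theorem 1)."""
    back = {s: [] for s in states}
    for src, dst, c0, c1 in transitions:
        back[dst].append((src, c0, c1))
    pairs = {s: [] for s in states}
    heap = [(0.0, 0.0, g) for g in goal_states]   # labels (c1, c0, state)
    while heap:
        c1, c0, s = heapq.heappop(heap)
        # labels pop in order of increasing c1: keep only those that
        # strictly improve the primary cost (non-dominated labels)
        if pairs[s] and pairs[s][-1][1] <= c0:
            continue
        pairs[s].append((c1, c0))
        for src, a0, a1 in back[s]:
            heapq.heappush(heap, (c1 + a1, c0 + a0, src))
    return pairs

def ms_lookup(pairs, budget):
    """h_MS(s, b): primary cost of the entry with the largest b_i <= b."""
    i = bisect.bisect_right([b for b, _ in pairs], budget) - 1
    return float("inf") if i < 0 else pairs[i][1]

# With pairs ((0, 20), (1, 10), (2, 0)), as in the Figure 1 example:
assert ms_lookup([(0, 20), (1, 10), (2, 0)], 0) == 20
assert ms_lookup([(0, 20), (1, 10), (2, 0)], 1) == 10
```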
-100storage 2000 -1 -1 01700 -1 -1 015000001400000tetris14 17000 -15 -15 140 -3 -4 -13 -12 11 -1 -3 -3 -11 -9 90 -4 -4 -8 -7tidybot11 20000 -19 -19 200 -1 -3 -19 -19 18 -1 -4 -6 -17 -17 130 -6 -8 -13 -12tidybot14 20000 -20 -20 180 -2 -5 -18 -18 14 -1 -6 -10 -14 -14 60 -6 -6 -6 -6tpp 9000 -1 0 7000 -10 600000 600000transport08 170+1 -2 -1 01500 -1 -2 012 +1 +1 -1+1 +1 11000 -10transport11 150+1 -1 -2 -1 11000 -2 -1 8+1 +1 -2+1 +1 6000+10transport14 13 +10 -1 00 9000 -30 900 -3 -2 0 70 -1 -2 -1 0trucks 13 -10 -1 -1 -1 800000 600000 500+100visitall11 160+1 -10012 -10 -1 00 900000 900000visitall14 1000 -1 -1 0 600000 400000 3000+10woodwork08 2500 -3 -6 -11 150 -1 -3 -7 -4 100+1 -10 -1 70+2 +2 +2 0woodwork11 180 -1 -2 -3 -5 100 -1 -3 -4 -4 50+1 -10 -2 20 +2 +2 +3-1zenotravel 130000010000+20 80 +1 0+20 800000Sum all 1190 -8 -20 -92 -139 -143 897 -5 -16 -85 -82 -84 748 -5 -22 -66 -22 -53 651 +7 -20 -39 +13-26Table 1: The coverage results as diff from the baseline BnB, for four domain suites defined by the 25%, 50%, 75%, and 100% ofbest known solution cost for the classical planning task as an OSP task cost bound. bl stands for blind, maxband max for hmax,bound-sensitive and regular variants, msband ms for merge-and-shrink , bound-sensitive and regular variants, respectively.10−110010110210310410510610710810−1100101102103104105106107108108108blindmaxbexpansions-until-last-jump10025507510−110010110210310410510610710810−1100101102103104105106107108108108blindmsbexpansions-until-last-jump100255075(a) (b)10−110010110210310410510610710810−1100101102103104105106107108108108maxmaxbexpansions-until-last-jump10025507510−110010110210310410510610710810−1100101102103104105106107108108108msmsbexpansions-until-last-jump100255075(c) (d)Figure 2: Expansions up to the last layer, Awith blind heuristic vs. (a) bound-sensitive hmaxand (b) bound-sensitivemerge-and-shrink ;Awith bound-sensitive vs. regular heuristic for (c) hmaxand (c) merge-and-shrink .available OSP benchmarks [Katz et al. , 2019b ]. The set ofbenchmarks is taken from the International Planning Compe-titions of recent years, in which goal facts are replaced withutilities, and the bound set at 25%, 50%, 75%, or 100% of thecost of the optimal or best known solution to each problem.The baseline for our comparison is a blind branch-and-boundsearch, currently the best available configuration for oversub-scription planning that we know of [Katz et al. , 2019a ]. Wecompare this baseline to our proposed approach of Asearchon the MCF compilation of the OSP task. Since the compila-tion introduces intermediate states at which some but not allof theov;#have been applied, we use a further optimizationthat avoids generating these nodes and applies all of the ov;#actions in a single step, reducing the state space to that of theoriginal OSP task. We experiment with blind Asearch, andAusing classical hmaxandhMS, as well as the two heuris-tics’ bound-sensitive variants introduced here. For hMS, weused exact bisimulation with an abstract state space thresholdof50000 states and exact generalized label reduction [Siev-erset al. , 2014 ]. The experiments were performed on Intel(R)Xeon(R) CPU E7-8837 @2.67GHz machines, with time andmemory limits of 30min and 3.5GB, respectively. Per-domainand overall coverage, as well as per-task node expansions forthe various configurations and problem suites are shown inTable 1 and Figure 2, respectively. 
We now report some observations from our results.

- Blind branch-and-bound search usually slightly outperforms blind A* in terms of coverage, except for the 100% suite. The difference between the two may come down to the fact that A* must do extra work ordering the priority queue, while the variant of branch-and-bound search that we consider uses no ordering heuristic and can use a simple stack as its search queue. Alternatively, it may be due to small differences in implementation.

- Bound-sensitive heuristics are much more informative than their classical variants on OSP problems, sometimes decreasing expansions by orders of magnitude. Compared to non-bound-sensitive heuristics, they also almost always result in better coverage.

- Blind search dominates informed search in terms of coverage when bounds are low, but the effect diminishes as the bound increases and it becomes intractable to explore the full state space under the bound. For the 25% suite of problems, heuristic configurations solve an average of approximately 100 instances fewer than the baseline, compared to approximately 15 instances fewer on the 100% suite. Notably, bound-sensitive hMS has the best coverage in the 100% suite, solving 13 problems more than the baseline, and 6 more than blind A*.

- Coverage on several domains benefits from more informed search schemes. On BLOCKSWORLD, DRIVERLOG, and MICONIC, bound-sensitive hMS solves the largest number of problems, and this is also the case for bound-sensitive hmax on FLOORTILE, PARC-PRINTER, and SOKOBAN.

- hMS often times out in the construction phase, before search has begun. This occurs on average in approximately 300 problems per suite, or 1200 problems total, and is especially pronounced in the TIDYBOT, TETRIS, and PIPESWORLD-NOTANKAGE domains. This suggests a hybrid approach that combines the strengths of blind search and hMS: setting an upper bound on the time allotted to heuristic construction, and running blind search instead if construction does not terminate within this bound (see the sketch below). Using this configuration with a value of 10 minutes for the upper bound results in a planner that outperforms blind A* by +11, +16, +37, and +38 instances for the 25%, 50%, 75%, and 100% suites, respectively. This makes hMS schemes that are less expensive to construct but still informative in this setting an appealing subject of future research.
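The hybrid configuration in the last observation can be sketched as follows. This is a minimal illustration with an assumed incremental-construction interface; ms_builder, step(), and heuristic() are our names, not Fast Downward's API.

```python
import time

def hybrid_heuristic(task, ms_builder, build_limit=600.0):
    """Build hMS incrementally, falling back to blind search if the
    construction exceeds build_limit seconds (600 s = 10 minutes).

    ms_builder(task) returns an object whose step() method performs one
    merge/shrink step and returns True while work remains, and whose
    heuristic() method returns the finished lookup function.
    """
    deadline = time.monotonic() + build_limit
    builder = ms_builder(task)
    while builder.step():
        if time.monotonic() > deadline:
            # Abort construction; h(s, b) = 0 turns A* into blind search.
            return lambda state, budget: 0
    return builder.heuristic()
```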
Conclusions and Future Work

We have shown that a previously introduced compilation to multiple cost function classical planning allows the A* algorithm to be used to solve oversubscription planning problems, and introduced a family of bound-sensitive heuristics that are much more informed than their classical counterparts in this setting. Our experiments show that this approach results in a state-of-the-art method for some bound settings and domains.

One future research direction we would like to explore that builds on the methods introduced here is the use of non-admissible heuristics for satisficing OSP. The method by which bound-sensitive hmax is obtained is fairly general and should be equally applicable to hadd or general relaxed plan heuristics [Keyder and Geffner, 2008]. A second direction is the use of these heuristics in other planning settings in which tradeoffs must be made between different cost functions, e.g. minimizing fuel use in the presence of bounds on time, or vice versa, in logistics problems.

Finally, our methods may be applicable to numeric planning problems in which the variables describe resources that are strictly decreasing and can be expressed in terms of secondary cost functions and associated bounds. Bound-sensitive heuristics could provide a principled way of reasoning about numeric variables in this context.

References

[Bäckström and Nebel, 1995] Christer Bäckström and Bernhard Nebel. Complexity results for SAS+ planning. Computational Intelligence, 11(4):625–655, 1995.
[Bonet and Geffner, 2001] Blai Bonet and Héctor Geffner. Planning as heuristic search. AIJ, 129(1):5–33, 2001.
[Borrajo et al., 2013] Daniel Borrajo, Subbarao Kambhampati, Angelo Oddi, and Simone Fratini, editors. Proceedings of the Twenty-Third International Conference on Automated Planning and Scheduling (ICAPS 2013). AAAI Press, 2013.
[Dobson and Haslum, 2017] Sean Dobson and Patrik Haslum. Cost-length tradeoff heuristics for bounded-cost search. In Proceedings of the ICAPS Workshop on Heuristics and Search for Domain-independent Planning (HSDIP 2017), page 58, 2017.
[Edelkamp, 2001] Stefan Edelkamp. Planning with pattern databases. In Amedeo Cesta and Daniel Borrajo, editors, Proceedings of the Sixth European Conference on Planning (ECP 2001), pages 84–90. AAAI Press, 2001.
[Hart et al., 1968] Peter E. Hart, Nils J. Nilsson, and Bertram Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 4(2):100–107, 1968.
[Haslum and Geffner, 2000] Patrik Haslum and Héctor Geffner. Admissible heuristics for optimal planning. In Steve Chien, Subbarao Kambhampati, and Craig A. Knoblock, editors, Proceedings of the Fifth International Conference on Artificial Intelligence Planning and Scheduling (AIPS 2000), pages 140–149. AAAI Press, 2000.
[Haslum, 2013] Patrik Haslum. Heuristics for bounded-cost search. In Borrajo et al. [2013], pages 312–316.
[Helmert and Domshlak, 2009] Malte Helmert and Carmel Domshlak. Landmarks, critical paths and abstractions: What's the difference anyway? In Alfonso Gerevini, Adele Howe, Amedeo Cesta, and Ioannis Refanidis, editors, Proceedings of the Nineteenth International Conference on Automated Planning and Scheduling (ICAPS 2009), pages 162–169. AAAI Press, 2009.
[Helmert et al., 2014] Malte Helmert, Patrik Haslum, Jörg Hoffmann, and Raz Nissim. Merge-and-shrink abstraction: A method for generating lower bounds in factored state spaces. JACM, 61(3):16:1–63, 2014.
[Helmert, 2006] Malte Helmert. The Fast Downward planning system. JAIR, 26:191–246, 2006.
[Katz et al., 2019a] Michael Katz, Emil Keyder, Florian Pommerening, and Dominik Winterer. Oversubscription planning as classical planning with multiple cost functions. In Proceedings of the Twenty-Ninth International Conference on Automated Planning and Scheduling (ICAPS 2019). AAAI Press, 2019.
[Katz et al., 2019b] Michael Katz, Emil Keyder, Florian Pommerening, and Dominik Winterer. PDDL benchmarks for oversubscription planning. https://doi.org/10.5281/zenodo.2576024, 2019.
[Keyder and Geffner, 2008] Emil Keyder and Héctor Geffner. Heuristics for planning with action costs revisited. In Proceedings of the 18th European Conference on Artificial Intelligence (ECAI 2008), pages 588–592, 2008.
[Keyder and Geffner, 2009] Emil Keyder and Héctor Geffner. Soft goals can be compiled away. JAIR, 36:547–556, 2009.
[Mirkis and Domshlak, 2013] Vitaly Mirkis and Carmel Domshlak. Abstractions for oversubscription planning. In Borrajo et al. [2013], pages 153–161.
[Mirkis and Domshlak, 2014] Vitaly Mirkis and Carmel Domshlak. Landmarks in oversubscription planning. In Torsten Schaub, Gerhard Friedrich, and Barry O'Sullivan, editors, Proceedings of the 21st European Conference on Artificial Intelligence (ECAI 2014), pages 633–638. IOS Press, 2014.
[Muller and Karpas, 2018] Daniel Muller and Erez Karpas. Value driven landmarks for oversubscription planning. In Mathijs de Weerdt, Sven Koenig, Gabriele Röger, and Matthijs Spaan, editors, Proceedings of the Twenty-Eighth International Conference on Automated Planning and Scheduling (ICAPS 2018), pages 171–179. AAAI Press, 2018.
[Sievers et al., 2014] Silvan Sievers, Martin Wehrle, and Malte Helmert. Generalized label reduction for merge-and-shrink heuristics. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI 2014), pages 2358–2366. AAAI Press, 2014.
[Smith, 2004] David E. Smith. Choosing objectives in over-subscription planning. In Shlomo Zilberstein, Jana Koehler, and Sven Koenig, editors, Proceedings of the Fourteenth International Conference on Automated Planning and Scheduling (ICAPS 2004), pages 393–401. AAAI Press, 2004.
[Stern et al., 2011] Roni Tzvi Stern, Rami Puzis, and Ariel Felner. Potential search: A bounded-cost search algorithm. In Fahiem Bacchus, Carmel Domshlak, Stefan Edelkamp, and Malte Helmert, editors, Proceedings of the Twenty-First International Conference on Automated Planning and Scheduling (ICAPS 2011), pages 234–241. AAAI Press, 2011.
[Thayer and Ruml, 2011] Jordan T. Thayer and Wheeler Ruml. Bounded suboptimal search: A direct approach using inadmissible estimates. In Toby Walsh, editor, Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI 2011), pages 674–679. AAAI Press, 2011.
[Trevizan et al., 2017] Felipe W. Trevizan, Sylvie Thiébaux, and Patrik Haslum. Occupation measure heuristics for probabilistic planning. In Laura Barbulescu, Jeremy Frank, Mausam, and Stephen F. Smith, editors, Proceedings of the Twenty-Seventh International Conference on Automated Planning and Scheduling (ICAPS 2017), pages 306–315. AAAI Press, 2017.
SkxFLKli_V
missing related work - weak accept
6: Marginally above acceptance threshold
The paper proposes modifications to admissible heuristics to make them better informed in a multi-criteria setting where one cost function is the minimization objective and one or more secondary cost functions are constrained by bounds. The modified heuristics are applied to a reformulation of oversubscription planning. Overall it is not a bad paper, but I think it misses some relevant connections. I also have some questions about the OSP formulation. The special case of multi-criteria planning optimizing for one cost function while remaining within bounds for another has been studied in the context of stochastic problems. The ICAPS 2016 paper by Trevizan et al. introduced algorithms for the constrained SSP (CSSP) problem, which is a stochastic shortest path problem with exactly this kind of constraint/cost structure. More to the point, the ICAPS 2017 paper by Trevizan et al. introduced a form of projection/operator counting heuristics to SSPs, which they also extended to CSSPs. The extension follows essentially the same pattern as that used by the authors of this paper, in that the bounding constraint on each of the secondary costs is added to the heuristic formulation. Clearly this can be applied to non-stochastic problems as well, in which case it reduces to an operator counting heuristic for the bounded multi-criteria problem. Another special case that has seen some attention is the bounded-cost planning problem. This is formulated the same way as the bounded MCF in this paper but without a primary cost function. In other words, the question is simply: does there exist any plan within the secondary cost bound? Typically, this problem considers only a single bounding cost function. Some specialized search algorithms were introduced by Stern et al. (ICAPS 2011) and Thayer et al. (ICAPS 2012), but adaptations of some common planning heuristics to this setting were also proposed (Haslum ICAPS 2013; Dobson and Haslum HSDIP 2017). Again, the pattern of adaptation is similar, with the cost bound somehow imposed on the selection of actions in the abstract or relaxed plan. The paper should at least discuss these closely related works. Even better would be a comparison between the proposed new heuristics and the previous ones in settings where they are comparable. For example, the bounded-cost problem can be formulated as an OSP, by simply making the goal soft, which has a solution with reward equal to the trivial upper bound (the sum of all subgoal utilities) if and only if the original bounded-cost problem is solvable. The OSP formalism used in this paper, and presumably also in the paper by Katz et al. cited for the reformulation, assigns utilities only to individual facts, i.e., variable-value equalities. There is no explicit provision for assigning utilities to conjunctions (or disjunctions) of facts (for example, to say that the utility of have(bread) and have(butter) is more than the sum of the utilities of each of the two facts by themselves, or, for that matter, that the utility of have(train-ticket) and have(bus-ticket) is no more than the max of the utilities of each of these two facts individually). One can imagine encodings that use an artificial, zero-cost action to set an auxiliary variable to true when a conjunction is achieved, but this raises some problems with the reformulation, in that undoing any part of the conjunction must also force a reset of the auxiliary variable.
It is also not clear how this would work in situations where the utility of a conjunction is less than the sum of its parts. It would be good if the authors can comment in the paper on how limiting the restriction to single-fact utilities is. The readability of Table 1 could be enhanced. For example, alternating rows with white and lightly shaded backgrounds would make it visually easier to follow a row. The plus/minus zero entries could be omitted (blank) to make it easier to identify where the differences are. References: Felipe Trevizan, Sylvie Thiébaux, Pedro Henrique Santana, Brian Charles Williams. Heuristic Search in Dual Space for Constrained Stochastic Shortest Path Problems. ICAPS 2016. http://www.aaai.org/ocs/index.php/ICAPS/ICAPS16/paper/view/13179 Felipe W. Trevizan, Sylvie Thiébaux, Patrik Haslum. Occupation Measure Heuristics for Probabilistic Planning. ICAPS 2017. https://aaai.org/ocs/index.php/ICAPS/ICAPS17/paper/view/15771 Jordan Tyler Thayer, Roni Stern, Ariel Felner, Wheeler Ruml. Faster Bounded-Cost Search Using Inadmissible Estimates. ICAPS 2012. http://www.aaai.org/ocs/index.php/ICAPS/ICAPS12/paper/view/4706 Roni Tzvi Stern, Rami Puzis, Ariel Felner. Potential Search: A Bounded-Cost Search Algorithm. ICAPS 2011. http://aaai.org/ocs/index.php/ICAPS/ICAPS11/paper/view/2687 Patrik Haslum. Heuristics for Bounded-Cost Search. ICAPS 2013. http://www.aaai.org/ocs/index.php/ICAPS/ICAPS13/paper/view/5993 Sean Dobson, Patrik Haslum. Cost-Length Tradeoff Heuristics for Bounded-Cost Search. HSDIP 2017. http://icaps17.icaps-conference.org/workshops/HSDIP/proceedings/dobson-haslum-icaps2017wshsdip.pdf I think I have seen a paper titled something along the lines of "planning with conjunctive utilities" somewhere, but now I cannot find it or recall where, or who wrote it.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
rywDjg-RW
ICLR.cc/2018/Conference
2018
Neural-Guided Deductive Search for Real-Time Program Synthesis from Examples
["Ashwin Kalyan", "Abhishek Mohta", "Oleksandr Polozov", "Dhruv Batra", "Prateek Jain", "Sumit Gulwani"]
Synthesizing user-intended programs from a small number of input-output examples is a challenging problem with several important applications like spreadsheet manipulation, data wrangling and code refactoring. Existing synthesis systems either completely rely on deductive logic techniques that are extensively hand-engineered or on purely statistical models that need massive amounts of data, and in general fail to provide real-time synthesis on challenging benchmarks. In this work, we propose Neural Guided Deductive Search (NGDS), a hybrid synthesis technique that combines the best of both symbolic logic techniques and statistical models. Thus, it produces programs that satisfy the provided specifications by construction and generalize well on unseen examples, similar to data-driven systems. Our technique effectively utilizes the deductive search framework to reduce the learning problem of the neural component to a simple supervised learning setup. Further, this allows us to both train on sparingly available real-world data and still leverage powerful recurrent neural network encoders. We demonstrate the effectiveness of our method by evaluating on real-world customer scenarios by synthesizing accurate programs with up to 12× speed-up compared to state-of-the-art systems.
["Program synthesis", "deductive search", "deep learning", "program induction", "recurrent neural networks"]
ABSTRACT
Synthesizing user-intended programs from a small number of input-output examples is a challenging problem with several important applications like spreadsheet manipulation, data wrangling and code refactoring. Existing synthesis systems either completely rely on deductive logic techniques that are extensively hand-engineered or on purely statistical models that need massive amounts of data, and in general fail to provide real-time synthesis on challenging benchmarks. In this work, we propose Neural Guided Deductive Search (NGDS), a hybrid synthesis technique that combines the best of both symbolic logic techniques and statistical models. Thus, it produces programs that satisfy the provided specifications by construction and generalize well on unseen examples, similar to data-driven systems. Our technique effectively utilizes the deductive search framework to reduce the learning problem of the neural component to a simple supervised learning setup. Further, this allows us to both train on sparingly available real-world data and still leverage powerful recurrent neural network encoders. We demonstrate the effectiveness of our method by evaluating on real-world customer scenarios by synthesizing accurate programs with up to 12× speed-up compared to state-of-the-art systems.

1 INTRODUCTION
Automatic synthesis of programs that satisfy a given specification is a classical problem in AI (Waldinger & Lee, 1969), with extensive literature in both the machine learning and programming languages communities. Recently, this area has gathered widespread interest, mainly spurred by the emergence of a sub-area – Programming by Examples (PBE) (Gulwani, 2011). A PBE system synthesizes programs that map a given set of example inputs to their specified example outputs. Such systems make many tasks accessible to a wider audience as example-based specifications can be easily provided even by end users without programming skills. See Figure 1 for an example. PBE systems are usually evaluated on three key criteria: (a) correctness: whether the synthesized program satisfies the spec, i.e. the provided example input-output mapping, (b) generalization: whether the program produces the desired outputs on unseen inputs, and finally, (c) performance: synthesis time.

Input           | Output
Yann LeCunn     | Y LeCunn
Hugo Larochelle | H Larochelle
Tara Sainath    | T Sainath
Yoshua Bengio   | ?

Figure 1: An example input-output spec; the goal is to learn a program that maps the given inputs to the corresponding outputs and generalizes well to new inputs. Both programs below satisfy the spec: (i) Concat(1st letter of 1st word, 2nd word), (ii) Concat(4th-last letter of 1st word, 2nd word). However, program (i) clearly generalizes better: for instance, its output on "Yoshua Bengio" is "Y Bengio" while program (ii) produces "s Bengio".

* Work done during an internship at Microsoft Research. † Equal contribution.

State-of-the-art PBE systems are either symbolic, based on enumerative or deductive search (Gulwani, 2011; Polozov & Gulwani, 2015), or statistical, based on data-driven learning to induce the most likely program for the spec (Gaunt et al., 2016; Balog et al., 2017; Devlin et al., 2017). Symbolic systems are designed to produce a correct program by construction using logical reasoning and domain-specific knowledge.
They also produce the intended program with few input-output examples (often just 1). However, they require significant engineering effort, and their underlying search processes struggle with real-time performance, which is critical for user-facing PBE scenarios.

In contrast, statistical systems do not rely on specialized deductive algorithms, which makes their implementation and training easier. However, they lack in two critical aspects. First, they require a lot of training data and so are often trained using randomly generated tasks. As a result, induced programs can be fairly unnatural and fail to generalize to real-world tasks with a small number of examples. Second, purely statistical systems like RobustFill (Devlin et al., 2017) do not guarantee that the generated program satisfies the spec. Thus, solving the synthesis task requires generating multiple programs with a beam search and post-hoc filtering, which defeats real-time performance.

Neural-Guided Deductive Search. Motivated by shortcomings of both the above approaches, we propose Neural-Guided Deductive Search (NGDS), a hybrid synthesis technique that brings together the desirable aspects of both methods. The symbolic foundation of NGDS is deductive search (Polozov & Gulwani, 2015) and is parameterized by an underlying domain-specific language (DSL) of target programs. Synthesis proceeds by recursively applying production rules of the DSL to decompose the initial synthesis problem into smaller sub-problems and further applying the same search technique on them. Our key observation I is that most of the deduced sub-problems do not contribute to the final best program, and therefore a priori predicting the usefulness of pursuing a particular sub-problem streamlines the search process, resulting in considerable time savings. In NGDS, we use a statistical model trained on real-world data to predict a score that corresponds to the likelihood of finding a generalizable program as a result of exploring a sub-problem branch.

Our key observation II is that speeding up deductive search while retaining its correctness or generalization requires a close integration of symbolic and statistical approaches via an intelligent controller. It is based on the "branch & bound" technique from combinatorial optimization (Clausen, 1999). The overall algorithm integrates (i) deductive search, (ii) a statistical model that predicts, a priori, the generalization score of the best program from a branch, and (iii) a controller that selects sub-problems for further exploration based on the model's predictions.

Since program synthesis is a sequential process wherein a sequence of decisions (here, selections of DSL rules) collectively construct the final program, a reinforcement learning setup seems more natural. However, our key observation III is that deductive search is Markovian – it generates independent sub-problems at every level. In other words, we can reason about a satisfying program for the sub-problem without factoring in the bigger problem from which it was deduced. This brings three benefits enabling a supervised learning formulation: (a) a dataset of search decisions at every level over a relatively small set of PBE tasks contains an exponential amount of information about the DSL, promoting generalization, (b) such search traces can be generated and used for offline training, (c) we can learn separate models for different classes of sub-problems (e.g. DSL levels or rules), with relatively simpler supervised learning tasks.

Evaluation. We evaluate NGDS on the string transformation domain, building on top of PROSE, a commercially successful deductive synthesis framework for PBE (Polozov & Gulwani, 2015). It represents one of the most widespread and challenging applications of PBE and has shipped in multiple mass-market tools including Microsoft Excel and Azure ML Workbench.¹ We train and validate our method on 375 scenarios obtained from real-world customer tasks (Gulwani, 2011; Devlin et al., 2017). Thanks to the Markovian search properties described above, these scenarios generate a dataset of 400,000+ intermediate search decisions. NGDS produces intended programs on 68% of the scenarios despite using only one input-output example. In contrast, state-of-the-art neural synthesis techniques (Balog et al., 2017; Devlin et al., 2017) learn intended programs from a single example in only 24–36% of scenarios, taking 4× more time. Moreover, NGDS matches the accuracy of baseline PROSE while providing a speed-up of up to 12× over challenging tasks.

¹ https://microsoft.github.io/prose/impact/

Contributions. First, we present a branch-and-bound optimization based controller that exploits deep neural network based score predictions to select grammar rules efficiently (Section 3.2). Second, we propose a program synthesis algorithm that combines key traits of a symbolic and a statistical approach to retain desirable properties like correctness, robust generalization, and real-time performance (Section 3.3). Third, we evaluate NGDS against state-of-the-art baselines on real customer tasks and show significant gains (speed-up of up to 12×) on several critical cases (Section 4).

2 BACKGROUND
In this section, we provide a brief background on PBE and the PROSE framework, using established formalism from the programming languages community.

Domain-Specific Language. A program synthesis problem is defined over a domain-specific language (DSL). A DSL is a restricted programming language that is suitable for expressing tasks in a given domain, but small enough to restrict a search space for program synthesis. For instance, typical real-life DSLs with applications in textual data transformations (Gulwani, 2011) often include conditionals, limited forms of loops, and domain-specific operators such as string concatenation, regular expressions, and date/time formatting. DSLs for tree transformations such as code refactoring (Rolim et al., 2017) and data extraction (Le & Gulwani, 2014) include list/data-type processing operators such as Map and Filter, as well as domain-specific matching operators. Formally, a DSL L is specified as a context-free grammar, with each non-terminal symbol N defined by a set of productions. The right-hand side of each production is an application of some operator F(N1, ..., Nk) to some symbols of L. All symbols and operators are strongly typed. Figure 2 shows a subset of the FlashFill DSL that we use as a running example in this paper.

Inductive Program Synthesis. The task of inductive program synthesis is characterized by a spec. A spec φ is a set of m input-output constraints {σi ⇝ ψi}, i = 1..m, where:
• σ, an input state, is a mapping of free variables of the desired program P to some correspondingly typed values. At the top level of L, a program (and its expected input state) has only one free variable – the input variable of the DSL (e.g., inputs in Figure 2).
Additional local variables are introduced inside L with a let construct.
• ψ is an output constraint on the execution result of the desired program P(σ). At the top level of L, when provided by the user, ψ is usually the output example – precisely the expected result of P(σ). However, other intermediate constraints arise during the synthesis process. For instance, ψ may be a disjunction of multiple allowed outputs.
The overall goal of program synthesis is thus: given a spec φ, find a program P in the underlying DSL L that satisfies φ, i.e., its outputs P(σi) satisfy all the corresponding constraints ψi.

Example 1. Consider the task of formatting a phone number, characterized by the spec φ = {inputs: ["(612) 8729128"]} ⇝ "612-872-9128". It has a single input-output example, with an input state σ containing a single variable inputs whose value is a list with a single input string. The output constraint is simply the desired program result.
The program the user is most likely looking for is the one that extracts (a) the part of the input enclosed in the first pair of parentheses, (b) the 7th to 4th characters from the end, and (c) the last 4 characters, and then concatenates all three parts using hyphens. In our DSL, this corresponds to:

Concat(SubStr0(RegexPosition(x, ⟨"(", ε⟩, 0), RegexPosition(x, ⟨ε, ")"⟩, 0)), ConstStr("-"),
       SubStr0(AbsolutePosition(x, −8), AbsolutePosition(x, −5)), ConstStr("-"),
       SubStr0(AbsolutePosition(x, −5), AbsolutePosition(x, −1)))

where ε is an empty regex, SubStr0(pos1, pos2) is an abbreviation for "let x = std.Kth(inputs, 0) in Substring(x, ⟨pos1, pos2⟩)", and ⟨⟩ is an abbreviation for std.Pair.
However, many other programs in the DSL also satisfy φ. For instance, all occurrences of "8" in the output can be produced via a subprogram that simply extracts the last character. Such a program overfits to φ and is bound to fail for other inputs where the last character and the 4th one differ.

// Nonterminals
@start string transform := atom | Concat(atom, transform);
string atom := ConstStr(s)
             | let string x = std.Kth(inputs, k) in Substring(x, pp);
Tuple<int, int> pp := std.Pair(pos, pos) | RegexOccurrence(x, r, k);
int pos := AbsolutePosition(x, k) | RegexPosition(x, std.Pair(r, r), k);
// Terminals
@input string[] inputs; string s; int k; Regex r;

Figure 2: A subset of the FlashFill DSL (Gulwani, 2011), used as a running example in this paper. Every program takes as input a list of strings inputs, and returns an output string, a concatenation of atoms. Each atom is either a constant or a substring of one of the inputs (x), extracted using some position logic. The RegexOccurrence position logic finds the kth occurrence of a regex r in x and returns its boundaries. Alternatively, start and end positions can be selected independently, either as absolute indices in x from left or right (AbsolutePosition) or as the kth occurrence of a pair of regexes surrounding the position (RegexPosition). See Gulwani (2011) for an in-depth DSL description.

As Example 1 shows, typical real-life problems are severely underspecified. A DSL like FlashFill may contain up to 10^20 programs that satisfy a given spec of 1–3 input-output examples (Polozov & Gulwani, 2015). Therefore, the main challenge lies in finding a program that not only satisfies the provided input-output examples but also generalizes to unseen inputs.
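To ground Example 1, here is a minimal Python sketch that mimics the intended program's behavior; the helper functions are illustrative stand-ins for the Figure 2 position operators, not the actual PROSE implementation:

import re

def regex_position(x, left, right, k):
    # Index of the boundary whose left context matches `left` and whose
    # right context matches `right` (an empty pattern means no constraint).
    cands = [m.end() for m in re.finditer(left, x)] if left else list(range(len(x) + 1))
    cands = [p for p in cands if re.match(right, x[p:])] if right else cands
    return cands[k]

def absolute_position(x, k):
    # Positive k is an index from the left; negative k counts from the right.
    return k if k >= 0 else len(x) + k + 1

def intended_program(inputs):
    x = inputs[0]
    area = x[regex_position(x, r"\(", "", 0):regex_position(x, "", r"\)", 0)]
    mid = x[absolute_position(x, -8):absolute_position(x, -5)]
    last = x[absolute_position(x, -5):absolute_position(x, -1)]
    return "-".join([area, mid, last])

print(intended_program(["(612) 8729128"]))  # -> "612-872-9128"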
The synthesis process therefore usually interleaves search and ranking: the search phase finds a set of spec-satisfying programs in the DSL, from which the ranking phase selects top programs ordered using a domain-specific ranking function h : L × ~Σ → ℝ, where Σ is the set of all input states. The ranking function takes as input a candidate program P ∈ L and a set of input states ~σ ∈ ~Σ (usually ~σ = the inputs in the given spec plus any available unlabeled inputs), and produces a score for P's generalization.
The implementation of h expresses a subtle balance between program generality, complexity, and behavior on available inputs. For instance, in FlashFill h penalizes overly specific regexes, prefers programs that produce fewer empty outputs, and prioritizes lower Kolmogorov complexity, among other features. In modern PBE systems like PROSE, h is usually learned in a data-driven manner from customer tasks (Singh & Gulwani, 2015; Ellis & Gulwani, 2017). While designing and learning such a ranking is an interesting problem in itself, in this work we assume black-box access to h.
Finally, the problem of inductive program synthesis can be summarized as follows:

Problem 1. Given a DSL L, a ranking function h, a spec φ = {σi ⇝ ψi}, i = 1..m, optionally a set of unlabeled inputs ~σu, and a target number of programs K, let ~σ = ~σu ∪ {σi}. The goal of inductive program synthesis is to find a program set S = {P1, ..., PK} ⊆ L such that (a) every program in S satisfies φ, and (b) the programs in S generalize best: h(Pi, ~σ) ≥ h(P, ~σ) for any other P ∈ L that satisfies φ.

Search Strategy. The deductive search strategy for program synthesis, employed by PROSE, explores the grammar of L top-down – iteratively unrolling the productions into partial programs starting from the root symbol.
Figure 3 shows the resulting search DAG.transform“Y.L”Concat (:::)“Y.L”atom“Y.L”atom“Y”_“Y.”transform“L”atom“L”transform“.L”atom“.L”Concat (:::)“.L”atom“.”ConstStr (s)“Y.L”. . . . . . . . . . . .ConstStr (s)“Y”_“Y.”letx= . . .“Y”_“Y.”...Substring (:::)“Y”pp(0;1). . .Figure 3: A portion of the search DAG from Example 2. Only the output parts of the respective specsare shown in each node, their common input state is a single string “Yann ”. Dashed arrows showrecursive Learn calls on a corresponding DSL symbol.Notice that the above mentioned principles create logical non-determinism due to which we mightneed to explore multiple alternatives in a search tree. As such non-determinism arises at every level ofthe DSL with potentially any operator, the search tree (and the resulting search process) is exponentialin size. While all the branches of the tree by construction produce programs that satisfy the givenspec, most of the branches do not contribute to the overall top-ranked generalizable program. Duringdeductive search, PROSE has limited information about the programs potentially produced fromeach branch, and cannot estimate their quality, thus exploring the entire tree unnecessarily. Our maincontribution is a neural-guided search algorithm that predicts the best program scores from eachbranch, and allows PROSE to omit branches that are unlikely to produce the desired program a priori .3 S YNTHESIS ALGORITHMConsider an arbitrary branching moment in the top-down search strategy of PROSE. For example, letNbe a nonterminal symbol in L, defined through a set of productions N:=F1(:::)j:::jFn(:::),and let'be a spec on N, constructed earlier during the recursive descent over L. A conservativeway to select the top kprograms rooted at N(as defined by the ranking function h), i.e., to computeLearn (N;'), is to learn the top kprograms of kind Fi(:::)for alli2[k]and then select the top kprograms overall from the union of program sets learned for each production. Naturally, exploring allthe branches for each nonterminal in the search tree is computationally expensive.In this work, we propose a data-driven method to select an appropriate production rule N:=Fi(N1;:::;Nk)that would most likely lead to a top-ranked program. To this end, we use the currentspec'to determine the “optimal” rule. Now, it might seem unintuitive that even without exploringa production rule and finding the best program in the corresponding program set, we can a prioridetermine optimality of that rule. However, we argue that by understanding 'and its relationshipwith the ranking function h, we can predict the intended branch in many real-life scenarios.Example 3. Consider a spec '=f“alice ” “alice@iclr.org ”;“bob” “bob@iclr.org ”g. While learning a program in Lgiven by Figure 2 that satisfies ', it is clearright at the beginning of the search procedure that the rule transform :=atom does not apply. Thisis because any programs derived from transform :=atom can either extract a substring from theinput or return a constant string, both of which fail to produce the desired output. Hence, we shouldonly consider transform :=Concat (:::), thus significantly reducing the search space.Similarly, consider another spec '=f“alice smith ” “alice ”;“bob jones ” “bob”g. In this case, the output appears to be a substring of input, thus selecting transform :=atomat the beginning of the search procedure is a better option than transform :=Concat (:::).However, many such decisions are more subtle and depend on the ranking function hitself. 
Forexample, consider a spec '=f“alice liddell ” “al”;“bob ong ” “bo”g. Now,5Published as a conference paper at ICLR 2018LSTM for input encoding LSTM for output encodingChar EmbeddingInput stateChar EmbeddingOutput example(s) EmbeddingProduction rule TwoFClayersPredicted scoreFigure 4: LSTM-based model for predicting the score of a candidate production for a given spec '.bothtransform :=atom andtransform :=Concat (:::)may lead to viable programs becausethe output can be constructed using the first two letters of the input (i.e. a substring atom) or byconcatenating the first letters of each word. Hence, the branch that produces the best program isultimately determined by the ranking function hsince both branches generate valid programs.Example 3 shows that to design a data-driven search strategy for branch selection, we need to learnthe subtle relationship between ',h, and the candidate branch. Below, we provide one such model.3.1 P REDICTING THE GENERALIZATION SCOREAs mentioned above, our goal is to predict one or more production rules that for a given spec 'willlead to a top-ranked program (as ranked a posteriori byh). Formally, given black-box access to h,we want to learn a function fsuch that,f(;') maxP2S(;' )h(P;');whereis a production rule in L, andS(;')is aprogram set of all DSL programs derived fromthe rulethat satisfy'. In other words, we want to predict the score of the top-ranked '-satisfyingprogram that is synthesized by unrolling the rule . We assume that the symbolic search of PROSEhandles the construction of S(;')and ensures that programs in it satisfy 'by construction. Thegoal offis to optimize the score of a program derived from assuming this program is valid. If noprogram derived from can satisfy',fshould return1. Note that, drawing upon observationsmentioned in Section 1, we have cast the production selection problem as a supervised learningproblem, thus simplifying the learning task as opposed to end-to-end reinforcement learning solution.We have evaluated two models for learning f. The loss function for the prediction is given by:L(f;;') =f(;')maxP2S(;' )h(P;')2:Figure 4 shows a common structure of both models we have evaluated. Both are based on a standardmulti-layer LSTM architecture (Hochreiter & Schmidhuber, 1997) and involve (a)embedding thegiven spec',(b)encoding the given production rule , and (c)a feed-forward network to output ascoref(;'). One model attends over input when it encodes the output, whereas another does not.3.2 C ONTROLLER FOR BRANCH SELECTIONA score model falone is insufficient to perfectly predict the branches that should be explored atevery level. Consider again a branching decision moment N:=F1(:::)j:::jFn(:::)in a searchprocess for top kprograms satisfying a spec '. One naïve approach to using the predictions of fis toalways follow the highest-scored production rule argmaxif(Fi;'). However, this means that anysingle incorrect decision on the path from the DSL root to the desired program will eliminate thatprogram from the learned program set . If our search algorithm fails to produce the desired programby committing to a suboptimal branch anytime during the search process, then the user may neverdiscover that such a program exists unless they supply additional input-output example.Thus, a branch selection strategy based on the predictions of fmust balance a trade-off of performanceandgeneralization . 
Selecting too few branches (a single best branch in the extreme case) riskscommitting to an incorrect path early in the search process and producing a suboptimal program orno program at all. Selecting too many branches (all nbranches in the extreme case) is no differentfrom baseline PROSE and fails to exploit the predictions of fto improve its performance.Formally, a controller for branch selection at a symbol N:=F1(:::)j:::jFn(:::)targetingkbest programs must (a)predict the expected score of the best program from each program set:6Published as a conference paper at ICLR 2018function THRESHOLD BASED (';h;k;s 1;. . .;sn)1:Result setS []2:i argmaxisi3:for all 1indo4: ifjsisijthen//Recursive search5:S+= LEARN (Fi;';k )6:return the topkprograms ofSw.r.t.hfunction BNBBASED (';h;k;s 1;. . .;sn)1:Result setS []; Program target k0 k2:ReorderFiin the descending order of si3:for all 1indo4:Si LEARN (Fi;';k0)//Recursive search5:j BINARY SEARCH (si+1;Map(h;Si))6:S=Si[Si[0::j];k0 k0j7: ifk00then break8:returnSFigure 5: The controllers for guiding the search process to construct a most generalizable '-satisfyingprogram setSof sizekgiven thef-predicted best scores s1;. . .;snof the productions F1;. . .;Fn.Given: DSLL, ranking function h, controllerCfrom Figure 5 ( THRESHOLD BASED orBNBBASED ),symbolic search algorithm LEARN (Production rule , spec', targetk) as in PROSE (Polozov &Gulwani, 2015, Figure 7) with all recursive calls to L EARN replaced with L EARN NGDSfunction LEARN NGDS(Symbol N:=F1(:::)j:::jFn(:::), spec', target number of programs k)1:ifn= 1then return LEARN (F1;';k )2:Pick a score model fbased on depth (N;L)3:s1;:::;s n f(F1;');:::;f (Fn;')4:returnC(';h;k;s 1;:::;s n)Figure 6: Neural-guided deductive search over L, parameterized with a branch selection controller C.si=f(Fi;')81in;and(b)use the predicted scores sito narrow down the set of productionsF1;:::;Fnto explore and to obtain the overall result by selecting a subset of generated programs. Inthis work, we propose and evaluate two controllers. Their pseudocode is shown in Figure 5.Threshold-based: Fix a score threshold , and explore those branches whose predicted score differsby at mostfrom the maximum predicted score. This is a simple extension of the naïve “ argmax ”controller discussed earlier that also explores any branches that are predicted “approximately as goodas the best one”. When = 0, it reduces to the “ argmax ” one.Branch & Bound: This controller is based on the “branch & bound” technique in combinatorialoptimization (Clausen, 1999). Assume the branches Fiare ordered in the descending order of theirrespective predicted scores si. After recursive learning produces its program set Si, the controllerproceeds to the next branch only if si+1exceeds the score of the worst program in Si. Moreover, itreduces the target number of programs to be learned, using si+1as a lower bound on the scores ofthe programs inSi. That is, rather than relying blindly on the predicted scores, the controller guidesthe remaining search process by accounting for the actual synthesized programs as well.3.3 N EURAL -GUIDED DEDUCTIVE SEARCHWe now combine the above components to present our unified algorithm for program synthesis. Itbuilds upon the deductive search of the PROSE system, which uses symbolic PL insights in the formofwitness functions to construct and narrow down the search space, and a ranking function hto pickthe most generalizable program from the found set of spec-satisfying ones. 
However, it significantlyspeeds up the search process by guiding it a priori at each branching decision using the learnedscore model fand a branch selection controller, outlined in Sections 3.1 and 3.2. The resultingneural-guided deductive search (NGDS) keeps the symbolic insights that construct the search treeensuring correctness of the found programs, but explores only those branches of this tree that arelikely to produce the user-intended generalizable program, thus eliminating unproductive search time.A key idea in NGDS is that the score prediction model fdoes not have to be the same for all decisionsin the search process. It is possible to train separate models for different DSL levels, symbols, or evenproductions. This allows the model to use different features of the input-output spec for evaluatingthe fitness of different productions, and also leads to much simpler supervised learning problems.Figure 6 shows the pseudocode of NGDS. It builds upon the deductive search of PROSE, but augmentsevery branching decision on a symbol with some branch selection controller from Section 3.2. Wepresent a comprehensive evaluation of different strategies in Section 4.7Published as a conference paper at ICLR 2018Metric PROSE DC 1 DC 2 DC 3 RF1 RF2 RF3NGDSAccuracy (% of 73) 67.12 35.81 47.38 62.92 24.53 39.72 56.41 68.49Speed-up (PROSE) 1.00 1.82 1.53 1.42 0.25 0.27 0.30 1.67Table 1: Accuracy and average speed-up of NGDS vs. baseline methods. Accuracies are computedon a test set of 73tasks. Speed-up of a method is the geometric mean of its per-task speed-up (ratioof synthesis time of PROSE and of the method) when restricted to a subset of tasks with PROSE’ssynthesis time is0:5sec.4 E VALUATIONIn this section, we evaluate our NGDS algorithm over the string manipulation domain with a DSLgiven by Figure 2; see Figure 1 for an example task. We evaluate NGDS, its ablations, and baselinetechniques on two key metrics: (a) generalization accuracy on unseen inputs, (b) synthesis time.Dataset. We use a dataset of 375tasks collected from real-world customer string manipulation prob-lems, split into 65% training, 15% validation, and 20% test data. Some of the common applicationsfound in our dataset include date/time formatting, manipulating addresses, modifying names, automat-ically generating email IDs, etc. Each task contains about 10inputs, of which only one is provided asthe spec to the synthesis system, mimicking industrial applications. The remaining unseen examplesare used to evaluate generalization performance of the synthesized programs. After running synthesisof top-1 programs with PROSE on all training tasks, we have collected a dataset of 400,000intermediate search decisions, i.e.tripleshproduction;spec';a posteriori best scoreh(P;')i.Baselines. We compare our method against two state-of-the-art neural synthesis algorithms: Ro-bustFill (Devlin et al., 2017) and DeepCoder (Balog et al., 2017). For RobustFill, we use thebest-performing Attention-C model and use their recommended DP-Beam Search with a beam size of100 as it seems to perform the best; Table 3 in Appendix A presents results with different beam sizes.As in the original work, we select the top-1 program ranked according to the generated log-likelihood.DeepCoder is a generic framework that allows their neural predictions to be combined with anyprogram synthesis method. So, for fair comparison, we combine DeepCoder’s predictions withPROSE. 
We train DeepCoder model to predict a distribution over L’s operators and as proposed, useit to guide PROSE synthesis. Since both RobustFill and DeepCoder are trained on randomly sampledprograms and are not optimized for generalization in the real-world, we include their variants trainedwith 2 or 3 examples (denoted RF mand DCm) for fairness, although m= 1example is the mostimportant scenario in real-life industrial usage.Ablations. As mentioned in Section 3, our novel usage of score predictors to guide the searchenables us to have multiple prediction models and controllers at various stages of the synthesisprocess. Here we investigate ablations of our approach with models that specialize in predictions forindividual levels in the search process. The model T1is trained for symbol transform (Figure 2)when expanded in the first level. Similarly, PP,POS refer to models trained for the ppandpossymbol, respectively. Finally, we train all our LSTM-based models with CNTK (Seide & Agarwal,2016) using Adam (Kingma & Ba, 2014) with a learning rate of 102and a batch size of 32, usingearly stopping on the validation loss to select the best performing model (thus, 100-600 epochs).We also evaluate three controllers: threshold-based (Thr) and branch-and-bound (BB) controllersgiven in Figure 5, and a combination of them – branch-and-bound with a 0:2threshold predecessor(BB 0:2). In Tables 1 and 2 we denote different model combinations as NGDS( f,C) wherefis asymbol-based model and Cis a controller. The final algorithm selection depends on its accuracy-performance trade-off. In Table 1, we use NGDS( T1+POS , BB), the best performing algorithm onthe test set, although NGDS( T1, BB) performs slightly better on the validation set.Evaluation Metrics. Generalization accuracy is the percentage of test tasks for which the generatedprogram satisfies allunseen inputs in the task. Synthesis time is measured as the wall-clock timetaken by a synthesis method to find the correct program, median over 5 runs. We run all the methodson the same machine with 2.3 GHz Intel Xeon processor, 64GB of RAM, and Windows Server 2016.Results. Table 1 presents generalization accuracy as well as synthesis time speed-up of variousmethods w.r.t. PROSE. As we strive to provide real-time synthesis, we only compare the times fortasks which require PROSE more than 0:5sec. Note that, with one example, NGDS and PROSE are8Published as a conference paper at ICLR 2018MethodValidation Test% of branchesAccuracy Speed-up Accuracy Speed-upPROSE 70.21 1 67.12 1 100.00NGDS(T1, Thr) 59.57 1.15 67.12 1.27 62.72NGDS(T1, BB) 63.83 1.58 68.49 1.22 51.78NGDS(T1, BB 0:2) 61.70 1.03 67.12 1.22 63.16NGDS(T1+PP, Thr) 59.57 0.76 67.12 0.97 56.41NGDS(T1+PP, BB) 61.70 1.05 72.60 0.89 50.22NGDS(T1+PP, BB 0:2) 61.70 0.72 67.12 0.86 56.43NGDS(T1+POS , Thr) 61.70 1.19 67.12 1.93 55.63NGDS(T1+POS , BB) 63.83 1.13 68.49 1.67 50.44NGDS(T1+POS , BB 0:2) 63.83 1.19 67.12 1.73 55.73Table 2: Accuracies, mean speed-ups, and % of branches taken for different ablations of NGDS.significantly more accurate than RobustFill and DeepCoder. This is natural as those methods arenot trained to optimize generalization, but it also highlights advantage of a close integration with asymbolic system (PROSE) that incorporates deep domain knowledge. Moreover, on an average, ourmethod saves more than 50% of synthesis time over PROSE. 
While DeepCoder with one examplespeeds up the synthesis even more, it does so at the expense of accuracy, eliminating branches withcorrect programs in 65% of tasks.Table 2 presents speed-up obtained by variations of our models and controllers. In addition togeneralization accuracy and synthesis speed-up, we also show a fraction of branches that wereselected for exploration by the controller. Our method obtains impressive speed-up of >1:5in22cases. One such test case where we obtain 12speedup is a simple extraction case whichis fairly common in Web mining: f“alpha,beta,charlie,delta ” “alpha ”g. Forsuch cases, our model determine transform :=atom to be the correct branch (that leads tothe final Substring based program) and hence saves time required to explore the entire Concatoperator which is expensive. Another interesting test case where we observe 2:7speed-up is:f“457 124th St S, Seattle, WA 98111 ” “Seattle-WA ”g. This test case involveslearning a Concat operator initially followed by Substring andRegexPosition operator. Appendix Bincludes a comprehensive table of NGDS performance on all the validation and test tasks.All the models in Table 2 run without attention. As measured by score flip accuracies (i.e.per-centage of correct orderings of branch scores on the same level), attention-based models performbest, achieving 99:57=90:4=96:4%accuracy on train/validation/test, respectively (as compared to96:09=91:24=91:12% for non-attention models). However, an attention-based model is significantlymore computationally expensive at prediction time. Evaluating it dominates the synthesis timeand eliminates any potential speed-ups. Thus, we decided to forgo attention in initial NGDS andinvestigate model compression/binarization in future work.Error Analysis. As Appendix B shows, NGDS is slower than PROSE on some tasks. This occurswhen the predictions do not satisfy the constraints of the controller i.e.all the predicted scores arewithin the threshold or they violate the actual scores during B&B exploration. This leads to NGDSevaluating the LSTM for branches that were previously pruned. This is especially harmful whenbranches pruned out at the very beginning of the search need to be reconsidered – as it could leadto evaluating the neural network many times. While a single evaluation of the network is quick, asearch tree involves many evaluations, and when performance of PROSE is already <1s, this resultsin considerable relative slowdown. We provide two examples to illustrate both the failure modes:(a)“41.7114830017,-91.41233825683,41.60762786865,-91.63739013671 ” “41.7114830017 ”. The intended program is a simple substring extraction. However, at depth 1,the predicted score of Concat is much higher than the predicted score of Atom , and thus NGDSexplores only the Concat branch. The found Concat program is incorrect because it uses absoluteposition indexes and does not generalize to other similar extraction tasks. We found this scenariocommon with punctuation in the output string, which the model considers a strong signal for Concat .(b) “type size = 36: Bartok.Analysis.CallGraphNode type size = 32:Bartok.Analysis.CallGraphNode CallGraphNode ” “36->32 ”. In this case,NGDS correctly explores only the Concat branch, but the slowdown happens at the possymbol.9Published as a conference paper at ICLR 2018There are many different logics to extract the “36”and“32”substrings. 
NGDS explores theRelativePosition branch first, but the score of the resulting program is less then the prediction forRegexPositionRelative . Thus, the B&B controller explores both branches anyway, which leads to arelative slowdown caused by the network evaluation time.5 R ELATED WORKNeural Program Induction systems synthesize a program by training a newneural network modelto map the example inputs to example outputs (Graves et al., 2014; Reed & De Freitas, 2016;Zaremba et al., 2016). Examples include Neural Turing Machines (Graves et al., 2014) that can learnsimple programs like copying/sorting, work of Kaiser & Sutskever (2015) that can perform morecomplex computations like binary multiplications, and more recent work of Cai et al. (2017) that canincorporate recursions. While we are interested in ultimately producing the right output, all thesemodels need to be re-trained for a given problem type, thus making them unsuitable for real-lifesynthesis of different programs with fewexamples.Neural Program Synthesis systems synthesize a program in a given Lwith a pre-learned neuralnetwork. Seminal works of Bosnjak et al. (2017) and Gaunt et al. (2016) proposed first producing ahigh-level sketch of the program using procedural knowledge, and then synthesizing the program bycombining the sketch with a neural or enumerative synthesis engine. In contrast, R3NN (Parisottoet al., 2016) and RobustFill (Devlin et al., 2017) systems synthesize the program end-to-end usinga neural network; Devlin et al. (2017) show that RobustFill in fact outperforms R3NN. However,RobustFill does not guarantee generation of spec-satisfying programs and often requires more thanone example to find the intended program. In fact, our empirical evaluation (Section 4) shows thatour hybrid synthesis approach significantly outperforms the purely statistical approach of RobustFill.DeepCoder (Balog et al., 2017) is also a hybrid synthesis system that guides enumerative programsynthesis by prioritizing DSL operators according to a spec-driven likelihood distribution on the same.However, NGDS differs from DeepCoder in two important ways: (a) it guides the search process ateach recursive level in a top-down goal-oriented enumeration and thus reshapes the search tree, (b) itis trained on real-world data instead of random programs, thus achieving better generalization.Symbolic Program Synthesis has been studied extensively in the PL community (Gulwani et al.,2017; Alur et al., 2013), dating back as far as 1960s (Waldinger & Lee, 1969). Most approachesemploy either bottom-up enumerative search (Udupa et al., 2013), constraint solving (Torlak & Bodik,2013), or inductive logic programming (Lin et al., 2014), and thus scale poorly to real-world industrialapplications (e.g. data wrangling applications). In this work, we build upon deductive search, firststudied for synthesis by Manna & Waldinger (1971), and primarily used for program synthesisfrom formal logical specifications (Puschel et al., 2005; Chaudhari & Damani, 2015). Gulwani(2011) and later Polozov & Gulwani (2015) used it to build PROSE, a commercially successfuldomain-agnostic system for PBE. While its deductive search guarantees program correctness and alsogood generalization via an accurate ranking function, it still takes several seconds on complex tasks.Thus, speeding up deductive search requires considerable engineering to develop manual heuristics.NGDS instead integrates neural-driven predictions at each level of deductive search to alleviate thisdrawback. 
Work of Loos et al. (2017) represents the closest work with a similar technique but theirwork is applied to an automated theorem prover, and hence need not care about generalization. Incontrast, NGDS guides the search toward generalizable programs while relying on the underlyingsymbolic engine to generate correct programs.6 C ONCLUSIONWe studied the problem of real-time program synthesis with a small number of input-output examples.For this problem, we proposed a neural-guided system that builds upon PROSE, a state-of-the-artsymbolic logic based system. Our system avoids top-down enumerative grammar exploration requiredby PROSE thus providing impressive synthesis performance while still retaining key advantages ofa deductive system. That is, compared to existing neural synthesis techniques, our system enjoysfollowing advantages: a) correctness : programs generated by our system are guaranteed to satisfy thegiven input-output specification, b) generalization : our system learns the user-intended program withjust one input-output example in around 60% test cases while existing neural systems learn such a10Published as a conference paper at ICLR 2018program in only 16% test cases, c) synthesis time : our system can solve most of the test cases in lessthan 0.1 sec and provide impressive performance gains over both neural as well symbolic systems.The key take-home message of this work is that a deep integration of a symbolic deductive inferencebased system with statistical techniques leads to best of both the worlds where we can avoid extensiveengineering effort required by symbolic systems without compromising the quality of generatedprograms, and at the same time provide significant performance (when measured as synthesis time)gains. For future work, exploring better learning models for production rule selection and applyingour technique to diverse and more powerful grammars should be important research directions.
SkPNib9ez
Incremental paper but well-written
6: Marginally above acceptance threshold
This paper extends and speeds up PROSE, a programming-by-example system, by posing the selection of the next production rule in the grammar as a supervised learning problem. This paper requires a large amount of background knowledge, as it depends on understanding program synthesis as it is done in the programming languages community. Moreover, the work mentions a neurally-guided search, but little time is spent on that portion of their contribution. I am not even clear how their system is trained. The experimental results do show the programs can be faster, but only if the user is willing to suffer a loss in accuracy. It is difficult to conclude overall if the technique helps in synthesis.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Neural-Guided Deductive Search for Real-Time Program Synthesis from Examples ### Paper Abstract Synthesizing user-intended programs from a small number of input-output exam- ples is a challenging problem with several important applications like spreadsheet manipulation, data wrangling and code refactoring. Existing synthesis systems either completely rely on deductive logic techniques that are extensively hand- engineered or on purely statistical models that need massive amounts of data, and in general fail to provide real-time synthesis on challenging benchmarks. In this work, we propose Neural Guided Deductive Search (NGDS), a hybrid synthesis technique that combines the best of both symbolic logic techniques and statistical models. Thus, it produces programs that satisfy the provided specifications by construction and generalize well on unseen examples, similar to data-driven systems. Our technique effectively utilizes the deductive search framework to reduce the learning problem of the neural component to a simple supervised learning setup. Further, this allows us to both train on sparingly available real-world data and still leverage powerful recurrent neural network encoders. We demonstrate the effectiveness of our method by evaluating on real-world customer scenarios by synthesizing accurate programs with up to 12× speed-up compared to state-of-the-art systems. ### Paper Keywords ["Program synthesis", "deductive search", "deep learning", "program induction", "recurrent neural networks"] ### Paper Content ABSTRACTSynthesizing user-intended programs from a small number of input-output exam-ples is a challenging problem with several important applications like spreadsheetmanipulation, data wrangling and code refactoring. Existing synthesis systemseither completely rely on deductive logic techniques that are extensively hand-engineered or on purely statistical models that need massive amounts of data, and ingeneral fail to provide real-time synthesis on challenging benchmarks. In this work,we propose Neural Guided Deductive Search (NGDS), a hybrid synthesis techniquethat combines the best of both symbolic logic techniques and statistical models.Thus, it produces programs that satisfy the provided specifications by constructionand generalize well on unseen examples, similar to data-driven systems. Ourtechnique effectively utilizes the deductive search framework to reduce the learningproblem of the neural component to a simple supervised learning setup. Further,this allows us to both train on sparingly available real-world data and still leveragepowerful recurrent neural network encoders. We demonstrate the effectivenessof our method by evaluating on real-world customer scenarios by synthesizingaccurate programs with up to 12 speed-up compared to state-of-the-art systems.1 I NTRODUCTIONAutomatic synthesis of programs that satisfy a given specification is a classical problem inAI (Waldinger & Lee, 1969), with extensive literature in both machine learning and programminglanguages communities. Recently, this area has gathered widespread interest, mainly spurred bythe emergence of a sub-area – Programming by Examples (PBE) (Gulwani, 2011). A PBE systemsynthesizes programs that map a given set of example inputs to their specified example outputs. 
Suchsystems make many tasks accessible to a wider audience as example-based specifications can beeasily provided even by end users without programming skills. See Figure 1 for an example. PBEsystems are usually evaluated on three key criteria: (a)correctness : whether the synthesized programInput OutputYann LeCunn Y LeCunnHugo Larochelle H LarochelleTara Sainath T SainathYoshua Bengio ?Figure 1: An example input-output spec; the goal is to learn aprogram that maps the given inputs to the corresponding outputsand generalizes well to new inputs. Both programs belowsatisfy the spec: (i)Concat (1stletter of 1stword, 2ndword), (ii)Concat (4th-last letter of 1stword, 2ndword). However, program(i)clearly generalizes better: for instance, its output on “YoshuaBengio” is “Y Bengio” while program (ii)produces “s Bengio”.Work done during an internship at Microsoft Research.yEqual contribution.1Published as a conference paper at ICLR 2018satisfies the spec i.e.the provided example input-output mapping, (b)generalization : whether theprogram produces the desired outputs on unseen inputs, and finally, (c)performance : synthesis time.State-of-the-art PBE systems are either symbolic , based on enumerative or deductive search (Gulwani,2011; Polozov & Gulwani, 2015) or statistical , based on data-driven learning to induce the most likelyprogram for the spec (Gaunt et al., 2016; Balog et al., 2017; Devlin et al., 2017). Symbolic systems aredesigned to produce a correct program by construction using logical reasoning and domain-specificknowledge. They also produce the intended program with few input-output examples (often just 1).However, they require significant engineering effort and their underlying search processes strugglewith real-time performance, which is critical for user-facing PBE scenarios.In contrast, statistical systems do not rely on specialized deductive algorithms, which makes theirimplementation and training easier. However, they lack in two critical aspects. First, they requirea lot of training data and so are often trained using randomly generated tasks. As a result, inducedprograms can be fairly unnatural and fail to generalize to real-world tasks with a small number ofexamples. Second, purely statistical systems like RobustFill (Devlin et al., 2017) do not guaranteethat the generated program satisfies the spec. Thus, solving the synthesis task requires generatingmultiple programs with a beam search and post-hoc filtering, which defeats real-time performance.Neural-Guided Deductive Search Motivated by shortcomings of both the above approaches,we propose Neural-Guided Deductive Search (NGDS), a hybrid synthesis technique that bringstogether the desirable aspects of both methods. The symbolic foundation of NGDS is deductivesearch (Polozov & Gulwani, 2015) and is parameterized by an underlying domain-specific language(DSL) of target programs. Synthesis proceeds by recursively applying production rules of the DSL todecompose the initial synthesis problem into smaller sub-problems and further applying the samesearch technique on them. Our key observation I is that most of the deduced sub-problems do notcontribute to the final best program and therefore a priori predicting the usefulness of pursuing aparticular sub-problem streamlines the search process resulting in considerable time savings. 
InNGDS, we use a statistical model trained on real-world data to predict a score that corresponds to thelikelihood of finding a generalizable program as a result of exploring a sub-problem branch.Our key observation II is that speeding up deductive search while retaining its correctness orgeneralization requires a close integration of symbolic and statistical approaches via an intelligentcontroller. It is based on the “branch & bound” technique from combinatorial optimization (Clausen,1999). The overall algorithm integrates (i) deductive search, (ii) a statistical model that predicts, apriori , the generalization score of the best program from a branch, and (iii) a controller that selectssub-problems for further exploration based on the model’s predictions.Since program synthesis is a sequential process wherein a sequence of decisions (here, selectionsof DSL rules) collectively construct the final program, a reinforcement learning setup seems morenatural. However, our key observation III is that deductive search is Markovian – it generatesindependent sub-problems at every level. In other words, we can reason about a satisfying programfor the sub-problem without factoring in the bigger problem from which it was deduced. This bringsthree benefits enabling a supervised learning formulation: (a)a dataset of search decisions at everylevel over a relatively small set of PBE tasks that contains an exponential amount of informationabout the DSL promoting generalization, (b)such search traces can be generated and used for offlinetraining, (c)we can learn separate models for different classes of sub-problems (e.g. DSL levels orrules), with relatively simpler supervised learning tasks.Evaluation We evaluate NGDS on the string transformation domain, building on top of PROSE,a commercially successful deductive synthesis framework for PBE (Polozov & Gulwani, 2015).It represents one of the most widespread and challenging applications of PBE and has shipped inmultiple mass-market tools including Microsoft Excel and Azure ML Workbench.1We train andvalidate our method on 375scenarios obtained from real-world customer tasks (Gulwani, 2011;Devlin et al., 2017). Thanks to the Markovian search properties described above, these scenariosgenerate a dataset of 400;000+ intermediate search decisions. NGDS produces intended programson68% of the scenarios despite using only oneinput-output example. In contrast, state-of-the-artneural synthesis techniques (Balog et al., 2017; Devlin et al., 2017) learn intended programs from a1https://microsoft.github.io/prose/impact/2Published as a conference paper at ICLR 2018single example in only 24-36% of scenarios taking 4more time. Moreover, NGDS matches theaccuracy of baseline PROSE while providing a speed-up of up to 12over challenging tasks.Contributions First, we present a branch-and-bound optimization based controller that exploitsdeep neural network based score predictions to select grammar rules efficiently (Section 3.2). Second,we propose a program synthesis algorithm that combines key traits of a symbolic and a statisticalapproach to retain desirable properties like correctness, robust generalization, and real-time perfor-mance (Section 3.3). 
Third, we evaluate NGDS against state-of-the-art baselines on real customertasks and show significant gains (speed-up of up to 12) on several critical cases (Section 4).2 B ACKGROUNDIn this section, we provide a brief background on PBE and the PROSE framework, using establishedformalism from the programming languages community.Domain-Specific Language A program synthesis problem is defined over a domain-specific lan-guage (DSL). A DSL is a restricted programming language that is suitable for expressing tasks in agiven domain, but small enough to restrict a search space for program synthesis. For instance, typicalreal-life DSLs with applications in textual data transformations (Gulwani, 2011) often include condi-tionals, limited forms of loops, and domain-specific operators such as string concatenation, regularexpressions, and date/time formatting. DSLs for tree transformations such as code refactoring (Rolimet al., 2017) and data extraction (Le & Gulwani, 2014) include list/data-type processing operatorssuch as Map andFilter , as well as domain-specific matching operators. Formally, a DSL Lis speci-fied as a context-free grammar, with each non-terminal symbol Ndefined by a set of productions.The right-hand side of each production is an application of some operator F(N1;:::;Nk)to somesymbols ofL. All symbols and operators are strongly typed. Figure 2 shows a subset of the Flash FillDSL that we use as a running example in this paper.Inductive Program Synthesis The task of inductive program synthesis is characterized by a spec.A spec'is a set ofminput-output constraintsfi igmi=1, where:•, aninput state is a mapping of free variables of the desired program Pto some correspondinglytyped values. At the top level of L, a program (and its expected input state) has only one freevariable – the input variable of the DSL (e.g., inputs in Figure 2). Additional local variables areintroduced insideLwith a let construct.• is an output constraint on the execution result of the desired program P(i). At the top level ofL, when provided by the user, is usually the output example – precisely the expected result ofP(i). However, other intermediate constraints arise during the synthesis process. For instance, may be a disjunction of multiple allowed outputs.The overall goal of program synthesis is thus: given a spec ', find a program Pin the underlyingDSLLthatsatisfies',i.e., its outputs P(i)satisfy all the corresponding constraints i.Example 1. Consider the task of formatting a phone number, characterized by the spec '=finputs : [“(612) 8729128 ”]g “612-872-9128 ”. It has a single input-output example,with an input state containing a single variable inputs and its value which is a list with a singleinput string. The output constraint is simply the desired program result.The program the user is most likely looking for is the one that extracts (a) the part of the inputenclosed in the first pair of parentheses, (b) the 7thto 4thcharacters from the end, and (c) the last 4characters, and then concatenates all three parts using hyphens. In our DSL, this corresponds to:ConcatSubStr 0(RegexPosition (x;h“(”;"i;0);RegexPosition (x;h";“)”i;0));ConstStr (“-”);SubStr 0(AbsolutePosition (x;8);AbsolutePosition (x;5));ConstStr (“-”);SubStr 0(AbsolutePosition (x;5);AbsolutePosition (x;1))where"is an empty regex, SubStr 0(pos 1;pos 2)is an abbreviation for “ letx=std:Kth(inputs; 0)inSubstring (x;hpos 1;pos 2i)”, andhiis an abbreviation for std:Pair.However, many other programs in the DSL also satisfy '. 
However, many other programs in the DSL also satisfy φ. For instance, all occurrences of "8" in the output can be produced via a subprogram that simply extracts the last character. Such a program overfits to φ and is bound to fail for other inputs where the last character and the 4th one differ.

    // Nonterminals
    @start string transform := atom | Concat(atom, transform);
    string atom := ConstStr(s)
        | let string x = std.Kth(inputs, k) in Substring(x, pp);
    Tuple<int, int> pp := std.Pair(pos, pos) | RegexOccurrence(x, r, k);
    int pos := AbsolutePosition(x, k) | RegexPosition(x, std.Pair(r, r), k);
    // Terminals
    @input string[] inputs; string s; int k; Regex r;

Figure 2: A subset of the FlashFill DSL (Gulwani, 2011), used as a running example in this paper. Every program takes as input a list of strings inputs, and returns an output string, a concatenation of atoms. Each atom is either a constant or a substring of one of the inputs (x), extracted using some position logic. The RegexOccurrence position logic finds the kth occurrence of a regex r in x and returns its boundaries. Alternatively, start and end positions can be selected independently either as absolute indices in x from left or right (AbsolutePosition) or as the kth occurrence of a pair of regexes surrounding the position (RegexPosition). See Gulwani (2011) for an in-depth DSL description.

As Example 1 shows, typical real-life problems are severely underspecified. A DSL like FlashFill may contain up to 10^20 programs that satisfy a given spec of 1–3 input-output examples (Polozov & Gulwani, 2015). Therefore, the main challenge lies in finding a program that not only satisfies the provided input-output examples but also generalizes to unseen inputs. Thus, the synthesis process usually interleaves search and ranking: the search phase finds a set of spec-satisfying programs in the DSL, from which the ranking phase selects top programs ordered using a domain-specific ranking function h : L × Σ̃ → R, where Σ is the set of all input states. The ranking function takes as input a candidate program P ∈ L and a set of input states σ̃ ∈ Σ̃ (usually σ̃ = inputs in the given spec + any available unlabeled inputs), and produces a score for P's generalization.

The implementation of h expresses a subtle balance between program generality, complexity, and behavior on available inputs. For instance, in FlashFill h penalizes overly specific regexes, prefers programs that produce fewer empty outputs, and prioritizes lower Kolmogorov complexity, among other features. In modern PBE systems like PROSE, h is usually learned in a data-driven manner from customer tasks (Singh & Gulwani, 2015; Ellis & Gulwani, 2017). While designing and learning such a ranking is an interesting problem in itself, in this work we assume black-box access to h. Finally, the problem of inductive program synthesis can be summarized as follows:

Problem 1. Given a DSL L, a ranking function h, a spec φ = {σi ⇝ ψi}, i = 1, ..., m, optionally a set of unlabeled inputs σ̃u, and a target number of programs K, let σ̃ = σ̃u ∪ {σi}. The goal of inductive program synthesis is to find a program set S = {P1, ..., PK} ⊆ L such that (a) every program in S satisfies φ, and (b) the programs in S generalize best: h(Pi, σ̃) ≥ h(P, σ̃) for any other P ∈ L that satisfies φ.
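As a minimal illustration of the ingredients in Problem 1 (our own toy representation, not PROSE's actual data structures), a spec can be held as input-state/output pairs, satisfaction checked by execution, and the result set chosen with the ranking function:

```python
from typing import Callable, Dict, List, Tuple

InputState = Dict[str, object]
Spec = List[Tuple[InputState, str]]   # constraints sigma_i ~> psi_i

def satisfies(program: Callable[..., str], spec: Spec) -> bool:
    # P satisfies phi iff P(sigma_i) meets every output constraint psi_i.
    return all(program(**state) == output for state, output in spec)

def synthesize(candidates, spec: Spec, h: Callable, states, k: int):
    # Problem 1: keep phi-satisfying programs, rank them by h, return top K.
    valid = [p for p in candidates if satisfies(p, spec)]
    return sorted(valid, key=lambda p: h(p, states), reverse=True)[:k]
```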
Search Strategy The deductive search strategy for program synthesis, employed by PROSE, explores the grammar of L top-down – iteratively unrolling the productions into partial programs starting from the root symbol. Following the divide-and-conquer paradigm, at each step it reduces its synthesis problem to smaller subproblems defined over the parameters of the current production. Formally, given a spec φ and a symbol N, PROSE computes the set Learn(N, φ) of top programs w.r.t. h using two guiding principles:

1. If N is defined through n productions N := F1(...) | ... | Fn(...), PROSE finds a φ-satisfying program set for every Fi, and unites the results, i.e., Learn(N, φ) = ∪i Learn(Fi(...), φ).
2. For a given production N := F(N1, ..., Nk), PROSE spawns off k smaller synthesis problems Learn(Nj, φj), 1 ≤ j ≤ k, wherein PROSE deduces necessary and sufficient specs φj for each Nj such that every program of type F(P1, ..., Pk), where Pj ∈ Learn(Nj, φj), satisfies φ. The deduction logic (called a witness function) is domain-specific for each operator F. PROSE then again recursively solves each subproblem and unites a cross-product of the results.

Example 2. Consider a spec φ = {"Yann" ⇝ "Y.L"} on a transform program. Via the first production transform := atom, the only φ-satisfying program is ConstStr("Y.L"). The second production on the same level is Concat(atom, transform). A necessary & sufficient spec on the atom sub-program is that it should produce some prefix of the output string. Thus, the witness function for the Concat operator produces a disjunctive spec φa = {"Yann" ⇝ "Y" ∨ "Y."}. Each of these disjuncts, in turn, induces a corresponding necessary and sufficient suffix spec on the second parameter: φt1 = {"Yann" ⇝ ".L"}, and φt2 = {"Yann" ⇝ "L"}, respectively. The disjuncts in φa will be recursively satisfied by different program sets: "Y." can only be produced via an atom path with a ConstStr program, whereas "Y" can also be extracted from the input using many Substring logics (their generalization capabilities vary). Figure 3 shows the resulting search DAG.

[Figure 3: A portion of the search DAG from Example 2. Only the output parts of the respective specs are shown in each node; their common input state is a single string "Yann". Dashed arrows show recursive Learn calls on a corresponding DSL symbol.]

Notice that the above mentioned principles create logical non-determinism due to which we might need to explore multiple alternatives in a search tree. As such non-determinism arises at every level of the DSL with potentially any operator, the search tree (and the resulting search process) is exponential in size. While all the branches of the tree by construction produce programs that satisfy the given spec, most of the branches do not contribute to the overall top-ranked generalizable program. During deductive search, PROSE has limited information about the programs potentially produced from each branch, and cannot estimate their quality, thus exploring the entire tree unnecessarily. Our main contribution is a neural-guided search algorithm that predicts the best program scores from each branch, and allows PROSE to omit branches that are unlikely to produce the desired program a priori.
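To make the two guiding principles and Example 2 concrete, here is a toy, self-contained sketch of deductive search for just the transform symbol. It is entirely our own simplification: atoms are reduced to constants and literal substrings, and the Concat "witness function" simply enumerates prefix/suffix splits of the output.

```python
def learn_atom(inp: str, out: str):
    # atom := ConstStr(s) | Substring(...): union over productions (principle 1).
    programs = [("ConstStr", out)]
    if out in inp:
        programs.append(("Substring", inp.index(out), len(out)))
    return programs

def learn_transform(inp: str, out: str):
    # transform := atom | Concat(atom, transform).
    programs = [("Atom", p) for p in learn_atom(inp, out)]
    # Witness function for Concat (principle 2): the atom must yield some
    # proper prefix of the output; the rest becomes a suffix spec for the
    # recursive transform sub-problem.
    for cut in range(1, len(out)):
        prefix, suffix = out[:cut], out[cut:]
        for a in learn_atom(inp, prefix):
            for t in learn_transform(inp, suffix):
                programs.append(("Concat", a, t))
    return programs

# Example 2: the spec {"Yann" ~> "Y.L"} induces the disjunctive sub-specs
# "Y" / "Y." for atom and ".L" / "L" for the remaining transform.
print(len(learn_transform("Yann", "Y.L")))
```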
3 SYNTHESIS ALGORITHM

Consider an arbitrary branching moment in the top-down search strategy of PROSE. For example, let N be a nonterminal symbol in L, defined through a set of productions N := F1(...) | ... | Fn(...), and let φ be a spec on N, constructed earlier during the recursive descent over L. A conservative way to select the top k programs rooted at N (as defined by the ranking function h), i.e., to compute Learn(N, φ), is to learn the top k programs of kind Fi(...) for all i ∈ [n] and then select the top k programs overall from the union of program sets learned for each production. Naturally, exploring all the branches for each nonterminal in the search tree is computationally expensive.

In this work, we propose a data-driven method to select an appropriate production rule N := Fi(N1, ..., Nk) that would most likely lead to a top-ranked program. To this end, we use the current spec φ to determine the "optimal" rule. Now, it might seem unintuitive that even without exploring a production rule and finding the best program in the corresponding program set, we can a priori determine optimality of that rule. However, we argue that by understanding φ and its relationship with the ranking function h, we can predict the intended branch in many real-life scenarios.

Example 3. Consider a spec φ = {"alice" ⇝ "alice@iclr.org", "bob" ⇝ "bob@iclr.org"}. While learning a program in L given by Figure 2 that satisfies φ, it is clear right at the beginning of the search procedure that the rule transform := atom does not apply. This is because any programs derived from transform := atom can either extract a substring from the input or return a constant string, both of which fail to produce the desired output. Hence, we should only consider transform := Concat(...), thus significantly reducing the search space.

Similarly, consider another spec φ = {"alice smith" ⇝ "alice", "bob jones" ⇝ "bob"}. In this case, the output appears to be a substring of the input, thus selecting transform := atom at the beginning of the search procedure is a better option than transform := Concat(...).

However, many such decisions are more subtle and depend on the ranking function h itself. For example, consider a spec φ = {"alice liddell" ⇝ "al", "bob ong" ⇝ "bo"}. Now, both transform := atom and transform := Concat(...) may lead to viable programs because the output can be constructed using the first two letters of the input (i.e. a substring atom) or by concatenating the first letters of each word. Hence, the branch that produces the best program is ultimately determined by the ranking function h since both branches generate valid programs.
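The intuition behind Example 3 can be read as cheap, spec-level features. The toy check below is purely our illustration of such a feature — NGDS learns this relationship from data rather than hard-coding it:

```python
def atom_branch_signal(spec):
    """Crude a-priori signal for transform := atom vs. Concat (illustrative)."""
    contained = [out in inp for inp, out in spec]
    if all(contained):
        return "atom promising: every output is a substring of its input"
    if not any(contained):
        return "atom impossible: explore Concat only"
    return "ambiguous: defer to the learned score model and the ranking h"

print(atom_branch_signal([("alice", "alice@iclr.org"),
                          ("bob", "bob@iclr.org")]))      # Concat only
print(atom_branch_signal([("alice smith", "alice"),
                          ("bob jones", "bob")]))         # atom promising
```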
Example 3 shows that to design a data-driven search strategy for branch selection, we need to learn the subtle relationship between φ, h, and the candidate branch. Below, we provide one such model.

3.1 PREDICTING THE GENERALIZATION SCORE

As mentioned above, our goal is to predict one or more production rules that for a given spec φ will lead to a top-ranked program (as ranked a posteriori by h). Formally, given black-box access to h, we want to learn a function f such that

    f(Γ, φ) ≈ max_{P ∈ S(Γ, φ)} h(P, φ),

where Γ is a production rule in L, and S(Γ, φ) is a program set of all DSL programs derived from the rule Γ that satisfy φ. In other words, we want to predict the score of the top-ranked φ-satisfying program that is synthesized by unrolling the rule Γ. We assume that the symbolic search of PROSE handles the construction of S(Γ, φ) and ensures that programs in it satisfy φ by construction. The goal of f is to optimize the score of a program derived from Γ assuming this program is valid. If no program derived from Γ can satisfy φ, f should return −∞. Note that, drawing upon observations mentioned in Section 1, we have cast the production selection problem as a supervised learning problem, thus simplifying the learning task as opposed to an end-to-end reinforcement learning solution.

We have evaluated two models for learning f. The loss function for the prediction is given by:

    L(f; Γ, φ) = (f(Γ, φ) − max_{P ∈ S(Γ, φ)} h(P, φ))².

Figure 4 shows a common structure of both models we have evaluated. Both are based on a standard multi-layer LSTM architecture (Hochreiter & Schmidhuber, 1997) and involve (a) embedding the given spec φ, (b) encoding the given production rule Γ, and (c) a feed-forward network to output a score f(Γ, φ). One model attends over the input when it encodes the output, whereas another does not.

[Figure 4: LSTM-based model for predicting the score of a candidate production for a given spec φ. Character embeddings of the input state and the output example(s) feed two LSTM encoders; their outputs, together with the production rule embedding, pass through two fully-connected layers to produce the predicted score.]

3.2 CONTROLLER FOR BRANCH SELECTION

A score model f alone is insufficient to perfectly predict the branches that should be explored at every level. Consider again a branching decision moment N := F1(...) | ... | Fn(...) in a search process for top k programs satisfying a spec φ. One naïve approach to using the predictions of f is to always follow the highest-scored production rule argmax_i f(Fi, φ). However, this means that any single incorrect decision on the path from the DSL root to the desired program will eliminate that program from the learned program set. If our search algorithm fails to produce the desired program by committing to a suboptimal branch anytime during the search process, then the user may never discover that such a program exists unless they supply an additional input-output example.

Thus, a branch selection strategy based on the predictions of f must balance a trade-off of performance and generalization. Selecting too few branches (a single best branch in the extreme case) risks committing to an incorrect path early in the search process and producing a suboptimal program or no program at all. Selecting too many branches (all n branches in the extreme case) is no different from baseline PROSE and fails to exploit the predictions of f to improve its performance.

Formally, a controller for branch selection at a symbol N := F1(...) | ... | Fn(...) targeting k best programs must (a) predict the expected score of the best program from each program set: si = f(Fi, φ) for all 1 ≤ i ≤ n; and (b) use the predicted scores si to narrow down the set of productions F1, ..., Fn to explore and to obtain the overall result by selecting a subset of generated programs. In this work, we propose and evaluate two controllers. Their pseudocode is shown in Figure 5.

    function ThresholdBased(φ, h, k, s1, ..., sn)
    1: Result set S ← []
    2: i* ← argmax_i si
    3: for all 1 ≤ i ≤ n do
    4:   if |si* − si| ≤ θ then  // Recursive search
    5:     S += Learn(Fi, φ, k)
    6: return the top k programs of S w.r.t. h

    function BnBBased(φ, h, k, s1, ..., sn)
    1: Result set S ← []; Program target k′ ← k
    2: Reorder Fi in the descending order of si
    3: for all 1 ≤ i ≤ n do
    4:   Si ← Learn(Fi, φ, k′)  // Recursive search
    5:   j ← BinarySearch(si+1, Map(h, Si))
    6:   S = S ∪ Si[0..j]; k′ ← k′ − j
    7:   if k′ ≤ 0 then break
    8: return S

Figure 5: The controllers for guiding the search process to construct a most generalizable φ-satisfying program set S of size k given the f-predicted best scores s1, ..., sn of the productions F1, ..., Fn.
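A minimal executable rendition of the Figure 5 controllers follows. It is our own sketch: `learn` stands in for the recursive symbolic search, `h` scores a single program, and the binary search over the score-sorted list is simplified to a linear filter.

```python
def threshold_based(phi, h, k, scores, productions, learn, theta=0.1):
    # Explore every branch whose predicted score is within theta of the best.
    best = max(scores)
    results = []
    for s_i, prod in zip(scores, productions):
        if best - s_i <= theta:
            results += learn(prod, phi, k)          # recursive search
    return sorted(results, key=h, reverse=True)[:k]

def bnb_based(phi, h, k, scores, productions, learn):
    # Branch & bound: the next branch's prediction bounds what we keep.
    order = sorted(zip(scores, productions), key=lambda t: t[0], reverse=True)
    results, k_left = [], k
    for i, (s_i, prod) in enumerate(order):
        programs = sorted(learn(prod, phi, k_left), key=h, reverse=True)
        bound = order[i + 1][0] if i + 1 < len(order) else float("-inf")
        kept = [p for p in programs if h(p) >= bound]   # programs beating it
        results += kept
        k_left -= len(kept)
        if k_left <= 0:
            break
    return results
```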
    Given: DSL L, ranking function h, controller C from Figure 5 (ThresholdBased or BnBBased),
    symbolic search algorithm Learn(production rule Γ, spec φ, target k) as in PROSE
    (Polozov & Gulwani, 2015, Figure 7) with all recursive calls to Learn replaced with LearnNGDS

    function LearnNGDS(Symbol N := F1(...) | ... | Fn(...), spec φ, target number of programs k)
    1: if n = 1 then return Learn(F1, φ, k)
    2: Pick a score model f based on depth(N, L)
    3: s1, ..., sn ← f(F1, φ), ..., f(Fn, φ)
    4: return C(φ, h, k, s1, ..., sn)

Figure 6: Neural-guided deductive search over L, parameterized with a branch selection controller C.

Threshold-based: Fix a score threshold θ, and explore those branches whose predicted score differs by at most θ from the maximum predicted score. This is a simple extension of the naïve "argmax" controller discussed earlier that also explores any branches that are predicted "approximately as good as the best one". When θ = 0, it reduces to the "argmax" one.

Branch & Bound: This controller is based on the "branch & bound" technique in combinatorial optimization (Clausen, 1999). Assume the branches Fi are ordered in the descending order of their respective predicted scores si. After recursive learning produces its program set Si, the controller proceeds to the next branch only if si+1 exceeds the score of the worst program in Si. Moreover, it reduces the target number of programs to be learned, using si+1 as a lower bound on the scores of the programs in Si. That is, rather than relying blindly on the predicted scores, the controller guides the remaining search process by accounting for the actual synthesized programs as well.

3.3 NEURAL-GUIDED DEDUCTIVE SEARCH

We now combine the above components to present our unified algorithm for program synthesis. It builds upon the deductive search of the PROSE system, which uses symbolic PL insights in the form of witness functions to construct and narrow down the search space, and a ranking function h to pick the most generalizable program from the found set of spec-satisfying ones. However, it significantly speeds up the search process by guiding it a priori at each branching decision using the learned score model f and a branch selection controller, outlined in Sections 3.1 and 3.2. The resulting neural-guided deductive search (NGDS) keeps the symbolic insights that construct the search tree, ensuring correctness of the found programs, but explores only those branches of this tree that are likely to produce the user-intended generalizable program, thus eliminating unproductive search time.

A key idea in NGDS is that the score prediction model f does not have to be the same for all decisions in the search process. It is possible to train separate models for different DSL levels, symbols, or even productions. This allows the model to use different features of the input-output spec for evaluating the fitness of different productions, and also leads to much simpler supervised learning problems.

Figure 6 shows the pseudocode of NGDS. It builds upon the deductive search of PROSE, but augments every branching decision on a symbol with some branch selection controller from Section 3.2.
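Figure 6's wrapper is small enough to render directly. Again this is our own schematic (score models indexed by search depth in a plain dictionary, `learn` standing in for PROSE's symbolic search); it composes with the `threshold_based`/`bnb_based` sketches above:

```python
def learn_ngds(symbol, phi, k, grammar, score_models, controller, learn, h,
               depth=0):
    productions = grammar[symbol]
    if len(productions) == 1:                    # line 1: nothing to select
        return learn(productions[0], phi, k)
    f = score_models[depth]                      # line 2: model by DSL depth
    scores = [f(prod, phi) for prod in productions]            # line 3
    return controller(phi, h, k, scores, productions, learn)   # line 4
```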
We present a comprehensive evaluation of different strategies in Section 4.

    Metric               PROSE   DC1    DC2    DC3    RF1    RF2    RF3    NGDS
    Accuracy (% of 73)   67.12   35.81  47.38  62.92  24.53  39.72  56.41  68.49
    Speed-up (× PROSE)   1.00    1.82   1.53   1.42   0.25   0.27   0.30   1.67

Table 1: Accuracy and average speed-up of NGDS vs. baseline methods. Accuracies are computed on a test set of 73 tasks. Speed-up of a method is the geometric mean of its per-task speed-up (ratio of synthesis time of PROSE and of the method) when restricted to a subset of tasks where PROSE's synthesis time is ≥ 0.5 sec.

4 EVALUATION

In this section, we evaluate our NGDS algorithm over the string manipulation domain with a DSL given by Figure 2; see Figure 1 for an example task. We evaluate NGDS, its ablations, and baseline techniques on two key metrics: (a) generalization accuracy on unseen inputs, (b) synthesis time.

Dataset. We use a dataset of 375 tasks collected from real-world customer string manipulation problems, split into 65% training, 15% validation, and 20% test data. Some of the common applications found in our dataset include date/time formatting, manipulating addresses, modifying names, automatically generating email IDs, etc. Each task contains about 10 inputs, of which only one is provided as the spec to the synthesis system, mimicking industrial applications. The remaining unseen examples are used to evaluate generalization performance of the synthesized programs. After running synthesis of top-1 programs with PROSE on all training tasks, we have collected a dataset of 400,000+ intermediate search decisions, i.e. triples ⟨production Γ, spec φ, a posteriori best score h(P, φ)⟩.

Baselines. We compare our method against two state-of-the-art neural synthesis algorithms: RobustFill (Devlin et al., 2017) and DeepCoder (Balog et al., 2017). For RobustFill, we use the best-performing Attention-C model and use their recommended DP-Beam Search with a beam size of 100 as it seems to perform the best; Table 3 in Appendix A presents results with different beam sizes. As in the original work, we select the top-1 program ranked according to the generated log-likelihood. DeepCoder is a generic framework that allows their neural predictions to be combined with any program synthesis method. So, for fair comparison, we combine DeepCoder's predictions with PROSE. We train the DeepCoder model to predict a distribution over L's operators and, as proposed, use it to guide PROSE synthesis. Since both RobustFill and DeepCoder are trained on randomly sampled programs and are not optimized for generalization in the real world, we include their variants trained with 2 or 3 examples (denoted RFm and DCm) for fairness, although m = 1 example is the most important scenario in real-life industrial usage.

Ablations. As mentioned in Section 3, our novel usage of score predictors to guide the search enables us to have multiple prediction models and controllers at various stages of the synthesis process. Here we investigate ablations of our approach with models that specialize in predictions for individual levels in the search process. The model T1 is trained for symbol transform (Figure 2) when expanded in the first level. Similarly, PP and POS refer to models trained for the pp and pos symbols, respectively.
Finally, we train all our LSTM-based models with CNTK (Seide & Agarwal, 2016) using Adam (Kingma & Ba, 2014) with a learning rate of 10⁻² and a batch size of 32, using early stopping on the validation loss to select the best performing model (thus, 100–600 epochs).

We also evaluate three controllers: the threshold-based (Thr) and branch-and-bound (BB) controllers given in Figure 5, and a combination of them – branch-and-bound with a 0.2-threshold predecessor (BB0.2). In Tables 1 and 2 we denote different model combinations as NGDS(f, C) where f is a symbol-based model and C is a controller. The final algorithm selection depends on its accuracy-performance trade-off. In Table 1, we use NGDS(T1 + POS, BB), the best performing algorithm on the test set, although NGDS(T1, BB) performs slightly better on the validation set.

Evaluation Metrics. Generalization accuracy is the percentage of test tasks for which the generated program satisfies all unseen inputs in the task. Synthesis time is measured as the wall-clock time taken by a synthesis method to find the correct program, median over 5 runs. We run all the methods on the same machine with a 2.3 GHz Intel Xeon processor, 64 GB of RAM, and Windows Server 2016.

Results. Table 1 presents generalization accuracy as well as synthesis time speed-up of various methods w.r.t. PROSE. As we strive to provide real-time synthesis, we only compare the times for tasks which require PROSE more than 0.5 sec. Note that, with one example, NGDS and PROSE are significantly more accurate than RobustFill and DeepCoder. This is natural as those methods are not trained to optimize generalization, but it also highlights the advantage of a close integration with a symbolic system (PROSE) that incorporates deep domain knowledge. Moreover, on average, our method saves more than 50% of synthesis time over PROSE. While DeepCoder with one example speeds up the synthesis even more, it does so at the expense of accuracy, eliminating branches with correct programs in 65% of tasks.

    Method                   Validation              Test                 % of branches
                             Accuracy   Speed-up     Accuracy   Speed-up
    PROSE                    70.21      1            67.12      1         100.00
    NGDS(T1, Thr)            59.57      1.15         67.12      1.27      62.72
    NGDS(T1, BB)             63.83      1.58         68.49      1.22      51.78
    NGDS(T1, BB0.2)          61.70      1.03         67.12      1.22      63.16
    NGDS(T1+PP, Thr)         59.57      0.76         67.12      0.97      56.41
    NGDS(T1+PP, BB)          61.70      1.05         72.60      0.89      50.22
    NGDS(T1+PP, BB0.2)       61.70      0.72         67.12      0.86      56.43
    NGDS(T1+POS, Thr)        61.70      1.19         67.12      1.93      55.63
    NGDS(T1+POS, BB)         63.83      1.13         68.49      1.67      50.44
    NGDS(T1+POS, BB0.2)      63.83      1.19         67.12      1.73      55.73

Table 2: Accuracies, mean speed-ups, and % of branches taken for different ablations of NGDS.

Table 2 presents the speed-up obtained by variations of our models and controllers. In addition to generalization accuracy and synthesis speed-up, we also show the fraction of branches that were selected for exploration by the controller. Our method obtains an impressive speed-up of ≥1.5× in 22 cases. One such test case where we obtain a 12× speed-up is a simple extraction case which is fairly common in Web mining: {"alpha,beta,charlie,delta" ⇝ "alpha"}. For such cases, our model determines transform := atom to be the correct branch (that leads to the final Substring-based program) and hence saves the time required to explore the entire Concat operator, which is expensive. Another interesting test case where we observe a 2.7× speed-up is: {"457 124th St S, Seattle, WA 98111" ⇝ "Seattle-WA"}. This test case involves learning a Concat operator initially, followed by Substring and RegexPosition operators.
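The speed-up metric in Tables 1 and 2 — a geometric mean of per-task ratios over tasks where PROSE itself takes at least 0.5 s — is easy to conflate with an arithmetic mean; below is a small sketch of the computation as we read the definition (our own code):

```python
import math

def mean_speedup(prose_times, method_times, min_time=0.5):
    # Geometric mean of per-task speed-ups (PROSE time / method time),
    # restricted to tasks where PROSE takes >= min_time seconds.
    ratios = [p / m for p, m in zip(prose_times, method_times)
              if p >= min_time]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

print(mean_speedup([1.0, 2.0, 0.1], [0.5, 1.0, 0.1]))  # -> 2.0
```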
Appendix B includes a comprehensive table of NGDS performance on all the validation and test tasks.

All the models in Table 2 run without attention. As measured by score-flip accuracies (i.e. the percentage of correct orderings of branch scores on the same level), attention-based models perform best, achieving 99.57 / 90.4 / 96.4% accuracy on train/validation/test, respectively (as compared to 96.09 / 91.24 / 91.12% for non-attention models). However, an attention-based model is significantly more computationally expensive at prediction time. Evaluating it dominates the synthesis time and eliminates any potential speed-ups. Thus, we decided to forgo attention in initial NGDS and investigate model compression/binarization in future work.

Error Analysis. As Appendix B shows, NGDS is slower than PROSE on some tasks. This occurs when the predictions do not satisfy the constraints of the controller, i.e. all the predicted scores are within the threshold or they violate the actual scores during B&B exploration. This leads to NGDS evaluating the LSTM for branches that were previously pruned. This is especially harmful when branches pruned out at the very beginning of the search need to be reconsidered – as it could lead to evaluating the neural network many times. While a single evaluation of the network is quick, a search tree involves many evaluations, and when the performance of PROSE is already < 1 s, this results in considerable relative slowdown. We provide two examples to illustrate both failure modes:

(a) {"41.7114830017,-91.41233825683,41.60762786865,-91.63739013671" ⇝ "41.7114830017"}. The intended program is a simple substring extraction. However, at depth 1, the predicted score of Concat is much higher than the predicted score of Atom, and thus NGDS explores only the Concat branch. The found Concat program is incorrect because it uses absolute position indexes and does not generalize to other similar extraction tasks. We found this scenario common with punctuation in the output string, which the model considers a strong signal for Concat.

(b) {"type size = 36: Bartok.Analysis.CallGraphNode type size = 32: Bartok.Analysis.CallGraphNode CallGraphNode" ⇝ "36->32"}. In this case, NGDS correctly explores only the Concat branch, but the slowdown happens at the pos symbol. There are many different logics to extract the "36" and "32" substrings. NGDS explores the RelativePosition branch first, but the score of the resulting program is less than the prediction for RegexPositionRelative. Thus, the B&B controller explores both branches anyway, which leads to a relative slowdown caused by the network evaluation time.

5 RELATED WORK

Neural Program Induction systems synthesize a program by training a new neural network model to map the example inputs to example outputs (Graves et al., 2014; Reed & De Freitas, 2016; Zaremba et al., 2016). Examples include Neural Turing Machines (Graves et al., 2014) that can learn simple programs like copying/sorting, the work of Kaiser & Sutskever (2015) that can perform more complex computations like binary multiplications, and more recent work of Cai et al. (2017) that can incorporate recursions. While we are interested in ultimately producing the right output, all these models need to be re-trained for a given problem type, thus making them unsuitable for real-life synthesis of different programs with few examples.

Neural Program Synthesis systems synthesize a program in a given L with a pre-learned neural network. Seminal works of Bosnjak et al. (2017) and Gaunt et al.
(2016) proposed first producing a high-level sketch of the program using procedural knowledge, and then synthesizing the program by combining the sketch with a neural or enumerative synthesis engine. In contrast, R3NN (Parisotto et al., 2016) and RobustFill (Devlin et al., 2017) systems synthesize the program end-to-end using a neural network; Devlin et al. (2017) show that RobustFill in fact outperforms R3NN. However, RobustFill does not guarantee generation of spec-satisfying programs and often requires more than one example to find the intended program. In fact, our empirical evaluation (Section 4) shows that our hybrid synthesis approach significantly outperforms the purely statistical approach of RobustFill.

DeepCoder (Balog et al., 2017) is also a hybrid synthesis system that guides enumerative program synthesis by prioritizing DSL operators according to a spec-driven likelihood distribution on the same. However, NGDS differs from DeepCoder in two important ways: (a) it guides the search process at each recursive level in a top-down goal-oriented enumeration and thus reshapes the search tree, (b) it is trained on real-world data instead of random programs, thus achieving better generalization.

Symbolic Program Synthesis has been studied extensively in the PL community (Gulwani et al., 2017; Alur et al., 2013), dating back as far as the 1960s (Waldinger & Lee, 1969). Most approaches employ either bottom-up enumerative search (Udupa et al., 2013), constraint solving (Torlak & Bodik, 2013), or inductive logic programming (Lin et al., 2014), and thus scale poorly to real-world industrial applications (e.g. data wrangling applications). In this work, we build upon deductive search, first studied for synthesis by Manna & Waldinger (1971), and primarily used for program synthesis from formal logical specifications (Puschel et al., 2005; Chaudhari & Damani, 2015). Gulwani (2011) and later Polozov & Gulwani (2015) used it to build PROSE, a commercially successful domain-agnostic system for PBE. While its deductive search guarantees program correctness and also good generalization via an accurate ranking function, it still takes several seconds on complex tasks. Thus, speeding up deductive search requires considerable engineering to develop manual heuristics. NGDS instead integrates neural-driven predictions at each level of deductive search to alleviate this drawback. The work of Loos et al. (2017) represents the closest work with a similar technique, but their work is applied to an automated theorem prover, and hence need not care about generalization. In contrast, NGDS guides the search toward generalizable programs while relying on the underlying symbolic engine to generate correct programs.

6 CONCLUSION

We studied the problem of real-time program synthesis with a small number of input-output examples. For this problem, we proposed a neural-guided system that builds upon PROSE, a state-of-the-art symbolic logic based system. Our system avoids top-down enumerative grammar exploration required by PROSE, thus providing impressive synthesis performance while still retaining key advantages of a deductive system.
That is, compared to existing neural synthesis techniques, our system enjoys the following advantages: a) correctness: programs generated by our system are guaranteed to satisfy the given input-output specification, b) generalization: our system learns the user-intended program with just one input-output example in around 60% of test cases while existing neural systems learn such a program in only 16% of test cases, c) synthesis time: our system can solve most of the test cases in less than 0.1 sec and provides impressive performance gains over both neural as well as symbolic systems.

The key take-home message of this work is that a deep integration of a symbolic deductive inference based system with statistical techniques leads to the best of both worlds, where we can avoid the extensive engineering effort required by symbolic systems without compromising the quality of generated programs, and at the same time provide significant performance (when measured as synthesis time) gains. For future work, exploring better learning models for production rule selection and applying our technique to diverse and more powerful grammars should be important research directions.<|im_end|> <|im_start|>assistant ### Review Title Incremental paper but well-written ### Review Text This paper extends and speeds up PROSE, a programming-by-example system, by posing the selection of the next production rule in the grammar as a supervised learning problem. This paper requires a large amount of background knowledge, as it depends on understanding program synthesis as it is done in the programming languages community. Moreover, the work mentions a neurally-guided search, but little time is spent on that portion of their contribution. I am not even clear how their system is trained. The experimental results do show the programs can be faster, but only if the user is willing to suffer a loss in accuracy. It is difficult to conclude overall whether the technique helps in synthesis. ### Review Rating 6: Marginally above acceptance threshold ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
D5lK-IW_xS
MIDL.io/2020/Conference
2020
Model Averaging and Augmented Inference for Stable Echocardiography Segmentation using 2D ConvNets
["Joshua V. Stough"]
The automatic segmentation of heart substructures in 2D echocardiography images is a goal common to both clinicians and researchers. Convolutional neural networks (CNNs) have recently shown the best average performance. However, on the rare occasions that a trained CNN fails, it can fail spectacularly. To mitigate these errors, in this work we develop and validate two easily implementable schemes for regularizing performance in 2D CNNs: model averaging and augmented inference. Model averaging involves training multiple instances of a CNN with data augmentation over a sampled training set. Augmented inference involves accumulating network output over augmentations of the test image. Using the recently released CAMUS echocardiography dataset, we show significant incremental improvement in outlier performance over the baseline model. These encouraging results must still be validated against independent clinical data.
["Convolutional Neural Networks", "Echocardiography", "Segmentation", "Data Augmentation"]
Medical Imaging with Deep Learning – Under Review 2020 Short Paper – MIDL 2020 submission

Model Averaging and Augmented Inference for Stable Echocardiography Segmentation using 2D ConvNets

Author(s) names withheld; email(s) withheld; address withheld
Editors: Under Review for MIDL 2020

Abstract
The automatic segmentation of heart substructures in 2D echocardiography images is a goal common to both clinicians and researchers. Convolutional neural networks (CNNs) have recently shown the best average performance. However, on the rare occasions that a trained CNN fails, it can fail spectacularly. To mitigate these errors, in this work we develop and validate two easily implementable schemes for regularizing performance in 2D CNNs: model averaging and augmented inference. Model averaging involves training multiple instances of a CNN with data augmentation over a sampled training set. Augmented inference involves accumulating network output over augmentations of the test image. Using the recently released CAMUS echocardiography dataset, we show significant incremental improvement in outlier performance over the baseline model. These encouraging results must still be validated against independent clinical data.

Keywords: Convolutional Neural Networks, Echocardiography, Segmentation, Data Augmentation

1. Introduction
Echocardiography is a ubiquitous imaging modality for diagnosing and managing patients with cardiovascular disease (Virnig et al., 2014), a major cause of morbidity and mortality globally. Derived from the apical two- and four-chamber views (AP2/AP4) of an echo study, the left ventricular (LV) ejection fraction (EF) is the most common clinical index for measuring cardiac function. The time-consuming nature of the required manual delineations, and their high degree of inter-observer variability (Wood et al., 2014), has motivated the development of automatic techniques (Zhang et al., 2018).

Among many automatic segmentation methods that have been proposed in echo over decades (Noble and Boukerroui, 2006), convolutional neural networks (CNNs) have recently shown the most promise. In order to catalyze further development in this field, Leclerc et al. (2019) recently published the large annotated CAMUS dataset, providing expert manual annotations of hundreds of individual echo frames, needed for the supervised training of such models. The authors also tested numerous deep learning and prior segmentation techniques, reporting that deep CNNs produced the best results.

However, as pixel-wise classifiers without shape or topological constraints, typical CNNs can suffer from catastrophic failures, particularly in poor quality images or those with artifacts. While rare, these failures make CNNs not yet trustworthy for large-scale precision medicine applications using clinical data. To address such outliers, Oktay et al. (2017) proposed an anatomically constrained CNN in 3D echo, where the training is regularized by an additional loss based on compact encoding of the ground-truth labeled images. However, Leclerc et al. (2019) could not reproduce those benefits on the 2D CAMUS dataset.

[Figure 1: Dice distribution for each structure (left ventricle, LV epicardium, left atrium), by view (AP2/AP4) and phase (ED/ES). The left side of each pair represents a single trained model; the right, the 8-fold model average.]
Additionally, Qin et al. (2018) have integrated CNN-based segmentation with motion estimation in 3D cardiac magnetic resonance.

In this work we appropriate bootstrapping concepts to develop and validate two relatively practicable techniques for mitigating these outlier errors in 2D CNNs. The first is model averaging, in which a test image is segmented by multiple instances of a CNN trained with data augmentation over a sampled training set. The second technique is augmented inference, in which model output is accumulated over multiple augmentations of the test image. We use these techniques on the CAMUS dataset and show significant incremental improvement in outlier performance over the baseline model.

2. Methods
In this section we briefly describe our CNN model, data augmentation, evaluation, and experimental setup. Our model architecture is based on the popular U-net CNN (Ronneberger et al., 2015). With 13M parameters, the model uses convolutional down- and up-sampling, additive skip connections, and group normalization (Wu and He, 2018) for improved stability.

To help regularize output, we train all models with data augmentation reflecting the variability observed in the CAMUS set and echocardiography studies generally. The augmentations are performed on the fly and include random intensity windowing, rotation about the transducer, and additive Gaussian noise.

To evaluate performance, we report Dice overlap on the segmented 2D echo frames. For S_auto and S_ref representing the areas enclosed by the respective object contours, Dice overlap measures the intersection area divided by the average, D(S_auto, S_ref) = 2|S_auto ∩ S_ref| / (|S_auto| + |S_ref|). Dice is a highly validated measure in 2D.

The publicly-released CAMUS dataset consists of 450 patients, two (AP2/AP4) views per patient, and two annotated (diastolic/systolic, ED/ES) phases per view, totalling 1800 echo frames and corresponding label masks (background, LV endocardium LV_endo, LV epicardium LV_epi, and the left atrium LA). Additional information for each patient includes age, sex, and reported ED/ES LV volumes and EF, along with the observed image quality for each view.

We initially generated ten patient folds, stratified on both EF range (≤45%, ≥55%, else) and reported AP2 image quality (good, medium, poor), as suggested (Leclerc et al., 2019). We then excluded two folds for a test set totalling 90 patients (20%). We then performed 8-fold cross-validation training on the remaining patient folds: each iteration, the CNN is trained on seven folds while being validated against another for parameter optimization. Each view is trained separately, resulting in eight model instances per view that can generalize to the test patients.

3. Results
To evaluate model averaging, we compare the 8-fold accumulated inference to a baseline model of an arbitrarily chosen single fold. The box plots of Figure 1 clearly show that model averaging improves median performance and tightens the interquartile range across all structures, views, and phases, with a similar number of outliers ([−3, +2] out of 90). To evaluate augmented inference, we consider an outlier of the baseline model in Figure 2. We accumulate the model inferences over 200 augmentations of the echo frame, as inference is relatively inexpensive. The recorded rotational augmentations are inverted before accumulation. As a result of augmented inference, Dice scores are dramatically improved over single inference for all labels (LV_endo 0.69→0.83, LV_epi 0.80→0.95, LA 0.36→0.70).
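Both procedures evaluated above come down to a few lines of array code. The sketch below is our own numpy illustration — the `augment`/`invert` pair stands in for the paper's rotations about the transducer, and `model` for a trained network returning per-class probability maps; model averaging works the same way, summing over the eight trained instances instead of over augmentations:

```python
import numpy as np

def dice(s_auto: np.ndarray, s_ref: np.ndarray) -> float:
    # D = 2 |A ∩ B| / (|A| + |B|) on binary masks.
    inter = np.logical_and(s_auto, s_ref).sum()
    return 2.0 * inter / (s_auto.sum() + s_ref.sum())

def augmented_inference(model, image, augment, invert, n_aug=200, n_classes=4):
    # Accumulate class probabilities over augmented copies of the test
    # image, inverting each recorded augmentation before accumulation.
    acc = np.zeros((n_classes,) + image.shape)
    for _ in range(n_aug):
        aug_image, params = augment(image)   # e.g., rotation by a random angle
        probs = model(aug_image)             # (n_classes, H, W) softmax map
        acc += invert(probs, params)         # undo the rotation
    return acc.argmax(axis=0)                # accumulated label map
```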
4. Conclusions
Model averaging and augmented inference are relatively practicable methods that can significantly mitigate catastrophic errors in 2D CNNs. Model averaging significantly reduces interquartile ranges, while augmented inference may dramatically improve segmentations of outlier test images, such as those with imaging artifacts. Future work revolves around incorporating video information and generalizing to other clinical datasets.

[Figure 2: Augmented inference on a test case. Center frame: baseline model performance on a test image (LV_endo yellow, LV_epi magenta, LA blue, ground truth red). Right frame: accumulated performance of augmented inferences of the same model.]

References

C. Qin, W. Bai, J. Schlemper, S. E. Petersen, S. K. Piechnik, S. Neubauer, and D. Rueckert. Joint learning of motion estimation and segmentation for cardiac MR image sequences. In Proc. Int. Conf. on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2018. https://doi.org/10.1007/978-3-030-00934-2_53

Sarah Leclerc, Erik Smistad, João Pedrosa, Andreas Østvik, Frederic Cervenansky, Florian Espinosa, Torvald Espeland, Erik Andreas Rye Berg, Pierre-Marc Jodoin, Thomas Grenier, Carole Lartizien, Jan D'hooge, Lasse Lovstakken, and Olivier Bernard. Deep learning for segmentation using an open large-scale dataset in 2D echocardiography. IEEE Trans Med Imaging, 2019. https://doi.org/10.1109/TMI.2019.2900516; https://www.creatis.insa-lyon.fr/Challenge/camus/index.html

J. A. Noble and D. Boukerroui. Ultrasound image segmentation: a survey. IEEE Trans Med Imaging, 25:987–1010, 2006. https://www.ncbi.nlm.nih.gov/pubmed/16894993

Ozan Oktay, Enzo Ferrante, Konstantinos Kamnitsas, Mattias Heinrich, Wenjia Bai, Jose Caballero, Stuart Cook, Antonio de Marvao, Timothy Dawes, Declan O'Regan, Bernhard Kainz, Ben Glocker, and Daniel Rueckert. Anatomically constrained neural networks (ACNNs): Application to cardiac image enhancement and segmentation. IEEE Trans Med Imaging, 37:384–395, 2017. https://doi.org/10.1109/TMI.2017.2743464

Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, pages 234–241, 2015. https://doi.org/10.1007/978-3-319-24574-4_28

Beth A. Virnig, Nathan D. Shippee, Brian O'Donnell, Jessica Zeglin, and Shriram Parashuram. Trends in the use of echocardiography, 2007 to 2011, 2014. https://www.ncbi.nlm.nih.gov/books/NBK208663/

Peter W. Wood, Jonathan B. Choy, Navin C. Nanda, and Harald Becher. Left ventricular ejection fraction and volumes: It depends on the imaging method. Echocardiography, 31(1):87–100, 2014. https://doi.org/10.1111/echo.12331

Yuxin Wu and Kaiming He. Group normalization. In ECCV, 2018. https://doi.org/10.1007/978-3-030-01261-8_1

Jeffrey Zhang, Sravani Gajjala, Pulkit Agrawal, Geoffrey H. Tison, Laura A. Hallock, Lauren Beussink-Nelson, Mats H. Lassen, Eugene Fan, Mandar A. Aras, ChaRandle Jordan, Kirsten E. Fleischmann, Michelle Melisko, Atif Qasim, Sanjiv J. Shah, Ruzena Bajcsy, and Rahul C. Deo. Fully automated echocardiogram interpretation in clinical practice. Circulation, 136(16):1623–1635, 2018. https://doi.org/10.1161/CIRCULATIONAHA.118.034338
yY-pqA1VlE2
Model for improved performance on outliers on echocardiography segmentation using test-time augmentation and ensembling
2: Weak reject
- The paper is very clearly written and the methods clearly described. The method involves ensembling 8 U-net models, trained on different overlapping folds of the echocardiography data with on-the-fly augmentations, and then applying test time augmentation by introducing 200 rotation variations and averaging the (unrotated) predictions. - The data is split into 10 folds initially, where 2 are held out as test data. The 8 U-net models are trained on 7/8 remaining folds in rotation (with the remaining 1/8 held out for validation on each of these splits). The ensembled prediction is compared to a baseline U-net trained only on a single fold. This however is not a fair comparison, as the ensemble ultimately sees all the data from the 8 folds across the 8 trained models, so the baseline effectively learns from 12.5% fewer real training images. Nonetheless, it is well established that ensembling improves over single models, as also demonstrated in the paper. - Test time augmentation improves segmentation results compared to the baseline model too. It is unclear whether test time augmentation improves over the ensemble model without test time augmentation however. - Both ensembling and test time augmentation are well established approaches in the literature. There is limited novelty in the proposed work, although clear improvements over a U-net baseline are shown.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
Sx-mvOvnmJj
ICLR.cc/2021/Conference
2021
Improving Calibration for Long-Tailed Recognition
["Zhisheng Zhong", "Jiequan Cui", "Shu Liu", "Jiaya Jia"]
Deep neural networks often perform poorly when training datasets are heavily class-imbalanced. Recently, two-stage methods have greatly improved performance by decoupling representation learning and classifier learning. In this paper, we discover that networks trained on long-tailed datasets are more prone to miscalibration and over-confidence. The two-stage models suffer the same issue as well. We design two novel methods to improve calibration and performance in such scenarios. Motivated by the observation that the predicted probability distributions of classes are highly related to the numbers of class instances, we propose a label-aware smoothing to deal with the different degrees of over-confidence for different classes and to improve classifier learning. Noting that there is a dataset bias between the two stages because of their different samplers, we further propose a shifted batch normalization to solve this dataset bias in the decoupling framework. Through extensive experiments, we also observe that mixup can remedy over-confidence and improve representation learning but has a negative or negligible effect on classifier learning. Our proposed methods set new records on multiple popular long-tailed recognition benchmarks including LT CIFAR-10/100, ImageNet-LT, Places-LT, and iNaturalist 2018.
["long tailed recognition", "network calibration", "label-aware smoothing", "mixup", "dataset bias"]
ABSTRACT
Deep neural networks often perform poorly when training datasets are heavily class-imbalanced. Recently, two-stage methods have greatly improved performance by decoupling representation learning and classifier learning. In this paper, we discover that networks trained on long-tailed datasets are more prone to miscalibration and over-confidence. The two-stage models suffer the same issue as well. We design two novel methods to improve calibration and performance in such scenarios. Motivated by the observation that the predicted probability distributions of classes are highly related to the numbers of class instances, we propose a label-aware smoothing to deal with the different degrees of over-confidence for different classes and to improve classifier learning. Noting that there is a dataset bias between the two stages because of their different samplers, we further propose a shifted batch normalization to solve this dataset bias in the decoupling framework. Through extensive experiments, we also observe that mixup can remedy over-confidence and improve representation learning but has a negative or negligible effect on classifier learning. Our proposed methods set new records on multiple popular long-tailed recognition benchmarks including LT CIFAR-10/100, ImageNet-LT, Places-LT, and iNaturalist 2018.

1 INTRODUCTION
With numerous available large-scale and high-quality datasets such as ImageNet (Russakovsky et al., 2015), COCO (Lin et al., 2014), and Places (Zhou et al., 2017), deep convolutional neural networks (CNNs) have made notable breakthroughs in various computer vision tasks such as image recognition (Krizhevsky et al., 2012; He et al., 2016), object detection (Ren et al., 2015), and semantic segmentation (Cordts et al., 2016). These delicate datasets are usually artificially balanced with respect to the number of instances for each object/class. However, in real-world applications, data often follows an unexpected long-tailed distribution, where the numbers of instances for different classes are seriously imbalanced. When CNNs are trained on such long-tailed datasets, performance degrades drastically. Motivated by this phenomenon, a number of works have recently emerged that explore long-tailed recognition.

Recently, many two-stage approaches have achieved significant improvement compared with one-stage methods. Concretely, DRS and DRW (Cao et al., 2019) first train CNNs in a normal way in Stage-1. DRS finetunes CNNs on datasets with class-balanced resampling, while DRW finetunes CNNs by assigning different weights to different classes in Stage-2. Zhou et al. (2020) proposed the one-stage BBN to simulate the process of DRS by dynamically combining an instance-balanced sampler and a reverse-balanced sampler. Kang et al. (2020) proposed two-stage decoupling models, cRT and LWS, to further boost performance: decoupling models freeze the backbone and only finetune the classifier with class-balanced resampling in Stage-2.

Confidence calibration (Niculescu-Mizil & Caruana, 2005; Guo et al., 2017) – the problem of predicting probability estimates representative of the true correctness likelihood – is important for recognition models in many applications (Bojarski et al., 2016; Jiang et al., 2012). In this study, we discover that networks trained on long-tailed datasets are more miscalibrated and over-confident: we draw reliability diagrams with 15 bins in Fig. 1, which compares the plain model trained on the original CIFAR-100 dataset against the plain model, cRT, and LWS trained on long-tailed CIFAR-100 with imbalanced factor (IF) 100. We observe that networks trained on long-tailed datasets have higher expected calibration errors (ECEs). The two-stage models, cRT and LWS, suffer over-confidence as well. Moreover, Fig. 7 and Fig. 8 (the first two plots) in Appendix B depict that this phenomenon also commonly exists on other long-tailed datasets such as LT CIFAR-10 and ImageNet-LT.

Figure 1: Reliability diagrams of ResNet-32. From left to right: the plain model trained on the original CIFAR-100 dataset (ACC=0.576, ECE=0.190), and the plain model (ACC=0.390, ECE=0.381), cRT (ACC=0.407, ECE=0.295), and LWS (ACC=0.412, ECE=0.363) trained on long-tailed CIFAR-100 with IF=100.
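The ECE statistic referenced above is straightforward to reproduce. The sketch below is a minimal NumPy rendering of the standard equal-width-bin computation (15 bins, as in Fig. 1) behind such reliability diagrams; the function name and the binning convention are our assumptions, not a utility released with the paper.

    import numpy as np

    def expected_calibration_error(confidences, predictions, labels, n_bins=15):
        # Equal-width-bin ECE: mean |accuracy - confidence| over bins, weighted by bin mass.
        confidences = np.asarray(confidences, dtype=np.float64)
        correct = (np.asarray(predictions) == np.asarray(labels)).astype(np.float64)
        bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            in_bin = (confidences > lo) & (confidences <= hi)
            prop = in_bin.mean()                      # fraction of samples in this bin
            if prop > 0:
                acc = correct[in_bin].mean()          # bin accuracy
                conf = confidences[in_bin].mean()     # bin average confidence
                ece += prop * abs(acc - conf)         # gap weighted by bin mass
        return ece

The per-bin gaps |acc − conf| are exactly the shaded "Gap" regions in the reliability diagrams; over-confidence shows up as conf > acc in the high-confidence bins.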
Another issue is that two-stage decoupling methods ignore the dataset bias, or domain shift (Quionero-Candela et al., 2009), between the two stages. Concretely, two-stage models are first trained on the instance-balanced dataset D_I in Stage-1; they are then trained on the class-balanced dataset D_C in Stage-2. Obviously, P_{D_I}(x, y) ≠ P_{D_C}(x, y): the distributions of the dataset under different sampling manners are inconsistent. Motivated by transfer learning methods (Li et al., 2018; Wang et al., 2019), we focus on the batch normalization (Ioffe & Szegedy, 2015) layer to deal with this dataset bias problem.

In this work, we propose a Mixup Shifted Label-Aware Smoothing model (MiSLAS) to effectively solve the above issues. Our key contributions are as follows: (i) We discover that models trained on long-tailed datasets are much more miscalibrated and over-confident than those trained on balanced datasets, and that two-stage models suffer the same problem. (ii) We find that mixup can remedy over-confidence and has a positive effect on representation learning but a negative or negligible effect on classifier learning. To further enhance classifier learning and calibration, we propose a label-aware smoothing to handle the different degrees of over-confidence for different classes. (iii) We are the first to note the dataset bias or domain shift in two-stage resampling methods for long-tailed recognition. To deal with the dataset bias in the decoupling framework, we propose shift learning on the batch normalization layer, which can greatly improve performance.

We extensively validate our MiSLAS on multiple long-tailed recognition benchmark datasets, i.e., LT CIFAR-10, LT CIFAR-100, ImageNet-LT, Places-LT, and iNaturalist 2018. Experimental results demonstrate the effectiveness of our method, which yields new state-of-the-art results.

2 RELATED WORKS
Re-sampling and re-weighting. There are two groups of re-sampling strategies: over-sampling the tail-class images (Shen et al., 2016; Buda et al., 2018; Byrd & Lipton, 2019) and under-sampling the head-class images (Japkowicz & Stephen, 2002; Buda et al., 2018). Over-sampling is regularly useful on large datasets but often suffers from heavy over-fitting to tail classes, especially on small datasets. Under-sampling discards a large portion of the data, which inevitably degrades the generalization ability of deep models. Re-weighting (Huang et al., 2016; Wang et al., 2017) is another prominent strategy: it assigns different weights to classes and even to individual instances. The vanilla re-weighting method gives class weights in inverse proportion to the numbers of samples of the classes. However, with large-scale data, re-weighting makes deep models difficult to optimize during training. Cui et al. (2019) relieved this problem by using effective numbers to calculate the class weights. Another line of work adaptively re-weights each instance; e.g., focal loss (Lin et al., 2017) assigns smaller weights to well-classified samples.
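As a concrete illustration of the two class-level weighting rules just mentioned, the sketch below derives per-class weights from instance counts. It is a minimal reading of the vanilla inverse-frequency rule and of the effective-number rule of Cui et al. (2019); the normalization that makes the weights average to one is our own convention, not part of either method's specification.

    import numpy as np

    def inverse_frequency_weights(counts):
        # Vanilla rule: weight each class inversely to its number of samples.
        w = 1.0 / np.asarray(counts, dtype=np.float64)
        return w / w.sum() * len(counts)   # normalize so weights average to 1

    def effective_number_weights(counts, beta=0.9999):
        # Cui et al. (2019): effective number E_n = (1 - beta^n) / (1 - beta),
        # then weight inversely to E_n instead of to the raw count n.
        counts = np.asarray(counts, dtype=np.float64)
        effective_num = (1.0 - np.power(beta, counts)) / (1.0 - beta)
        w = 1.0 / effective_num
        return w / w.sum() * len(counts)

As beta → 1 the effective-number rule approaches inverse frequency, while smaller beta tempers the weights toward uniform, which is what makes it easier to optimize on large-scale data.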
Network calibration and regularization. Calibrated confidence is significant for classification models in many applications. The calibration of modern neural networks was first discussed by Guo et al. (2017), who found that model capacity, normalization, and regularization have strong effects on network calibration. mixup (Zhang et al., 2018) is a regularization technique that trains on interpolations of inputs and labels. mixup has inspired several follow-ups, such as manifold mixup (Verma et al., 2019), CutMix (Yun et al., 2019), and Remix (Chou et al., 2020), that have shown significant improvement over mixup. Thulasidasan et al. (2019) found that CNNs trained with mixup are significantly better calibrated. Label smoothing (Szegedy et al., 2016) is another regularization technique that encourages the model to be less over-confident. Unlike cross-entropy, which computes the loss against the ground-truth labels, label smoothing computes the loss against a soft version of the label, which can relieve over-fitting and increase calibration and reliability (Müller et al., 2019).

Two-stage methods. Cao et al. (2019) first proposed deferred re-weighting (DRW) and deferred re-sampling (DRS), which are superior to conventional one-stage methods: Stage-2, starting from better features, adjusts the decision boundary and locally fine-tunes the features. Recently, Kang et al. (2020) and Zhou et al. (2020) concluded that although class re-balance strategies matter when jointly training the representation and the classifier, instance-balanced sampling gives more general representations. Based on this observation, Kang et al. (2020) achieved state-of-the-art results by decomposing representation and classifier learning, i.e., first training the deep models with instance-balanced sampling, then fine-tuning the classifier with class-balanced sampling while keeping the parameters of representation learning fixed. Similarly, Zhou et al. (2020) integrated mixup training into their proposed cumulative learning strategy, with which they bridged representation learning and classifier re-balancing. The cumulative learning strategy requires dual samplers: an instance-balanced and a reversed instance-balanced sampler.

Table 1: Top-1 accuracy of the decoupling models (cRT and LWS) for ResNet families trained on the ImageNet-LT dataset. We vary the augmentation strategies (with or without mixup, α = 0.2) in both stages.

    Training setup for two stages              ResNet-10    ResNet-50    ResNet-101   ResNet-152
                                               cRT   LWS    cRT   LWS    cRT   LWS    cRT   LWS
    Stage-1 (no mixup)                         36.8  36.8   45.8  45.8   47.3  47.3   48.7  48.7
    Stage-1 (mixup)                            35.7  35.7   45.6  45.6   47.7  47.7   48.4  48.4
    Stage-1 (no mixup) + Stage-2 (no mixup)    43.3  43.5   50.3  51.2   51.4  52.3   52.7  53.8
    Stage-1 (no mixup) + Stage-2 (mixup)       43.0  43.3   50.2  51.1   51.4  52.2   52.8  53.6
    Stage-1 (mixup) + Stage-2 (no mixup)       43.4  42.9   51.7  52.0   53.1  53.5   54.2  54.6
    Stage-1 (mixup) + Stage-2 (mixup)          43.3  42.8   51.6  51.9   53.0  53.5   54.1  54.5

Figure 2: Classifier weight norms for the ImageNet-LT validation set when classes are sorted by descending values of N_j. Left: weight norms of cRT with or without mixup (Acc. 51.7% vs. 50.2%). Right: weight norms of LWS with or without mixup (Acc. 51.9% vs. 51.2%). Light shade: true norm; dark lines: smoothed version.
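The class-balanced Stage-2 sampling used by the decoupling methods above can be realized with a weighted sampler; the sketch below shows one common way to do so. The dataset interface and variable names are our assumptions, not code from Kang et al. (2020).

    import torch
    from torch.utils.data import DataLoader, WeightedRandomSampler

    def class_balanced_loader(dataset, targets, num_classes, batch_size=128):
        # Sample every class with equal probability: each instance of class c
        # gets weight 1 / N_c, so classes contribute equally in expectation.
        targets = torch.as_tensor(targets)
        class_count = torch.bincount(targets, minlength=num_classes).float()
        sample_weights = 1.0 / class_count[targets]
        sampler = WeightedRandomSampler(sample_weights, num_samples=len(targets),
                                        replacement=True)
        return DataLoader(dataset, batch_size=batch_size, sampler=sampler)

Swapping this loader for the plain shuffled (instance-balanced) one is the only change between Stage-1 and Stage-2 sampling in the decoupled recipe.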
3 MAIN APPROACH
3.1 IMPROVING CALIBRATION AND REPRESENTATION LEARNING BY MIXUP
For the two-stage learning framework, Kang et al. (2020) and Zhou et al. (2020) found that instance-balanced sampling gives the most generalizable representations among the sampling methods, and Thulasidasan et al. (2019) found that networks trained with mixup are better calibrated. When using instance-balanced sampling, to further improve representation generalization and relieve over-confidence, we explore the effect of mixup in the two-stage decoupling framework.

Here, we train two two-stage models, i.e., cRT and LWS, on ImageNet-LT for 180 epochs in Stage-1 and finetune for 10 epochs in Stage-2. We vary the training setup (with/without mixup, α = 0.2) in both stages. Top-1 accuracy results of these variants are listed in Table 1. From it, we conclude that: (i) when applying mixup, the performance improvements in Stage-1 are negligible, but the performance in Stage-2 is greatly enhanced for both cRT and LWS; (ii) applying additional mixup in Stage-2 brings no obvious improvement or even damages performance, which means that mixup encourages representation learning but has a negative or negligible effect on classifier learning.

Figure 3: Violin plot of predicted probability distributions for different parts of classes — head (more than 100 images), medium (20 to 100 images), and tail (less than 20 images) — on LT CIFAR-100, IF=100. Upper half, in light blue: LWS (cross-entropy). Bottom half, in deep blue: LWS (label-aware smoothing).

We also draw the final classifier weight norms of these variants in Fig. 2: we show the L2 norms of the weight vectors for all classes, with the training data distribution sorted in descending order of the number of instances. We observe that when applying mixup (orange line), the weight norms of the tail classes uniformly tend to become larger while those of the head classes decrease, which suggests mixup may be more friendly to the tail classes.

The calibration analysis of networks trained with and without mixup is deferred to the experiment part (Sec. 4.2). Because of the unsatisfactory gain of mixup for classifier learning, we further propose a label-aware smoothing to improve both calibration and classifier learning.
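The Stage-1 mixup training just described can be summarized in a short sketch. The Beta-distributed mixing coefficient and the in-batch pairing follow the standard formulation of Zhang et al. (2018) with α = 0.2 as in Table 1; the model, criterion, and batch variables are placeholders.

    import numpy as np
    import torch

    def mixup_step(model, criterion, x, y, alpha=0.2):
        # Standard input/label interpolation (Zhang et al., 2018):
        # mix each sample with a randomly permuted partner from the same batch.
        lam = np.random.beta(alpha, alpha)
        index = torch.randperm(x.size(0), device=x.device)
        mixed_x = lam * x + (1.0 - lam) * x[index]
        logits = model(mixed_x)
        # Equivalent to the loss on the interpolated soft label.
        return lam * criterion(logits, y) + (1.0 - lam) * criterion(logits, y[index])

In the decoupled setting, Table 1 suggests applying this only in Stage-1 and dropping it when retraining the classifier in Stage-2.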
3.2 IMPROVING CALIBRATION AND CLASSIFIER LEARNING BY LABEL-AWARE SMOOTHING
As discussed in the introduction and in Sec. 3.1, two-stage models suffer from serious over-confidence, and there is no significant improvement in classifier learning when adding additional mixup. In this subsection, we analyze and deal with these two issues. Suppose that the weight of the classifier is W ∈ R^{M×K}, where M is the number of features and K is the number of classes. Cross-entropy encourages the whole network to be over-confident on the head classes. Concretely, the cross-entropy loss after the softmax activation is

    \ell(y, p) = -\log(p_y) = -w_y^\top x + \log\Big(\sum_i \exp(w_i^\top x)\Big),

where y ∈ {1, 2, ..., K} is the label, x ∈ R^M is the feature vector sent to the classifier, and w_i is the i-th column vector of W. The optimal solution is w_y^\top x → +∞ while keeping the other w_i^\top x, i ≠ y, small enough. Because the head classes contain many more training examples, the network pushes the weight norms ||w|| of the head classes larger to approach this optimal solution as closely as possible, with the result that their predicted probabilities concentrate near 1.0 (see Fig. 3, upper half, in light blue). Another fact we can read from Fig. 3 is that the distributions of predicted probability are strongly related to the instance numbers. Unlike in balanced recognition, we claim that applying different strategies to different classes is extremely necessary for the long-tailed problem.

Here, we propose a label-aware smoothing to solve the over-confidence of cross-entropy and the issue of different predicted probability distributions. The label-aware smoothing loss is

    \ell(q, p) = -\sum_{i=1}^{K} q_i \log p_i, \quad
    q_i = \begin{cases} 1 - \epsilon_y = 1 - f(N_y), & i = y; \\ \epsilon_y/(K-1) = f(N_y)/(K-1), & \text{otherwise}, \end{cases}    (1)

where ε_y is a small label smoothing factor for class y that depends on its instance number N_y. The optimal solution now becomes

    w_i^\top x = \begin{cases} \log\frac{(K-1)(1-\epsilon_y)}{\epsilon_y} + c, & i = y; \\ c, & \text{otherwise}, \end{cases}    (2)

where c can be an arbitrary real number. Compared with the infinite optimal solution of cross-entropy, label-aware smoothing encourages a finite output, which yields more generalized results and remedies over-fitting. We suppose the labels of the long-tailed dataset are assigned in descending order of the number of instances, i.e., N_1 ≥ N_2 ≥ ... ≥ N_K. Because the head classes contain more varied and diverse examples, their predicted probabilities are more reliable than those of the tail classes. Thus, we suggest that classes with larger instance numbers should be penalized with larger label smoothing factors; that is, the related function f(N_y) should be positively correlated with N_y. We define three types of related function f(N_y):

    \epsilon_y = f(N_y) = \begin{cases}
    \epsilon_K + (\epsilon_1-\epsilon_K)\,\sin\!\Big[\frac{(N_y-N_K)\,\pi}{2(N_1-N_K)}\Big], & \text{(concave)} \\
    \epsilon_K + (\epsilon_1-\epsilon_K)\,\frac{N_y-N_K}{N_1-N_K}, & \text{(linear)} \\
    \epsilon_1 + (\epsilon_1-\epsilon_K)\,\sin\!\Big[\frac{3\pi}{2} + \frac{(N_y-N_K)\,\pi}{2(N_1-N_K)}\Big], & \text{(convex)}
    \end{cases} \quad y = 1, 2, \ldots, K,    (3)

where ε_1 and ε_K are two hyperparameters. If we set ε_1 ≥ ε_K, then ε_1 ≥ ε_2 ≥ ... ≥ ε_K. This means that if the instance number N_y of class y is larger, label-aware smoothing allocates a larger smoothing factor and lowers the fitting probability to relieve over-confidence, because the head and medium classes are more likely to be over-confident than the tail classes (see Fig. 3).
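A minimal PyTorch rendering of Eqns. (1)–(3) is sketched below, using the linear related function for brevity. The class name, buffer layout, and default ε values (taken from the grid search in Sec. 4.2) are our assumptions rather than the authors' released code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LabelAwareSmoothing(nn.Module):
        """Eqns. (1)-(3): per-class smoothing factor eps_y = f(N_y), linear form."""
        def __init__(self, cls_num_list, eps_1=0.4, eps_K=0.1):
            super().__init__()
            n = torch.tensor(cls_num_list, dtype=torch.float32)
            # Linear related function: eps_K + (eps_1 - eps_K) * (N_y - N_K) / (N_1 - N_K).
            eps = eps_K + (eps_1 - eps_K) * (n - n.min()) / (n.max() - n.min())
            self.register_buffer('eps', eps)

        def forward(self, logits, target):
            K = logits.size(1)
            eps_y = self.eps[target]                      # per-sample smoothing factor
            log_p = F.log_softmax(logits, dim=1)
            nll = -log_p.gather(1, target.view(-1, 1)).squeeze(1)   # -log p_y
            total = -log_p.sum(dim=1)                     # sum_i -log p_i
            # Eqn. (1): (1 - eps_y) * (-log p_y) + eps_y/(K-1) * sum_{i != y} (-log p_i)
            return ((1.0 - eps_y) * nll + eps_y / (K - 1) * (total - nll)).mean()

The loss degrades to ordinary cross-entropy when eps_1 = eps_K = 0 and to uniform label smoothing (Szegedy et al., 2016) when eps_1 = eps_K > 0, which makes ablating the label-aware part straightforward.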
As the form of label-aware smoothing is more complicated than cross-entropy, we propose a more generalized classifier learning framework to fit it. Here we give a quick review of cRT and LWS: cRT learns a new classifier weight, which contains K·M learnable parameters, while LWS is restricted to learning a weight scaling vector s ∈ R^K, which contains only K learnable parameters. By contrast, cRT has more learnable parameters and thus a more powerful representation ability, while LWS tends to obtain better validation losses and performance on large-scale datasets (refer to the experiment part of Kang et al. (2020)), i.e., LWS has a better generalization property. To combine the advantages of both cRT and LWS, we redesign the classifier framework in Stage-2 as

    z = \mathrm{diag}(s)\,(aW + \Delta W)^\top x.    (4)

In Eqn. (4), we fix the original classifier weight W in Stage-2. If we fix the learnable scaling vector s, set s = 1 and a = 0, and only learn the new classifier weight ΔW ∈ R^{M×K}, Eqn. (4) degrades to cRT. Because LWS fixes the original classifier weight W and only learns the scaling s, Eqn. (4) degrades to LWS if we set a = 1 and ΔW = 0. In most cases, LWS achieves better results than cRT on large-scale datasets; thus, we let s be learnable and set a = 1. We also make ΔW learnable to improve the representation ability, but optimize ΔW with a different learning rate. ΔW can be viewed as a shift transformation on W. This transformation can change the direction of the original weight vectors w in W, which is what LWS cannot do.

3.3 SHIFT LEARNING ON BATCH NORMALIZATION
In the two-stage training framework, models are first trained with instance-balanced sampling in Stage-1 and then trained with class-balanced sampling in Stage-2. Since the framework involves two samplers, or two datasets — the instance-balanced dataset D_I and the class-balanced dataset D_C — we can regard this two-stage training framework as a derivative of transfer learning approaches. From this transfer learning perspective, however, fixing the backbone and only fine-tuning the classifier in Stage-2 is clearly unreasonable, especially for the batch normalization (BN) layers.

Concretely, suppose that the input of the network is x_i, the input feature of some BN layer is g(x_i), and the mini-batch size is m. The running mean and the running variance of channel j in the two stages are

    \mu_I^{(j)} = \frac{1}{m}\sum_{i=1}^{m} g(x_i)^{(j)}, \quad
    \sigma_I^{2\,(j)} = \frac{1}{m}\sum_{i=1}^{m}\big[g(x_i)^{(j)} - \mu_I^{(j)}\big]^2, \quad x_i \sim P_{D_I}(x, y);    (5)

    \mu_C^{(j)} = \frac{1}{m}\sum_{i=1}^{m} g(x_i)^{(j)}, \quad
    \sigma_C^{2\,(j)} = \frac{1}{m}\sum_{i=1}^{m}\big[g(x_i)^{(j)} - \mu_C^{(j)}\big]^2, \quad x_i \sim P_{D_C}(x, y).    (6)

Due to the different sampling strategies, the composition ratios of the head, medium, and tail classes are totally different, which leads to P_{D_I}(x, y) ≠ P_{D_C}(x, y). As computed by Eqns. (5) and (6), biases therefore exist in μ and σ under the two sampling strategies, i.e., μ_I ≠ μ_C and σ_I² ≠ σ_C². Thus, it is clearly infeasible in the decoupling framework for BN to share its mean and variance across datasets with two sampling strategies. Motivated by AdaBN (Li et al., 2018) and TransNorm (Wang et al., 2019), we unfreeze the update procedures of the running mean and running variance but fix the learnable linear transformation parameters γ and β, for a better normalization in Stage-2.
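In code, shift learning reduces to putting the otherwise-frozen backbone's BN layers back into training mode — so their running μ and σ² are re-estimated under the class-balanced sampler — while keeping γ, β, and all other backbone weights fixed. A minimal sketch under these assumptions:

    import torch.nn as nn

    def enable_bn_shift(backbone):
        """Stage-2: update BN running mean/var under the class-balanced sampler,
        while gamma/beta and all other backbone weights stay frozen."""
        backbone.eval()                           # freeze everything by default
        for p in backbone.parameters():
            p.requires_grad = False
        for m in backbone.modules():
            if isinstance(m, nn.BatchNorm2d):
                m.train()                         # re-estimate running stats on forward passes
                if m.affine:
                    m.weight.requires_grad = False    # gamma stays fixed
                    m.bias.requires_grad = False      # beta stays fixed

With this setup, simply running Stage-2 forward passes shifts each BN layer's statistics toward μ_C and σ_C² via the usual momentum update, without any extra loss term.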
Figure 4: Reliability diagrams of ResNet-32 trained on LT CIFAR-100, IF=100. From left to right: cRT with mixup (ACC=0.452, ECE=0.138), LWS with mixup (ACC=0.442, ECE=0.225), LWS with mixup and shifted BN (ACC=0.453, ECE=0.222), and MiSLAS (ACC=0.470, ECE=0.048). Best viewed together with Fig. 1.

4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
Our experimental setup, including the implementation details and evaluation protocol, mainly follows Cao et al. (2019) for LT CIFAR-10 and LT CIFAR-100, and Kang et al. (2020) for ImageNet-LT, Places-LT, and iNaturalist 2018. Please see Appendix A for further details.

4.2 ABLATION STUDY
Improving calibration. We show the reliability diagrams with 15 bins for our methods in Fig. 4. Compared with Fig. 1 in the introduction, both mixup and label-aware smoothing not only largely enhance network calibration (even achieving lower ECEs than models trained on balanced datasets) but also greatly improve performance for long-tailed recognition. Similar trends can also be found on LT CIFAR-10, ImageNet-LT, and Places-LT (please see the figures in Appendix B for details), which demonstrates the strong effect of the proposed method on calibration. According to all experimental results, training networks on imbalanced datasets leads to more severe over-confidence. Since conventional mixup and label smoothing both soften the ground-truth labels, this may suggest that training with hard labels is another contributing factor to network over-confidence.

Further analysis of label-aware smoothing. Our label-aware smoothing has two hyperparameters in Eqn. (3), ε_1 and ε_K, which control the penalties of the classes. In a recognition system, if the predicted probability of some class y is larger than 0.5, the classifier will assign the input to class y. Thus, to keep the setting reasonable, we limit 0 ≤ ε_K ≤ ε_1 ≤ 0.5. We conduct a comparison experiment varying ε_1 and ε_K from 0.0 to 0.5 on LT CIFAR-100 with imbalanced factor 100, and plot the performance matrix over ε_1 and ε_K in Fig. 5 for all feasible variants. From it, the classification accuracy can be further improved by 0.9% compared with conventional cross-entropy (ε_1 = 0, ε_K = 0; green square) when we pick ε_1 = 0.4, ε_K = 0.1 (orange square) for label-aware smoothing. A more surprising improvement (an increase of 3.3%) is found on LT CIFAR-10 (see Appendix D.1 for details). We also find that the concave related function f(·) in Eqn. (3) achieves the best performance, although the gain over the other forms is quite limited (see Appendix D.2 for details).

To visualize the change in the predicted probability distributions, we train two LWS models, one with cross-entropy and the other with label-aware smoothing, on long-tailed CIFAR-100 with imbalanced factor 100. The cross-entropy-based distributions of the head, medium, and tail classes are shown in the upper half of Fig. 3 in light blue; the label-aware-smoothing-based distributions are shown in the bottom half in deep blue. We observe that the over-confidence of the head and medium classes is greatly relieved, and the whole distribution of the tail classes moves slightly to the right (a larger mean) when using label-aware smoothing. This empirical visualization is consistent with our analysis in Sec. 3.2.
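For readers who want to compare the shapes of the three related functions, the sketch below evaluates our reconstruction of Eqn. (3). The exact sine parameterization is inferred from the endpoint constraints f(N_K) = ε_K and f(N_1) = ε_1 and should be treated as an assumption about the original (garbled) formula.

    import numpy as np

    def related_fn(n, eps_1=0.4, eps_K=0.1, form='concave'):
        """Eqn. (3): maps class size N_y to a smoothing factor eps_y in [eps_K, eps_1]."""
        n = np.asarray(n, dtype=np.float64)
        t = (n - n.min()) / (n.max() - n.min())   # (N_y - N_K) / (N_1 - N_K), in [0, 1]
        if form == 'concave':
            return eps_K + (eps_1 - eps_K) * np.sin(t * np.pi / 2.0)
        if form == 'linear':
            return eps_K + (eps_1 - eps_K) * t
        if form == 'convex':
            return eps_1 + (eps_1 - eps_K) * np.sin(1.5 * np.pi + t * np.pi / 2.0)
        raise ValueError(form)

All three forms agree at the endpoints (ε_K for the smallest class, ε_1 for the largest); they differ only in how quickly the smoothing factor ramps up across the medium classes, with the concave form smoothing the medium classes most aggressively.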
Table 2: Ablation study of all proposed modules on long-tailed CIFAR-100 (IF = 100, 50, 10). MU: applying mixup only in Stage-1. SL: shift learning on batch normalization. LAS: label-aware smoothing.

    MU   SL   LAS   |  IF=100   IF=50   IF=10
    -    -    -     |  41.2     46.0    58.5
    ✓    -    -     |  44.2     50.6    62.2
    ✓    ✓    -     |  45.3     51.4    62.8
    ✓    ✓    ✓     |  47.0     52.3    63.0

Figure 5: Ablation study of the two hyperparameters ε_1 and ε_K in label-aware smoothing (accuracy heatmap for ε_1, ε_K ∈ [0, 0.5]; best entry 47.04 at ε_1 = 0.4, ε_K = 0.1).

Figure 6: Visualization of the changes in the running mean μ and variance σ². The ResNet-32-based model is trained on LT CIFAR-100 with imbalanced factor 100. Left: μ and σ² in the first BN of ResNet-32 (bn1), which contains 16 channels. Right: μ and σ² in the last BN (layer3.4.bn2), which contains 64 channels.

Further analysis of shift learning. We conduct an empirical experiment to show the effectiveness and reasonableness of shift learning on BN. We train the LWS model on long-tailed CIFAR-100 with imbalanced factor 100. After 10 epochs of finetuning in Stage-2, the model trained with BN shifting achieves 45.3% accuracy, which is 1.1% higher than the model without BN shifting. We also visualize the change in BN. As shown in Fig. 6, biases indeed exist in μ and σ² between the datasets produced by different sampling strategies: because the composition ratios of the head, medium, and tail classes differ under different sampling strategies, the running mean μ and running variance σ² are certainly different. We also find some interesting phenomena that call for future exploration: (i) the changes in the running variance σ² are larger than the changes in the running mean μ; (ii) the changes of μ and σ² in deep BN layers are considerably smaller than those in shallow BN layers.

Overall, Table 2 shows the ablation investigation of the effects of mixup (adding mixup in Stage-1, MU), shift learning on batch normalization (SL), and label-aware smoothing (LAS). Each proposed module further improves the performance on long-tailed CIFAR-100 for all commonly used imbalanced factors, which firmly demonstrates the effectiveness of our design.
Moreover, this superiority of the proposed method holds for all imbalancedfactors on both long-tailed CIFAR-10 and CIFAR-100.Experimental results on ImageNet-LT, iNaturalist 2018, and Place-LT. We further verify theeffectiveness of our method on three large-scale imbalanced datasets, i.e., ImageNet-LT, iNaturalist2018, and Place-LT. Table 4 lists experimental results on ImageNet-LT (left), iNaturalist 2018 (center),and Places-LT (right). Notably, our MiSLAS still outperforms all competing approaches and sets newstate-of-the-art records for all three large-scale long-tailed benchmarks. More detailed results aboutthe split class accuracies and different backbones on these three datasets are listed in Appendix C.5 C ONCLUSIONIn this paper, we discover that models trained on long-tailed datasets are more miscalibrated andover-confident than them trained on balanced datasets. The two-stage models suffer the same issueas well. To relieve over-confidence, we propose two solutions: (i) We find that mixup can remedyover-confidence and have a positive effect on representation learning but a negative or negligibleeffect on classifier learning. (ii) To further improve classifier learning and calibration, we proposelabel-aware smoothing to handle the different degrees of over-confidence for different classes. Weare the first to note the dataset bias or domain shift in two-stage resampling methods for long-tailedrecognition. To solve the dataset bias producing by different re-sampling in the decoupling framework,we propose shift learning on the batch normalization layer and this novel model can greatly improvethe performance. Extensive quantitative and qualitative experiments on multiple benchmark datasetsshow that our MiSLAS achieves superior performances over the state-of-the-art methods.8Under review as a conference paper at ICLR 2021
QNzzYbirhI_
Interesting paper with some promising results
6: Marginally above acceptance threshold
Summary: This paper addresses two major issues in previous works of long-tailed recognition: 1) models are over-confident on “head” classes. 2) the domain shift (dataset bias) is ignored in two-stage models. Accordingly, they propose two novel solutions including label-aware smoothing and shift learning on batch normalization. They also discover that mixup can improve the representation learning on long-tailed recognition. The experiments verify the effectiveness of their approaches and show the significant improvements on ImageNet-LT, iNaturalist, and Places-LT. Pros: - Overall, the presentation of this paper is clear and the writing is good. - The authors address the issues in previous works and conduct various ablation studies for verification. - The experimental results on benchmark datasets seem very promising and outperform baselines by a large margin. Cons: - In experiments, I wonder if the authors apply mixup alone or with other basic augmentations, such as rotation, flipping. It would be interesting to see the performance combining with some basic augmentations since training without data augmentations usually gives bad performance. - It would be nice to add a plot to indicate the difference among the three functions for eqn (3), e.g. move the left plot of figure 11 to the main text and explain their behaviors. Overall, I find this paper interesting and novel.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Improving Calibration for Long-Tailed Recognition ### Paper Abstract Deep neural networks often perform poorly when training datasets are heavily class-imbalanced. Recently, two-stage methods greatly improve the performances by decoupling representation learning and classifier learning. In this paper, we discover that networks trained on long-tailed datasets are more prone to miscalibrated and over-confident. The two-stage models suffer the same issue as well. We design two novel methods to improve calibration and performance in such scenarios. Motivated by the predicted probability distributions of classes are highly related to the numbers of class instances, we propose a label-aware smoothing to deal with the different degrees of over-confidence for different classes and improve classifier learning. Noting that there is a dataset bias between these two stages because of different samplers, we further propose a shifted batch normalization to solve the dataset bias in the decoupling framework. Through extensive experiments, we also observe that mixup can remedy over-confidence and improve representation learning but has a negative or negligible effect on classifier learning. Our proposed methods set new records on multiple popular long-tailed recognition benchmarks including LT CIFAR 10/100, ImageNet-LT, Places-LT, and iNaturalist 2018. ### Paper Keywords ["long tailed recognition", "network calibration", "label-aware smoothing", "mixup", "dataset bias"] ### Paper Content ABSTRACTDeep neural networks often perform poorly when training datasets are heavily class-imbalanced. Recently, two-stage methods greatly improve the performances bydecoupling representation learning and classifier learning. In this paper, we discoverthat networks trained on long-tailed datasets are more prone to miscalibrated andover-confident. The two-stage models suffer the same issue as well. We designtwo novel methods to improve calibration and performance in such scenarios.Motivated by the predicted probability distributions of classes are highly related tothe numbers of class instances, we propose a label-aware smoothing to deal withthe different degrees of over-confidence for different classes and improve classifierlearning. Noting that there is a dataset bias between these two stages because ofdifferent samplers, we further propose a shifted batch normalization to solve thedataset bias in the decoupling framework. Through extensive experiments, wealso observe that mixup can remedy over-confidence and improve representationlearning but has a negative or negligible effect on classifier learning. Our proposedmethods set new records on multiple popular long-tailed recognition benchmarksincluding LT CIFAR 10/100, ImageNet-LT, Places-LT, and iNaturalist 2018.1 I NTRODUCTIONWith numerous available large-scale and high-quality datasets such as ImageNet (Russakovskyet al., 2015), COCO (Lin et al., 2014), and Places (Zhou et al., 2017), deep convolutional neuralnetworks (CNNs) have made notable breakthroughs in various computer vision tasks such as imagerecognition (Krizhevsky et al., 2012; He et al., 2016), object detection (Ren et al., 2015) and semanticsegmentation (Cordts et al., 2016). These delicate datasets are usually artificially balanced withrespect to the number of instances for each object/class. 
However, in real-world applications, dataoften follows an unexpected long-tailed distribution, where the numbers of instances for differentclasses are seriously imbalanced. When training CNNs on such long-tailed datasets, the performancesextremely degrade. Motivated by this phenomenon, a number of works have recently emerged thattry to explore long-tailed recognition.Recently, many two-stage approaches have achieved significant improvement comparing with one-stage methods. Concretely, DRS and DRW (Cao et al., 2019) first train CNNs in a normal way inStage-1. DRS finetunes CNNs on datasets with class-balanced resampling while DRW finetunesCNNs by assigning different weights to different classes in Stage-2. Zhou et al. (2020) proposedBBN with one-stage to simulate the process of DRS by dynamically combining the instance-balancedsampler and the reverse-balanced sampler. Kang et al. (2020) proposed two-stage decoupling models,cRT and LWS, to further boost the performance: Decoupling models freeze the backbone and justfinetune the classifier with class-balanced resampling in Stage-2.Confidence calibration (Niculescu-Mizil & Caruana, 2005; Guo et al., 2017) – the problem ofpredicting probability estimates representative of the true correctness likelihood – is important forrecognition models in many applications (Bojarski et al., 2016; Jiang et al., 2012). In this study, wediscover that networks trained on long-tailed datasets are more miscalibrated and over-confident: Wedraw the reliability diagrams with 15 bins in Fig. 1, which compares the plain model trained on theoriginal CIFAR-100 dataset, the plain model, cRT, and LWS trained on long-tailed CIFAR-100 withimbalanced factor (IF) 100. We observe that networks trained on long-tailed datasets have higherexpected calibration errors (ECEs). The two-stage models, cRT and LWS, suffer over-confidence as1Under review as a conference paper at ICLR 2021Org. CIFAR-100 LT CIFAR-100, IF100 LT CIFAR-100, IF100, cRT LT CIFAR-100, IF100, LWS0.0 0.2 0.4 0.6 0.8 1.0 0.20.40.60.81.0 AccuracyACC=0.576ECE=0.190GapAccuracy0.0 0.2 0.4 0.6 0.8 1.0 ACC=0.390ECE=0.381GapAccuracy0.0 0.2 0.4 0.6 0.8 1.0 ACC=0.407ECE=0.295GapAccuracy0.0 0.2 0.4 0.6 0.8 1.0 ACC=0.412ECE=0.363GapAccuracyConfidenceFigure 1: Reliability diagrams of ResNet-32. From left to right: the plain model trained on the originalCIFAR-100 dataset, the plain model, cRT, and LWS trained on long-tailed CIFAR-100 with IF=100.well. Moreover, Fig. 7 and Fig. 8 (the first two plots) in Appendix B depict that this phenomenonalso commonly exists on other long-tailed datasets such as LT CIFAR-10 and ImageNet-LT.Another issue is that two-stage decoupling methods ignore the dataset bias or domain shift (Quionero-Candela et al., 2009) between these two stages. Concretely, two-stage models are first trained on theinstanced-balanced dataset DIin Stage-1. Then, models are trained on the class-balanced datasetDCin Stage-2. Obviously, PDI(x;y)6=PDC(x;y), the distributions of the dataset with differentsampling manners are inconsistent. Motivated by the transfer learning methods (Li et al., 2018; Wanget al., 2019), we focus on the batch normalization (Ioffe & Szegedy, 2015) layer to deal with thedataset bias problem.In this work, we propose a MixupShifted Label-Aware Smoothing model (MiSLAS) to effectivelysolve the above issues. Our key contributions are as follows: (i) We discover that models trained onlong-tailed datasets are much more miscalibrated and over-confident than them trained on balanceddatasets. 
The two-stage models suffer the same problem as well. (ii) We find that mixup can remedyover-confidence and have a positive effect on representation learning but a negative or negligibleeffect on classifier learning. To further enhance classifier learning and calibration, we propose alabel-aware smoothing to handle the different degrees of over-confidence for different classes. (iii) Weare the first to note the dataset bias or domain shift in two-stage resampling methods for long-tailedrecognition. To deal with the dataset bias in the decoupling framework, we propose shift learning onthe batch normalization layer, which can greatly improve the performance.We extensively validate our MiSLAS on multiple long-tailed recognition benchmark datasets, i.e.,LT CIFAR-10, LT CIFAR-100, ImageNet-LT, Places-LT, and iNaturalist 2018. Experimental resultsmanifest that the effectiveness and our method yields new state-of-the-art.2 R ELATED WORKSRe-sampling and re-weighting. There are two groups of re-sampling strategies: over-samplingthe tail-class images (Shen et al., 2016; Buda et al., 2018; Byrd & Lipton, 2019) and under-samplingthe head-class images (Japkowicz & Stephen, 2002; Buda et al., 2018). Over-sampling is regularlyuseful on large datasets and often suffers from heavy over-fitting to tail classes especially on smalldatasets. For under-sampling, it discards a large portion of data, which inevitably causes degradationof the generalization ability of deep models. Re-weighting (Huang et al., 2016; Wang et al., 2017) isanother prominent strategy. It assigns different weights for classes and even instances. The vanillare-weighting method gives class weights in reverse proportion to the number of samples of classes.However, with large-scale data, re-weighting makes the deep models difficult to optimize duringtraining. Cui et al. (2019) relieved the problem using the effective numbers to calculate the classweights. Another line of work is to adaptively re-weight each instance, e.g., Focal loss (Lin et al.,2017) assigned smaller weights for well-classified samples.Network calibration and regularization. Calibrated confidence is significant for classificationmodels in many applications. The calibration of modern neural networks is first discussed in Guoet al. (2017). The authors discovered that model capacity, normalization, and regularization havestrong effects on network calibration. mixup (Zhang et al., 2018) is a regularization technique thatis proposed to train with interpolations of inputs and labels. mixup inspires several follow-ups likemanifold mixup (Verma et al., 2019), CutMix (Yun et al., 2019), and Remix (Chou et al., 2020) that2Under review as a conference paper at ICLR 2021Table 1: Top-1 accuracy of the decoupling models (cRT and LWS) for ResNet families trained on the ImageNet-LT dataset. We vary the augmentation strategies (with or without mixup = 0:2) on both two stages.Training setup for two stagesResNet-10 ResNet-50 ResNet-101 ResNet-152cRT LWS cRT LWS cRT LWS cRT LWSStage-1 (no mixup) 36.8 36.8 45.8 45.8 47.3 47.3 48.7 48.7Stage-1 (mixup) 35.7 35.7 45.6 45.6 47.7 47.7 48.4 48.4Stage-1 (no mixup) + Stage-2 (no mixup) 43.3 43.5 50.3 51.2 51.4 52.3 52.7 53.8Stage-1 (no mixup) + Stage-2 (mixup) 43.0 43.3 50.2 51.1 51.4 52.2 52.8 53.6Stage-1 (mixup) + Stage-2 (no mixup) 43.4 42.9 51.7 52.0 53.1 53.5 54.2 54.6Stage-1 (mixup) + Stage-2 (mixup) 43.3 42.8 51.6 51.9 53.0 53.5 54.1 54.502004006008001000 Class Index 1.01.21.41.6Weight NormcRT, Acc. 50.2%cRT + mixup, Acc. 
51.7%02004006008001000 Class Index 0.80.91.01.11.2Weight NormLWS, Acc. 51.2%LWS + mixup, Acc. 51.9%Figure 2: Classifier weight norms for the ImageNet-LT validation set when classes are sorted by descendingvalues ofNj. Left: weight norms of cRT with or without mixup. Right: weight norms of LWS with or withoutmixup. (light shade: true norm, dark lines: smooth version)have shown significant improvement over mixup. Thulasidasan et al. (2019) found that CNNs trainedwith mixup are significantly better calibrated. Label smoothing (Szegedy et al., 2016) is anotherregularization technique that encourages the model to be less over-confident. Unlike cross-entropycomputes loss upon the ground truth labels, label smoothing computes loss upon a soft version of thelabel, which can relieve the over-fitting and increase calibration and reliability (Müller et al., 2019).Two-stage methods. Cao et al. (2019) first proposed deferred re-weighting (DRW) and deferredre-sampling (DRS) that are superior to conventional one-stage methods: Stage-2, starting from betterfeatures, adjusts the decision boundary and locally fine-tunes the features. Recently, Kang et al. (2020)and Zhou et al. (2020) concluded that although class re-balance strategies matter when jointly trainingrepresentation and classifier, instance-balanced sampling gives more general representations. Basedon this observation, Kang et al. (2020) achieved state-of-the-art results by decomposing representationand classifier learning, i.e., first train the deep models with instance-balanced sampling, then fine-tunethe classifier with class-balanced sampling while keeping parameters of representation learning fixed.Similarly, Zhou et al. (2020) integrated mixup training into the proposed cumulative learning strategywith which they bridged the representation learning and classifier re-balancing. The cumulativelearning strategy requires dual samplers: instance-balanced and reversed instance-balanced sampler.3 M AINAPPROACH3.1 I MPROVING CALIBRATION AND REPRESENTATION LEARNING BY MIXUPFor the two-stage learning framework, Kang et al. (2020) and Zhou et al. (2020) found that instance-balanced sampling gives the most generalizable representations among other sampling methods.Thulasidasan et al. (2019) found that networks trained with mixup are better calibrated. Whenusing instance-balanced sampling, to further improve the representation generalization and relieveover-confidence, we explore the effect of mixup in the two-stage decoupling framework.Here, we train two two-stage models, i.e. cRT and LWS, on ImageNet-LT for 180 epochs in Stage-1and finetune for 10 epochs in Stage-2, respectively. We vary the training setup (with/without mixup3Under review as a conference paper at ICLR 2021-Figure 3: Violin plot of predicted probability distributions for different parts of classes, head (more than 100images), medium (20 to 100 images), and tail (less than 20 images) on LT CIFAR-100, IF=100. The upper halfpart in light blue: LWS (cross-entropy). The bottom half part in deep blue: LWS (label-aware smoothing).= 0:2) for both two stages. Top-1 accuracy results of these variants are listed in Table 1. From it,we conclude that: (i) When applying mixup, the performance improvements of Stage-1 are ignorablebut the performances of Stage-2 are greatly enhanced for both cRT and LWS. 
(ii) Applying additionalmixup in Stage-2 has no obvious improvement or even damages the performance, which means thatmixup encourages representation learning but has a negative or negligible effect on classifier learning.We also draw the final classifier weight norms of these variants in Fig. 2. We show the L2normsof the weight vectors for all classes, as well as the training data distribution sorted in a descendingmanner concerning the number of instances. We observe that when applying mixup (orange line), theweight norms of the tail classes uniformly tend to become larger and the weight norms of the headclasses are decreased, which means mixup may be more friendly to the tail classes.The analysis of calibration for networks whether adding mixup will be discussed in our experimentpart (Sec. 4.2). Due to the poor and unsatisfied enhancement of mixup for classifier learning, wefurther propose a label-aware smoothing to improve both the calibration and classifier learning.3.2 I MPROVING CALIBRATION AND CLASSIFIER LEARNING BY LABEL -AWARE SMOOTHINGAs discussed in the introduction part and Sec. 3.1, two-stage models suffer serious over-confidenceand there is no significant improvement for classifier learning when adding additional mixup. In thissubsection, we try to analyze and deal with these two issues. Suppose that the weight of the classifier isW2RMK, whereMis the number of features and Kis the number of classes. The cross-entropyencourages the whole network to be over-confident on the head classes: Concretely, the cross-entropyloss after the softmax activation is l(y;p) =log(py) =w>yx+ log(Xexp(w>ix)), wherey2f1;2;:::;Kgis the label,x2RMis the feature vector send to classifier and wiis thei-thcolumn vector of W. The optimal solution is wy>x= inf while keeping others w>ix,i6=y, smallenough. Because the head classes contain much more training examples, the network makes theweight normkwkof the head classes become larger to near the optimal solution as much as possible,which results that their predicted probabilities mainly concentrate near 1.0 (see Fig. 3, the upper halfpart showing in light blue). Another fact we can get from Fig. 3 is that the distributions of predictedprobability are severely related to the instance numbers. Unlike balanced recognition, we claim thatapplying different strategies for different classes is extremely necessary for the long-tailed problem.Here, we propose a label-aware smoothing to solve the over-confidence in cross-entropy and thedifferent distributions of predicted probability issue. The mathematical computation of label-awaresmoothing is:l(q;p) =KXi=1qilogpi;qi=1y= 1f(Ny); i=y;yK1=f(Ny)K1; otherwise,(1)whereyis a small label smoothing factor for Class- yand relates to its class number Ny. Now theoptimal solution becomes:wi>x=(log(K1)(1y)y+c; i =y;c; otherwise,(2)4Under review as a conference paper at ICLR 2021whereccan be an arbitrary real number. Comparing with the infinite optimal solution in cross-entropy,the label-aware smoothing encourages a finite output, which can get more generalized results andremedy over-fitting. We suppose the labels of the long-tailed dataset are assigned in a descendingmanner concerning the number of instances, i.e., N1N2:::NK. Because the head classescontain more various and diverse examples, the predicted probabilities are more promising than themof tail classes. 
Thus, we suggest classes with larger instance numbers should be penalized largerlabel smoothing factors, that is, the related function f(Ny)should be negatively correlated to Ny.We define three types of related function f(Ny):y=f(Ny) =8>>>>>><>>>>>>:(Concave) K+ (1K) sinh(NyNK)2(N1NK)i; y = 1;2;:::;K;(Linear) K+ (1K)NyNKN1NK; y = 1;2;:::;K;(Convex) 1+ (1K) sinh32+(NyNK)2(N1NK)i; y= 1;2;:::;K;(3)where1andKare two hyperparameters. If we set 1K, then we can get 12:::K.It means that if the instance number Nyfor Class-yis larger, label-aware smoothing will allocate alarger smoothing factor and lower the fitting probability to relieve the over-confidence because thehead and medium classes are more likely to be over-confident than the tail classes (see Fig. 3).As the form of label-aware smoothing is more complicated than cross-entropy, we propose a moregeneralized classifier learning framework to fit it. Here we give a quick review about cRT andLWS: cRT tries to learn a new classifier weight, which contains KM learnable parameters. LWS isrestricted to learn the weight scaling vector s2RK, which contains only Klearnable parameters.By contrast, cRT has more learnable parameters. It means cRT has a more powerful representationability. LWS tends to obtain better validation losses and performances on large-scale datasets (referto the experiment part in Kang et al. (2020)). It means LWS has a better generalization property. Tocombine the advantages of both cRT and LWS, we redesign the classifier framework in Stage-2:z=diag(s) (aW+ W)>x: (4)In Eqn. (3), we fix the original classifier weight Win Stage-2. If we make the learnable scalingvectorsfixed, sets=1,a= 0, and just learn the new classifier weight W2RMK, Eqn. (4)will degrade to cRT. Because LWS fixes the original classifier weights Wand only learns the scalings, Eqn. (4) will degrade to LWS if we set a= 1 andW=0. In most cases, LWS generallyachieves better results than cRT on large scale datasets. Thus, we let slearnable and set a= 1. Wealso make Wlearnable to improve the representation ability but optimize Wby a differentlearning rate. Wcan be viewed as doing a shift transformation on W. This transformation canchange the direction of the original weight vector winW, which is what LWS cannot do.3.3 S HIFT LEARNING ON BATCH NORMALIZATIONIn the two-stage training framework, models are first trained with instance-balanced sampling inStage-1 and then trained with class-balanced sampling in Stage-2. Since the framework involves twosamplers, or two datasets: the instance-balanced dataset DIand the class-balanced dataset DC, wecan regard this two-stage training framework as a derivative of transfer learning approaches. However,if we view the two-stage decoupling training framework from the transfer learning perspective, fixingthe backbone part and just fine-tuning the classifier in Stage-2 will be clearly unreasonable, especiallyfor the batch normalization (BN) layers.Concretely, we suppose that the input of the network is xi, the input feature of some BN layer isg(xi), and the mini-batch size is m. 
The running mean and the running variance of Channel- jforthese two stages are:(j)I=1mmXi=1g(xi)(j);2I(j)=1mmXi=1hg(xi)(j)(j)Ii2;xiPDI(x;y); (5)(j)C=1mmXi=1g(xi)(j);2C(j)=1mmXi=1hg(xi)(j)(j)Ci2;xiPDC(x;y): (6)5Under review as a conference paper at ICLR 2021mixup + cRT mixup + LWS mixup + LWS + shifted BN MiSLAS0.0 0.2 0.4 0.6 0.8 1.0 0.20.40.60.81.0 AccuracyACC=0.452ECE=0.138GapAccuracy0.0 0.2 0.4 0.6 0.8 1.0 ACC=0.442ECE=0.225GapAccuracy0.0 0.2 0.4 0.6 0.8 1.0 ACC=0.453ECE=0.222GapAccuracy0.0 0.2 0.4 0.6 0.8 1.0 ACC=0.470ECE=0.048GapAccuracyConfidenceFigure 4: Reliability diagrams of ResNet-32 trained on LT CIFAR-100, IF=100. From left to right: cRT withmixup, LWS with mixup, LWS with mixup and shifted BN, and MiSLAS. It is better to look together with Fig. 1.Due to the different sampling strategies, the composition ratios of the head, medium, and tail classesare also totally different, which leads to PDI(x;y)6=PDC(x;y). Calculated by Eqn. (5) and (6),there exist some biases in andunder two sampling strategies, i.e., I6=C, and2I6=2C.Thus, it is clearly infeasible for the decoupling framework that BN shares mean and variance acrossdatasets with two sampling strategies. Motivated by AdaBN (Li et al., 2018) and TransNorm (Wanget al., 2019), we unfreeze the update procedures of the running mean and running variance butfix the learnable linear transformation parameters andfor a better normalization in Stage-2.4 E XPERIMENTS4.1 E XPERIMENTAL SETUPOur experimental setup including the implementation details and evaluation protocol mainly followsCao et al. (2019) for LT CIFAR-10 and LT CIFAR-100, and Kang et al. (2020) for ImageNet-LT,Places-LT, and iNuturalist 2018. Please see Appendix A for further details.4.2 A BLATION STUDYImproving calibration. Here we show the reliability diagrams with 15 bins of our methods inFig. 4. Comparing with Fig. 1 in the introduction part, both the mixup and label-aware smoothing cannot only largely enhance the network calibration (even lower ECEs than them on balanced datasets)but also greatly improve the performance for long-tailed recognition. The similar trends can alsobe found on LT CIFAR-10, ImageNet-LT, and Places-LT (please see the figures in Appendix B fordetail), which proves the powerful effects of the proposed method on calibration. According to allexperiment results, training networks on imbalanced datasets leads to more severe over-confidence.Since the conventional mixup and label-smoothing both contain the operation of softening the groundtruth labels, which may suggest that training with hard labels is likely to be another contributingfactor leading to network over-confidence.Further analysis of label-aware smoothing. In our label-aware smoothing, there are two hyper-parameters in Eqn. (3), 1andK, which control the penalties of classes. In recognition system, ifthe predicted probability of some Class- yis larger than 0.5, the classifier will classify the input toClass-y. Thus, to ensure reasonability, we limit 0K10:5. Here we conduct a comparingexperiment for varying 1andKboth from 0.0 to 0.5 on LT CIFAR-100 with imbalanced factor100. We plot the performance matrix upon 1andkin Fig. 5 for all possible variants. Fromit, the classification accuracy can be further improved by 0.9% comparing with the conventionalcross-entropy ( 1= 0,K= 0, green square) when we pick 1= 0:4,K= 0:1(orange square)for label-aware smoothing. A more surprising improvement (growing by 3.3%) can be found onLT CIFAR-10 (see Appendix D.1 for detail). 
We also find that the concave related function f()inEqn. (3) achieves the best performance but the gain is quite limited (refer Appendix D.2 for detail).To visualize the change in predicted probability distributions, we train two LWS models, one withcross-entropy and the other with label-aware smoothing on long-tailed CIFAR-100 with imbalancedfactor 100. The cross-entropy-based distributions of the head, medium, and tail classes are showing inthe upper half part of Fig. 3 in light blue. The label-aware smoothing-based distributions are showingin the bottom half part in deep blue. We observe that the over-confidence of head and medium classesrelieve greatly, and the whole distribution of the tail classes slightly moves right (a larger mean) whenusing label-aware smoothing. This empirical visualization is consistent with our analysis mentionedin Sec. 3.2.6Under review as a conference paper at ICLR 2021Table 2: Ablation study for all proposed moduleson long-tailed CIFAR-100, IF=100. MU: applyingmixup just in Stage-1. SL: shift learning on batchnormalization. LAS: label-aware smoothing.Module LT CIFAR-100MU SL LAS 100 50 108 8 8 41.2 46.0 58.54 8 8 44.2 50.6 62.24 4 8 45.3 51.4 62.84 4 4 47.0 52.3 63.00.00.10.20.30.40.510.00.10.20.30.40.5K46.1746.7346.8946.9346.8946.8746.1346.6246.8347.0446.9446.1146.6046.7646.9646.0846.4546.8546.1246.5046.0546.246.446.646.847.0Figure 5: Ablation study of two hyperparameters1andKin label-aware smoothing.2468101214160.60.30.00.30.68162432404856641.00.50.00.5246810121416The Frist BN (bn1)14710816243240485664The Last BN (layer3.4.bn2)0.00.20.40.62BN with shift, Acc. 45.3%BN w/o shift, Acc. 44.2%Figure 6: Visualization of the changes in the running mean and variance 2. The ResNet-32 based modelis trained on LT CIFAR-100 with imbalanced factor 100. Left: and2in the first BN of ResNet-32, whichcontains 16 channels. Right: and2in the last BN of ResNet-32, which contains 64 channels.Further analysis of shift learning. In this part, we conduct an empirical experiment to show theeffectiveness and reasonability of shift learning on BN. We train the LWS model on long-tailedCIFAR-100 with imbalanced factor 100. After 10 epochs finetuning in Stage-2, the model trainedwith BN shifting achieves accuracy at 45:3%, which is 1:1%higher than it without BN shifting. Wealso draw a visualization of the change in BN. As shown in Fig. 6, we see that there indeed existbiases inand2between the dataset using different sampling strategies. Due to the compositionratios of the head classes, medium classes and tail classes are different in terms of different samplingstrategies, the statistic running mean and running variance 2are certainly different. We also findsome interesting phenomenons need for future exploration: (i) The changes in the running variance2are larger than the changes in the running mean . (ii) The changes of and2in deep BNlayers are quite smaller than them in shallow BN layers.Overall, Table 2 shows the ablation investigation on the effects of mixup (adding mixup in Stage-1, MU), shift learning on batch normalization (SL), and label-aware smoothing (LAS). 
Overall, Table 2 shows the ablation study of the effects of mixup (adding mixup in Stage-1, MU), shift learning on batch normalization (SL), and label-aware smoothing (LAS). From it, each proposed module further improves performance on long-tailed CIFAR-100 for all commonly used imbalanced factors, which firmly demonstrates their effectiveness.

4.3 COMPARISON WITH THE STATE-OF-THE-ART

In this subsection, we compare the proposed method against previous one-stage methods, such as Range Loss (Zhang et al., 2017), LDAM Loss (Cao et al., 2019), FSLwF (Gidaris & Komodakis, 2018), and OLTR (Liu et al., 2019), and against previous two-stage methods, such as DRS-like and DRW-like training (Cao et al., 2019), LFME (Xiang & Ding, 2020), cRT, and LWS (Kang et al., 2020). For fair comparison, we also add mixup to the LWS and cRT models. Remix (Chou et al., 2020) is a recently proposed augmentation method for long-tailed recognition. Because BBN (Zhou et al., 2020) has double samplers and is trained in a mixup-like manner, we directly compare our method with it.

Table 3: Top-1 accuracy (%) for ResNet-32 models trained on long-tailed CIFAR-10 and CIFAR-100 (columns: imbalanced factors 100, 50, 10).

Method                  | LT CIFAR-10 (100 / 50 / 10) | LT CIFAR-100 (100 / 50 / 10)
CE                      | 70.4 / 74.8 / 86.4          | 38.4 / 43.9 / 55.8
mixup                   | 73.1 / 77.8 / 87.1          | 39.6 / 45.0 / 58.2
LDAM+DRW                | 77.1 / 81.1 / 88.4          | 42.1 / 46.7 / 58.8
BBN (includes mixup)    | 79.9 / 82.2 / 88.4          | 42.6 / 47.1 / 59.2
Remix+DRW (300 epochs)  | 79.8 / -    / 89.1          | 46.8 / -    / 61.3
cRT+mixup               | 79.1 / 84.2 / 89.8          | 45.1 / 50.9 / 62.1
LWS+mixup               | 76.3 / 82.6 / 89.6          | 44.2 / 50.6 / 62.2
MiSLAS                  | 82.1 / 85.8 / 89.9          | 47.0 / 52.3 / 63.0

Table 4: Top-1 accuracy (%) on ImageNet-LT (left), iNaturalist 2018 (center), and Places-LT (right).
(a) ImageNet-LT (ResNet-50): CE 44.6; CE+DRW 48.5; Focal+DRW 47.9; LDAM+DRW 48.8; cRT+mixup 51.7; LWS+mixup 52.0; MiSLAS 52.7.
(b) iNaturalist 2018 (ResNet-50): CB-Focal 61.1; LDAM+DRW 68.0; BBN (includes mixup) 69.6; Remix+DRW 70.5; cRT+mixup 70.2; LWS+mixup 70.9; MiSLAS 71.6.
(c) Places-LT (ResNet-152): Range Loss 35.1; FSLwF 34.9; OLTR 35.9; OLTR+LFME 36.2; cRT+mixup 38.3; LWS+mixup 39.7; MiSLAS 40.4.

Experimental results on CIFAR-LT. We conduct extensive experiments on long-tailed CIFAR-10 and CIFAR-100 with imbalanced factors of 10, 50, and 100, following the same setting as previous work (Cao et al., 2019; Zhou et al., 2020). The experimental results are summarized in Table 3. Compared with previous methods (with mixup, one- or two-stage), our MiSLAS outperforms all of them by a large margin. Moreover, this superiority of the proposed method holds for all imbalanced factors on both long-tailed CIFAR-10 and CIFAR-100.

Experimental results on ImageNet-LT, iNaturalist 2018, and Places-LT. We further verify the effectiveness of our method on three large-scale imbalanced datasets: ImageNet-LT, iNaturalist 2018, and Places-LT. Table 4 lists the experimental results on ImageNet-LT (left), iNaturalist 2018 (center), and Places-LT (right). Notably, our MiSLAS still outperforms all competing approaches and sets new state-of-the-art records on all three large-scale long-tailed benchmarks. More detailed results on per-split class accuracies and different backbones for these three datasets are listed in Appendix C.

5 CONCLUSION

In this paper, we discover that models trained on long-tailed datasets are more miscalibrated and over-confident than those trained on balanced datasets, and that two-stage models suffer from the same issue. To relieve over-confidence, we propose two solutions: (i) We find that mixup can remedy over-confidence and has a positive effect on representation learning but a negative or negligible effect on classifier learning.
(ii) To further improve classifier learning and calibration, we propose label-aware smoothing to handle the different degrees of over-confidence for different classes. We are the first to note the dataset bias, or domain shift, in two-stage resampling methods for long-tailed recognition. To address the dataset bias produced by different re-sampling strategies in the decoupling framework, we propose shift learning on the batch normalization layer, which greatly improves performance. Extensive quantitative and qualitative experiments on multiple benchmark datasets show that our MiSLAS achieves superior performance over the state-of-the-art methods.<|im_end|> <|im_start|>assistant ### Review Title Interesting paper with some promising results ### Review Text Summary: This paper addresses two major issues in previous work on long-tailed recognition: 1) models are over-confident on "head" classes, and 2) the domain shift (dataset bias) is ignored in two-stage models. Accordingly, the authors propose two novel solutions: label-aware smoothing and shift learning on batch normalization. They also discover that mixup can improve representation learning in long-tailed recognition. The experiments verify the effectiveness of their approaches and show significant improvements on ImageNet-LT, iNaturalist, and Places-LT. Pros: - Overall, the presentation of this paper is clear and the writing is good. - The authors address the issues in previous works and conduct various ablation studies for verification. - The experimental results on benchmark datasets seem very promising and outperform baselines by a large margin. Cons: - In the experiments, I wonder if the authors apply mixup alone or together with other basic augmentations, such as rotation and flipping. It would be interesting to see the performance when combined with some basic augmentations, since training without data augmentation usually gives poor performance. - It would be nice to add a plot indicating the difference among the three functions for Eqn. (3), e.g., move the left plot of Figure 11 to the main text and explain their behaviors. Overall, I find this paper interesting and novel. ### Review Rating 6: Marginally above acceptance threshold ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
HksioDcxl
ICLR.cc/2017/conference
2017
Joint Training of Ratings and Reviews with Recurrent Recommender Networks
["Chao-Yuan Wu", "Amr Ahmed", "Alex Beutel", "Alexander J. Smola"]
Accurate modeling of ratings and text reviews is at the core of successful recommender systems. While neural networks have been remarkably successful in modeling images and natural language, they have been largely unexplored in recommender system research. In this paper, we provide a neural network model that combines ratings, reviews, and temporal patterns to learn highly accurate recommendations. We co-train for prediction on both numerical ratings and natural language reviews, as well as using a recurrent architecture to capture the dynamic components of users' and items' states. We demonstrate that incorporating text reviews and temporal dynamics gives state-of-the-art results over the IMDb dataset.
["ratings", "reviews", "text reviews", "joint training", "recurrent recommender networks", "modeling", "core", "successful recommender systems", "neural networks"]
ABSTRACT

Accurate modeling of ratings and text reviews is at the core of successful recommender systems. While neural networks have been remarkably successful in modeling images and natural language, they have been largely unexplored in recommender system research. In this paper, we provide a neural network model that combines ratings, reviews, and temporal patterns to learn highly accurate recommendations. We co-train for prediction on both numerical ratings and natural language reviews, as well as using a recurrent architecture to capture the dynamic components of users' and items' states. We demonstrate that incorporating text reviews and temporal dynamics gives state-of-the-art results over the IMDb dataset.

(* A majority of this work was done while the author was at Carnegie Mellon University.)

1 INTRODUCTION

Designing highly accurate recommender systems has been the focus of research in many communities and at the center of many products for the past decade. The core goal is to predict which items a given user will like or dislike, typically based on a database of previous ratings and reviews. In particular, a good recommender system has been defined as one that predicts the rating for randomly chosen and unseen (user, item) pairs. During the Netflix Prize contest, a variety of factorization models were proposed to capture the latent embeddings of users and items that would lead to accurate recommendations (Bell & Koren, 2007; Koren et al., 2009). Generative models for personalized ratings have recently become popular, due to impressive and robust results (Mnih & Salakhutdinov, 2007; Salakhutdinov & Mnih, 2008; Stern et al., 2009; Beutel et al., 2015).

More recently, there has been an interest in the recommender system community to also make use of the rich natural language reviews provided by users. Most often, these reviews have been transformed into a bag-of-words model and used as a sort of regularization for the rating predictions (McAuley & Leskovec, 2013; Diao et al., 2014; Almahairi et al., 2015; Wu et al., 2016b). Using reviews in this way has been found to improve prediction accuracy, and in some cases provide detailed explanations for the recommendations.

This previous research has been remarkably successful, but has two significant limitations that we discuss and address in this paper. First, prediction accuracy has rarely been measured by the ability of a model to predict future ratings. Rather, recommendation accuracy has been derived from a random split of the ratings data, which undermines our understanding of the models' usefulness in practice. Here, we focus on predicting future ratings, splitting our training and testing data by date. In order to be successful at this task, we incorporate the time of ratings and reviews in our model structure and training. Koren (2010) previously derived temporal features of ratings data, but used these features to remove temporal effects, since the metric of success was interpolation, not extrapolation. More recently, Recurrent Recommender Networks (RRN) use a recurrent neural network to capture changes in both user preferences and item perceptions, and extrapolate future ratings in an autoregressive way (Wu et al., 2016a). However, temporal patterns in reviews are largely unexplored. Note that just like ratings, reviews also depend on changing factors, such as user writing styles, user preferences, movie perceptions, or the popularity of certain slang words or emoticons.
Here we use a generative LSTM model that is able to jointly model the temporal effects in ratings and reviews.

Second, models of reviews in recommender systems fall significantly behind the state of the art in natural language processing. The bag-of-words model used in previous research improves over not using text, but is limited in the degree to which it can understand the review. In fact, the drawback of an underfitting model is especially salient in the case of reviews, because they are much more diverse and unstructured than regular documents. Recently there has been significant research attention on modeling natural language with neural networks, with encouraging results (Lipton et al., 2015; Yang et al., 2016). Here, we combine these powerful neural language models with a recurrent neural network to learn both accurate recommendations and accurate reviews. Our main contributions are as follows:

- Joint generative model: We propose a novel joint model of ratings and reviews via interacting recurrent networks (particularly LSTMs).
- Nonlinear nonparametric review model: By learning a function of user and movie state dynamics, we can capture the evolution of reviews (as well as ratings) over time.
- Experiments show that by jointly modeling ratings and reviews along with temporal patterns, our model achieves state-of-the-art results on the IMDb dataset in terms of forward prediction, i.e., in the realistic scenario where we use only ratings strictly prior to prediction time to predict future ratings.

2 RELATED WORK

Collaborative Filtering  As mentioned in the introduction, recommender systems have been the focus of many different research communities. The Netflix Prize generated a flurry of research to improve recommendation accuracy, with a variety of matrix factorization models being proposed (Bell & Koren, 2007; Koren et al., 2009; Koren, 2008). During the Netflix competition, and even more afterwards, a stream of research has focused on designing generative Bayesian models for user ratings data (Mnih & Salakhutdinov, 2007; Salakhutdinov & Mnih, 2008; Stern et al., 2009; Beutel et al., 2014; 2015). Nearly all of these models predict ratings by an inner product between a latent user embedding and a latent item embedding; the approaches differ primarily in regularization, e.g., Bayesian models and learning algorithms that capture uncertainty in the data.

Other models have tried to capture interesting patterns discovered in ratings data. As an example, Beutel et al. (2014) finds that some ratings form bimodal rather than Gaussian distributions and designs a model to accommodate this diversity. More closely related to this work, Koren (2010) designs many features to capture and remove the temporal effects in ratings data. By removing these temporal effects, Koren (2010) learns better stationary embeddings for users and items. Work such as this improves prediction accuracy, but has two drawbacks: (1) it requires time-consuming feature engineering, and (2) it focuses on interpolation rather than extrapolation into the future. Wu et al. (2016a) addresses both of these concerns by learning a function for the evolution of user preferences and item properties. However, that work focuses exclusively on modeling ratings over time and, in large part, on the qualitative patterns discovered in the Netflix dataset.
Here we focus on the model itself and, in particular, the interaction of jointly understanding ratings, reviews, and temporal patterns.

Review Modeling  Although the most common metric for recommendation accuracy has been rating prediction, natural language reviews provide rich, detailed insight into user preferences. Most often, reviews have been used in a bag-of-words model to regularize rating prediction (McAuley & Leskovec, 2013; Diao et al., 2014; Wu et al., 2016b). For example, McAuley & Leskovec (2013) effectively learns a topic model of reviews to regularize item embeddings. With such coarse models, the impact of and insight from reviews are limited. More recently, Almahairi et al. (2015) use neural-network-based review models to regularize hidden factors, but their model assumes only stationary states.

[Figure 1: As shown on the left, previous recommendation models learn static stationary embeddings for users and movies to predict ratings. As shown on the right, we can also capture temporal effects present in the data. We have both user and movie embeddings follow a Markov chain, and use these dynamic embeddings (along with stationary ones not shown) to predict both ratings and text reviews.]

Interestingly, data mining research has found that review patterns are dynamic, with different language being adopted by communities over time (Danescu-Niculescu-Mizil et al., 2013). Therefore, it is important to capture not just the dynamics of ratings, but also the language used to justify those ratings.

Neural Networks  Neural networks have recently offered large improvements in natural language processing. More recently, a few papers have focused these natural language models on online reviews (Lipton et al., 2015; Yang et al., 2016). However, while these papers do model online reviews, they differ greatly from our work in that they are not actually used for recommendation. With the recent remarkable successes of neural networks in other domains, there has been growing attention on using neural networks to model graphs and ratings data. Most similarly, Sedhain et al. (2015) design an autoencoder for collaborative filtering.

LSTM and Recurrent Networks  A recurrent neural network provides a powerful tool to nonparametrically model temporal data by using a latent-variable autoregressive model as follows:

$\hat{z}_{t+1} = f(h_t, z_t) \quad \text{and} \quad h_{t+1} = g(h_t, z_{t+1}),$

where $z_t$ is the observation at time $t$, $\hat{z}_t$ is the model's associated estimate, and $h_t$ denotes the latent state. A popular class of RNNs is the Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997), and we use this as a building block in our model. The state updates are given below:

$[f_t, i_t, o_t] = \sigma(W[h_{t-1}, z_t] + b)$  (1)
$l_t = \tanh(V[h_{t-1}, z_t] + d)$  (2)
$c_t = f_t \cdot c_{t-1} + i_t \cdot l_t$  (3)
$h_t = o_t \cdot \tanh(c_t),$  (4)

where $f_t$, $i_t$, $o_t$ denote the forget gate, input gate, and output gate respectively. For simplicity, in the following we denote this set of operations by $h_t = \mathrm{LSTM}(h_{t-1}, z_t)$. We will refer to $h_t$ as the output embedding from the LSTM.
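To make Eqns. (1)-(4) concrete, here is a minimal NumPy sketch of a single LSTM step. The shapes and variable names are illustrative assumptions; it mirrors the equations above rather than any particular library implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(h_prev, c_prev, z_t, W, V, b, d):
    """One LSTM update following Eqns. (1)-(4).
    h_prev, c_prev: previous output and cell state, shape (H,)
    z_t:            current input, shape (Z,)
    W: (3H, H+Z) gate weights; b: (3H,) gate biases
    V: (H,  H+Z) candidate weights; d: (H,) candidate bias
    """
    hz = np.concatenate([h_prev, z_t])           # [h_{t-1}; z_t]
    gates = sigmoid(W @ hz + b)                  # Eqn. (1)
    H = h_prev.shape[0]
    f_t, i_t, o_t = gates[:H], gates[H:2 * H], gates[2 * H:]
    l_t = np.tanh(V @ hz + d)                    # Eqn. (2)
    c_t = f_t * c_prev + i_t * l_t               # Eqn. (3)
    h_t = o_t * np.tanh(c_t)                     # Eqn. (4)
    return h_t, c_t
```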
3 MODEL

A comparison of our model with traditional recommender systems is illustrated in Figure 1. In previous recommender systems, ratings are assumed to be a function of stationary user and movie embeddings. Here we consider dynamic embeddings that predict both ratings and text reviews at a given time step.

Figure 2 shows a depiction of our model: the Joint Review-Rating Recurrent Recommender Network. In addition to stationary embeddings as used in traditional recommender systems, here we use two LSTM RNNs that take user/movie history as input to capture the temporal dynamics in both user and movie states.

[Figure 2: Joint Review-Rating Recurrent Recommender Networks: We use recurrent networks to capture the temporal evolution of user and movie states. The recurrent networks depend on the ratings of a user (and movie) in previous time steps. We combine these dynamic states with classic stationary states. We directly use all of these states to predict ratings, and use them within an LSTM to model review text.]

Given stationary and dynamic states of user $i$ and movie $j$, we define generator functions that emit both rating $r_{ij|t}$ and review $o_{ij|t}$ at time step $t$. Formally,

$r_{ij|t} = f(u_i, m_j, u_{it}, m_{jt}) \quad \text{and} \quad o_{ij|t} = \psi(u_i, m_j, u_{it}, m_{jt}),$
$u_{i,t+1} = g(u_{it}, \{r_{ij|t}\}) \quad \text{and} \quad m_{j,t+1} = h(m_{jt}, \{r_{ij|t}\}),$

where $u_i$ and $m_j$ denote stationary states, and $u_{it}$ and $m_{jt}$ denote the dynamic states at time $t$. Note that with learned $f$, $\psi$, $g$, and $h$, and given user/movie history, a user/movie state can be inferred without further optimization. In other words, different from traditional recommender systems, here we learn the functions that find the states instead of learning the states directly.

3.1 DYNAMIC USER AND MOVIE STATE

Here we give a detailed description of the RNNs that find the dynamic states. The key idea is to use user/movie rating history as input to update the states. In this way we are able to model causality instead of just finding correlation. That is, we can model, e.g., the change of a user (movie) state caused by having watched and liked/disliked a movie (being liked/disliked by certain users). At each step, the network takes

$y_t := W_{\text{embed}}[x_t, 1_{\text{newbie}}, \tau_t, \tau_{t-1}],$  (5)

where $x_t$ is the rating vector, $1_{\text{newbie}}$ is the indicator for new users, and $\tau_t$ is wall-clock time. The $j$th element of $x_t$ is the rating the user gives to movie $j$ at time $t$, and 0 otherwise. $1_{\text{newbie}}$ effectively selects a default embedding for a new user, and $\tau_t$ and $\tau_{t-1}$ give the model the information to synchronize between RNNs and to model effects such as rating scale changes or movie age. Note that with the inclusion of the $\tau$'s, we do not need to include the steps where a user did not rate any movie, and this can drastically speed up training. The state update is given by the standard $u_t := \mathrm{LSTM}(u_{t-1}, y_t)$. In the above we omit the user index for clarity. In cases where we need to distinguish different users (and movies), such as in Figure 2, we use an additional index $i$ for user $i$ as in $u_{it}$, and similarly for movie $j$ in $m_{jt}$.
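A hedged sketch of one dynamic-state update, reusing the lstm_step function from the earlier sketch; the feature layout of Eqn. (5) and all names are illustrative assumptions:

```python
import numpy as np

def user_state_step(u_prev, c_prev, ratings, is_newbie, tau_t, tau_prev,
                    W_embed, lstm_params):
    """Eqn. (5) plus the update u_t := LSTM(u_{t-1}, y_t).
    ratings:   rating vector x_t over movies (0 where the user did not rate)
    is_newbie: 1.0 for a new user, else 0.0
    tau_t, tau_prev: wall-clock times of this and the previous step
    lstm_params = (W, V, b, d) as in lstm_step above; W_embed is learned.
    """
    feats = np.concatenate([ratings, [is_newbie, tau_t, tau_prev]])
    y_t = W_embed @ feats                              # Eqn. (5)
    return lstm_step(u_prev, c_prev, y_t, *lstm_params)
```

The movie-state RNN is symmetric, with each movie consuming the ratings it receives.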
3.2 RATING EMISSIONS

We supplement the time-varying profile vectors $u_{it}$ and $m_{jt}$ with stationary ones $u_i$ and $m_j$ respectively. These stationary components encode time-invariant properties such as the long-term preferences of a user or the genre of a movie. The review rating is thus modeled as a function of both dynamic and stationary states, i.e.,

$r_{ij} = f(u_{it}, m_{jt}, u_i, m_j) := \langle \tilde{u}_{it}, \tilde{m}_{jt} \rangle + \langle u_i, m_j \rangle,$  (6)

where $\tilde{u}_{it}$ and $\tilde{m}_{jt}$ are affine functions of $u_{it}$ and $m_{jt}$ respectively. That is, we have

$\tilde{u}_{it} = W_{\text{user}} u_{it} + b_{\text{user}} \quad \text{and} \quad \tilde{m}_{jt} = W_{\text{movie}} m_{jt} + b_{\text{movie}}.$

This makes the model a strict superset of popular matrix factorization recommender systems that account for stationary effects, while we use LSTMs, on top of that, to model longer-range dynamic updates.

3.3 REVIEW TEXT MODEL

Review text is modeled by a character-level LSTM network. This network shares the same user/movie latent states with the rating model. After all, the purpose of a review is to explain its rating score. We fuse the stationary and dynamic states of both user and movie by the bottleneck layer $x_{\text{joint},ij}$ given below:

$x_{\text{joint},ij} := \phi(W_{\text{joint}}[u_{it}, m_{jt}, u_i, m_j] + b_{\text{joint}})$  (7)
$\tilde{x}_{ij,k} := [x_{o_{ij,k}}, x_{\text{joint},ij}],$  (8)

where $o_{ij,k}$ denotes the character at position $k$ of the review given by user $i$ to movie $j$, and $x_{o_{ij,k}}$ denotes the embedding of that character. Here $\phi$ is some non-linear function. The review text emission model is itself an RNN, specifically a character-level LSTM generative model. For character index $k = 1, 2, \ldots$,

$h_{ij,k} := \mathrm{LSTM}(h_{ij,k-1}, \tilde{x}_{ij,k})$  (9)
$\hat{o}_{ij,k} := \mathrm{softmax}(W_{\text{out}} h_{ij,k} + b_{\text{out}}).$  (10)

Here a softmax layer at the output of the LSTM is used to predict the next character. Generating text conditioned on content has been applied in various areas, such as machine translation (Sutskever et al., 2014), question answering (Gao et al., 2015), and image captioning (Vinyals et al., 2015). Probably the most similar approach is Lipton et al. (2015), but it conditions review generation on observed ratings instead of latent states.

3.4 PREDICTION

At prediction time, we make rating predictions based on predicted future states. That is, we take the latest ratings as input to update the states, and use the newly predicted states to predict ratings. This differs from traditional approaches, where embeddings are estimated instead of inferred.

3.5 TRAINING

Our goal is to predict both accurate ratings and accurate reviews, and thus we minimize

$L := \sum_{(i,j) \in D_{\text{train}}} \left[ (\hat{r}_{ij}(\theta) - r_{ij})^2 - \lambda \sum_{k=1}^{n_{ij}} \log \Pr(o_{ij,k} \mid \theta) \right],$  (11)

where $D_{\text{train}}$ is the training set of $(i,j)$ pairs, $\theta$ denotes all model parameters, and $n_{ij}$ is the number of characters in the review user $i$ gives to movie $j$. The first term corresponds to the deviation of the prediction from the actual rating, and the second term is the likelihood of the text reviews; $\lambda$ controls the weight between predicting accurate ratings and predicting accurate reviews. Our training follows the subspace descent strategy of Wu et al. (2016a). That is, while the review generative model is updated in every iteration, the user-state and movie-state RNNs are updated in an alternating way.
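To summarize the emission and training objective in code, here is a hedged PyTorch sketch of Eqn. (6) and Eqn. (11). The per-character normalization follows the description in Sec. 3.5 below, but its exact placement and all tensor names are assumptions:

```python
import torch

def predict_rating(u_dyn, m_dyn, u_stat, m_stat, W_u, b_u, W_m, b_m):
    """Eqn. (6): inner product of affine-mapped dynamic states plus the
    inner product of the stationary factors."""
    u_tilde = u_dyn @ W_u.T + b_u
    m_tilde = m_dyn @ W_m.T + b_m
    return (u_tilde * m_tilde).sum(-1) + (u_stat * m_stat).sum(-1)

def joint_loss(r_hat, r_true, review_log_probs, lam=1.0):
    """Eqn. (11): squared rating error minus a lambda-weighted review
    log-likelihood; each review's log-likelihood is normalized by its
    character count so it cannot dominate the rating term.
    review_log_probs: list of 1-D tensors, log Pr(o_ij,k) per review."""
    rating_term = (r_hat - r_true).pow(2).sum()
    review_term = sum(lp.sum() / lp.numel() for lp in review_log_probs)
    return rating_term - lam * review_term
```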
Thistechnique is common in NLP literature (Wang & McCallum, 2006).4 E XPERIMENTSIn this section we empirically demonstrate the ability of our model to accurately predict both ratingsand reviews, and capture temporal dynamics.4.1 E XPERIMENTAL SETUPIn the following experiments, we select hyperparameters, optimization parameters and model ar-chitecture by cross-validation. The details are as follows. We use 1-layer LSTM recurrent neuralnetworks with 40 hidden factors for user/movie state transitions. The input of this LSTM is anuser/item embedding of dimension 40. Stationary and dynamic factors are 160 and 40-dimensionalrespectively. A 2-layer LSTM network is used to model texts, which takes 30-dimensional characterembeddingxchar, 40-dimensional state vector xjoint, and a 50-dimensional movie embedding xmovie .To speed up convergence, we initialize the text model by a character-level RNN pre-trained withoutconsidering rating. Stationary factors are initialized by a pre-trained iAutoRec (Sedhain et al., 2015)model based on the last layer. We initialize all the other parameters from uniform distribution between[a;a]witha=p1:5(fin+fout), wherefinandfoutare fan-in and fan-out of transition matrices.`2regularization with magnitude 0:001is applied to all parameters. Dropout with a 0:5rate is appliedafter all fully-connected layers. To prevent exploding gradients in of LSTM, gradients are clipped to[15;15]. ADAM (Kingma & Ba, 2014) with learning rate 0:0015 is used for optimization.k (number of ratings)100105# users with k ratings100102104106(a) User distribution.k (number of ratings)100102104# movies with k ratings100105 (b) Movie distribution.Review length k (characters)100105# reviews of length k100101102103104 (c) Review length distribution.Figure 3: Characteristics of IMDb dataset.Data Here we focus on movie recommendations, where the opinions are highly dynamic. Weevaluate our model on IMDb dataset, first used in Diao et al. (2014), that is the only large-scale movie6Under review as a conference paper at ICLR 2017PMF Time-SVD++ U-AutoRec I-AutoRecRRN RRN(rating) (rating + text)IMDb 1.7355 1.7348 1.7332 1.7135 1.7047 1.7012Netflix 6 months 0.9584 0.9589 0.9836 0.9778 0.9427 -Table 2: RRN outperforms competing models in terms of RMSE. In addition, jointly modeling ratingsand reviews achieves even better accuracy.review dataset available. Restaurant recommendations (e.g. Yelp) could be also a suitable domain,but full rating history is not available in publicly available datasets1.The IMDb dataset contains full review and rating history of all users and all movies from 1998to 2013. The characteristics of this dataset is shown in Figure 3. We see that the user and movieratings follow heavy tail distributions, and thus the majority of users and movies have very fewreviews, making accurate recommendation challenging for these users and movies. Review length issummarized in Figure 3 (c). Since one of the major goal of this project is to study temporal dynamics,we focus on users and items that have multiple interactions with the system. Specifically, we select asubset of k-core of the graph with k= 15 . That is, each user and movie has at least 15 ratings in thissubset. Note that the resulting subgraph is still very sparse – with only 0:8%density, which is sparserthan for example, 1.2 % density of Netflix dataset . For completeness, we also include the 6-monthNetflix dataset as used in Wu et al. 
(2016a), which has only ratings, to study RRN’s ability to modeltemporal patterns.The dataset is split by date instead of random sampling to simulate the real recommendation settingswhere we need to predict into the future instead of interpolating the past. IMDb training set containsall ratings from July 1998 to December 2012, and the ratings from January to September 2013 arerandomly split into a validation set and a test set. Similarly, the 6-month Netflix dataset is split intoJanuary to November 2011 (training) and December 2011 (testing and validation). We report theresults on testing set with the model that gives the best results on validation set. The summary of thisdataset is given in Table 1.Baselines We compare our model with models including the state-of-the-art temporal model, and astate-of-the-art neural network-based model.PMF (Mnih & Salakhutdinov, 2007): Our model extends matrix factorization by includinga dynamic part and a joint review model. Comparing to PMF directly shows us the advantageof our approaches. LIBPMF (Yu et al., 2012) is used in experiments.Time-SVD++ (Koren, 2010): Time-SVD++ is the state-of-the-art model for temporaleffects. It achieves excellent performance in Netflix contest. Implementation in GraphChi(Kyrola et al., 2012) is used in experiments.AutoRec (Sedhain et al., 2015): AutoRec is the state-of-the-art neural network recom-mender system. It learns an autoencoder that encodes user (item) histories into a low-dimensional space and then predict ratings by decoding. No temporal effects or causalityare considered in this model. We use the software the authors provide in experiments.All models use comparable number of factor sizes. Parameters of PMF and Time-SVD++ are selectedby grid-search. Settings of AutoRec follow the original paper. We also include the performance ofrating-only RRN, as in Wu et al. (2016a), to separate the benefits obtained from temporal modelingand review texts.4.2 R ATING PREDICTIONOne important goal of recommender systems is making accurate rating predictions. Here we evaluatethe accuracy by root-mean-square error (RMSE) of prediction from the true rating. The resultsare summarized in Table 2. For completeness, we include the results from Wu et al. (2016a) on1https://www.yelp.com/dataset_challenge7Under review as a conference paper at ICLR 20176-month Netflix dataset that use ratings only to compare the behavior of different models on differentdatasets. We see that rating-only RRN outperforms all baseline models in terms of rating predictionconsistently in both dataset. More importantly, joint-modeling ratings and reviews boosts theperformance even more , compared to rating-only RRN. This implies that by sharing statisticalstrength between ratings and reviews, the rich information in reviews helps us estimate the latentfactors better. Note that while the absolute improvements in RMSE might not appear to be huge,the 1.98% improvement over PMF is actually considerable in terms of recommendations2. We alsosee that while Time-SVD++ performs well in Netflix contest, it does not work as well for predictingfuture ratings. After all, the goal of Time-SVD++ is estimating the temporal bias in hindsight insteadof extrapolating into future states.4.3 T EXT MODELINGHere we examine the impact of conditioning on user and item states for text modeling. 
Towards thisend, we compare perplexity of characters in testing set with and without using the user/item factors.Perplexity is defined asppx(Dtest) = exp 1NcXc2Dtestlog Pr(c)!;whereNcis the total number of characters in Dtest, and Pr(c)is the likelihood of character c.Interestingly, we found that by jointly training with user and item states, the perplexity improvesfrom 3.3442 to 3.3362 .4.4 T EMPORAL DYNAMICSHere we study if RRN is able to automatically capture the overall rating trends in IMDb by adaptivelyupdating states along history sequence. Specifically, at each time step, we randomly sample up to1000 users, and see what ratings the users would have given to each of the movie given their statesat the time step, even in reality the user might not have given a rating to the movie. This gives usan unbiased estimation of average behavior of our model on each of the ratings. Figure 4 shows theaverage predicted ratings in this setting and the true average rating in the data set. We see that RRNclearly captures the overall trend in IMDb smoothly.Year1999 2002 2005 2008 2011Avg. rating in data set6.66.877.27.47.6(a) Average ratings on IMDb.Year1999 2002 2005 2008 2011Avg. predicted rating6.66.877.27.47.67.8 (b) Predicted ratings.Figure 4: RRN is able to capture the overall trend of data. (a) show the average ratings of all movieson IMDb over time. In (b) we see the predicted ratings are consistent with this trend.5 D ISCUSSION & C ONCLUSIONWe present a novel approach that jointly models ratings, reviews, and their temporal dynamics withRRN. The contributions we have provided are as follows:2For example, in 2009 SVD++ outperforms SVD by 1.09% and Time-SVD++ outperforms SVD++ by 1.25%,and they are considered important progress in recommender systems.8Under review as a conference paper at ICLR 20171.Joint rating-review modeling: We offer an LSTM-based joint rating-review model thatprovides advantages in both rating prediction and text modeling.2.Nonparametric dynamic review modeling: RRN is based on an autoregressive method tomodel temporal dynamics of users and movies, allowing us to capture how reviews changeover time.3.Empirical results: We demonstrate that our joint model offers state-of-the-art results onrating prediction in real recommendation settings, i.e. predicting into the future.9Under review as a conference paper at ICLR 2017
BJ9p-XmEx
6: Marginally above acceptance threshold
This paper proposed a joint model for rating prediction and text generation. The authors compared the methods in a more realistic time-based split setting, which requires "predicting into the future." One major flaw of the paper is that it does not address the impact of BOW vs. the RNN-based text model: specifically, RRN (rating+text) already uses an RNN for text modeling, so it is unclear whether the improvement comes from the RNN (as opposed to BOW) or from the use of text information. A clearer study of the impact of each component would benefit the readers. Another potential improvement direction is to support ranking objectives, as opposed to rating prediction, which is more realistic for recommendation settings. The overall technique is intuitive and novel, but the paper can be improved to give more insights to the reader.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Joint Training of Ratings and Reviews with Recurrent Recommender Networks ### Paper Abstract Accurate modeling of ratings and text reviews is at the core of successful recommender systems. While neural networks have been remarkably successful in modeling images and natural language, they have been largely unexplored in recommender system research. In this paper, we provide a neural network model that combines ratings, reviews, and temporal patterns to learn highly accurate recommendations. We co-train for prediction on both numerical ratings and natural language reviews, as well as using a recurrent architecture to capture the dynamic components of users' and items' states. We demonstrate that incorporating text reviews and temporal dynamic gives state-of-the-art results over the IMDb dataset. ### Paper Keywords ["ratings", "reviews", "text reviews", "joint training", "recurrent recommender networks", "modeling", "core", "successful recommender systems", "neural networks"] ### Paper Content ABSTRACTAccurate modeling of ratings and text reviews is at the core of successful rec-ommender systems. While neural networks have been remarkably successful inmodeling images and natural language, they have been largely unexplored in rec-ommender system research. In this paper, we provide a neural network modelthat combines ratings, reviews, and temporal patterns to learn highly accuraterecommendations. We co-train for prediction on both numerical ratings and naturallanguage reviews, as well as using a recurrent architecture to capture the dynamiccomponents of users’ and items’ states. We demonstrate that incorporating textreviews and temporal dynamic gives state-of-the-art results over the IMDb dataset.1 I NTRODUCTIONDesigning highly accurate recommender systems has been the focus of research in many communitiesand at the center of many products for the past decade. The core goal is to predict which items agiven user will like or dislike, typically based on a database of previous ratings and reviews. Inparticular, a good recommender system has been defined as one that predicts the rating for randomlychosen and unseen ( user,item) pairs. During the Netflix Prize contest, a variety of factorizationmodels were proposed to capture the latent embeddings of users and items that would lead to accuraterecommendations (Bell & Koren, 2007; Koren et al., 2009). Generative models for personalizedratings have recently become popular, due to impressive and robust results (Mnih & Salakhutdinov,2007; Salakhutdinov & Mnih, 2008; Stern et al., 2009; Beutel et al., 2015).More recently, there has been an interest in the recommender system community to also make use ofthe rich natural language reviews provided by users. Most often, these reviews have been transformedinto a bag-of-words-model and used as a sort of regularization for the rating predictions (McAuley &Leskovec, 2013; Diao et al., 2014; Almahairi et al., 2015; Wu et al., 2016b). Using reviews in thisway has been found to improve prediction accuracy, and in some cases provide detailed explanationsfor the recommendations.This previous research has been remarkably successful, but has two significant limitations that wediscuss and address in this paper. First, prediction accuracy has rarely been measured by the ability ofa model to predict future ratings. 
Rather, recommendation accuracy has been derived from a randomsplit of the ratings data, which undermines our understanding of the models’ usefulness in practice.Here, we focus on predicting future ratings, splitting our training and testing data by date. In order tobe successful at this task, we incorporate the time of ratings and reviews in our model structure andtraining. Koren (2010) previously derived temporal features of ratings data, but used these featurestoremove temporal effects since the metric of success was interpolation, not extrapolation. Morerecently, Recurrent Recommender Networks (RNN) use a recurrent neural network to capture changesA majority of this work was done while the author was at Carnegie Mellon University.1Under review as a conference paper at ICLR 2017in both user preferences and item perceptions, and extrapolate future ratings in an autoregressive way(Wu et al., 2016a). However, temporal patterns in reviews are largely unexplored. Note that just likeratings, reviews also depend on changing factors, such as user writing styles, user preferences, movieperceptions, or the popularity of certain slang words or emoticons. Here we use a generative LSTMmodel that is able to jointly model the temporal effects in ratings and reviews.Second, models of reviews in recommender system fall significantly behind the state-of-the-art innatural language processing. The bag-of-words model used in previous research improves over notusing text, but is limited in the degree to which it can understand the review. In fact, the drawback ofan underfitting model is especially salient in the case of reviews, because they are much more diverseand unstructured than regular documents. Recently there has been significant research attention onmodeling natural language with neural networks, with encouraging results (Lipton et al., 2015; Yanget al., 2016). Here, we combine these powerful neural-based language models with recurrent neuralnetwork to learn both accurate recommendations and accurate reviews. Our main contributions are asfollows:Joint generative model: We propose a novel joint model of ratings and reviews via inter-acting recurrent networks (particularly LSTM).Nonlinear nonparametric review model: By learning a function of user and movie statedynamics, we can capture the evolution of reviews (as well as ratings) over time.Experiments show that by jointly modeling ratings and reviews along with temporal pat-terns, our model achieves state-of-the-art results on IMDb dataset in terms of forwardprediction, i.e. in the realistic scenario where we use only ratings strictly prior to predictiontime to predict future ratings.2 R ELATED WORKCollaborative Filtering As mentioned in the introduction, recommender systems have been thefocus of many different research communities. The Netflix Prize generated a flurry of research toimprove recommendation accuracy, with a variety of matrix factorization models being proposed(Bell & Koren, 2007; Koren et al., 2009; Koren, 2008). During the Netflix competition and moreafterwards, a stream of research has focused on designing generative Bayesian models for user ratingsdata (Mnih & Salakhutdinov, 2007; Salakhutdinov & Mnih, 2008; Stern et al., 2009; Beutel et al.,2014; 2015). 
Nearly all of these models predict ratings by an inner product between a latent userembedding and a latent item embedding; different approaches primarily regularization, e.g., Bayesianmodels and learning algorithms capture uncertainty in the data.Other models have tried to capture interesting patterns discovered in ratings data. As an example,Beutel et al. (2014) finds that some ratings form bimodal rather than Gaussian distributions anddesigns a model to accommodate this diversity. More closely related to this work, Koren (2010)designs many features to capture and remove the temporal effects in ratings data. By removing thesetemporal effects, Koren (2010) learns better stationary embeddings for users and items. Work such asthis improves prediction accuracy, but has two drawbacks: (1) it requires time consuming featureengineering, and (2) it focuses on interpolation rather than extrapolation into the future. Wu et al.(2016a) addresses both of these concerns by learning a function for the evolution of user preferencesand item properties. However, this work focuses exclusively on modeling ratings over time and,in a large part, on the qualitative patterns discovered in the Netflix dataset. Here we focus on themodel itself and, in particular, the interaction of jointly understanding ratings, reviews, and temporalpatterns.Review Modeling Although the most common metric for recommendation accuracy has beenrating prediction, natural language reviews provide rich, detailed insight into user preferences. Mostoften, reviews have been used in a bag-of-words model to regularize rating prediction (McAuley &Leskovec, 2013; Diao et al., 2014; Wu et al., 2016b). For example, McAuley & Leskovec (2013)effectively learns a topic model of reviews regularize item embeddings. By using such coarse models,the impact of and insight from reviews is limited. More recently, Almahairi et al. (2015) use neuralnetwork based review models to regularize hidden factors, but their model assumes only stationarystates.2Under review as a conference paper at ICLR 2017rijuimjrijtwijtuituit+ uitmjtmjt+ mjtFigure 1: As shown on the left, previous recommendation models learn static stationary embeddingsfor users and movies to predict ratings. As shown on the right, we can also capture temporal effectspresent in the data. We have both user and movie embeddings follow a Markov chain, and use thesedynamic embeddings (along with stationary ones not shown) to predict both ratings and text reviews.Interestingly, data mining research has found that review patterns are dynamic, with different languagebeing adopted by communities over time (Danescu-Niculescu-Mizil et al., 2013). Therefore, it isimportant to capture not just the dynamics of ratings, but also the language used to justify thoseratings.Neural Networks Neural networks have recently offered large improvements in natural languageprocessing. More recently, a few papers have focused these natural language models on onlinereviews (Lipton et al., 2015; Yang et al., 2016). However, while these papers do model online reviews,they differ greatly from our work in that they are not actually used for recommendation.With the recent remarkable successes of neural networks in other domains, there has been growingattention on using neural networks for model graphs and ratings data. 
Most similar, Sedhain et al.(2015) design an autoencoder for collaborative filtering.LSTM and Recurrent Network Recurrent neural network provides a powerful tool to nonpara-metrically model temporal data by using a latent variable autoregressive model as follows:^zt+1=f(ht;zt)andht+1=g(ht;zt+1):Whereztis the observation at time t,^ztis the model associated estimate, and htdenotes the latentstate. A popular class of RNN is the Long Short Term Memory (LSTM) (Hochreiter & Schmidhuber,1997) and we use this as a building block in our model .The state updates is given below:[ft;it;ot] =[W[ht1;zt] +b] (1)lt= tanh [V[ht1;zt] +d] (2)ct=ftct1+itlt (3)ht=ottanh(ct); (4)whereft,it,otdenote the forget gate, input gate and the output gate respectively. For simplicity inthe following we denote this set of operations by ht= LSTM(ht1;zt). We will refer to htas theoutput embedding from the LSTM.3 M ODELA comparison of our model with traditional recommender systems is illustrated in Figure 1. Inprevious recommender systems, ratings are assumed to be a function of stationary user and movieembeddings. Here we consider dynamic embeddings that predict both ratings and text reviews at agiven time step.Figure 2 shows a depiction of our model: Joint Review-Rating Recurrent Recommender Network.In addition to stationary embeddings as used in traditional recommender systems, here we use two3Under review as a conference paper at ICLR 2017^oij;1 ^oij;2 ^oij;3 ^oij;4 user hnew i yi;t2yi;t1i l m .::: uiuitrijxij F i l mmjtoij;0 oij;1 oij;2 oij;3::: mjhnew i yj;t3yj;t2yj;t1movieFigure 2: Joint Review-Rating Recurrent Recommender Networks: We use recurrent networks tocapture the temporal evolution of user and movies states. The recurrent networks depend on theratings of a user (and movie) in previous time steps. We combine these dynamic states with classicstationary states. We directly use all of these states to predict ratings, and use them within an LSTMto model review text.LSTM RNNs that take user/movie history as input to capture the temporal dynamics in both userand movie states. Given stationary and dynamic states of user iand moviej, we define generatorfunctions that emit both rating rijjtand reviews oijjtat time stept. Formally,rijjt=f(ui;mj;uit;mjt)andoijjt= (ui;mj;uit;mjt)ui;t+1=g(uit;frijjtg)andmj;t+1=h(mjt;frijjtg);whereuiandmjdenote stationary states, and uitandmitdenote the dynamic state at t. Note thatwith learned f; ;g andhand given user/movie history, an user/movie state can be inferred withoutfurther optimization. In other words, different from traditional recommender systems, here we learnthefunctions that find the states instead of learning the states directly.3.1 D YNAMIC USER AND MOVIE STATEHere we give a detailed description on the RNNs that find the dynamic states. The key idea is to useuser/movie rating history as inputs to update the states. In this way we are able to model causalityinstead of just finding correlation. That is, we can model e.g. the change of user (movie) state causedby having watched and liked/disliked a movie (being liked/disliked by certain users). At each step,the network takesyt:=Wembed [xt;1newbie;t;t1]; (5)wherextis the rating vector, 1newbie is the indicator for new users, and tis wall-clock time. The jthelement ofxtis the rating the user gives for movie jat timet, and 0otherwise. 1newbie effectivelyselect a default embedding for a new user, and tandt1gives the model the information tosynchronize between RNNs and model the effects such as rating scale change or movie age. 
Note thatwith the inclusion of s, we do not need to include the steps where a user did not rate any movie, andthis can drastically speed up training. The state update is given by standard ut:= LSTM(ut1;yt).In the above we omit user index for clarity. In cases where we need to distinguish different users (andmovies) such as in Figure 2, we use additional index ifor userias inuit, and similarly for movie jinmjt.4Under review as a conference paper at ICLR 20173.2 R ATING EMISSIONSWe supplement the time-varying profile vectors uitandmjtwith stationary ones uiandmjrespec-tively. These stationary components encode time-invariant properties such as long-term preference ofa user or the genre of a movie.The review rating is thus modeled as a function of both dynamic and stationary states, i.e.rij=f(uit;mjt;ui;mj) :=h~uit;~mjti+hui;mji (6)where ~uitand~mjtare affine functions of uitandmjtrespectively. That is, we have~uit=Wuseruit+buserand~mjt=Wmoviemjt+bmovieThis makes the model a strict superset of popular matrix factorization recommender systems thataccounts for stationary effects, while we use LSTMs, on top of that, to model longer-range dynamicupdates.3.3 R EVIEW TEXT MODELReview text is modeled by a character-level LSTM network. This network shares the same user/movielatent states with the rating model. After all, the purpose of a review is to explain its rating score. Wefuse the stationary and dynamic states of both user of movie by the bottleneck layer xjoint;ijgivenbelow:xjoint;ij:=(Wjoint[uit;mjt;ui;mj] +bjoint) (7)~xij;k:=xoij;k;xjoint;ij(8)whereoij;kdenotes the character at position kfor the review given by user ito moviej, andxoij;kdenotes the embedding of the character. here is some non-linear function.The review text emission model is itself an RNN, specifically a character-level LSTM generativemodel. For character index k= 1;2;:::,hij;k:= LSTM(hij;k1;~xij;k) (9)^oij;k:= softmax ( Wouthij;k+bout) (10)Here a softmax layer at output of LSTM is used to predict the next character. Generating textconditioned on contents has been applied to various areas, such as machine translation (Sutskeveret al., 2014), question answering (Gao et al., 2015), or image captioning (Vinyals et al., 2015).Probably the most similar approach is Lipton et al. (2015), but it conditions review generation onobserved ratings instead of latent states.3.4 P REDICTIONIn prediction time, we make rating predictions based on predicted future states. That is, we take thelatest ratings as input to update the states, and use the newly predicted states to predict ratings. Thisdiffers from traditional approaches where embeddings are estimated instead of inferred.3.5 T RAININGOur goal is to predict both accurate ratings and accurate reviews, and thus we minimizeL:=X(i;j)2Dtrain"(^rij()rij)2nijXk=1log (Pr(oij;kj))#; (11)whereDtrain is the training set of (i;j)pairs,denotes all model parameters, and nijis the numberof characters in the review user igives to movie j. The first term corresponds to the deviation of theprediction from the actual rating, and the second term is the likelihood of the text reviews. controlsthe weight between predicting accurate ratings and predicting accurate reviews. Our training followsthe subspace descent strategy in Wu et al. (2016a). 
That is, while the review generative model isupdated in every iteration, the user-state and movie-state RNNs are updated in an alternating way.5Under review as a conference paper at ICLR 2017Data # users # items # ratings # characters(reviews)IMDbTrain Jul 98 - Dec 126,127 8,002402.3k 690.6MTest Jan 13 - Sep 13 11.0k 21.6MNetflix 6 monthsTrain Jun - Nov 11311.3k 17.7k13.7M -Test Dec 11 2.1M -Table 1: IMDb dataset comprises reviews and ratings collected from July 1998 to September 2013.Netflix 6 months data is a subset of original Netflix prize dataset that is split based on time.The gradients are calculated with standard backpropagation. Furthermore, we pre-warm train thereview LSTM over the review text excluding the auxiliary input from the user and movie states. It isundesirable if the review likelihood overwhelms the rating. We hence normalize review likelihoodby the number of characters in a review so that it does not dominates the rating likelihood. Thistechnique is common in NLP literature (Wang & McCallum, 2006).4 E XPERIMENTSIn this section we empirically demonstrate the ability of our model to accurately predict both ratingsand reviews, and capture temporal dynamics.4.1 E XPERIMENTAL SETUPIn the following experiments, we select hyperparameters, optimization parameters and model ar-chitecture by cross-validation. The details are as follows. We use 1-layer LSTM recurrent neuralnetworks with 40 hidden factors for user/movie state transitions. The input of this LSTM is anuser/item embedding of dimension 40. Stationary and dynamic factors are 160 and 40-dimensionalrespectively. A 2-layer LSTM network is used to model texts, which takes 30-dimensional characterembeddingxchar, 40-dimensional state vector xjoint, and a 50-dimensional movie embedding xmovie .To speed up convergence, we initialize the text model by a character-level RNN pre-trained withoutconsidering rating. Stationary factors are initialized by a pre-trained iAutoRec (Sedhain et al., 2015)model based on the last layer. We initialize all the other parameters from uniform distribution between[a;a]witha=p1:5(fin+fout), wherefinandfoutare fan-in and fan-out of transition matrices.`2regularization with magnitude 0:001is applied to all parameters. Dropout with a 0:5rate is appliedafter all fully-connected layers. To prevent exploding gradients in of LSTM, gradients are clipped to[15;15]. ADAM (Kingma & Ba, 2014) with learning rate 0:0015 is used for optimization.k (number of ratings)100105# users with k ratings100102104106(a) User distribution.k (number of ratings)100102104# movies with k ratings100105 (b) Movie distribution.Review length k (characters)100105# reviews of length k100101102103104 (c) Review length distribution.Figure 3: Characteristics of IMDb dataset.Data Here we focus on movie recommendations, where the opinions are highly dynamic. Weevaluate our model on IMDb dataset, first used in Diao et al. (2014), that is the only large-scale movie6Under review as a conference paper at ICLR 2017PMF Time-SVD++ U-AutoRec I-AutoRecRRN RRN(rating) (rating + text)IMDb 1.7355 1.7348 1.7332 1.7135 1.7047 1.7012Netflix 6 months 0.9584 0.9589 0.9836 0.9778 0.9427 -Table 2: RRN outperforms competing models in terms of RMSE. In addition, jointly modeling ratingsand reviews achieves even better accuracy.review dataset available. Restaurant recommendations (e.g. 
Yelp) could be also a suitable domain,but full rating history is not available in publicly available datasets1.The IMDb dataset contains full review and rating history of all users and all movies from 1998to 2013. The characteristics of this dataset is shown in Figure 3. We see that the user and movieratings follow heavy tail distributions, and thus the majority of users and movies have very fewreviews, making accurate recommendation challenging for these users and movies. Review length issummarized in Figure 3 (c). Since one of the major goal of this project is to study temporal dynamics,we focus on users and items that have multiple interactions with the system. Specifically, we select asubset of k-core of the graph with k= 15 . That is, each user and movie has at least 15 ratings in thissubset. Note that the resulting subgraph is still very sparse – with only 0:8%density, which is sparserthan for example, 1.2 % density of Netflix dataset . For completeness, we also include the 6-monthNetflix dataset as used in Wu et al. (2016a), which has only ratings, to study RRN’s ability to modeltemporal patterns.The dataset is split by date instead of random sampling to simulate the real recommendation settingswhere we need to predict into the future instead of interpolating the past. IMDb training set containsall ratings from July 1998 to December 2012, and the ratings from January to September 2013 arerandomly split into a validation set and a test set. Similarly, the 6-month Netflix dataset is split intoJanuary to November 2011 (training) and December 2011 (testing and validation). We report theresults on testing set with the model that gives the best results on validation set. The summary of thisdataset is given in Table 1.Baselines We compare our model with models including the state-of-the-art temporal model, and astate-of-the-art neural network-based model.PMF (Mnih & Salakhutdinov, 2007): Our model extends matrix factorization by includinga dynamic part and a joint review model. Comparing to PMF directly shows us the advantageof our approaches. LIBPMF (Yu et al., 2012) is used in experiments.Time-SVD++ (Koren, 2010): Time-SVD++ is the state-of-the-art model for temporaleffects. It achieves excellent performance in Netflix contest. Implementation in GraphChi(Kyrola et al., 2012) is used in experiments.AutoRec (Sedhain et al., 2015): AutoRec is the state-of-the-art neural network recom-mender system. It learns an autoencoder that encodes user (item) histories into a low-dimensional space and then predict ratings by decoding. No temporal effects or causalityare considered in this model. We use the software the authors provide in experiments.All models use comparable number of factor sizes. Parameters of PMF and Time-SVD++ are selectedby grid-search. Settings of AutoRec follow the original paper. We also include the performance ofrating-only RRN, as in Wu et al. (2016a), to separate the benefits obtained from temporal modelingand review texts.4.2 R ATING PREDICTIONOne important goal of recommender systems is making accurate rating predictions. Here we evaluatethe accuracy by root-mean-square error (RMSE) of prediction from the true rating. The resultsare summarized in Table 2. For completeness, we include the results from Wu et al. (2016a) on1https://www.yelp.com/dataset_challenge7Under review as a conference paper at ICLR 20176-month Netflix dataset that use ratings only to compare the behavior of different models on differentdatasets. 
We see that rating-only RRN consistently outperforms all baseline models in terms of rating prediction on both datasets. More importantly, jointly modeling ratings and reviews boosts the performance even further, compared to rating-only RRN. This implies that by sharing statistical strength between ratings and reviews, the rich information in reviews helps us estimate the latent factors better. Note that while the absolute improvements in RMSE might not appear to be huge, the 1.98% improvement over PMF is actually considerable in terms of recommendations. (For example, in 2009 SVD++ outperformed SVD by 1.09% and Time-SVD++ outperformed SVD++ by 1.25%, and these were considered important progress in recommender systems.) We also see that while Time-SVD++ performed well in the Netflix contest, it does not work as well for predicting future ratings. After all, the goal of Time-SVD++ is estimating the temporal bias in hindsight instead of extrapolating into future states.

4.3 TEXT MODELING

Here we examine the impact of conditioning on user and item states for text modeling. Towards this end, we compare the perplexity of characters in the test set with and without using the user/item factors. Perplexity is defined as

ppx(D_test) = exp( -(1/N_c) * Σ_{c ∈ D_test} log Pr(c) ),

where N_c is the total number of characters in D_test, and Pr(c) is the likelihood of character c. Interestingly, we found that by jointly training with user and item states, the perplexity improves from 3.3442 to 3.3362.

4.4 TEMPORAL DYNAMICS

Here we study whether RRN is able to automatically capture the overall rating trends in IMDb by adaptively updating states along the history sequence. Specifically, at each time step, we randomly sample up to 1000 users, and see what ratings the users would have given to each of the movies given their states at that time step, even if in reality the user might not have rated the movie. This gives us an unbiased estimate of the average behavior of our model on each of the ratings. Figure 4 shows the average predicted ratings in this setting and the true average rating in the data set. We see that RRN clearly captures the overall trend in IMDb smoothly.

Figure 4: RRN is able to capture the overall trend of the data. (a) shows the average ratings of all movies on IMDb over time (1999-2011); in (b) we see that the predicted ratings are consistent with this trend.

5 DISCUSSION & CONCLUSION

We present a novel approach that jointly models ratings, reviews, and their temporal dynamics with RRN. The contributions we have provided are as follows:

1. Joint rating-review modeling: We offer an LSTM-based joint rating-review model that provides advantages in both rating prediction and text modeling.
2. Nonparametric dynamic review modeling: RRN is based on an autoregressive method to model the temporal dynamics of users and movies, allowing us to capture how reviews change over time.
3. Empirical results: We demonstrate that our joint model offers state-of-the-art results on rating prediction in real recommendation settings, i.e., predicting into the future.
<|im_end|> <|im_start|>assistant ### Review Title ### Review Text This paper proposed a joint model for rating prediction and text generation.
The authors compared the methods in a more realistic time-based split setting, which requires "predicting into the future." One major flaw of the paper is that it does not address the impact of BOW vs. the RNN-based text model: specifically, RRN (rating + text) already uses an RNN for text modeling, so it is unclear whether the improvement comes from the RNN (as opposed to BOW) or from the use of text information. A clearer study of the impact of each component would make this more clear and benefit the readers. Another potential improvement direction for the paper is to support ranking objectives, as opposed to rating prediction, which is more realistic for recommendation settings. The overall technique is intuitive and novel, but could be improved to give more insight to the reader. ### Review Rating 6: Marginally above acceptance threshold ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
ryiAv2xAZ
ICLR.cc/2018/Conference
2018
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
["Kimin Lee", "Honglak Lee", "Kibok Lee", "Jinwoo Shin"]
The problem of detecting whether a test sample is from in-distribution (i.e., the training distribution of a classifier) or from an out-of-distribution sufficiently different from it arises in many real-world machine learning applications. However, state-of-the-art deep neural networks are known to be highly overconfident in their predictions, i.e., they do not distinguish in- and out-of-distributions. Recently, to handle this issue, several threshold-based detectors have been proposed given pre-trained neural classifiers. However, the performance of these prior works depends highly on how the classifiers are trained, since they focus only on improving inference procedures. In this paper, we develop a novel training method for classifiers so that such inference algorithms can work better. In particular, we suggest two additional terms added to the original loss (e.g., cross entropy). The first one forces the classifier to be less confident on samples from out-of-distribution, and the second one (implicitly) generates the most effective training samples for the first one. In essence, our method jointly trains both classification and generative neural networks for out-of-distribution. We demonstrate its effectiveness using deep convolutional neural networks on various popular image datasets.
["classifiers", "samples", "classifier", "problem", "test sample", "distribution", "different", "many", "machine learning applications", "deep neural networks"]
ABSTRACT

The problem of detecting whether a test sample is from in-distribution (i.e., the training distribution of a classifier) or from an out-of-distribution sufficiently different from it arises in many real-world machine learning applications. However, state-of-the-art deep neural networks are known to be highly overconfident in their predictions, i.e., they do not distinguish in- and out-of-distributions. Recently, to handle this issue, several threshold-based detectors have been proposed given pre-trained neural classifiers. However, the performance of these prior works depends highly on how the classifiers are trained, since they focus only on improving inference procedures. In this paper, we develop a novel training method for classifiers so that such inference algorithms can work better. In particular, we suggest two additional terms added to the original loss (e.g., cross entropy). The first one forces the classifier to be less confident on samples from out-of-distribution, and the second one (implicitly) generates the most effective training samples for the first one. In essence, our method jointly trains both classification and generative neural networks for out-of-distribution. We demonstrate its effectiveness using deep convolutional neural networks on various popular image datasets.

1 INTRODUCTION

Deep neural networks (DNNs) have demonstrated state-of-the-art performance on many classification tasks, e.g., speech recognition (Hannun et al., 2014), image classification (Girshick, 2015), video prediction (Villegas et al., 2017) and medical diagnosis (Caruana et al., 2015). Even though DNNs achieve high accuracy, it has been observed (Lakshminarayanan et al., 2017; Guo et al., 2017) that they are typically overconfident in their predictions. For example, DNNs trained to classify MNIST images often produce a highly confident probability of 91% even for random noise (see the work of Hendrycks & Gimpel, 2016). Since evaluating the quality of their predictive uncertainty is hard, deploying them in real-world systems raises serious concerns in AI safety (Amodei et al., 2016); e.g., one can easily break a secure authentication system that can be unlocked by detecting the gaze and iris of eyes using DNNs (Shrivastava et al., 2017).

The overconfidence issue of DNNs is highly related to the problem of detecting out-of-distribution: detect whether a test sample is from in-distribution (i.e., the training distribution of a classifier) or from an out-of-distribution sufficiently different from it. Formally, it can be formulated as a binary classification problem. Let an input x ∈ X and a label y ∈ Y = {1, ..., K} be random variables that follow a joint data distribution P_in(x, y) = P_in(y|x) P_in(x). We assume that a classifier P_θ(y|x) is trained on a dataset drawn from P_in(x, y), where θ denotes the model parameter. We let P_out(x) denote an out-of-distribution which is 'far away' from the in-distribution P_in(x). Our problem of interest is determining whether an input x is from P_in or P_out, possibly utilizing a well-calibrated classifier P_θ(y|x). In other words, we aim to build a detector, g(x): X → {0, 1}, which assigns label 1 if the data is from in-distribution, and label 0 otherwise.

There have been recent efforts toward developing efficient detection methods, which have mostly studied simple threshold-based detectors (Hendrycks & Gimpel, 2016; Liang et al., 2017) utilizing a pre-trained classifier. For each input x, such a detector measures some confidence score q(x) based on a pre-trained classifier, and compares the score to some threshold δ > 0.
Then, the detector assigns label 1 if the confidence score q(x) is above δ, and label 0 otherwise. Specifically, Hendrycks & Gimpel (2016) defined the confidence score as the maximum value of the predictive distribution, and Liang et al. (2017) further improved the performance by using temperature scaling (Guo et al., 2017) and adding small controlled perturbations to the input data. Although such inference methods are computationally simple, their performance depends highly on the pre-trained classifier. Namely, they fail to work if the classifier does not separate the maximum value of the predictive distribution well enough with respect to P_in and P_out. Ideally, a classifier should be trained to separate all class-dependent in-distributions as well as the out-of-distribution in the output space. As another line of research, Bayesian probabilistic models (Li & Gal, 2017; Louizos & Welling, 2017) and ensembles of classifiers (Lakshminarayanan et al., 2017) have also been investigated. However, training or inferring with those models is computationally more expensive. This motivates our approach of developing a new training method for the simpler, more practical classifiers. Our direction is orthogonal to the Bayesian and ensemble approaches, and one can also combine them for even better performance.

Contribution. In this paper, we develop such a training method for detecting out-of-distribution P_out better without losing the original classification accuracy. First, we consider a new loss function, called the confidence loss. Our key idea for the proposed loss is to additionally minimize the Kullback-Leibler (KL) divergence from the predictive distribution on out-of-distribution samples to the uniform one, in order to give less confident predictions on them. Then, in- and out-of-distributions are expected to be more separable. However, optimizing the confidence loss requires training samples from out-of-distribution, which are often hard to sample: a priori knowledge of the out-of-distribution is not available, or its underlying space is too huge to cover. To handle this issue, we consider a new generative adversarial network (GAN) (Goodfellow et al., 2014) for generating the most effective samples from P_out. Unlike the original GAN, the proposed GAN generates 'boundary' samples in the low-density area of P_in. Finally, we design a joint training scheme minimizing the classifier's loss and the new GAN loss alternately, i.e., the confident classifier improves the GAN, and vice versa, as training proceeds. Here, we emphasize that the proposed GAN does not need to generate explicit samples under our scheme; instead, it implicitly encourages training a more confident classifier.

We demonstrate the effectiveness of the proposed method using deep convolutional neural networks such as AlexNet (Krizhevsky, 2014) and VGGNet (Szegedy et al., 2015) for image classification tasks on the CIFAR (Krizhevsky & Hinton, 2009), SVHN (Netzer et al., 2011), ImageNet (Deng et al., 2009), and LSUN (Yu et al., 2015) datasets. The classifier trained by our proposed method drastically improves the detection performance of all threshold-based detectors (Hendrycks & Gimpel, 2016; Liang et al., 2017) in all experiments. In particular, VGGNet with 13 layers trained by our method improves the true negative rate (TNR), i.e., the fraction of detected out-of-distribution (LSUN) samples, compared to the baseline: 14.0% → 39.1% and 46.3% → 98.9% on CIFAR-10 and SVHN, respectively, when 95% of in-distribution samples are correctly detected. We also provide visual understandings of the proposed method using the image datasets. We believe that our method can be a strong guideline when other researchers pursue these tasks in the future.
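As a concrete illustration of the threshold-based baseline just described, the following is a minimal PyTorch sketch; it assumes a trained classifier that returns logits, and the names (`max_softmax_score`, `net`) and the threshold value are hypothetical:

```python
import torch
import torch.nn.functional as F

def max_softmax_score(logits):
    """Confidence score q(x): maximum of the predictive distribution."""
    return F.softmax(logits, dim=1).max(dim=1).values

def detect_in_distribution(logits, delta):
    """Label 1 (in-distribution) if q(x) > delta, else 0."""
    return (max_softmax_score(logits) > delta).long()

# Hypothetical usage with a trained classifier `net` and an input batch `x`:
# labels = detect_in_distribution(net(x), delta=0.9)
```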
2 TRAINING CONFIDENT NEURAL CLASSIFIERS

In this section, we propose a novel training method for classifiers in order to improve the performance of prior threshold-based detectors (Hendrycks & Gimpel, 2016; Liang et al., 2017) (see Appendix A for more details). Our motivation is that such inference algorithms can work better if the classifiers are trained so that they map the samples from in- and out-of-distributions into the output space separately. Namely, we primarily focus on training an improved classifier, and then use prior detectors under the trained model to measure its performance.

2.1 CONFIDENT CLASSIFIER FOR OUT-OF-DISTRIBUTION

Without loss of generality, suppose that the cross entropy loss is used for training. Then, we propose the following new loss function, termed the confidence loss:

min_θ  E_{P_in(x̂, ŷ)}[ −log P_θ(y = ŷ | x̂) ] + β E_{P_out(x)}[ KL( U(y) ‖ P_θ(y|x) ) ],    (1)

where KL denotes the Kullback-Leibler (KL) divergence, U(y) is the uniform distribution, and β > 0 is a penalty parameter. It is highly intuitive, as the new loss forces the predictive distribution on out-of-distribution samples to be closer to the uniform one, i.e., zero confidence, while that for samples from in-distribution still follows the label-dependent probability. In other words, the proposed loss is designed to assign higher maximum prediction values, i.e., max_y P_θ(y|x), to in-distribution samples than to out-of-distribution ones. Here, a caveat is that adding the KL divergence term might degrade the classification performance. However, we found that this is not the case, due to the high expressive power of deep neural networks; meanwhile, in- and out-of-distributions become more separable with respect to the maximum prediction value by optimizing the confidence loss (see Section 3.1 for supporting experimental results).

Figure 1: Illustrating the behavior of the classifier under different out-of-distribution training datasets. We generate the out-of-distribution samples from (a) the 2D box [−50, 50]^2, and show (b) the corresponding decision boundary of the classifier. We also generate the out-of-distribution samples from (c) the 2D box [−20, 20]^2, and show (d) the corresponding decision boundary of the classifier. (Panels color-code the maximum prediction value into the bins [0, 0.2), [0.2, 0.8), and [0.8, 1).)

We remark that minimizing a similar KL loss was studied recently for different purposes (Lee et al., 2017; Pereyra et al., 2017). Training samples for minimizing the KL divergence term are explicitly given in their settings, while in ours they might not be. Ideally, one would have to sample all (almost infinitely many) types of out-of-distribution to minimize the KL term in (1), or require some prior information on the testing out-of-distribution for efficient sampling. However, this is often infeasible and fragile. To address the issue, we suggest sampling out-of-distribution close to in-distribution, which could be more effective in improving the detection performance, without any assumption on the testing out-of-distribution.
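For reference, a minimal PyTorch sketch of the confidence loss in (1) is given below, writing KL(U(y) ‖ P_θ(y|x)) in closed form; the function name and the default β are illustrative assumptions, not from the paper:

```python
import math
import torch
import torch.nn.functional as F

def confidence_loss(logits_in, labels_in, logits_out, beta=1.0):
    """Eq. (1): cross entropy on in-distribution samples plus
    beta * KL(U(y) || P(y|x)) on out-of-distribution samples."""
    num_classes = logits_in.size(1)
    ce = F.cross_entropy(logits_in, labels_in)
    log_probs_out = F.log_softmax(logits_out, dim=1)
    # KL(U || P) = -log K - (1/K) * sum_y log P(y|x), per sample.
    kl_uniform = -math.log(num_classes) - log_probs_out.mean(dim=1)
    return ce + beta * kl_uniform.mean()
```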
In order to explain our intuition in detail, we consider a binary classification task on a simple example, where each class's data is drawn from a Gaussian distribution and the entire data space is bounded by the 2D box [−50, 50]^2 for visualization. We apply the confidence loss to simple fully-connected neural networks (2 hidden layers and 500 hidden units for each layer) using different types of out-of-distribution training samples. First, as shown in Figure 1(a), we construct an out-of-distribution training dataset of 100 (green) points using rejection sampling on the entire data space [−50, 50]^2. Figure 1(b) shows the decision boundary of the classifier optimizing the confidence loss on the corresponding dataset. One can observe that the classifier still shows overconfident predictions (red and blue regions) near the labeled in-distribution region. On the other hand, if we construct a training out-of-distribution dataset of 100 points from [−20, 20]^2, i.e., closer to the target in-distribution space (see Figure 1(c)), the classifier produces confident predictions only on the labeled region and zero confidence on the remainder of the entire data space [−50, 50]^2, as shown in Figure 1(d). If one increases the number of training out-of-distribution samples generated from the entire space, i.e., [−50, 50]^2, Figure 1(b) is expected to become similar to Figure 1(d). In other words, one needs more samples in order to train a confident classifier if samples are generated from the entire space. However, this might be impossible and inefficient, since the number of out-of-distribution training samples needed to cover the entire, huge actual data space might be almost infinite. This implies that training out-of-distribution samples near the in-distribution region could be more effective in improving the detection performance. Our underlying intuition is that the effect at the boundary of the in-distribution region might propagate to the entire out-of-distribution space. Our experimental results in Section 3.1 also support this: realistic images are more useful as training out-of-distribution than synthetic datasets (e.g., Gaussian noise) for improving the detection performance when we consider an image classification task. This motivates us to develop a new generative adversarial network (GAN) for generating such effective out-of-distribution samples.

2.2 ADVERSARIAL GENERATOR FOR OUT-OF-DISTRIBUTION

In this section, we introduce a new training method for learning a generator of out-of-distribution, inspired by the generative adversarial network (GAN) (Goodfellow et al., 2014). We will first assume that the classifier for in-distribution is fixed, and describe the joint learning framework in the next section.

The GAN framework consists of two main components: a discriminator D and a generator G. The generator maps a latent variable z from a prior distribution P_pri(z) to generated outputs G(z), and the discriminator D: X → [0, 1] represents the probability that a sample x is from the target distribution. Suppose that we want to recover the in-distribution P_in(x) using the generator G. Then, one can optimize the following min-max objective for forcing P_G ≈ P_in:

min_G max_D  E_{P_in(x)}[ log D(x) ] + E_{P_pri(z)}[ log(1 − D(G(z))) ].    (2)

However, unlike the original GAN, we want the generator to recover an effective out-of-distribution P_out instead of P_in. To this end, we propose the following new GAN loss:

min_G max_D  β E_{P_G(x)}[ KL( U(y) ‖ P_θ(y|x) ) ]  (a)
            + E_{P_in(x)}[ log D(x) ] + E_{P_G(x)}[ log(1 − D(x)) ]  (b),    (3)

where θ is the model parameter of a classifier trained on in-distribution. The above objective can be interpreted as follows: the first term (a) corresponds to replacing the out-of-distribution P_out in (1)'s KL loss with the generator distribution P_G.
One can note that this forces the generator to generate low-density samples, since it can be interpreted as minimizing the log-likelihood of the generated samples under the in-distribution, as approximated by the classifier via P_in(x) ∝ exp( KL( U(y) ‖ P_θ(y|x) ) ). We remark that this approximation is also closely related to the inception score (Salimans et al., 2016), which is popularly used as a quantitative measure of the visual fidelity of samples. The second term (b) corresponds to the original GAN loss, since we would like to have out-of-distribution samples close to in-distribution, as mentioned in Section 2.1. Suppose that the model parameter θ of the classifier is set appropriately such that the classifier produces the uniform distribution for out-of-distribution samples. Then, the KL divergence term (a) in (3) is approximately 0 no matter what out-of-distribution samples are generated. However, if the samples are far away from the boundary, the GAN loss (b) in (3) will be high; i.e., the GAN loss forces the samples to be not too far from the in-distribution space. Therefore, one can expect the proposed loss to encourage the generator to produce samples that lie on the low-density boundary of the in-distribution space. We also provide experimental evidence for this in Section 3.2.

We also remark that Dai et al. (2017) consider a similar GAN generating samples from out-of-distribution for the purpose of semi-supervised learning. The authors assume the existence of a pre-trained density estimation model, such as PixelCNN++ (Salimans et al., 2017), for in-distribution, but such a model might not exist and can be expensive to train in general. Instead, we use much simpler confident classifiers for approximating the density. Hence, under our fully-supervised setting, our GAN is much easier to train and more suitable.

2.3 JOINT TRAINING METHOD OF CONFIDENT CLASSIFIER AND ADVERSARIAL GENERATOR

In the previous section, we suggested training the proposed GAN using a pre-trained confident classifier. We note that the converse is also possible; i.e., the motivation for having such a GAN is to train a better classifier. Hence, the two models can be used to improve each other. This naturally suggests a joint training scheme where the confident classifier improves the proposed GAN, and vice versa, as training proceeds. Specifically, we suggest the following joint objective function:

min_G max_D min_θ  E_{P_in(x̂, ŷ)}[ −log P_θ(y = ŷ | x̂) ]  (c)
                  + β E_{P_G(x)}[ KL( U(y) ‖ P_θ(y|x) ) ]  (d)
                  + E_{P_in(x̂)}[ log D(x̂) ] + E_{P_G(x)}[ log(1 − D(x)) ]  (e).    (4)

The classifier's confidence loss corresponds to (c) + (d), and the proposed GAN loss corresponds to (d) + (e); i.e., they share the KL divergence term (d) under joint training. To optimize the above objective efficiently, we propose an alternating algorithm, which alternately optimizes the model parameters θ of the classifier and the GAN models {G, D}, as shown in Algorithm 1. Since the algorithm monotonically decreases the objective function, it is guaranteed to converge.
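A minimal PyTorch sketch of one round of this alternating scheme follows; Algorithm 1, given next, states the same updates as stochastic gradients. The sketch assumes `cls`, `gen`, and `disc` are the classifier, generator, and discriminator networks (with `disc` outputting probabilities in [0, 1]); all names and interfaces are illustrative assumptions, not the authors' implementation:

```python
import math
import torch
import torch.nn.functional as F

def kl_to_uniform(logits):
    # KL(U(y) || P_theta(y|x)) in closed form, per sample.
    return -math.log(logits.size(1)) - F.log_softmax(logits, dim=1).mean(dim=1)

def joint_training_step(cls, gen, disc, opt_cls, opt_gen, opt_disc,
                        x_in, y_in, z, beta=1.0):
    """One round of the alternating updates (a sketch)."""
    eps = 1e-7  # clamp to keep the logs finite

    # 1) Discriminator: ascend log D(x) + log(1 - D(G(z))).
    opt_disc.zero_grad()
    d_loss = -(torch.log(disc(x_in).clamp_min(eps)).mean()
               + torch.log((1 - disc(gen(z).detach())).clamp_min(eps)).mean())
    d_loss.backward()
    opt_disc.step()

    # 2) Generator: descend log(1 - D(G(z))) + beta * KL(U || P(y|G(z))).
    opt_gen.zero_grad()
    x_fake = gen(z)
    g_loss = (torch.log((1 - disc(x_fake)).clamp_min(eps)).mean()
              + beta * kl_to_uniform(cls(x_fake)).mean())
    g_loss.backward()
    opt_gen.step()

    # 3) Classifier: descend cross entropy + beta * KL term on generated samples.
    opt_cls.zero_grad()
    c_loss = (F.cross_entropy(cls(x_in), y_in)
              + beta * kl_to_uniform(cls(gen(z).detach())).mean())
    c_loss.backward()
    opt_cls.step()
```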
Algorithm 1: Alternating minimization for detecting and generating out-of-distribution.

repeat
  /* Update proposed GAN */
  Sample {z_1, ..., z_M} and {x_1, ..., x_M} from the prior P_pri(z) and the in-distribution P_in(x), respectively, and update the discriminator D by ascending its stochastic gradient of
    (1/M) Σ_{i=1..M} [ log D(x_i) + log(1 − D(G(z_i))) ].
  Sample {z_1, ..., z_M} from the prior P_pri(z), and update the generator G by descending its stochastic gradient of
    (1/M) Σ_{i=1..M} [ log(1 − D(G(z_i))) ] + (β/M) Σ_{i=1..M} [ KL( U(y) ‖ P_θ(y | G(z_i)) ) ].
  /* Update confident classifier */
  Sample {z_1, ..., z_M} and {(x_1, y_1), ..., (x_M, y_M)} from the prior P_pri(z) and the in-distribution P_in(x, y), respectively, and update the classifier by descending its stochastic gradient of
    (1/M) Σ_{i=1..M} [ −log P_θ(y = y_i | x_i) + β KL( U(y) ‖ P_θ(y | G(z_i)) ) ].
until convergence

3 EXPERIMENTAL RESULTS

We demonstrate the effectiveness of our proposed method using various datasets: CIFAR (Krizhevsky & Hinton, 2009), SVHN (Netzer et al., 2011), ImageNet (Deng et al., 2009), LSUN (Yu et al., 2015), and a synthetic (Gaussian) noise distribution. We train convolutional neural networks (CNNs), including VGGNet (Szegedy et al., 2015) and AlexNet (Krizhevsky, 2014), for classifying the CIFAR-10 and SVHN datasets. The corresponding test dataset is used as the in-distribution (positive) samples to measure the performance. We use realistic images and synthetic noise as the out-of-distribution (negative) samples. For evaluation, we measure the following metrics using the threshold-based detectors (Hendrycks & Gimpel, 2016; Liang et al., 2017): the true negative rate (TNR) at 95% true positive rate (TPR), the area under the receiver operating characteristic curve (AUROC), the area under the precision-recall curve (AUPR), and the detection accuracy, where larger values of all metrics indicate better detection performance. Due to space limitations, more details about the datasets, metrics, and network architectures are given in Appendix B.

Table 1: Performance of the baseline detector (Hendrycks & Gimpel, 2016) using VGGNet. All values are percentages, reported as "cross entropy loss / confidence loss" (the better of the two is shown in boldface in the original table). For each in-distribution, we minimize the KL divergence term in (1) using training samples from the out-of-distribution dataset denoted by "seen"; the other "unseen" out-of-distributions were only used for testing. (Our code is available at https://github.com/alinlab/Confident_classifier.)

In-dist: SVHN (classification accuracy 93.82 / 94.23)
  Out-of-dist           | TNR at TPR 95% | AUROC        | Detection acc. | AUPR-in      | AUPR-out
  CIFAR-10 (seen)       | 47.4 / 99.9    | 62.6 / 99.9  | 78.6 / 99.9    | 71.6 / 99.9  | 91.2 / 99.4
  TinyImageNet (unseen) | 49.0 / 100.0   | 64.6 / 100.0 | 79.6 / 100.0   | 72.7 / 100.0 | 91.6 / 99.4
  LSUN (unseen)         | 46.3 / 100.0   | 61.8 / 100.0 | 78.2 / 100.0   | 71.1 / 100.0 | 90.8 / 99.4
  Gaussian (unseen)     | 56.1 / 100.0   | 72.0 / 100.0 | 83.4 / 100.0   | 77.2 / 100.0 | 92.8 / 99.4

In-dist: CIFAR-10 (classification accuracy 80.14 / 80.56)
  Out-of-dist           | TNR at TPR 95% | AUROC        | Detection acc. | AUPR-in      | AUPR-out
  SVHN (seen)           | 13.7 / 99.8    | 46.6 / 99.9  | 66.6 / 99.8    | 61.4 / 99.9  | 73.5 / 99.8
  TinyImageNet (unseen) | 13.6 / 9.9     | 39.6 / 31.8  | 62.6 / 58.6    | 58.3 / 55.3  | 71.0 / 66.1
  LSUN (unseen)         | 14.0 / 10.5    | 40.7 / 34.8  | 63.2 / 60.2    | 58.7 / 56.4  | 71.5 / 68.0
  Gaussian (unseen)     | 2.8 / 3.3      | 10.2 / 14.1  | 50.0 / 50.0    | 48.1 / 49.4  | 39.9 / 47.0
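The two headline detection metrics reported above can be computed from detector scores with scikit-learn; the following is a minimal sketch (the function name and interface are assumptions, not from the paper):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def ood_metrics(scores_in, scores_out):
    """TNR at 95% TPR and AUROC, treating in-distribution as the positive
    class. `scores_in` / `scores_out` hold detector scores q(x); higher
    means 'more in-distribution'."""
    scores_in = np.asarray(scores_in, dtype=float)
    scores_out = np.asarray(scores_out, dtype=float)
    y_true = np.concatenate([np.ones(len(scores_in)), np.zeros(len(scores_out))])
    y_score = np.concatenate([scores_in, scores_out])
    fpr, tpr, _ = roc_curve(y_true, y_score)
    # First operating point whose TPR reaches 0.95; TNR = 1 - FPR there.
    tnr_at_tpr95 = 1.0 - fpr[np.searchsorted(tpr, 0.95)]
    return tnr_at_tpr95, roc_auc_score(y_true, y_score)
```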
Figure 2: For all experiments in (a), (b), and (c), we commonly use the SVHN dataset as in-distribution. Fraction of the maximum prediction value in softmax scores trained by (a) cross entropy loss and (b) confidence loss: the x-axis and y-axis represent the maximum prediction value and the fraction of images receiving the corresponding score, respectively. The receiver operating characteristic (ROC) curves under different losses are reported in (c): the red curve corresponds to the ROC curve of a model trained by optimizing the naive cross entropy loss, whereas the other ones correspond to the ROC curves of models trained by optimizing the confidence loss. The KL divergence term in the confidence loss is optimized using explicit out-of-distribution datasets indicated in the parentheses; e.g., confidence loss (LSUN) means that we use the LSUN dataset for optimizing the KL divergence term.

3.1 EFFECTS OF CONFIDENCE LOSS

We first verify the effect of the confidence loss in (1) trained with some explicit, i.e., seen, out-of-distribution datasets. First, we compare the quality of the confidence levels obtained by applying various training losses. Specifically, the softmax classifier is used, and simple CNNs (two convolutional layers followed by three fully-connected layers) are trained by minimizing the standard cross entropy loss on the SVHN dataset. We also apply the confidence loss to the models by additionally optimizing the KL divergence term using the CIFAR-10 dataset (as training out-of-distribution). In Figures 2(a) and 2(b), we report the distributions of the maximum prediction value in softmax scores to evaluate the separation quality between the in-distribution (i.e., SVHN) and out-of-distributions. It is clear that there exists a better separation between the SVHN test set (red bar) and the other ones when the model is trained by the confidence loss. Here, we emphasize that the maximum prediction value is also low even on untrained (unseen) out-of-distributions, e.g., TinyImageNet, LSUN, and synthetic datasets. Therefore, it is expected that one can distinguish in- and out-of-distributions more easily when a classifier is trained by optimizing the confidence loss. To verify this, we obtain the ROC curve using the baseline detector (Hendrycks & Gimpel, 2016), which computes the maximum value of the predictive distribution on a test sample and classifies it as positive (i.e., in-distribution) if the confidence score is above some threshold. Figure 2(c) shows the ROC curves when we optimize the KL divergence term on various datasets.
One can observe that realistic images such as TinyImageNet (aqua line) and LSUN (green line) are more useful than synthetic datasets (orange line) for improving the detection performance. This supports our intuition, discussed in Section 2.1, that out-of-distribution samples close to the in-distribution could be more effective in improving the detection performance.

We then evaluate the performance of the baseline detector for out-of-distribution using large-scale CNNs, i.e., VGGNets with 13 layers, under various training scenarios; more results on AlexNet and the ODIN detector (Liang et al., 2017) can be found in Appendix C (the overall trends of the results are similar). For optimizing the confidence loss in (1), the SVHN and CIFAR-10 training datasets are used for optimizing the KL divergence term when the in-distribution is CIFAR-10 and SVHN, respectively. Table 1 shows the detection performance for each in- and out-of-distribution pair. When the in-distribution is SVHN, the classifier trained by our method drastically improves the detection performance across all out-of-distributions without hurting its original classification performance. However, when the in-distribution is CIFAR-10, the confidence loss does not improve the detection performance overall; we expect that this is because the trained/seen SVHN out-of-distribution does not effectively cover all tested out-of-distributions. Our joint confidence loss in (4), which was designed under this intuition, resolves the issue in the CIFAR-10 (in-distribution) classification case in Table 1 (see Figure 4(b)).

Figure 3: The generated samples from the original GAN ((a)/(c)) and the proposed GAN ((b)/(d)). In (a)/(b), the grey area is the 2D histogram of training in-distribution samples drawn from a mixture of two Gaussian distributions, and red points indicate samples generated by the GANs.

Figure 4: Performance of the baseline detector (Hendrycks & Gimpel, 2016) under various training losses (cross entropy loss; confidence loss with samples from the original GAN; joint confidence loss; and confidence loss with an explicit out-of-distribution dataset), reporting TNR at TPR 95%, AUROC, and detection accuracy per out-of-distribution, with (a) SVHN and (b) CIFAR-10 as in-distribution. For models trained by the confidence loss, the KL divergence term is optimized using samples indicated in the parentheses. For fair comparison, we only plot the performance for unseen out-of-distributions; the performance for seen out-of-distributions (used for minimizing the KL divergence term in (1)) can be found in Table 1.

3.2 EFFECTS OF ADVERSARIAL GENERATOR AND JOINT CONFIDENCE LOSS

In this section, we verify the effect of the proposed GAN of Section 2.2 and evaluate the detection performance of the joint confidence loss in (4).
To verify that the proposed GAN can produce samples near the low-density boundary of the in-distribution space, we first compare the samples generated by the original GAN and the proposed GAN on a simple example where the target distribution is a mixture of two Gaussian distributions. For both the generator and the discriminator, we use fully-connected neural networks with 2 hidden layers. For our method, we use a pre-trained classifier that minimizes the cross entropy on target distribution samples and the KL divergence on out-of-distribution samples generated by rejection sampling on a bounded 2D box. As shown in Figure 3(a), the samples of the original GAN cover the high-density area of the target distribution, while those of the proposed GAN cover its boundary (see Figure 3(b)). We also compare the generated samples of the original and proposed GANs on the MNIST dataset (LeCun et al., 1998), which consists of handwritten digits. For this experiment, we use deep convolutional GANs (DCGANs) (Radford et al., 2015). In this case, we use a pre-trained classifier that minimizes the cross entropy on MNIST training samples and the KL divergence on synthetic Gaussian noise. As shown in Figures 3(c) and 3(d), the samples of the original GAN look more like digits than those of the proposed GAN. Somewhat interestingly, the proposed GAN still generates some new digit-like images.

We then evaluate the performance of our joint confidence loss in (4) utilizing the proposed GAN. To this end, we use VGGNets (as classifiers) and DCGANs (as GANs). We also test a variant of the confidence loss which optimizes the KL divergence term on samples from a pre-trained original GAN (implicitly) modeling the in-distribution. One can expect that samples from the original GAN can also be useful for improving the detection performance, since it may have bad generalization properties (Arora et al., 2017) and generate a few samples on the low-density boundary, as the proposed GAN does. Figure 4 shows the performance of the baseline detector for each in- and out-of-distribution pair. First, observe that the joint confidence loss (blue bar) outperforms the confidence loss with explicit out-of-distribution datasets (green bar). This is quite remarkable, since the former is trained using only in-distribution datasets, while the latter utilizes additional out-of-distribution datasets. We also remark that our methods significantly outperform the baseline cross entropy loss (red bar) in all cases without harming the original classification performance (see Table 2 in Appendix C). Interestingly, the confidence loss with the original GAN (orange bar) is often (but not always) useful for improving the detection performance, whereas that with the proposed GAN (blue bar) still outperforms it in all cases.

Figure 5: Guided gradient (sensitivity) maps of the top-1 predicted class with respect to the input image under various training losses, with (a) SVHN and (b) CIFAR-10 as in-distribution.

Finally, we also provide visual interpretations of the models using guided gradient maps (Springenberg et al., 2014). Here, the gradient can be interpreted as an importance value of each pixel influencing the classification decision.
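As a rough sketch of how such maps can be produced, the following computes a plain input-gradient sensitivity map in PyTorch; guided backpropagation (Springenberg et al., 2014) additionally filters negative gradients at ReLUs on the backward pass, which is omitted here. Names are illustrative:

```python
import torch

def sensitivity_map(cls, x):
    """Gradient of the top-1 class score w.r.t. the input pixels."""
    x = x.clone().requires_grad_(True)
    logits = cls(x)
    # Sum the top-1 scores over the batch so one backward pass suffices.
    logits.max(dim=1).values.sum().backward()
    return x.grad.abs()
```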
As shown in Figure 5, the model trained by the cross entropy loss shows sharp gradient maps for samples from both in- and out-of-distributions, whereas models trained by the confidence losses do so only on samples from the in-distribution. In the case of SVHN as in-distribution, all confidence losses gave almost zero gradients, which matches the results in Figure 4(a): their detection performance is almost perfect. In the case of CIFAR-10 as in-distribution, one can now observe that there exists some connection between the gradient maps and the detection performance. This is intuitive because, to better detect samples from out-of-distributions, the classifier should look at more pixels with similar importance, and the KL divergence term forces this. We think that our visualization results might give some ideas for future work on developing better inference methods for detecting out-of-distribution under our models.

4 CONCLUSION

In this paper, we aim to develop a training method for neural classification networks that detects out-of-distribution better without losing the original classification accuracy. In essence, our method jointly trains two models, for detecting and generating out-of-distribution, by minimizing their losses alternately. Although we primarily focus on image classification in our experiments, our method can be used for any classification task using deep neural networks. It is also an interesting future direction to apply our method to other related tasks: regression (Malinin et al., 2017), network calibration (Guo et al., 2017), Bayesian probabilistic models (Li & Gal, 2017; Louizos & Welling, 2017), ensembles (Lakshminarayanan et al., 2017), and semi-supervised learning (Dai et al., 2017).

ACKNOWLEDGEMENTS

This work was supported in part by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-01778, Development of Explainable Human-level Deep Machine Learning Inference Framework), the ICT R&D program of MSIP/IITP [R-20161130-004520, Research on Adaptive Machine Learning Technology Development for Intelligent Autonomous Digital Companion], the DARPA Explainable AI (XAI) program #313498, and a Sloan Research Fellowship.
B1klq-5lG
interesting idea for robust classification
7: Good paper, accept
The manuscript proposes a generative approach to detect which samples are within vs. out of the sample space of the training distribution. This distribution is used to adjust the classifier so it makes confident predictions within sample, and less confident predictions out of sample, where presumably it is prone to mistakes. Evaluation on several datasets suggests that accounting for the within-sample distribution in this way can often actually improve evaluation performance, and can help the model detect outliers. The manuscript is reasonably well written overall, though some of the writing could be improved, e.g., a clearer description of the cost function in section 2. However, equation 4 and algorithm 1 were very helpful in clarifying the cost function. The manuscript also does a good job giving pointers to related prior work. The problem of interest is timely and important, and the provided solution seems reasonable and is well evaluated. Looking at the cost function and the intuition, the difference in figure 1 seems to be primarily due to the relative number of samples used during optimization -- and not to anything inherent about the distribution as is claimed. In particular, if a proportional number of samples is generated for the 50x50 case, I would expect the plots to be similar. I suggest the authors modify the claim of figure 1 accordingly. Along those lines, it would be interesting whether, instead of the uniform distribution, a model that explicitly models within vs. out of sample might perform better? Though this is partially canceled out by the other terms in the optimization. Finally, the authors claim that the PT is approximately equal to entropy. The cited reference (Zhao et al. 2017) does not justify the claim. I suggest the authors remove this claim or correctly justify it.
Questions:
- Could the authors comment on cases where such a strong within-sample assumption may adversely affect performance?
- Could the authors comment on how the modifications affect prediction score calibration?
- Could the authors comment on whether they think the proposed approach may be more resilient to adversarial attacks?
Minor issues:
- Figure 1 is unclear using dots. Perhaps the authors can try plotting a smoothed decision boundary to clarify the idea?
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples ### Paper Abstract The problem of detecting whether a test sample is from in-distribution (i.e., training distribution by a classifier) or out-of-distribution sufficiently different from it arises in many real-world machine learning applications. However, the state-of-art deep neural networks are known to be highly overconfident in their predictions, i.e., do not distinguish in- and out-of-distributions. Recently, to handle this issue, several threshold-based detectors have been proposed given pre-trained neural classifiers. However, the performance of prior works highly depends on how to train the classifiers since they only focus on improving inference procedures. In this paper, we develop a novel training method for classifiers so that such inference algorithms can work better. In particular, we suggest two additional terms added to the original loss (e.g., cross entropy). The first one forces samples from out-of-distribution less confident by the classifier and the second one is for (implicitly) generating most effective training samples for the first one. In essence, our method jointly trains both classification and generative neural networks for out-of-distribution. We demonstrate its effectiveness using deep convolutional neural networks on various popular image datasets. ### Paper Keywords ["classifiers", "samples", "classifier", "problem", "test sample", "distribution", "different", "many", "machine learning applications", "deep neural networks"] ### Paper Content ABSTRACTThe problem of detecting whether a test sample is from in-distribution (i.e., train-ing distribution by a classifier) or out-of-distribution sufficiently different from itarises in many real-world machine learning applications. However, the state-of-artdeep neural networks are known to be highly overconfident in their predictions,i.e., do not distinguish in- and out-of-distributions. Recently, to handle this is-sue, several threshold-based detectors have been proposed given pre-trained neu-ral classifiers. However, the performance of prior works highly depends on howto train the classifiers since they only focus on improving inference procedures.In this paper, we develop a novel training method for classifiers so that such in-ference algorithms can work better. In particular, we suggest two additional termsadded to the original loss (e.g., cross entropy). The first one forces samples fromout-of-distribution less confident by the classifier and the second one is for (im-plicitly) generating most effective training samples for the first one. In essence,our method jointly trains both classification and generative neural networks forout-of-distribution. We demonstrate its effectiveness using deep convolutionalneural networks on various popular image datasets.1 I NTRODUCTIONDeep neural networks (DNNs) have demonstrated state-of-the-art performance on many classifi-cation tasks, e.g., speech recognition (Hannun et al., 2014), image classification (Girshick, 2015),video prediction (Villegas et al., 2017) and medical diagnosis (Caruana et al., 2015). Even thoughDNNs achieve high accuracy, it has been addressed (Lakshminarayanan et al., 2017; Guo et al.,2017) that they are typically overconfident in their predictions. 
For example, DNNs trained to clas-sify MNIST images often produce high confident probability 91% even for random noise (see thework of (Hendrycks & Gimpel, 2016)). Since evaluating the quality of their predictive uncertaintyis hard, deploying them in real-world systems raises serious concerns in AI Safety (Amodei et al.,2016), e.g., one can easily break a secure authentication system that can be unlocked by detectingthe gaze and iris of eyes using DNNs (Shrivastava et al., 2017).The overconfidence issue of DNNs is highly related to the problem of detecting out-of-distribution:detect whether a test sample is from in-distribution (i.e., training distribution by a classifier) or out-of-distribution sufficiently different from it. Formally, it can be formulated as a binary classificationproblem. Let an input x2X and a labely2Y =f1;:::;Kgbe random variables that follow ajoint data distribution Pin(x;y) =Pin(yjx)Pin(x). We assume that a classifier P(yjx)is trainedon a dataset drawn from Pin(x;y), wheredenotes the model parameter. We let Pout(x)denotean out-of-distribution which is ‘far away’ from in-distribution Pin(x). Our problem of interest isdetermining if input xis fromPinorPout, possibly utilizing a well calibrated classifier P(yjx).In other words, we aim to build a detector, g(x) :X!f 0;1g, which assigns label 1 if data is fromin-distribution, and label 0 otherwise.There have been recent efforts toward developing efficient detection methods where they mostlyhave studied simple threshold-based detectors (Hendrycks & Gimpel, 2016; Liang et al., 2017) uti-lizing a pre-trained classifier. For each input x, it measures some confidence score q(x)based on apre-trained classifier, and compares the score to some threshold >0. Then, the detector assigns1Published as a conference paper at ICLR 2018label 1 if the confidence score q(x)is above, and label 0, otherwise. Specifically, (Hendrycks& Gimpel, 2016) defined the confidence score as a maximum value of the predictive distribution,and (Liang et al., 2017) further improved the performance by using temperature scaling (Guo et al.,2017) and adding small controlled perturbations to the input data. Although such inference methodsare computationally simple, their performances highly depend on the pre-trained classifier. Namely,they fail to work if the classifier does not separate the maximum value of predictive distributionwell enough with respect to PinandPout. Ideally, a classifier should be trained to separate allclass-dependent in-distributions as well as out-of-distribution in the output space. As another line ofresearch, Bayesian probabilistic models (Li & Gal, 2017; Louizos & Welling, 2017) and ensemblesof classifiers (Lakshminarayanan et al., 2017) were also investigated. However, training or inferringthose models are computationally more expensive. This motivates our approach of developing anew training method for the more plausible simple classifiers. Our direction is orthogonal to theBayesian and ensemble approaches, where one can also combine them for even better performance.Contribution. In this paper, we develop such a training method for detecting out-of-distributionPoutbetter without losing its original classification accuracy. First, we consider a new loss function,called confidence loss . Our key idea on the proposed loss is to additionally minimize the Kullback-Leibler (KL) divergence from the predictive distribution on out-of-distribution samples to the uni-form one in order to give less confident predictions on them. 
Then, in- and out-of-distributions areexpected to be more separable. However, optimizing the confidence loss requires training samplesfrom out-of-distribution, which are often hard to sample: a priori knowledge on out-of-distributionis not available or its underlying space is too huge to cover. To handle the issue, we consider anew generative adversarial network (GAN) (Goodfellow et al., 2014) for generating most effectivesamples from Pout. Unlike the original GAN, the proposed GAN generates ‘boundary’ samples inthe low-density area of Pin. Finally, we design a joint training scheme minimizing the classifier’sloss and new GAN loss alternatively, i.e., the confident classifier improves the GAN, and vice versa,as training proceeds. Here, we emphasize that the proposed GAN does not need to generate explicitsamples under our scheme, and instead it implicitly encourages training a more confident classifier.We demonstrate the effectiveness of the proposed method using deep convolutional neural networkssuch as AlexNet (Krizhevsky, 2014) and VGGNet (Szegedy et al., 2015) for image classificationtasks on CIFAR (Krizhevsky & Hinton, 2009), SVHN (Netzer et al., 2011), ImageNet (Deng et al.,2009), and LSUN (Yu et al., 2015) datasets. The classifier trained by our proposed method dras-tically improves the detection performance of all threshold-based detectors (Hendrycks & Gim-pel, 2016; Liang et al., 2017) in all experiments. In particular, VGGNet with 13 layers trained byour method improves the true negative rate (TNR), i.e., the fraction of detected out-of-distribution(LSUN) samples, compared to the baseline: 14:0%!39:1%and46:3%!98:9%on CIFAR-10and SVHN, respectively, when 95% of in-distribution samples are correctly detected. We also pro-vide visual understandings on the proposed method using the image datasets. We believe that ourmethod can be a strong guideline when other researchers will pursue these tasks in the future.2 T RAINING CONFIDENT NEURAL CLASSIFIERSIn this section, we propose a novel training method for classifiers in order to improve the perfor-mance of prior threshold-based detectors (Hendrycks & Gimpel, 2016; Liang et al., 2017) (see Ap-pendix A for more details). Our motivation is that such inference algorithms can work better if theclassifiers are trained so that they map the samples from in- and out-of-distributions into the outputspace separately. Namely, we primarily focus on training an improved classifier, and then use priordetectors under the trained model to measure its performance.2.1 C ONFIDENT CLASSIFIER FOR OUT -OF-DISTRIBUTIONWithout loss of generality, suppose that the cross entropy loss is used for training. Then, we proposethe following new loss function, termed confidence loss:minEPin(bx;by)logP(y=byjbx)+EPout(x)KL(U(y)kP(yjx)); (1)whereKL denotes the Kullback-Leibler (KL) divergence, U(y)is the uniform distribution and >0is a penalty parameter. It is highly intuitive as the new loss forces the predictive distribution2Published as a conference paper at ICLR 2018Class 0 Class 1 (a)Class 0 Class 1 [0,0.2) [0.2,0.8) [0.8,1) (b)Class 0 Class 1 (c)Class 0 Class 1 [0,0.2) [0.2,0.8) [0.8,1) (d)Figure 1: Illustrating the behavior of classifier under different out-of-distribution training datasets.We generate the out-of-distribution samples from (a) 2D box [50;50]2, and show (b) the corre-sponding decision boundary of classifier. 
We also generate the out-of-distribution samples from (c)2D box [20;20]2, and show (d) the corresponding decision boundary of classifier.on out-of-distribution samples to be closer to the uniform one, i.e., zero confidence, while thatfor samples from in-distribution still follows the label-dependent probability. In other words, theproposed loss is designed for assigning higher maximum prediction values, i.e., maxyP(yjx), toin-distribution samples than out-of-distribution ones. Here, a caveat is that adding the KL divergenceterm might degrade the classification performance. However, we found that it is not the case dueto the high expressive power of deep neural networks, while in- and out-of-distributions becomemore separable with respect to the maximum prediction value by optimizing the confidence loss(see Section 3.1 for supporting experimental results).We remark that minimizing a similar KL loss was studied recently for different purposes (Lee et al.,2017; Pereyra et al., 2017). Training samples for minimizing the KL divergence term is explicitlygiven in their settings while we might not. Ideally, one has to sample all (almost infinite) types of out-of-distribution to minimize the KL term in (1), or require some prior information on testing out-of-distribution for efficient sampling. However, this is often infeasible and fragile. To address the issue,we suggest to sample out-of-distribution close to in-distribution, which could be more effective inimproving the detection performance, without any assumption on testing out-of-distribution.In order to explain our intuition in details, we consider a binary classification task on a simple ex-ample, where each class data is drawn from a Gaussian distribution and entire data space is boundedby 2D box [50;50]2for visualization. We apply the confidence loss to simple fully-connectedneural networks (2 hidden layers and 500 hidden units for each layer) using different types of out-of-distribution training samples. First, as shown in Figure 1(a), we construct an out-of-distributiontraining dataset of 100 (green) points using rejection sampling on the entire data space [50;50]2.Figure 1(b) shows the decision boundary of classifier optimizing the confidence loss on the corre-sponding dataset. One can observe that a classifier still shows overconfident predictions (red andblue regions) near the labeled in-distribution region. On the other hand, if we construct a trainingout-of-distribution dataset of 100 points from [20;20]2, i.e., closer to target, in-distribution space(see Figure 1(c)), a classifier produces confident predictions only on the labeled region and zeroconfidence on the remaining in the entire data space [50;50]2as shown in Figure 1(d). If oneincreases the number of training out-of-distribution samples which are generated from the entirespace, i.e., [50;50]2, Figure 1(b) is expected to be similar to Figure 1(d). In other words, one needmore samples in order to train a confident classifier if samples are generated from the entire space.However, this might be impossible and not efficient since the number of out-of-distribution trainingsamples might be almost infinite to cover its entire, huge actual data space. This implies that trainingout-of-distribution samples nearby the in-distribution region could be more effective in improvingthe detection performance. Our underlying intuition is that the effect of boundary of in-distributionregion might propagate to the entire out-of-distribution space. 
Our experimental results in Section3.1 also support this: realistic images are more useful as training out-of-distribution than syntheticdatasets (e.g., Gaussian noise) for improving the detection performance when we consider an imageclassification task. This motivates us to develop a new generative adversarial network (GAN) forgenerating such effective out-of-distribution samples.2.2 A DVERSARIAL GENERATOR FOR OUT -OF-DISTRIBUTIONIn this section, we introduce a new training method for learning a generator of out-of-distributioninspired by generative adversarial network (GAN) (Goodfellow et al., 2014). We will first assume3Published as a conference paper at ICLR 2018that the classifier for in-distribution is fixed, and also describe the joint learning framework in thenext section.The GAN framework consists of two main components: discriminator Dand generator G. Thegenerator maps a latent variable zfrom a prior distribution Ppri(z)to generated outputs G(z), anddiscriminator D:X ! [0;1]represents a probability that sample xis from a target distribution.Suppose that we want to recover the in-distribution Pin(x)using the generator G. Then, one canoptimize the following min-max objective for forcing PGPin:minGmaxDEPin(x)logD(x)+EPpri(z)log (1D(G(z))): (2)However, unlike the original GAN, we want to make the generator recover an effective out-of-distributionPoutinstead ofPin. To this end, we propose the following new GAN loss:minGmaxDEPG(x)KL(U(y)kP(yjx))| {z }(a)+EPin(x)logD(x)+EPG(x)log (1D(x))| {z }(b); (3)whereis the model parameter of a classifier trained on in-distribution. The above objective can beinterpreted as follows: the first term (a) corresponds to a replacement of the out-of-distribution Poutin (1)’s KL loss with the generator distribution PG. One can note that this forces the generator togenerate low-density samples since it can be interpreted as minimizing the log negative likelihoodof in-distribution using the classifier, i.e., Pin(x)exp (KL(U(y)kP(yjx))):We remark thatthis approximation is also closely related to the inception score (Salimans et al., 2016) which ispopularly used as a quantitative measure of visual fidelity of the samples. The second term (b) cor-responds to the original GAN loss since we would like to have out-of-distribution samples close toin-distribution, as mentioned in Section 2.1. Suppose that the model parameter of classifier is setappropriately such that the classifier produces the uniform distribution for out of distribution sam-ples. Then, the KL divergence term (a) in (3) is approximately 0 no matter what out-of-distributionsamples are generated. However, if the samples are far away from boundary, the GAN loss (b) in (3)should be high, i.e., the GAN loss forces having samples being not too far from the in-distributionspace. Therefore, one can expect that proposed loss can encourage the generator to produce thesamples which are on the low-density boundary of the in-distribution space. We also provide itsexperimental evidences in Section 3.2.We also remark that (Dai et al., 2017) consider a similar GAN generating samples from out-of-distribution for the purpose of semi-supervised learning. The authors assume the existence of a pre-trained density estimation model such as PixelCNN++ (Salimans et al., 2017) for in-distribution,but such a model might not exist and be expensive to train in general. Instead, we use much simplerconfident classifiers for approximating the density. 
2.3 JOINT TRAINING METHOD OF CONFIDENT CLASSIFIER AND ADVERSARIAL GENERATOR

In the previous section, we suggested training the proposed GAN using a pre-trained confident classifier. We note that the converse is also possible, i.e., the motivation for having such a GAN is to train a better classifier. Hence, the two models can be used to improve each other. This naturally suggests a joint training scheme in which the confident classifier improves the proposed GAN, and vice versa, as training proceeds. Specifically, we suggest the following joint objective function:

$$\min_G \max_D \min_\theta \; \underbrace{\mathbb{E}_{P_{in}(\hat x, \hat y)}\left[-\log P_\theta(y = \hat y \mid \hat x)\right]}_{(c)} + \underbrace{\mathbb{E}_{P_G(x)}\left[\mathrm{KL}\left(\mathcal{U}(y) \,\|\, P_\theta(y \mid x)\right)\right]}_{(d)} + \underbrace{\mathbb{E}_{P_{in}(\hat x)}\left[\log D(\hat x)\right] + \mathbb{E}_{P_G(x)}\left[\log\left(1 - D(x)\right)\right]}_{(e)}. \quad (4)$$

The classifier's confidence loss corresponds to (c) + (d), and the proposed GAN loss corresponds to (d) + (e), i.e., they share the KL divergence term (d) under joint training. To optimize the above objective efficiently, we propose an alternating algorithm that optimizes the model parameters {θ} of the classifier and {G, D} of the GAN alternately, as shown in Algorithm 1. Since the algorithm monotonically decreases the objective function, it is guaranteed to converge.

Algorithm 1: Alternating minimization for detecting and generating out-of-distribution.

repeat
    // Update proposed GAN
    Sample {z_1, ..., z_M} and {x_1, ..., x_M} from the prior P_pri(z) and the in-distribution P_in(x), respectively, and update the discriminator D by ascending its stochastic gradient of
        (1/M) sum_{i=1}^{M} [ log D(x_i) + log(1 - D(G(z_i))) ].
    Sample {z_1, ..., z_M} from the prior P_pri(z), and update the generator G by descending its stochastic gradient of
        (1/M) sum_{i=1}^{M} [ log(1 - D(G(z_i))) ] + (1/M) sum_{i=1}^{M} [ KL(U(y) || P_theta(y | G(z_i))) ].
    // Update confident classifier
    Sample {z_1, ..., z_M} and {(x_1, y_1), ..., (x_M, y_M)} from the prior P_pri(z) and the in-distribution P_in(x, y), respectively, and update the classifier θ by descending its stochastic gradient of
        (1/M) sum_{i=1}^{M} [ -log P_theta(y = y_i | x_i) + KL(U(y) || P_theta(y | G(z_i))) ].
until convergence

3 EXPERIMENTAL RESULTS

We demonstrate the effectiveness of our proposed method using various datasets: CIFAR (Krizhevsky & Hinton, 2009), SVHN (Netzer et al., 2011), ImageNet (Deng et al., 2009), LSUN (Yu et al., 2015), and a synthetic (Gaussian) noise distribution. We train convolutional neural networks (CNNs), including VGGNet (Szegedy et al., 2015) and AlexNet (Krizhevsky, 2014), for classifying the CIFAR-10 and SVHN datasets. The corresponding test dataset is used as the in-distribution (positive) samples to measure the performance. We use realistic images and synthetic noise as the out-of-distribution (negative) samples. For evaluation, we measure the following metrics using threshold-based detectors (Hendrycks & Gimpel, 2016; Liang et al., 2017): the true negative rate (TNR) at 95% true positive rate (TPR), the area under the receiver operating characteristic curve (AUROC), the area under the precision-recall curve (AUPR), and the detection accuracy, where larger values of all metrics indicate better detection performance.
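For concreteness, these threshold-based metrics can be computed from per-sample confidence scores as follows; this is our own sketch using scikit-learn, not the evaluation code of the paper:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def detection_metrics(scores_in, scores_out):
    """AUROC and TNR at 95% TPR from confidence scores, treating
    in-distribution samples as the positive class."""
    y = np.concatenate([np.ones_like(scores_in), np.zeros_like(scores_out)])
    s = np.concatenate([scores_in, scores_out])
    auroc = roc_auc_score(y, s)
    fpr, tpr, _ = roc_curve(y, s)
    # TNR = 1 - FPR, read off at the first operating point with TPR >= 0.95.
    tnr_at_95tpr = 1.0 - fpr[np.searchsorted(tpr, 0.95)]
    return auroc, tnr_at_95tpr
```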
Due to the space limitation, more explanations about the datasets, metrics, and network architectures are given in Appendix B.[1]

Table 1: Performance of the baseline detector (Hendrycks & Gimpel, 2016) using VGGNet. All values are percentages, reported as "cross entropy loss / confidence loss", and boldface values indicate the relatively better results. For each in-distribution, we minimize the KL divergence term in (1) using training samples from the out-of-distribution dataset denoted by "seen"; the other, "unseen" out-of-distributions were only used for testing.

| In-dist | Out-of-dist | Classification accuracy | TNR at TPR 95% | AUROC | Detection accuracy | AUPR-in | AUPR-out |
|---|---|---|---|---|---|---|---|
| SVHN | CIFAR-10 (seen) | 93.82 / 94.23 | 47.4 / 99.9 | 62.6 / 99.9 | 78.6 / 99.9 | 71.6 / 99.9 | 91.2 / 99.4 |
| | TinyImageNet (unseen) | | 49.0 / 100.0 | 64.6 / 100.0 | 79.6 / 100.0 | 72.7 / 100.0 | 91.6 / 99.4 |
| | LSUN (unseen) | | 46.3 / 100.0 | 61.8 / 100.0 | 78.2 / 100.0 | 71.1 / 100.0 | 90.8 / 99.4 |
| | Gaussian (unseen) | | 56.1 / 100.0 | 72.0 / 100.0 | 83.4 / 100.0 | 77.2 / 100.0 | 92.8 / 99.4 |
| CIFAR-10 | SVHN (seen) | 80.14 / 80.56 | 13.7 / 99.8 | 46.6 / 99.9 | 66.6 / 99.8 | 61.4 / 99.9 | 73.5 / 99.8 |
| | TinyImageNet (unseen) | | 13.6 / 9.9 | 39.6 / 31.8 | 62.6 / 58.6 | 58.3 / 55.3 | 71.0 / 66.1 |
| | LSUN (unseen) | | 14.0 / 10.5 | 40.7 / 34.8 | 63.2 / 60.2 | 58.7 / 56.4 | 71.5 / 68.0 |
| | Gaussian (unseen) | | 2.8 / 3.3 | 10.2 / 14.1 | 50.0 / 50.0 | 48.1 / 49.4 | 39.9 / 47.0 |

[1] Our code is available at https://github.com/alinlab/Confident_classifier.

Figure 2: For all experiments in (a), (b), and (c), we commonly use the SVHN dataset as the in-distribution. Fraction of the maximum prediction value in softmax scores trained by (a) cross entropy loss and (b) confidence loss: the x-axis and y-axis represent the maximum prediction value and the fraction of images receiving the corresponding score, respectively. The receiver operating characteristic (ROC) curves under different losses are reported in (c): the red curve corresponds to the ROC curve of a model trained by optimizing the naive cross entropy loss, whereas the other curves correspond to ROC curves of models trained by optimizing the confidence loss. The KL divergence term in the confidence loss is optimized using the explicit out-of-distribution datasets indicated in the parentheses, e.g., "confidence loss (LSUN)" means that we use the LSUN dataset for optimizing the KL divergence term.

3.1 EFFECTS OF CONFIDENCE LOSS

We first verify the effect of the confidence loss in (1) trained with some explicit, say "seen", out-of-distribution datasets. First, we compare the quality of the confidence level obtained with various training losses. Specifically, the softmax classifier is used, and simple CNNs (two convolutional layers followed by three fully-connected layers) are trained by minimizing the standard cross entropy loss on the SVHN dataset.
We also apply the confidence loss to the models by additionally optimizing the KL divergence term using the CIFAR-10 dataset (as the training out-of-distribution). In Figures 2(a) and 2(b), we report the distributions of the maximum prediction value in softmax scores to evaluate the separation quality between the in-distribution (i.e., SVHN) and the out-of-distributions. It is clear that there exists a better separation between the SVHN test set (red bar) and the other ones when the model is trained with the confidence loss. Here, we emphasize that the maximum prediction value is also low even on untrained (unseen) out-of-distributions, e.g., TinyImageNet, LSUN, and synthetic datasets. Therefore, it is expected that one can distinguish in- and out-of-distributions more easily when a classifier is trained by optimizing the confidence loss. To verify this, we obtain the ROC curve using the baseline detector (Hendrycks & Gimpel, 2016), which computes the maximum value of the predictive distribution on a test sample and classifies it as positive (i.e., in-distribution) if the confidence score is above some threshold. Figure 2(c) shows the ROC curves when we optimize the KL divergence term on various datasets. One can observe that realistic images such as TinyImageNet (aqua line) and LSUN (green line) are more useful than synthetic datasets (orange line) for improving the detection performance. This supports our intuition that out-of-distribution samples close to the in-distribution could be more effective in improving the detection performance, as discussed in Section 2.1.

We then evaluate the performance of the baseline detector for out-of-distribution using large-scale CNNs, i.e., VGGNets with 13 layers, under various training scenarios; more results on AlexNet and the ODIN detector (Liang et al., 2017) can be found in Appendix C (the overall trends of the results are similar). For optimizing the confidence loss in (1), the SVHN and CIFAR-10 training datasets are used for optimizing the KL divergence term when the in-distribution is CIFAR-10 and SVHN, respectively. Table 1 shows the detection performance for each in- and out-of-distribution pair. When the in-distribution is SVHN, the classifier trained by our method drastically improves the detection performance across all out-of-distributions without hurting its original classification performance. However, when the in-distribution is CIFAR-10, the confidence loss does not improve the detection performance overall; we expect that this is because the trained/seen SVHN out-of-distribution does not effectively cover all tested out-of-distributions. Our joint confidence loss in (4), which was designed under this intuition, resolves this issue in the CIFAR-10 (in-distribution) classification case of Table 1 (see Figure 4(b)).
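The baseline detector referenced throughout is a one-liner on top of any trained classifier. A hedged sketch (the function name is ours):

```python
import torch.nn.functional as F

def baseline_confidence_score(logits):
    """Maximum softmax probability (Hendrycks & Gimpel, 2016): a test
    sample is declared in-distribution when this score exceeds a threshold."""
    return F.softmax(logits, dim=1).max(dim=1).values
```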
Figure 3: The generated samples from the original GAN (a)/(c) and the proposed GAN (b)/(d). In (a)/(b), the grey area is the 2D histogram of training in-distribution samples drawn from a mixture of two Gaussian distributions, and red points indicate samples generated by the GANs.

Figure 4: Performance of the baseline detector (Hendrycks & Gimpel, 2016) under various training losses (cross entropy loss; confidence loss with samples from the original GAN; joint confidence loss; confidence loss with an explicit dataset), reporting TNR at TPR 95%, AUROC, and detection accuracy for (a) in-distribution SVHN and (b) in-distribution CIFAR-10, against the out-of-distributions CIFAR-10/SVHN, TinyImageNet, and LSUN. For models trained with the confidence loss, the KL divergence term is optimized using the samples indicated in the parentheses. For fair comparison, we only plot the performance for unseen out-of-distributions; results for seen out-of-distributions (used for minimizing the KL divergence term in (1)) can be found in Table 1.

3.2 EFFECTS OF ADVERSARIAL GENERATOR AND JOINT CONFIDENCE LOSS

In this section, we verify the effect of the proposed GAN from Section 2.2 and evaluate the detection performance of the joint confidence loss in (4). To verify that the proposed GAN can produce samples near the low-density boundary of the in-distribution space, we first compare the samples generated by the original GAN and the proposed GAN on a simple example where the target distribution is a mixture of two Gaussian distributions. For both the generator and the discriminator, we use fully-connected neural networks with 2 hidden layers. For our method, we use a pre-trained classifier that minimizes the cross entropy on target-distribution samples and the KL divergence on out-of-distribution samples generated by rejection sampling on a bounded 2D box. As shown in Figure 3(a), the samples of the original GAN cover the high-density area of the target distribution, while those of the proposed GAN cover its boundary (see Figure 3(b)). We also compare the generated samples of the original and proposed GANs on the MNIST dataset (LeCun et al., 1998), which consists of handwritten digits. For this experiment, we use deep convolutional GANs (DCGANs) (Radford et al., 2015). In this case, we use a pre-trained classifier that minimizes the cross entropy on MNIST training samples and the KL divergence on synthetic Gaussian noise. As shown in Figures 3(c) and 3(d), the samples of the original GAN look more like digits than those of the proposed GAN. Somewhat interestingly, the proposed GAN still generates some new digit-like images.
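To make the two-Gaussian toy setup above concrete, the in-distribution mixture and the rejection-sampled out-of-distribution training points can be generated as follows. All numeric choices (means, variance, density threshold) are our own assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# In-distribution: an equal-weight mixture of two isotropic 2D Gaussians.
means = np.array([[-10.0, 0.0], [10.0, 0.0]])
sigma = 2.0
x_in = np.concatenate([rng.normal(m, sigma, size=(500, 2)) for m in means])

def mixture_density(x):
    """Density of the two-component isotropic Gaussian mixture."""
    sq = np.sum((x[:, None, :] - means[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2 * sigma**2)).mean(axis=1) / (2 * np.pi * sigma**2)

# Out-of-distribution training points: rejection sampling on the bounded
# box, keeping only candidates with negligible in-distribution density.
cand = rng.uniform(-50.0, 50.0, size=(20000, 2))
x_out = cand[mixture_density(cand) < 1e-6][:100]
```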
Figure 5: Guided gradient (sensitivity) maps of the top-1 predicted class with respect to the input image under various training losses, for (a) in-distribution SVHN (with out-of-distributions TinyImageNet, LSUN, CIFAR-10) and (b) in-distribution CIFAR-10 (with out-of-distributions TinyImageNet, LSUN, SVHN).

We next evaluate the performance of our joint confidence loss in (4), which utilizes the proposed GAN. To this end, we use VGGNets (as classifiers) and DCGANs (as GANs). We also test a variant of the confidence loss that optimizes the KL divergence term on samples from a pre-trained original GAN (implicitly) modeling the in-distribution. One can expect that samples from the original GAN can also be useful for improving the detection performance, since it may have poor generalization properties (Arora et al., 2017) and generate a few samples on the low-density boundary, like the proposed GAN. Figure 4 shows the performance of the baseline detector for each in- and out-of-distribution pair. First, observe that the joint confidence loss (blue bar) outperforms the confidence loss with explicit out-of-distribution datasets (green bar). This is quite remarkable since the former is trained using only in-distribution datasets, while the latter utilizes additional out-of-distribution datasets. We also remark that our methods significantly outperform the baseline cross entropy loss (red bar) in all cases without harming the original classification performance (see Table 2 in Appendix C). Interestingly, the confidence loss with the original GAN (orange bar) is often (but not always) useful for improving the detection performance, whereas that with the proposed GAN (blue bar) still outperforms it in all cases.

Finally, we also provide visual interpretations of the models using guided gradient maps (Springenberg et al., 2014). Here, the gradient can be interpreted as an importance value of each pixel, indicating its influence on the classification decision. As shown in Figure 5, the model trained by the cross entropy loss shows sharp gradient maps for samples from both in- and out-of-distributions, whereas models trained by the confidence losses do so only for samples from the in-distribution. For the case of the SVHN in-distribution, all confidence losses give almost zero gradients, which matches the results in Figure 4(a): their detection performances are almost perfect. For the case of the CIFAR-10 in-distribution, one can now observe that there exists some connection between gradient maps and detection performance. This is intuitive because, to better detect samples from out-of-distributions, the classifier should look at more pixels with similar importance, and the KL divergence term forces this. We think our visualization results might give some ideas for future work on developing better inference methods for detecting out-of-distribution under our models.

4 CONCLUSION

In this paper, we aim to develop a training method for neural classification networks that detects out-of-distribution better without losing the original classification accuracy. In essence, our method jointly trains two models for detecting and generating out-of-distribution by minimizing their losses alternately. Although we primarily focus on image classification in our experiments, our method can be used for any classification task using deep neural networks.
Applying our methods to other related tasks is also an interesting future direction: regression (Malinin et al., 2017), network calibration (Guo et al., 2017), Bayesian probabilistic models (Li & Gal, 2017; Louizos & Welling, 2017), ensembles (Lakshminarayanan et al., 2017), and semi-supervised learning (Dai et al., 2017).

ACKNOWLEDGEMENTS

This work was supported in part by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-01778, Development of Explainable Human-level Deep Machine Learning Inference Framework), the ICT R&D program of MSIP/IITP [R-20161130-004520, Research on Adaptive Machine Learning Technology Development for Intelligent Autonomous Digital Companion], DARPA Explainable AI (XAI) program #313498, and a Sloan Research Fellowship.

### Review Title
interesting idea for robust classification

### Review Text
The manuscript proposes a generative approach to detect which samples are within vs. out of the sample space of the training distribution. This distribution is used to adjust the classifier so it makes confident predictions within sample, and less confident predictions out of sample, where presumably it is prone to mistakes. Evaluation on several datasets suggests that accounting for the within-sample distribution in this way can often actually improve evaluation performance, and can help the model detect outliers.

The manuscript is reasonably well written overall, though some of the writing could be improved, e.g., a clearer description of the cost function in Section 2. However, Equation 4 and Algorithm 1 were very helpful in clarifying the cost function. The manuscript also does a good job giving pointers to related prior work. The problem of interest is timely and important, and the provided solution seems reasonable and is well evaluated.

Looking at the cost function and the intuition, the difference in Figure 1 seems to be primarily due to the relative number of samples used during optimization -- and not to anything inherent about the distribution, as is claimed. In particular, if a proportional number of samples is generated for the 50x50 case, I would expect the plots to be similar. I suggest the authors modify the claim of Figure 1 accordingly. Along those lines, it would be interesting to see whether a model that explicitly models within- vs. out-of-sample data might perform better than the uniform distribution, though this is partially canceled out by the other terms in the optimization.

Finally, the authors claim that the PT is approximately equal to entropy. The cited reference (Zhao et al., 2017) does not justify the claim. I suggest the authors remove this claim or correctly justify it.

Questions:
- Could the authors comment on cases where such a strong within-sample assumption may adversely affect performance?
- Could the authors comment on how the modifications affect prediction score calibration?
- Could the authors comment on whether they think the proposed approach may be more resilient to adversarial attacks?

Minor issues:
- Figure 1 is unclear using dots. Perhaps the authors can try plotting a smoothed decision boundary to clarify the idea?

### Review Rating
7: Good paper, accept

### Review Confidence
3: The reviewer is fairly confident that the evaluation is correct
Byxv9aioz
MIDL.amsterdam/2018/Conference
2018
Test-time augmentation with uncertainty estimation for deep learning-based medical image segmentation
["Guotai Wang", "Wenqi Li", "Michael Aertsen", "Jan Deprest", "Sebastien Ourselin", "Tom Vercauteren"]
Data augmentation has been widely used for training deep learning systems for medical image segmentation and plays an important role in obtaining robust and transformation-invariant predictions. However, it has seldom been used at test time for segmentation and not been formulated in a consistent mathematical framework. In this paper, we first propose a theoretical formulation of test-time augmentation for deep learning in image recognition, where the prediction is obtained through estimating its expectation by Monte Carlo simulation with prior distributions of parameters in an image acquisition model that involves image transformations and noise. We then propose a novel uncertainty estimation method based on the formulated test-time augmentation. Experiments with segmentation of fetal brains and brain tumors from 2D and 3D Magnetic Resonance Images (MRI) showed that 1) our test-time augmentation outperforms a single-prediction baseline and dropout-based multiple predictions, and 2) it provides a better uncertainty estimation than calculating the model-based uncertainty alone and helps to reduce overconfident incorrect predictions.
["data augmentation", "uncertainty estimation", "image segmentation", "deep learning"]
Test-time augmentation with uncertainty estimation for deep learning-based medical image segmentation

Guotai Wang (University College London, guotai.wang.14@ucl.ac.uk)
Wenqi Li (University College London, wenqi.li@ucl.ac.uk)
Michael Aertsen† (KU Leuven, michael.aertsen@uzleuven.be)
Jan Deprest† (KU Leuven, jan.deprest@uzleuven.be)
Sébastien Ourselin (University College London, s.ourselin@ucl.ac.uk)
Tom Vercauteren (University College London, t.vercauteren@ucl.ac.uk)

Department of Medical Physics and Biomedical Engineering, Wellcome EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, WC1E 6BT, UK. Tom Vercauteren is also with KU Leuven.
† Department of Obstetrics & Gynaecology, and Radiology, University Hospitals KU Leuven, 3000 Leuven, Belgium. Jan Deprest is also with WEISS and the Institute for Women's Health, University College London.

1st Conference on Medical Imaging with Deep Learning (MIDL 2018), Amsterdam, The Netherlands.

Abstract

Data augmentation has been widely used for training deep learning systems for medical image segmentation and plays an important role in obtaining robust and transformation-invariant predictions. However, it has seldom been used at test time for segmentation and has not been formulated in a consistent mathematical framework. In this paper, we first propose a theoretical formulation of test-time augmentation for deep learning in image recognition, where the prediction is obtained through estimating its expectation by Monte Carlo simulation with prior distributions of parameters in an image acquisition model that involves image transformations and noise. We then propose a novel uncertainty estimation method based on the formulated test-time augmentation. Experiments with segmentation of fetal brains and brain tumors from 2D and 3D Magnetic Resonance Images (MRI) showed that 1) our test-time augmentation outperforms a single-prediction baseline and dropout-based multiple predictions, and 2) it provides a better uncertainty estimation than calculating the model-based uncertainty alone and helps to reduce overconfident incorrect predictions.

1 Introduction

In recent years, deep learning has become the state-of-the-art method for medical image recognition tasks such as image classification, object detection, and segmentation [8]. As a type of data-driven approach, it learns features automatically, without explicitly modeling the complex variations of images. From the perspective of image acquisition, the acquired image may contain noise related to the environment, and different viewpoints can lead to transformed versions of the same object. It is desirable to enable a recognition system to be robust against noise and transformations, leading to noise-invariant and transformation-invariant recognition, e.g., rotation-, translation- and scale-invariant. Convolutional Neural Networks (CNNs) are designed to be translation-invariant by sharing weights at different positions of the image. Dropout techniques [2] have been used to make models more robust to noise. However, CNNs are not inherently invariant to more general transformations. To alleviate this problem, many researchers have tried to collect a training dataset that is as large as possible in order to include a large variation of image contexts to train the model.
When collecting such a large dataset is difficult or impossible, data augmentation is commonly used to enlarge a relatively small dataset by applying transformations to its samples to create new ones for training [6], and this helps to improve invariance to spatial transformations at test time. The transformations for augmentation typically include flipping, cropping, rotating, and scaling training images. Krizhevsky et al. [6] also altered the intensities of the training images for data augmentation. In [1, 13], elastic deformations were used for biomedical image segmentation. Recently, convex combination of pairs of samples has been proposed as a way of data augmentation for training [18].

While data augmentation is typically employed for training CNNs, using it at test time has seldom been investigated. Only a few studies have empirically found that combining the predictions of multiple transformed versions of a test image helps to improve performance. For example, Matsunaga et al. [10] geometrically transformed test images for skin lesion classification. Radosavovic et al. [12] used a single model to predict multiple transformed copies of unlabeled images for data distillation. Jin et al. [4] tested on samples extended by rotation and translation for pulmonary nodule detection. However, all these methods used data augmentation at test time in an ad hoc manner, without a detailed formulation or theoretical explanation.

In addition to robustness to imaging conditions, uncertainty estimation plays a critical role in medical image recognition. For example, for chest X-ray image classification [16], a testing result with high uncertainty may need a human expert to give a decision. In a segmentation task, the predicted labels of pixels near the boundary of organs are likely to be uncertain [15], which can be used to guide user interactions. Several methods have been proposed for the estimation of model uncertainty. Exact Bayesian models offer a mathematically grounded way to infer model uncertainty, but they are hard to implement for CNNs. Alternatively, it has been shown that dropout can be cast as a Bayesian approximation to represent model uncertainty [2]. In [19], Stein Variational Gradient Descent (SVGD) was used to perform approximate Bayesian inference on uncertain CNN parameters. In [7], ensembles of multiple models were proposed for uncertainty estimation. However, these methods only consider the uncertainty resulting from the trained models, and tend to produce overconfident incorrect predictions. Kendall et al. [5] proposed a framework based on Bayesian deep learning to model uncertainties related not only to network parameters but also to image noise. In addition to the model and image noise, the prediction uncertainty of the observed image may also be related to viewpoints or transformations of the inherent object, as such factors also affect the prediction output, leading to more uncertainty that depends on the input. To the best of our knowledge, the uncertainty related to viewpoints or transformations has rarely been investigated for deep learning with CNNs.

Summary of contributions: Our contribution in this paper is two-fold. First, we propose a theoretical formulation of test-time augmentation for deep learning. We represent an image as the result of an acquisition process which involves geometric transformations and image noise.
We model the hidden parameters of the image acquisition process with prior distributions, and infer the prediction output for a given image by estimating its expectation with a Monte Carlo simulation process. The formulation is a mathematical explanation of test-time augmentation that is general for image recognition tasks. Second, we propose a novel uncertainty estimation method based on the formulated test-time augmentation for image recognition tasks. We demonstrate the effect of test-time augmentation with 2D and 3D segmentation tasks, and show that our proposed method provides a better uncertainty estimation with fewer overconfident incorrect predictions than using model-based uncertainty.

2 Methods

The proposed method with test-time augmentation includes two parts. The first part is a mathematical representation of ensembles of predictions of multiple transformed versions of the input. In Section 2.1, we represent an image as the result of an image acquisition model with hidden parameters. In Section 2.2, we formulate test-time augmentation as inference with hidden parameters following given prior distributions. The second part calculates the diversity of the prediction results for an augmented test image, and it is used for estimation of the uncertainty related to image transformations and noise. This is detailed in Section 2.3.

2.1 Image acquisition model

The image acquisition model describes the process by which the observed images have been obtained. This process is confronted with a lot of factors that can be related or unrelated to the imaged object, such as blurring, down-sampling, spatial transformation, and system noise. While blurring and down-sampling are commonly considered for image super-resolution [17], in the context of image recognition they have a relatively lower impact. Therefore, we focus on the spatial transformation and noise, with adding intensity changes being a straightforward extension.

$$X = T_\beta(X_0) + e \quad (1)$$

where $X_0$ is an underlying image in a different position and orientation, i.e., a hidden variable. $T_\beta$ is a transformation operator that is applied to $X_0$, $\beta$ is the set of parameters of the transformation, and $e$ represents the noise that is added to the transformed image. $X$ denotes the observed image that is used for inference at test time.
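A minimal sketch of sampling from this acquisition model for a 2D image is given below, with the transformation family (flip, rotation, scaling) and noise level chosen to mirror the augmentation settings used in the experiments later in the paper; the helper names and implementation details are our own assumptions:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def sample_beta_and_noise():
    """Draw transformation parameters (beta) and a noise level from the priors."""
    return dict(flip=rng.random() < 0.5,
                angle=rng.uniform(0.0, 360.0),  # r ~ U(0, 2*pi), in degrees here
                scale=rng.uniform(0.8, 1.2),    # s ~ U(0.8, 1.2)
                sigma=0.05)                     # e ~ N(0, 0.05)

def acquire(x0, p):
    """X = T_beta(X0) + e for a 2D image x0; cropping/padding after the
    zoom is omitted for brevity."""
    x = x0[:, ::-1] if p['flip'] else x0
    x = ndimage.zoom(x, p['scale'], order=1)
    x = ndimage.rotate(x, p['angle'], reshape=False, order=1)
    return x + rng.normal(0.0, p['sigma'], size=x.shape)
```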
For image segmentation, YandY0are discretized label maps, and they are covariant withthe spatial transformation, i.e., Y=T(Y0).2.2 Inference with hidden variablesIn the context of deep learning, let f()be the function represented by a neural network, and represent the parameters learned from a set of training images with their corresponding annotations.In a standard formulation, the label Yof a test image Xis inferred by:Y=f(;X) (3)SinceXis only one of many possible observations of the underlying image X0, direct inference withXmay lead to a biased result affected by the specific transformation and noise associated with X. Toaddress this problem, we infer with the help of X0instead:Y=T(Y0) =Tf;X0)=Tf;T1(Xe)(4)where the exact values of andeforXare unknown. Instead of finding a deterministic predictionofX, we alternatively compute the distribution of Yconsidering the distributions of ande.P(Y) =P Tf;T1(Xe)!;whereP;ePe (5)To obtain the final prediction for X, we calculate the expectation of Yusing the distribution P(Y).E(Y) =ZyP(y)dy=ZP;ePeTf;T1(Xe)P()P(e)dde (6)3CalculatingE(Y)with Eq. (6)is computationally expensive, as andemay take continuous valuesandPis a complex joint distribution of different types of transformations. Alternatively, we estimateE(Y)by using Monte Carlo simulation:E(Y)1NNXn=1yn=1NNXn=1Tnf;T1n(Xen), wherenP;enPe (7)whereNis the total number of simulation runs. In each simulation run, we first randomly samplenandenfrom the prior distributions PandPe, respectively. Then we obtain one possible hiddenimage withnandenbased on Eq. (2), and feed it into the trained network to get its prediction,which is transformed with nto obtainynaccording to Eq. (4). Therefore, this is an inferenceprocedure with test-time augmentation.2.3 Uncertainty estimation with test-time augmentationThe uncertainty is estimated by measuring how diverse the predictions for a given image are. Boththe variance and entropy of the distribution P(Y)can be used to estimate uncertainty. However,variance is not representative in the context of multi-modal distributions. In this paper we use entropyfor uncertainty estimation.H(Y) =ZP(y)lnP(y)dy (8)With the Monte Carlo simulation in Eq. (7), we can approximate H(Y)from the simulation resultsY=fy1;y2;:::;y Ng. Suppose there are Munique values inYand the frequency of the m-th uniquevalue is ^pm, thenH(Y)is approximated as:H(Y)MXm=1^pmln(^pm) (9)For segmentation tasks, pixel-wise uncertainty estimation is desirable. Let Yidenote the predictedlabel for the i-th pixel. With the Monte Carlo simulation, a set of values for Yiare obtainedYi=fyi1;yi2;:::;yiNg. The entropy of the distribution of Yiis therefore approximated as:H(Yi)MXm=1^pimln(^pim) (10)where ^pimis the frequency of the m-th unique value in Yi.3 ExperimentsWe validated our proposed testing and uncertainty estimation method with two segmentation tasks:2D fetal brain segmentation from MRI and 3D brain tumor segmentation from multi-modal MRI. Inboth tasks, we show how the inference with test-time augmentation affects segmentation accuracy, andanalyze the uncertainty of the segmentation results. For a given trained CNN model, we comparedfour ways of testing: 1) the proposed test-time augmentation (TTA) for prediction, 2) test-timedropout (TTD) [ 2] where the output is an ensemble of Npredictions with random dropout at testtime, 3) a combination of TTA and TTD where both TTA and TTD are used in all the testing runs,and 4) a single-prediction baseline that obtains the prediction without TTA and TTD. 
3 Experiments

We validated our proposed testing and uncertainty estimation method with two segmentation tasks: 2D fetal brain segmentation from MRI and 3D brain tumor segmentation from multi-modal MRI. In both tasks, we show how inference with test-time augmentation affects segmentation accuracy, and analyze the uncertainty of the segmentation results. For a given trained CNN model, we compared four ways of testing: 1) the proposed test-time augmentation (TTA) for prediction, 2) test-time dropout (TTD) [2], where the output is an ensemble of $N$ predictions with random dropout at test time, 3) a combination of TTA and TTD, where both TTA and TTD are used in all the testing runs, and 4) a single-prediction baseline that obtains the prediction without TTA and TTD. For the first three methods, the uncertainty was obtained by Eq. (10) with $N$ predictions. For TTD and TTA + TTD, the dropout probability was set to a typical value of 0.5. The prediction error rate and the Dice score between a segmentation result and the ground truth were used as quantitative measurements of segmentation accuracy.

3.1 2D fetal brain segmentation from MRI

3.1.1 Data and implementation

We collected clinical T2-weighted MRI scans of 60 fetuses in the second trimester with Single-Shot Fast Spin Echo (SSFSE). The data for each fetus contained three stacks of 2D slices acquired in axial, sagittal, and coronal views respectively, with pixel size 0.63 mm to 1.58 mm and slice thickness 3 mm to 6 mm. The gestational age ranged from 19 weeks to 33 weeks. We used 120 stacks from 40 patients for training, 12 stacks from 4 patients for validation, and 48 stacks from 16 patients for testing. Manual segmentation results of these images were used as the ground truth. We normalized each stack by its intensity mean and standard deviation, and resampled each slice to a pixel size of 1.0 mm.

We used the 2D networks FCN (Fully Convolutional Network) [9], U-Net [13], and P-Net [15]. The networks were implemented in TensorFlow using NiftyNet [3]. During training, we used Adaptive Moment Estimation (Adam) to adjust the learning rate, which was initialized as $10^{-3}$, with batch size 5, weight decay $10^{-7}$, and 10k iterations. We augmented the data by flipping along each axis with a probability of 0.5, rotation with an angle $r \sim U(0, 2\pi)$, scaling with a factor $s \sim U(0.8, 1.2)$, and adding random noise with $e \sim N(0, 0.05)$, as a median-filter smoothed version of a normalized image in our dataset has a standard deviation around 0.95.

Figure 1: Visual comparison of different testing methods (baseline, TTD, TTA, TTA + TTD) for 2D segmentation of the fetal brain, in (a) in-plane and (b) through-plane views. First row: segmentation results (green curve) and ground truth (yellow curve). Second row: uncertainty maps with color bars. The white arrows show an area with high certainty that is nevertheless mis-segmented. TTD: test-time dropout, TTA: test-time augmentation.

Figure 2: Dice distributions of segmentation results with different testing methods for five example stacks of 2D slices of fetal brain MRI.
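For reference, the Dice score used throughout these evaluations can be computed as follows; this is a minimal numpy sketch, with the smoothing constant our own choice to avoid division by zero:

```python
import numpy as np

def dice_score(pred, truth, eps=1e-8):
    """Dice overlap between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)
```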
The uncertaintymaps of TTA + TTD look similar to that of TTA.5Figure 3: Dice of 2D fetal brain segmentation with the change of Monte Carlo simulation runs N.Figure 4: Normalized joint histogram of prediction uncertainty and error rate for 2D fetal brainsegmentation. The average error rates at different uncertainty levels are depicted by the red curves.The dashed ellipses show that TTA reduces the occurrence of overconfident incorrect predictions.To quantitatively evaluate the segmentation results, we measured Dice score of each prediction fordifferent testing methods. Fig. 2 shows examples of five stacks of fetal brain MRI. The resultswere based on the same trained model of U-Net. Note that the baseline had only one prediction foreach image, and the Monte Carlo simulation number Nwas 20 for TTD, TTA and TTA + TTD. Itcan be observed that for each case, the Dice of TTD distributes nearly to that of the baseline. Incomparison, the Dice distribution of TTA has a higher average and larger variance, which shows thatTTA outperforms TTD in improving segmentation accuracy. Fig. 2 also shows that the performanceof TTA + TTD is close to that of TTA.We also measured the performance of different network structures with FCN [ 9], U-Net [ 13] andP-Net [ 15], and investigated how the segmentation accuracy changes with the increase of the MonteCarlo simulation runs N. The results measured with all the testing images are shown in Fig. 3. Wefound that for all of these three networks, the segmentation accuracy of TTD remains close to that ofthe baseline. For TTA and TTA + TTD, an improvement of segmentation accuracy can be observedwhenNincreases from 1 to 10. When Nis larger than 20, the segmentation accuracy for these twomethods reaches a plateau.To study the correlation between prediction uncertainty and accuracy, we measured the joint his-tograms of uncertainty and error rate for TTD, TTA, and TTA + TTD. Each histogram was obtainedby statistically calculating the error rate of pixels at different uncertainty levels in each slice. Theresults based on U-Net with N= 20 are shown in Fig. 4, where the joint histograms have beennormalized by the number of total pixels in the testing images for visualization. We calculated theaverage error rate at each uncertainty level, leading to a curve of error rate as a function of uncertainty,i.e., the red curves in Fig. 4. This figure shows that the majority of pixels have a low uncertainty witha small error rate. When the uncertainty increases, the error rate also improves gradually. However,when the prediction uncertainty is low, TTD has a steeper increase of error rate than TTA, whichdemonstrates that TTA has fewer overconfident incorrect predictions. The dashed ellipses in Fig. 4also show the different levels of overconfident incorrect predictions for these testing methods.6TTDTTATTA + TTDBaselineFLAIR and ground truthFigure 5: Visual comparison of different testing methods for 3D brain tumor segmentation. First row:FLAIR image and segmentation results (green: edema, yellow: enhancing core, red: necrotic core).Second row: ground truth and uncertainty maps.3.2 3D brain tumor segmentation from multi-modal MRI3.2.1 Data and implementationWe used the BraTS 20173[14] training dataset that consisted of volumetric images from 285 studies,with ground truth provided by the organizers. We randomly selected 20 studies for validation and50 studies for testing, and used the remaining for training. 
3.2 3D brain tumor segmentation from multi-modal MRI

3.2.1 Data and implementation

We used the BraTS 2017 [14] training dataset (available at http://www.med.upenn.edu/sbia/brats2017.html), which consisted of volumetric images from 285 studies, with ground truth provided by the organizers. We randomly selected 20 studies for validation and 50 studies for testing, and used the remaining studies for training. For each study, there were four co-registered scans: T1-weighted, contrast-enhanced T1-weighted (T1c), T2-weighted, and Fluid Attenuation Inversion Recovery (FLAIR) images. All the images were skull-stripped and resampled to an isotropic 1 mm^3 resolution. The task was to segment these multi-modal images into multiple classes: edema, enhancing core, necrotic core, and the background (Fig. 5). We used 3D U-Net [1] and V-Net [11] implemented with NiftyNet [3], and employed Adam during training with initial learning rate $10^{-3}$, batch size 2, weight decay $10^{-7}$, and 20k iterations. We augmented the data by flipping along each axis with a probability of 0.5, rotation with an angle along each axis $r \sim U(0, 2\pi)$, scaling with a factor $s \sim U(0.8, 1.2)$, and adding random noise with $e \sim N(0, 0.05)$, based on the reduced standard deviation of a median-filtered version of a normalized image.

3.2.2 Segmentation results with uncertainty

Fig. 5 shows an example of segmentation results with different testing methods that used the same trained 3D U-Net model. The Monte Carlo simulation number $N$ was 40 for TTD, TTA, and TTA + TTD. It can be observed that the baseline method led to an over-segmentation of the necrotic core in the edema region. Compared with TTD, TTA and TTA + TTD had a better ability to correct this mis-segmentation. Fig. 5 also shows that TTA and TTA + TTD obtained their segmentations with higher uncertainties than TTD, especially in the region that was mis-segmented by the baseline.

We measured the error rate in each testing image at different uncertainty levels, and obtained the normalized joint histogram of prediction uncertainty and error rate. Fig. 6 shows the results based on 3D U-Net with $N = 40$. The red curve shows the average error rate as a function of prediction uncertainty. Fig. 6 shows that the average error rate increases with the growth of uncertainty. However, TTD has a higher average error rate than TTA and TTA + TTD when the uncertainty is low (< 0.6).

Figure 6: Normalized joint histogram of prediction uncertainty and error rate for 3D brain tumor segmentation. The average error rates at different uncertainty levels are depicted by the red curves.

Following typical evaluation methods for the BraTS dataset, we calculated the Dice scores for three structures: 1) the whole tumor, including edema, enhancing core, and necrotic core; 2) the tumor core, without edema; and 3) the enhancing tumor core [14]. We found that for 3D U-Net and V-Net, the performance of the multi-prediction testing methods reaches a plateau when $N$ is larger than 40. Table 1 shows the evaluation results with $N = 40$.
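Mapping a predicted label map to these three structures is a small bookkeeping step. The sketch below assumes the standard BraTS integer label coding (1 = necrotic core, 2 = edema, 4 = enhancing core), which is our assumption rather than something stated in the paper:

```python
import numpy as np

def brats_structures(label_map):
    """Binary masks for whole tumor, tumor core and enhancing core,
    assuming BraTS-style labels: 1 necrotic core, 2 edema, 4 enhancing."""
    whole_tumor = np.isin(label_map, [1, 2, 4])
    tumor_core = np.isin(label_map, [1, 4])
    enhancing_core = (label_map == 4)
    return whole_tumor, tumor_core, enhancing_core
```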
Table 1: Dice (%) of 3D brain tumor segmentation by 3D U-Net [1] and V-Net [11] with different testing methods, reported as mean ± standard deviation. WT: whole tumor, TC: tumor core, EC: enhancing core.

| Method | 3D U-Net WT | 3D U-Net TC | 3D U-Net EC | V-Net WT | V-Net TC | V-Net EC |
|---|---|---|---|---|---|---|
| Baseline | 87.69 ± 5.65 | 78.72 ± 17.96 | 74.49 ± 20.35 | 86.92 ± 6.90 | 76.38 ± 19.15 | 74.17 ± 26.04 |
| TTD | 88.22 ± 5.87 | 79.25 ± 17.90 | 75.75 ± 21.03 | 87.04 ± 6.92 | 76.61 ± 19.27 | 74.29 ± 26.01 |
| TTA | 88.39 ± 5.74 | 79.54 ± 17.11 | 75.94 ± 20.64 | 87.52 ± 6.36 | 76.93 ± 19.37 | 74.55 ± 26.03 |
| TTA + TTD | 88.52 ± 5.95 | 79.61 ± 17.02 | 75.70 ± 20.41 | 87.60 ± 6.25 | 76.86 ± 19.26 | 74.69 ± 25.98 |

It can be observed that for both networks, the multi-prediction methods lead to better performance than the baseline with a single prediction, and TTA outperforms TTD in terms of average Dice score for all three structures.

4 Discussion and conclusion

In our mathematical formulation of test-time augmentation based on an image acquisition model, we explicitly modeled spatial transformations and image noise. However, it can easily be extended to include more general transformations, such as elastic deformations [1], or to add a simulated bias field for MRI. In addition to the variation of possible values of model parameters, the prediction result is also dependent on the input data, e.g., image noise and transformations related to the object. Therefore, a good uncertainty estimation should take these factors into consideration. Figs. 1 and 5 show that model uncertainty alone is likely to yield overconfident incorrect predictions, and TTA plays an important role in reducing such predictions. We have demonstrated TTA based on the image acquisition model for image segmentation tasks, but it is general for different image recognition tasks, such as image classification, object detection, and regression. For regression tasks, where the outputs are not discretized category labels, the variation of the output distribution might be more suitable than entropy for uncertainty estimation.

In conclusion, we proposed a theoretical and mathematical formulation of test-time augmentation for medical image segmentation. With this formulation, we obtain the prediction by estimating its expectation with Monte Carlo simulation and modeling prior distributions of the parameters in an image acquisition model. The formulation also leads to a transformation-based uncertainty in the prediction. Experiments showed that TTA based on our formulation leads to higher segmentation accuracy than a single-prediction baseline and dropout-based multiple predictions, and demonstrated that our uncertainty estimation with TTA helps to reduce the overconfident incorrect predictions encountered with model-based uncertainty estimation.

Acknowledgments

This work was supported through an Innovative Engineering for Health award by the Wellcome Trust (WT101957); Engineering and Physical Sciences Research Council (EPSRC) (NS/A000027/1, EP/H046410/1, EP/J020990/1, EP/K005278), Wellcome/EPSRC [203145Z/16/Z], the National
Computer Methods and Programs in Biomedicine , 158:113–122, 2018.[4]Hongsheng Jin, Zongyao Li, Ruofeng Tong, and Lanfen Lin. A deep 3D residual CNN for false positivereduction in pulmonary nodule detection. Medical Physics , 2018.[5]Alex Kendall and Yarin Gal. What uncertainties do we need in Bayesian deep learning for computervision? In NIPS , pages 5580–5590, 2017.[6]Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutionalneural networks. In NIPS , pages 1097–1105, 2012.[7]Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictiveuncertainty estimation using deep ensembles. In NIPS , pages 6405–6416, 2017.[8]Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi,Mohsen Ghafoorian, Jeroen A.W.M. van der Laak, Bram Van Ginneken, and Clara I. Sánchez. A surveyon deep learning in medical image analysis. Medical Image Analysis , 42:60–88, 2017.[9]Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmenta-tion. In CVPR , pages 3431–3440, 2015.[10] Kazuhisa Matsunaga, Akira Hamada, Akane Minagawa, and Hiroshi Koga. Image classificationof melanoma, nevus and seborrheic keratosis by deep neural network ensemble. arXiv preprintarXiv:1703.03108 , 2017.[11] Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi. V-Net: Fully convolutional neural networksfor volumetric medical image segmentation. In IC3DV , pages 565–571, 2016.[12] Ilija Radosavovic, Piotr Dollár, Ross Girshick, Georgia Gkioxari, and Kaiming He. Data distillation:towards omni-supervised learning. arXiv preprint arXiv:1712.04440 , 2017.[13] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedicalimage segmentation. In MICCAI , pages 234–241, 2015.[14] Bakas Spyridon, Hamed Akbari, Aristeidis Sotiras, Michel Bilello, Martin Rozycki, Justin S. Kirby, John B.Freymann, Keyvan Farahani, and Christos Davatzikos. Advancing the cancer genome atlas glioma MRIcollections with expert segmentation labels and radiomic features. Nature Scientific Data , 2017.[15] Guotai Wang, Wenqi Li, Maria A. Zuluaga, Rosalind Pratt, Premal A. Patel, Michael Aertsen, TomDoel, Anna L. David, Jan Deprest, Sebastien Ourselin, and Tom Vercauteren. Interactive medical imagesegmentation using deep learning with image-specific fine-tuning. IEEE Transactions on Medical Imaging ,PP(99):1–1, 2018.[16] Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M. Summers.Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification andlocalization of common thorax diseases. In CVPR , pages 3462–3471, 2017.[17] Linwei Yue, Huanfeng Shen, Jie Li, Qiangqiang Yuan, Hongyan Zhang, and Liangpei Zhang. Imagesuper-resolution: The techniques, applications, and future. Signal Processing , 128:389–408, 2016.[18] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. Mixup: Beyond empirical riskminimization. arXiv preprint arXiv:1710.09412 , pages 1–11, 2017.[19] Yinhao Zhu and Nicholas Zabaras. Bayesian deep convolutional encoder-decoder networks for surrogatemodeling and uncertainty quantification. arXiv preprint arXiv:1801.06879 , 2018.9
B1WRAm4AG
Good topic to delve into, but we do not learn too much from this study
2: Marginally below acceptance threshold
This paper investigates a heuristic trick that has been used a lot in deep learning system design, known as test-time augmentation.

Pro:
- Interesting to analyze this aspect/trick of optimizing the performance of deep networks (and possibly other methods too) in more detail
- Well written

Comments:
- The fetal experiments use 2D networks for a 3D segmentation task and the results show through-plane artifacts. This makes the results hardly relevant. There are obvious ways to improve the results here, so why focus on TTA?
- The literature review seems outdated. TTA has been used a lot; many papers mention and use it but do not refer to it in the abstract or title. Like ensembling, it seems to be a standard trick to eke out a bit more performance at the expense of some extra computation at test time. The authors describe it as a rarely used technique, but I think after http://benanne.github.io/2015/03/17/plankton.html it has been a standard component in the DL engineer's toolkit.
- I expect there must be some relationship between the use of augmentations during training and the added value of TTA. If a model is trained with a lot of augmentations, it should start to give very similar output for augmentations at test time, and TTA should not add much. It would be interesting to investigate this in a paper like this.
- The differences in Table 1 are very small; are they significant?
2: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Test-time augmentation with uncertainty estimation for deep learning-based medical image segmentation ### Paper Abstract Data augmentation has been widely used for training deep learning systems for medical image segmentation and plays an important role in obtaining robust and transformation-invariant predictions. However, it has seldom been used at test time for segmentation and not been formulated in a consistent mathematical framework. In this paper, we first propose a theoretical formulation of test-time augmentation for deep learning in image recognition, where the prediction is obtained through estimating its expectation by Monte Carlo simulation with prior distributions of parameters in an image acquisition model that involves image transformations and noise. We then propose a novel uncertainty estimation method based on the formulated test-time augmentation. Experiments with segmentation of fetal brains and brain tumors from 2D and 3D Magnetic Resonance Images (MRI) showed that 1) our test-time augmentation outperforms a single-prediction baseline and dropout-based multiple predictions, and 2) it provides a better uncertainty estimation than calculating the model-based uncertainty alone and helps to reduce overconfident incorrect predictions. ### Paper Keywords ["data augmentation", "uncertainty estimation", "image segmentation", "deep learning"] ### Paper Content Test-time augmentation with uncertainty estimationfor deep learning-based medical image segmentationGuotai WangUniversity College Londonguotai.wang.14@ucl.ac.ukWenqi LiUniversity College Londonwenqi.li@ucl.ac.ukMichael AertsenyKU Leuvenmichael.aertsen@uzleuven.beJan DeprestyKU Leuvenjan.deprest@uzleuven.beSébastien OurselinUniversity College Londons.ourselin@ucl.ac.ukTom VercauterenUniversity College Londont.vercauteren@ucl.ac.ukAbstractData augmentation has been widely used for training deep learning systems formedical image segmentation and plays an important role in obtaining robust andtransformation-invariant predictions. However, it has seldom been used at test timefor segmentation and not been formulated in a consistent mathematical framework.In this paper, we first propose a theoretical formulation of test-time augmentationfor deep learning in image recognition, where the prediction is obtained throughestimating its expectation by Monte Carlo simulation with prior distributions ofparameters in an image acquisition model that involves image transformationsand noise. We then propose a novel uncertainty estimation method based on theformulated test-time augmentation. Experiments with segmentation of fetal brainsand brain tumors from 2D and 3D Magnetic Resonance Images (MRI) showed that1) our test-time augmentation outperforms a single-prediction baseline and dropout-based multiple predictions, and 2) it provides a better uncertainty estimation thancalculating the model-based uncertainty alone and helps to reduce overconfidentincorrect predictions.1 IntroductionIn recent years, deep learning has become the state-of-the-art method for medical image recognitiontasks such as image classification, object detection and segmentation [ 8]. As a type of data-drivenapproach, it learns features automatically, without explicitly modeling the complex variations ofimages. 
From the perspective of image acquisition, the acquired image may contain noise related tothe environment, and different viewpoints can lead to transformed versions of the same object. Itis desirable to enable a recognition system to be robust against noise and transformations, leadingto noise-invariant and transformation-invariant recognition, e.g., rotation-, translation- and scale-invariant. Convolutional Neural Networks (CNN) are designed to be translation-invariant by sharingDepartment of Medical Physics and Biomedical Engineering, Wellcome EPSRC Centre for Interventionaland Surgical Sciences (WEISS), University College London, London, WC1E 6BT, UK. Tom Vercauteren is alsowith KU Leuven.yDepartment of Obstetrics & Gynaecology, and Radiology, University Hospitals KU Leuven, 3000 Leuven,Belgium. Jan Deprest is also with WEISS and Institute for Women’s Health, University College London.1st Conference on Medical Imaging with Deep Learning (MIDL 2018), Amsterdam, The Netherlands.weights at different positions of the image. Dropout techniques [ 2] have been used to make the modelmore robust to noise. However, CNNs are not inherently invariant to more general transformations.To alleviate this problem, many researchers have tried to collect a training dataset that is as large aspossible in order to include a large variation of image contexts to train the model. When collectingsuch a large dataset is difficult or impossible, data augmentation is commonly used to enlarge arelatively small dataset by applying transformations to its samples to create new ones for training [ 6],and this helps to improve invariance to spatial transformations at test time. The transformations foraugmentation typically include flipping, cropping, rotating, and scaling training images. Krizhevskyet al. [ 6] also altered the intensities of the training images for data augmentation. In [ 1,13], elasticdeformations were used for biomedical image segmentation. Recently, convex combination of pairsof samples has been proposed as a way of data augmentation for training [18].While data augmentation is typically employed for training of CNNs, using it at test time has seldombeen investigated. Only few studies have empirically found that combining predictions of multipletransformed versions of a test image helps to improve the performance. For example, Matsunaga etal. [10] geometrically transformed test images for skin lesion classification. Radosavovic et al. [ 12]used a single model to predict multiple transformed copies of unlabeled images for data distillation.Jin et al. [ 4] tested on samples extended by rotation and translation for pulmonary nodule detection.However, all these methods used data augmentation for testing as an ad hoc method, without detailedformulation or theoretical explanation.In addition to robustness to imaging conditions, uncertainty estimation plays a critical role in medicalimage recognition. For example, for chest X-ray image classification [ 16], a testing result with highuncertainty may need a human expert to give a decision. In a segmentation task, the predicted labelsof pixels near the boundary of organs are likely to be uncertain [ 15], which can be used to guide userinteractions. Several methods have been proposed for the estimation of model uncertainty. ExactBayesian models offer a mathematically grounded method to infer model uncertainty, but they arehard to implement for CNNs. 
Alternatively, it has been shown that dropout can be cast as a Bayesian approximation to represent model uncertainty [2]. In [19], Stein Variational Gradient Descent (SVGD) was used to perform approximate Bayesian inference on uncertain CNN parameters. In [7], ensembles of multiple models were proposed for uncertainty estimation. However, these methods only consider the uncertainty resulting from the trained models, and tend to produce overconfident incorrect predictions. Kendall et al. [5] proposed a framework based on Bayesian deep learning to model uncertainties related not only to network parameters but also to image noise. In addition to the model and image noise, the prediction uncertainty of the observed image may also be related to viewpoints or transformations of the inherent object, as such factors also affect the prediction output, leading to more uncertainty that depends on the input. To the best of our knowledge, the uncertainty related to viewpoints or transformations has rarely been investigated for deep learning with CNNs.

Summary of contributions: Our contribution in this paper is two-fold. First, we propose a theoretical formulation of test-time augmentation for deep learning. We represent an image as a result of an acquisition process which involves geometric transformations and image noise. We model the hidden parameters of the image acquisition process with prior distributions, and infer the prediction output for a given image by estimating its expectation with a Monte Carlo simulation process. The formulation is a mathematical explanation of test-time augmentation that is general for image recognition tasks. Second, we propose a novel uncertainty estimation method based on the formulated test-time augmentation for image recognition tasks. We demonstrate the effect of test-time augmentation with 2D and 3D segmentation tasks, and show that our proposed method provides a better uncertainty estimation with fewer overconfident incorrect predictions than using model-based uncertainty.

2 Methods

The proposed method with test-time augmentation includes two parts. The first part is a mathematical representation of ensembles of predictions of multiple transformed versions of the input. In Section 2.1, we represent an image as a result of an image acquisition model with hidden parameters. In Section 2.2, we formulate test-time augmentation as inference with hidden parameters following given prior distributions. The second part calculates the diversity of the prediction results of an augmented test image, and it is used for estimation of the uncertainty related to image transformations and noise. This will be detailed in Section 2.3.

2.1 Image acquisition model

The image acquisition model describes the process by which the observed images have been obtained. This process is confronted with a lot of factors that can be related or unrelated to the imaged object, such as blurring, down-sampling, spatial transformation, and system noise. While blurring and down-sampling are commonly considered for image super-resolution [17], in the context of image recognition they have a relatively lower impact. Therefore, we focus on the spatial transformation and noise, with adding intensity changes being a straightforward extension.

X = T_β(X_0) + e    (1)

where X_0 is an underlying image in a different position and orientation, i.e., a hidden variable. T_β is a transformation operator that is applied to X_0, β is the set of parameters of the transformation, and e represents the noise that is added to the transformed image. X denotes the observed image that is used for inference at test time.
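To make Eq. (1) concrete, below is a minimal simulation of the acquisition process under the stated assumptions, using 90-degree rotations as the transform family and Gaussian noise; the function name and parameter values are hypothetical, not the authors' implementation:

using Random

# Simulate one observation X = T_β(X_0) + e from a hidden image X_0.
function acquire(X0::AbstractMatrix; σ=0.05)
    T = rand([identity, rotl90, rot180, rotr90])  # spatial transform T_β
    e = σ .* randn(size(X0))                      # noise e ~ N(0, σ)
    T(X0) .+ e                                    # Eq. (1)
end

X0 = rand(64, 64)   # stand-in for the underlying image
X  = acquire(X0)    # one observed image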
Though the transformations can be in spatial, intensity or feature space, in this work we only study the impact of reversible spatial transformations (e.g., flipping, scaling, rotation and translation), which are the most common types of transformations occurring during image acquisition and used for data augmentation purposes. Let T_β⁻¹ denote the inverse transformation of T_β, then we have:

X_0 = T_β⁻¹(X - e)    (2)

Similarly to data augmentation, we assume that X and X_0 follow the same distribution. In a given application, this assumption leads to some prior distributions of the transformation parameters and noise. For example, in a 2D slice of fetal brain Magnetic Resonance Images (MRI), the orientation of the fetal brain can range among all the possible directions in a 2D plane, therefore the rotation angle r can be modeled with a uniform prior distribution r ~ U(0, 2π). The image noise is commonly modeled as a Gaussian distribution, i.e., e ~ N(μ, σ), where μ and σ are the mean and standard deviation respectively. Let P_β and P_e represent the prior distributions of β and e respectively, therefore we have β ~ P_β and e ~ P_e.

Let Y and Y_0 be the labels related to X and X_0 respectively. For image classification, Y and Y_0 are categorical variables, and they should be invariant with regard to transformations and noise, therefore Y = Y_0. For image segmentation, Y and Y_0 are discretized label maps, and they are covariant with the spatial transformation, i.e., Y = T_β(Y_0).

2.2 Inference with hidden variables

In the context of deep learning, let f(·) be the function represented by a neural network, and θ represent the parameters learned from a set of training images with their corresponding annotations. In a standard formulation, the label Y of a test image X is inferred by:

Y = f(θ, X)    (3)

Since X is only one of many possible observations of the underlying image X_0, direct inference with X may lead to a biased result affected by the specific transformation and noise associated with X. To address this problem, we infer with the help of X_0 instead:

Y = T_β(Y_0) = T_β(f(θ, X_0)) = T_β(f(θ, T_β⁻¹(X - e)))    (4)

where the exact values of β and e for X are unknown. Instead of finding a deterministic prediction of X, we alternatively compute the distribution of Y considering the distributions of β and e.

P(Y) = P(T_β(f(θ, T_β⁻¹(X - e)))), where β ~ P_β, e ~ P_e    (5)

To obtain the final prediction for X, we calculate the expectation of Y using the distribution P(Y).

E(Y) = ∫ y P(y) dy = ∫∫ T_β(f(θ, T_β⁻¹(X - e))) P(β) P(e) dβ de    (6)

Calculating E(Y) with Eq. (6) is computationally expensive, as β and e may take continuous values and P_β is a complex joint distribution of different types of transformations. Alternatively, we estimate E(Y) by using Monte Carlo simulation:

E(Y) ≈ (1/N) Σ_{n=1..N} y_n = (1/N) Σ_{n=1..N} T_{β_n}(f(θ, T_{β_n}⁻¹(X - e_n))), where β_n ~ P_β, e_n ~ P_e    (7)

where N is the total number of simulation runs. In each simulation run, we first randomly sample β_n and e_n from the prior distributions P_β and P_e, respectively. Then we obtain one possible hidden image with β_n and e_n based on Eq. (2), and feed it into the trained network to get its prediction, which is transformed with β_n to obtain y_n according to Eq. (4). Therefore, this is an inference procedure with test-time augmentation.
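As a runnable illustration of the Monte Carlo estimate in Eq. (7), the sketch below restricts the transform family to 90-degree rotations and flips (whose inverses are exact) and uses a placeholder predict function standing in for the trained network f(θ, ·); all names here are hypothetical assumptions:

using Random, Statistics

# Placeholder "network": a soft foreground score per pixel (hypothetical).
predict(img) = 1 ./ (1 .+ exp.(-(img .- 0.5)))

# A few reversible spatial transforms T_β and their inverses T_β⁻¹.
flipud(A) = reverse(A; dims=1)
transforms = [(identity, identity),
              (rotl90, rotr90),
              (rot180, rot180),
              (flipud, flipud)]

# Monte Carlo TTA: sample β_n and e_n, invert, predict, re-apply T_β.
function tta_expectation(X; N=20, σ=0.05)
    preds = map(1:N) do _
        T, Tinv = rand(transforms)
        e = σ .* randn(size(X))   # e_n ~ N(0, σ)
        X0 = Tinv(X .- e)         # Eq. (2)
        T(predict(X0))            # Eq. (4)
    end
    mean(preds)                   # Eq. (7)
end

X = rand(64, 64)
E_Y = tta_expectation(X)          # soft prediction, same size as X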
2.3 Uncertainty estimation with test-time augmentation

The uncertainty is estimated by measuring how diverse the predictions for a given image are. Both the variance and entropy of the distribution P(Y) can be used to estimate uncertainty. However, variance is not representative in the context of multi-modal distributions. In this paper we use entropy for uncertainty estimation.

H(Y) = -∫ P(y) ln P(y) dy    (8)

With the Monte Carlo simulation in Eq. (7), we can approximate H(Y) from the simulation results Y = {y_1, y_2, ..., y_N}. Suppose there are M unique values in Y and the frequency of the m-th unique value is p̂_m, then H(Y) is approximated as:

H(Y) ≈ -Σ_{m=1..M} p̂_m ln(p̂_m)    (9)

For segmentation tasks, pixel-wise uncertainty estimation is desirable. Let Y^i denote the predicted label for the i-th pixel. With the Monte Carlo simulation, a set of values for Y^i are obtained: Y^i = {y^i_1, y^i_2, ..., y^i_N}. The entropy of the distribution of Y^i is therefore approximated as:

H(Y^i) ≈ -Σ_{m=1..M} p̂^i_m ln(p̂^i_m)    (10)

where p̂^i_m is the frequency of the m-th unique value in Y^i.

3 Experiments

We validated our proposed testing and uncertainty estimation method with two segmentation tasks: 2D fetal brain segmentation from MRI and 3D brain tumor segmentation from multi-modal MRI. In both tasks, we show how the inference with test-time augmentation affects segmentation accuracy, and analyze the uncertainty of the segmentation results. For a given trained CNN model, we compared four ways of testing: 1) the proposed test-time augmentation (TTA) for prediction, 2) test-time dropout (TTD) [2] where the output is an ensemble of N predictions with random dropout at test time, 3) a combination of TTA and TTD where both TTA and TTD are used in all the testing runs, and 4) a single-prediction baseline that obtains the prediction without TTA and TTD. For the first three methods, the uncertainty was obtained by Eq. (10) with N predictions. For TTD and TTA + TTD, the dropout probability was set as a typical value of 0.5. Prediction error rate and the Dice score between a segmentation result and the ground truth were used for quantitative measurements of segmentation accuracy.
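The pixel-wise entropy of Eq. (10) can be computed directly from the N sampled label maps; a minimal sketch follows (the function name is hypothetical):

# Pixel-wise entropy over N Monte Carlo label maps (Eq. 10).
function pixel_entropy(labels::Vector{<:AbstractMatrix})
    N = length(labels)
    H = zeros(size(labels[1]))
    for i in eachindex(labels[1])
        counts = Dict{Any,Int}()
        for l in labels
            counts[l[i]] = get(counts, l[i], 0) + 1
        end
        H[i] = -sum(c/N * log(c/N) for c in values(counts))
    end
    H
end

# Example: 20 simulated binary segmentations of a 4x4 image.
maps = [rand(0:1, 4, 4) for _ in 1:20]
U = pixel_entropy(maps)   # zero where all runs agree, up to ln(2) otherwise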
3.1 2D fetal brain segmentation from MRI

3.1.1 Data and implementation

We collected clinical T2-weighted MRI scans of 60 fetuses in the second trimester with Single-Shot Fast Spin Echo (SSFSE). The data for each fetus contained three stacks of 2D slices acquired in axial, sagittal and coronal views respectively, with pixel size 0.63 mm to 1.58 mm and slice thickness 3 mm to 6 mm. The gestational age ranged from 19 weeks to 33 weeks. We used 120 stacks of 40 patients for training, 12 stacks of 4 patients for validation and 48 stacks of 16 patients for testing. Manual segmentation results of these images were used as the ground truth. We normalized each stack by its intensity mean and standard deviation, and resampled each slice with pixel size 1.0 mm.

[Figure 1: Visual comparison of different testing methods for 2D segmentation of the fetal brain; columns: Baseline, TTD, TTA, TTA+TTD; panels: (a) in-plane, (b) through-plane. First row: segmentation results (green curve) and ground truth (yellow curve). Second row: uncertainty maps with color bars. The white arrows show an area with high certainty while mis-segmented. TTD: test-time dropout, TTA: test-time augmentation.]

[Figure 2: Dice distributions of segmentation results with different testing methods for five example stacks of 2D slices of fetal brain MRI.]

We used 2D networks of Fully Convolutional Network (FCN) [9], U-Net [13] and P-Net [15]. The networks were implemented in TensorFlow using NiftyNet [3]. During training, we used Adaptive Moment Estimation (Adam) to adjust the learning rate that was initialized as 10^-3, with batch size 5, weight decay 10^-7 and iteration number 10k. We augmented the data by flipping along each axis with a probability of 0.5, rotation with an angle r ~ U(0, 2π), scaling with a factor s ~ U(0.8, 1.2), and adding random noise with e ~ N(0, 0.05), as a median-filter smoothed version of a normalized image in our dataset has a standard deviation around 0.95.

3.1.2 Segmentation results with uncertainty

Fig. 1 shows a visual comparison of four different testing methods for a fetal brain image. The results were based on the same trained model of U-Net, and the Monte Carlo simulation number N was 20 for TTD, TTA, and TTA + TTD. The first row presents segmentation results compared with the ground truth. It shows the baseline obtained an obviously mis-segmented region outside the brain in Fig. 1(a), and the difference between the baseline and TTD is hardly observable. In comparison, TTA achieved better results than TTD. The difference between TTA and TTA + TTD is tiny. The second row presents the uncertainty maps for the segmentation results. For TTD, most of the uncertain segmentations are located near the border of the segmented foreground, while the pixels with a larger distance to the border have a very high confidence. This leads to a lot of overconfident incorrect segmentations, as shown by the white arrows in Fig. 1(a). In comparison, TTA obtained a larger uncertain area that is mainly corresponding to mis-segmented regions of the baseline. The uncertainty maps of TTA + TTD look similar to that of TTA.

[Figure 3: Dice of 2D fetal brain segmentation with the change of Monte Carlo simulation runs N.]

[Figure 4: Normalized joint histogram of prediction uncertainty and error rate for 2D fetal brain segmentation. The average error rates at different uncertainty levels are depicted by the red curves. The dashed ellipses show that TTA reduces the occurrence of overconfident incorrect predictions.]

To quantitatively evaluate the segmentation results, we measured Dice score of each prediction for different testing methods. Fig. 2 shows examples of five stacks of fetal brain MRI. The results were based on the same trained model of U-Net. Note that the baseline had only one prediction for each image, and the Monte Carlo simulation number N was 20 for TTD, TTA and TTA + TTD. It can be observed that for each case, the Dice of TTD distributes nearly to that of the baseline. In comparison, the Dice distribution of TTA has a higher average and larger variance, which shows that TTA outperforms TTD in improving segmentation accuracy. Fig. 2 also shows that the performance of TTA + TTD is close to that of TTA.

We also measured the performance of different network structures with FCN [9], U-Net [13] and P-Net [15], and investigated how the segmentation accuracy changes with the increase of the Monte Carlo simulation runs N. The results measured with all the testing images are shown in Fig. 3. We found that for all of these three networks, the segmentation accuracy of TTD remains close to that of the baseline. For TTA and TTA + TTD, an improvement of segmentation accuracy can be observed when N increases from 1 to 10. When N is larger than 20, the segmentation accuracy for these two methods reaches a plateau.

To study the correlation between prediction uncertainty and accuracy, we measured the joint histograms of uncertainty and error rate for TTD, TTA, and TTA + TTD. Each histogram was obtained by statistically calculating the error rate of pixels at different uncertainty levels in each slice.
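A minimal sketch of that per-level computation (the binning scheme and names are assumptions, not the authors' exact procedure):

# Average error rate at each uncertainty level (cf. the red curves in Fig. 4).
# `unc` holds per-pixel uncertainties, `err` per-pixel correctness errors.
function error_rate_by_uncertainty(unc::AbstractVector, err::AbstractVector{Bool}; nbins::Int=20)
    lo, hi = extrema(unc)
    sums, counts = zeros(nbins), zeros(Int, nbins)
    for (u, e) in zip(unc, err)
        b = clamp(1 + floor(Int, (u - lo) / (hi - lo + eps()) * nbins), 1, nbins)
        sums[b] += e
        counts[b] += 1
    end
    [c > 0 ? s / c : NaN for (s, c) in zip(sums, counts)]
end

unc = rand(10_000)                  # stand-in uncertainty values
err = rand(10_000) .< 0.3 .* unc    # errors more likely at high uncertainty
rates = error_rate_by_uncertainty(unc, err)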
The results based on U-Net with N = 20 are shown in Fig. 4, where the joint histograms have been normalized by the number of total pixels in the testing images for visualization. We calculated the average error rate at each uncertainty level, leading to a curve of error rate as a function of uncertainty, i.e., the red curves in Fig. 4. This figure shows that the majority of pixels have a low uncertainty with a small error rate. When the uncertainty increases, the error rate also improves gradually. However, when the prediction uncertainty is low, TTD has a steeper increase of error rate than TTA, which demonstrates that TTA has fewer overconfident incorrect predictions. The dashed ellipses in Fig. 4 also show the different levels of overconfident incorrect predictions for these testing methods.

[Figure 5: Visual comparison of different testing methods for 3D brain tumor segmentation; columns: FLAIR and ground truth, Baseline, TTD, TTA, TTA + TTD. First row: FLAIR image and segmentation results (green: edema, yellow: enhancing core, red: necrotic core). Second row: ground truth and uncertainty maps.]

3.2 3D brain tumor segmentation from multi-modal MRI

3.2.1 Data and implementation

We used the BraTS 2017³ [14] training dataset that consisted of volumetric images from 285 studies, with ground truth provided by the organizers. We randomly selected 20 studies for validation and 50 studies for testing, and used the remaining for training. For each study, there were four scans of T1-weighted, contrast enhanced T1-weighted (T1c), T2-weighted and Fluid Attenuation Inversion Recovery (FLAIR) images, and they had been co-registered. All the images were skull-stripped and re-sampled to an isotropic 1 mm³ resolution. The task was to segment these multi-modal images into multiple classes: edema, enhancing core, necrotic core and the background (Fig. 5). We used 3D U-Net [1] and V-Net [11] implemented with NiftyNet [3], and employed Adam during training with initial learning rate 10^-3, batch size 2, weight decay 10^-7 and iteration number 20k. We augmented the data by flipping along each axis with a probability of 0.5, rotation with an angle along each axis r ~ U(0, 2π), scaling with a factor s ~ U(0.8, 1.2), and adding random noise with e ~ N(0, 0.05) based on the reduced standard deviation of a median-filtered version of a normalized image.

3.2.2 Segmentation results with uncertainty

Fig. 5 shows an example of segmentation results with different testing methods that used the same trained model of 3D U-Net. The Monte Carlo simulation number N was 40 for TTD, TTA, and TTA + TTD. It can be observed that the baseline method led to an over segmentation of the necrotic core in the edema region. Compared with TTD, TTA and TTA + TTD had a better ability to correct this mis-segmentation. Fig. 5 also shows that TTA and TTA + TTD obtained their segmentations with higher uncertainties than TTD, especially in the region that was mis-segmented by the baseline.

We measured the error rate in each testing image at different uncertainty levels, and obtained the normalized joint histogram of prediction uncertainty and error rate. Fig. 6 shows the results based on 3D U-Net with N = 40. The red curve shows the average error rate as a function of prediction uncertainty. Fig. 6 shows that the average error rate increases with the growth of uncertainty. However, TTD has a higher average error rate than TTA and TTA + TTD when the uncertainty is low (< 0.6).
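Since Dice overlap is the headline metric in the evaluation that follows, here is the standard definition as a short sketch (binary masks assumed; returns NaN when both masks are empty):

# Dice similarity coefficient between two binary masks.
dice(a::AbstractArray{Bool}, b::AbstractArray{Bool}) =
    2 * count(a .& b) / (count(a) + count(b))

dice(trues(3, 3), trues(3, 3))   # 1.0 for a perfect match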
Following typical evaluation methods used for the BraTS dataset, we calculated the Dice scores for three structures: 1) the whole tumor including edema, enhancing core and necrotic core, 2) the tumor core without edema, and 3) enhancing tumor core [14]. We found that for 3D U-Net and V-Net, the performance of the multi-prediction testing methods reaches a plateau when N is larger than 40. Table 1 shows the evaluation results with N = 40. It can be observed that for both networks, multi-prediction methods lead to better performance than the baseline with a single prediction, and TTA outperforms TTD in terms of average Dice score for all the three structures.

³http://www.med.upenn.edu/sbia/brats2017.html

[Figure 6: Normalized joint histogram of prediction uncertainty and error rate for 3D brain tumor segmentation. The average error rates at different uncertainty levels are depicted by the red curves.]

Table 1: Dice (%) of 3D brain tumor segmentation by 3D U-Net [1] and V-Net [11] with different testing methods. WT: whole tumor, TC: tumor core, EC: enhancing core.

                        3D U-Net                                      V-Net
            WT            TC            EC            WT            TC            EC
Baseline    87.69±5.65    78.72±17.96   74.49±20.35   86.92±6.90    76.38±19.15   74.17±26.04
TTD         88.22±5.87    79.25±17.90   75.75±21.03   87.04±6.92    76.61±19.27   74.29±26.01
TTA         88.39±5.74    79.54±17.11   75.94±20.64   87.52±6.36    76.93±19.37   74.55±26.03
TTA + TTD   88.52±5.95    79.61±17.02   75.70±20.41   87.60±6.25    76.86±19.26   74.69±25.98

4 Discussion and conclusion

In our mathematical formulation of test-time augmentation based on an image acquisition model, we explicitly modeled spatial transformations and image noise. However, it can be easily extended to include more general transformations such as elastic deformations [1] or add a simulated bias field for MRI. In addition to the variation of possible values of model parameters, the prediction result is also dependent on the input data, e.g., image noise and transformations related to the object. Therefore, a good uncertainty estimation should take these factors into consideration. Fig. 1 and 5 show that model uncertainty alone is likely to obtain overconfident incorrect predictions, and TTA plays an important role in reducing such predictions. We have demonstrated TTA based on the image acquisition model for image segmentation tasks, but it is general for different image recognition tasks, such as image classification, object detection, and regression. For regression tasks where the outputs are not discretized category labels, the variation of the output distribution might be more suitable than entropy for uncertainty estimation.

In conclusion, we proposed a theoretical and mathematical formulation of test-time augmentation for medical image segmentation. With the formulation, we obtain the prediction by estimating its expectation with Monte Carlo simulation and modeling prior distributions of parameters in an image acquisition model.
The formulation also leads to transformation-based uncertainty in the prediction.Experiments showed that TTA based on our formulation leads to higher segmentation accuracythan a single-prediction baseline and dropout-based multiple predictions, and demonstrated that ouruncertainty estimation with TTA helps to reduce overconfident incorrect predictions encountered bymodel-based uncertainty estimation.AcknowledgmentsThis work was supported through an Innovative Engineering for Health award by the WellcomeTrust (WT101957); Engineering and Physical Sciences Research Council (EPSRC) (NS/A000027/1,EP/H046410/1, EP/J020990/1, EP/K005278), Wellcome/EPSRC [203145Z/16/Z], the National8Institute for Health Research University College London Hospitals Biomedical Research Centre(NIHR BRC UCLH/UCL), the Royal Society [RG160569], and hardware donated by NVIDIA.References[1]Ahmed Abdulkadir, Soeren S. Lienkamp, Thomas Brox, and Olaf Ronneberger. 3D U-Net: Learning densevolumetric segmentation from sparse annotation. In MICCAI , pages 424–432, 2016.[2]Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: representing model uncertaintyin deep learning. In ICML , pages 1050–1059, 2016.[3]Eli Gibson, Wenqi Li, Carole Sudre, Lucas Fidon, Dzhoshkun I. Shakir, Guotai Wang, Zach Eaton-Rosen,Robert Gray, Tom Doel, Yipeng Hu, Tom Whyntie, Parashkev Nachev, Marc Modat, Dean C. Barratt,Sébastien Ourselin, M. Jorge Cardoso, and Tom Vercauteren. NiftyNet: A deep-learning platform formedical imaging. Computer Methods and Programs in Biomedicine , 158:113–122, 2018.[4]Hongsheng Jin, Zongyao Li, Ruofeng Tong, and Lanfen Lin. A deep 3D residual CNN for false positivereduction in pulmonary nodule detection. Medical Physics , 2018.[5]Alex Kendall and Yarin Gal. What uncertainties do we need in Bayesian deep learning for computervision? In NIPS , pages 5580–5590, 2017.[6]Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutionalneural networks. In NIPS , pages 1097–1105, 2012.[7]Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictiveuncertainty estimation using deep ensembles. In NIPS , pages 6405–6416, 2017.[8]Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi,Mohsen Ghafoorian, Jeroen A.W.M. van der Laak, Bram Van Ginneken, and Clara I. Sánchez. A surveyon deep learning in medical image analysis. Medical Image Analysis , 42:60–88, 2017.[9]Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmenta-tion. In CVPR , pages 3431–3440, 2015.[10] Kazuhisa Matsunaga, Akira Hamada, Akane Minagawa, and Hiroshi Koga. Image classificationof melanoma, nevus and seborrheic keratosis by deep neural network ensemble. arXiv preprintarXiv:1703.03108 , 2017.[11] Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi. V-Net: Fully convolutional neural networksfor volumetric medical image segmentation. In IC3DV , pages 565–571, 2016.[12] Ilija Radosavovic, Piotr Dollár, Ross Girshick, Georgia Gkioxari, and Kaiming He. Data distillation:towards omni-supervised learning. arXiv preprint arXiv:1712.04440 , 2017.[13] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedicalimage segmentation. In MICCAI , pages 234–241, 2015.[14] Bakas Spyridon, Hamed Akbari, Aristeidis Sotiras, Michel Bilello, Martin Rozycki, Justin S. Kirby, John B.Freymann, Keyvan Farahani, and Christos Davatzikos. 
Advancing the cancer genome atlas glioma MRIcollections with expert segmentation labels and radiomic features. Nature Scientific Data , 2017.[15] Guotai Wang, Wenqi Li, Maria A. Zuluaga, Rosalind Pratt, Premal A. Patel, Michael Aertsen, TomDoel, Anna L. David, Jan Deprest, Sebastien Ourselin, and Tom Vercauteren. Interactive medical imagesegmentation using deep learning with image-specific fine-tuning. IEEE Transactions on Medical Imaging ,PP(99):1–1, 2018.[16] Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M. Summers.Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification andlocalization of common thorax diseases. In CVPR , pages 3462–3471, 2017.[17] Linwei Yue, Huanfeng Shen, Jie Li, Qiangqiang Yuan, Hongyan Zhang, and Liangpei Zhang. Imagesuper-resolution: The techniques, applications, and future. Signal Processing , 128:389–408, 2016.[18] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. Mixup: Beyond empirical riskminimization. arXiv preprint arXiv:1710.09412 , pages 1–11, 2017.[19] Yinhao Zhu and Nicholas Zabaras. Bayesian deep convolutional encoder-decoder networks for surrogatemodeling and uncertainty quantification. arXiv preprint arXiv:1801.06879 , 2018.9<|im_end|> <|im_start|>assistant ### Review Title Good topic to delve into, but we do not learn too much from this study ### Review Text This paper investigates a heuristic trick that has been used a lot in deep learning system design known as test time augmentation. Pro: - Interesting to analyze this aspect/trick of optimizing the performance of deep networks (and possibly other methods too) in more detail - Well written Comments: - The fetal experiments use 2D networks for a 3D segmentation task and the results show through plane artifacts. This makes the results hardly relevant. There are obvious ways to improve the results here, so why focus on TTA? - The literature review seems outdated. TTA has been used a lot, many papers mention and use it but do not refer to it in the abstract or title. Like ensembling, it seems to be a standard trick to eke out a bit more performance at the expense of some extra computation at test time. The authors describe it as some rarely used technique, but I think after http://benanne.github.io/2015/03/17/plankton.html it has been a standard component in the DL engineer's toolkit. - I expect there must be some relationship between the use of augmentations during training and the added value of TTA. If a model is trained with a lot of augmentations it should start to give very similar output for augmentation at test time, and TTA should not add much. It would be interesting to investigate this in a paper like this. - The differences in Table 1 are very small, are they significant? ### Review Rating 2: Marginally below acceptance threshold ### Review Confidence 2: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
ryllpA21vB
NeurIPS.cc/2019/Workshop/Program_Transformations
2019
Differentiation of High-Level Language Semantics
["Michael Innes"]
Though analytic differentiation (AD) is a program transformation, AD tools have typically supported only very limited program representations, consisting of primitive mathematical operations and basic structured control flow. Zygote, an AD for the Julia language, instead operates on Julia code. This presents an interesting challenge for the AD implementor: the program representation now contains not just mathematical operations, but arbitrary control flow, user-defined functions, recursion, data structures, mutation, metaprogramming, foreign function calls, specialised hardware, and even concurrency and parallelism primitives. This paper explains how Zygote handles these high-level features safely and efficiently, making an unusually large set of Julia programs differentiable.
["zygote", "differentiation", "language semantics differentiation", "language semantics", "analytic differentiation", "program transformation", "ad tools", "limited program representations", "primitive mathematical operations"]
Differentiation of High-Level Language Semantics

Michael Innes
Julia Computing, Inc.
Edinburgh, UK
mike.j.innes@gmail.com

Abstract

Though analytic differentiation (AD) is a program transformation, AD tools have typically supported only very limited program representations, consisting of primitive mathematical operations and basic structured control flow. Zygote, an AD for the Julia language, instead operates on Julia code. This presents an interesting challenge for the AD implementor: the program representation now contains not just mathematical operations, but arbitrary control flow, user-defined functions, recursion, data structures, mutation, metaprogramming, foreign function calls, specialised hardware, and even concurrency and parallelism primitives. This paper explains how Zygote handles these high-level features safely and efficiently, making an unusually large set of Julia programs differentiable.

1 Introduction

Reverse-mode analytic differentiation (AD) is a program transformation [18]; AD tools transform an input program (the primal) into a new program that calculates derivatives (the adjoint)¹. However, AD tools have typically only supported simple program representations, consisting only of variable assignments and mathematical operations (in AD terminology, a 'Wengert list' [21]; in machine learning a 'graph' or 'trace'). More complex programming languages, such as Python and C++, are supported by tracing operations in the original language into a simpler representation; recent machine learning frameworks also augment the trace with basic structured control flow.²

Zygote [10] achieves performance and flexibility by differentiating Julia's syntax tree (AST) ahead of time. However, this means that it must handle any language feature that could appear in a differentiable program, from closures to concurrency. Zygote's core is a program transformation that handles arbitrary control flow, function calls and user-defined gradients; this paper details how we build on this foundation to support everything else.

Notable related work includes Tapenade [5] and Swift for TensorFlow [20], which support a useful subset of Fortran and Swift's semantics respectively, alongside Stalin∇ [16] and Myia [19], which extend AD with support for closures and recursion.

¹ In 'dynamic' or 'eager-mode' ADs, like autograd [14], the program transformation is more implicit, since it is interleaved with numerical evaluation. The adjoint trace need not be fully realised except when nesting derivatives.
² A paradigmatic example of this format is the XLA intermediate representation [1].

2 Custom Adjoints

Zygote's core transform is simple and mechanical, and only around 200 lines of code. Almost all of Zygote's semantics and functionality are provided via its library of custom adjoints, which have somewhat surprising expressive power. Defining a custom adjoint is similar to defining a normal Julia function, except that the definition must return both the output of the function and a pullback closure, which propagates gradients as in [11].

@adjoint a * b = (a*b, dc -> (dc'b, dc'a))

Alongside mathematical gradients, we can define utilities such as gradient hooks, which allow an arbitrary function to be applied to the gradient. For example, hook(-, x) reverses the sign of x; more generally it can also be used for gradient clipping and debugging.

hook(f, x) = x
@adjoint hook(f, x) = (x, dx -> (nothing, f(dx)))
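As an illustrative use of the hook utility just defined, gradient clipping can be expressed by passing a clipping function; a minimal sketch assuming the Zygote package is available (the definitions are restated so the snippet is self-contained):

using Zygote
using Zygote: @adjoint

hook(f, x) = x
@adjoint hook(f, x) = (x, dx -> (nothing, f(dx)))

# Clip each gradient component into [-1, 1] on the way back.
clip(g) = clamp.(g, -1, 1)

g, = gradient(x -> sum(hook(clip, x) .^ 2), [0.3, 5.0])
# Without the hook the gradient would be 2x = [0.6, 10.0];
# with it, the second component is clipped: g == [0.6, 1.0]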
The function nestlevel is able to do reflection on the gradient process itself; if called within a differentiated function it will return the order of differentiation being performed.

nestlevel() = 0
@adjoint nestlevel() = (nestlevel()+1, _ -> nothing)

A simple implementation of checkpointing is similarly straightforward.

checkpoint(f, x) = f(x)
@adjoint checkpoint(f, x) = (f(x), dy -> J(f, x)[2](dy))

All functionality in this paper is similarly implemented via custom adjoints.

3 Data Structures & Mutation

The most fundamental data structure is the cons cell, a tuple of two values like C = (x₁, x₂). If we call first(C) to retrieve the first element we must then find the gradient with respect to C in the adjoint program. We create an adjoint object C̄, which mirrors the structure of C while storing the gradient of each internal element (x̄₁, x̄₂), so the adjoint for y = first(C) is cons(ȳ, 0). This naturally generalises across different numbers of fields or names of accessor functions.

To handle mutation, consider a one-element 'box' structure B. We can get(B) to retrieve the current stored value, and set(B, x) to erase that value and replace it with x. The adjoint object B̄ is also a box. The pullback for get accumulates the gradient in the adjoint box while the pullback for set returns it, resetting the adjoint to 0. Any data structure can then be modelled as a combination of cons cells and boxes, though more efficient or direct representations and adjoints can easily be provided. Since the adjoints for data structures do not capture their inputs by value, later mutations do not invalidate the pullback.

Closures are just objects with a call method [4]; the fields of the object represent the closure's environment. In our compiler all functions actually accept a hidden environment argument—which may be empty as a special case—so both closures and higher-order functions are supported with no extra effort.

4 Concurrency and Parallelism

Julia supports a concurrency model based on communicating sequential processes (CSP, [6]). A zero-argument function or closure (a thunk) can be scheduled as a task (or coroutine), and executed independently of the main thread. Tasks communicate with each other through shared queues called channels. Typically, the main thread will create a series of tasks and wait for them all to finish before continuing.

Zygote makes CSP differentiable by the following transformation. Firstly, when a task is scheduled, its thunk f is replaced by J(f), producing a pullback. Once the task is complete, we associate it with an adjoint task which will run the pullback. During the reverse pass, we reach the point where the original task was awaited in the primal code, and schedule the adjoint task. The adjoint task executes and communicates with other adjoint tasks as needed, finally producing a gradient of the thunk f. Channels can be differentiated as in §3; for each channel c we create an empty adjoint channel c̄. Sending a value to c becomes receiving a sensitivity from c̄ and vice versa.

Julia supports shared-memory parallelism by multiplexing tasks onto OS threads, so support for tasks means that multithreaded code is also differentiable. Julia uses the same concepts, though a slightly different API, for distributed / multi-node parallelism, so the same techniques can be straightforwardly transferred to differentiation of distributed code.
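As a concrete sketch of the cons-cell scheme from §3 above (the Cons type, its field names and first_ are hypothetical, not Zygote internals), the adjoint object can be expressed with the named-tuple gradients Zygote uses for structs:

using Zygote
using Zygote: @adjoint

# A hypothetical cons cell with two fields.
struct Cons{A,B}
    head::A
    tail::B
end
first_(c::Cons) = c.head

# The pullback mirrors the structure of the cell: the output sensitivity ȳ
# lands in the slot that was read, and the other slot gets no gradient.
@adjoint first_(c::Cons) = first_(c), ȳ -> ((head = ȳ, tail = nothing),)

g, = gradient(c -> first_(c)^2, Cons(3.0, 4.0))
# g == (head = 6.0, tail = nothing)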
In an experimental setting we were able to achieve a 1.5× speedup when using two cores to get the gradient of a simple function using map-reduce parallelism.

Care must be taken that write/accumulate operations in the adjoint are atomic, since there may otherwise be a race condition due to multiple reads from the same array location in the primal. Differentiation of parallel code at other levels of abstraction, such as the level of parallel for loops or map-reduce, presents different challenges and opportunities [9, 8, 15, 3].

5 Mixed-Mode AD

Alternatives to reverse-mode AD have advantages in many situations, even when calculating gradients. For example, Julia's forward-mode AD [17] has constant memory overhead (compared to reverse mode's tape, linear in the number of instructions executed) and has minimal time overhead, making it ideal for long-running computations with a small number of inputs. Similarly, TaylorSeries.jl [2] can calculate arbitrary-order forward-mode derivatives in one shot. Mixed mode is exposed by writing forwarddiff(f, x). This calculates the same result as f(x), but additionally calculates the Jacobian via forward mode, stores it, and applies it during the backwards pass using a custom adjoint. Similarly, checkpointed AD is exposed via checkpoint(f, x) (§2). Zygote can be instructed to always use forward mode (or another AD technique) on a given function, or even to have heuristics for the best method, so that for users of a library, differentiation is efficient by default.

An equivalent problem is differentiating code in other languages, for example Python code invoked via PyCall.jl [13]. In this case, we can write an adjoint for the low-level pycall function which invokes a Python AD, capturing its tape in a pullback. To a user, calling imported Python functions inside a call to gradient then works transparently.
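A minimal sketch of how such a forward-over-reverse hook could be written (the name forwarddiff_ and the use of ForwardDiff.jl are illustrative assumptions, not Zygote's actual implementation):

using Zygote, ForwardDiff
using Zygote: @adjoint

# Evaluate f with forward mode, store the dense Jacobian, and replay
# its transpose against the incoming sensitivity in the pullback.
forwarddiff_(f, x::AbstractVector) = f(x)
@adjoint function forwarddiff_(f, x::AbstractVector)
    J = ForwardDiff.jacobian(f, x)   # computed once, on the forward pass
    return f(x), ȳ -> (nothing, J' * ȳ)
end

g, = gradient(x -> sum(forwarddiff_(v -> v .^ 2, x)), [1.0, 2.0])
# g == [2.0, 4.0], matching d/dx of sum(x.^2)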
The same is true of Julia’s other powerfulmetaprogramming and staging tools, such as generated functions.38 Hardware BackendsZygote transforms generic programs and mathematical expressions – written in terms of mathematicaloperators like,+etc. – into new generic programs that calculate a gradient. Thus Zygote iscompletely agnostic to the data types running through the program and how they are implemented orrepresented in memory. A Zygote program written for floating point numbers therefore works equallywell with rational numbers, arbitrary-precision floats and integers, measurements, hardware-specifictypes like BFloat16 , and combinations of these.julia> gradient(x -> x^2 + 3x + 1, 1/3)(3.6666666666666665,)julia> gradient(x -> x^2 + 3x + 1, 1//3)(11//3,)julia> gradient(x -> x^2 + 3x + 1, 1/3 ± 0.01)(3.6666666666666665 ± 0.02,)The same is true for arrays; the program gradient(x -> .(W*x .+ b), x) works equally wellwhetherW,bandxare dense arrays, sparse arrays, arrays backed by GPU memory, or distributedarrays stored over a cluster of hundreds of nodes. The operations , broadcasting and so on are calledon the adjoint arrays and thus launched on the GPU or cluster as appropriate.9 External LibrariesSupport for types and libraries distinguishes frameworks from programming languages, and wesupport these in differentiable programming too. For example, the Colors.jl package [ 7] providesrepresentations of RGB colours (among many other colour spaces), and functions over these colourspaces can be differentiated, even when they comprise hundreds of lines of code and many languagefeatures.julia> a = RGB(1, 0, 0); b = RGB(0, 1, 0);julia> gradient(a -> a.r^2, a)((r = 2.0f0, g = nothing, b = nothing),)julia> colordiff(a, b)86.60823557376344julia> gradient(b -> colordiff(a, b), b)((r = -1.77, g = 28.88, b = -0.04),)Aside from the correctness benefits of working with types, it is increasingly recognised that incorpo-rating existing knowledge and code into machine learning leads to richer and more powerful models;this is particularly valuable in scientific computing, where powerful explicit models exist for manysystems that need not be learned from scratch [12].10 ConclusionWe have demonstrated how Zygote, an AD for the Julia language, can support features of a high-levelprogramming language. We also show how the same techniques can be used to add advanced featuresto Zygote, such as checkpointed, mixed-mode and cross-language AD, without changes to Zygote’score source transform. We believe that Zygote’s unusual extensibility makes it an appealing target forresearch into advanced AD techniques.Languages for differentiable and probabilistic programming need not sacrifice modern programminglanguage design, or be restricted to simple DSLs. We hope that this work can inform the design bothof ADs for existing languages, and of new languages and IRs designed to support machine learningfrom the ground up.4References[1] XLA overview. tensorflow.org/performance/xla , 2018. Accessed: 2018-09-22.[2]L. Benet, D. Sanders, et al. TaylorSeries.jl. https://github.com/JuliaDiff/TaylorSeries.jl ,2018. Accessed: 2018-09-22.[3]H. M. Bücker, B. Lang, C. H. Bischof, et al. Bringing together automatic differentiation and openmp. InProceedings of the 15th international conference on Supercomputing , pages 246–251. ACM, 2001.[4]c2 Wiki Contributors. Closures and objects are equivalent. wiki.c2.com/?ClosuresAndObjectsAreEquivalent , 2018. Accessed: 2018-09-22.[5]L. Hascoet and V . Pascual. 
The tapenade automatic differentiation tool: principles, model, and specification.ACM Transactions on Mathematical Software (TOMS) , 39(3):20, 2013.[6]C. A. R. Hoare. Communicating sequential processes. In The origin of concurrent programming , pages413–443. Springer, 1978.[7]T. Holy, D. C. Jones, and contributors. Colors.jl. github.com/JuliaGraphics/Colors.jl , 2018.Accessed: 2018-09-22.[8]P. D. Hovland. Automatic differentiation of parallel programs . Number 2003. University of Illinois atUrbana-Champaign Champaign, IL, USA, 1997.[9]J. Hückelheim, P. Hovland, M. M. Strout, and J.-D. Müller. Reverse-mode algorithmic differentiation of anopenmp-parallel compressible flow solver. The International Journal of High Performance ComputingApplications , 33(1):140–154, 2019.[10] M. Innes. Don’t unroll adjoint: Differentiating ssa-form programs, 2018.[11] M. Innes. Flux: Elegant machine learning with julia. Journal of Open Source Software , 2018.[12] M. Innes, A. Edelman, K. Fischer, C. Rackauckas, E. Saba, V . B. Shah, and W. Tebbutt. A differentiableprogramming system to bridge machine learning and scientific computing. CoRR , abs/1907.07587, 2019.[13] S. G. Johnson et al. PyCall.jl. https://github.com/JuliaPy/PyCall.jl , 2018. Accessed: 2018-09-22.[14] D. Maclaurin, D. Duvenaud, and R. P. Adams. Autograd: Effortless gradients in numpy. In ICML 2015AutoML Workshop , 2015.[15] U. Naumann, L. Hascoët, C. Hill, P. Hovland, J. Riehme, and J. Utke. A framework for proving correctnessof adjoint message-passing programs. In European Parallel Virtual Machine/Message Passing InterfaceUsers’ Group Meeting , pages 316–321. Springer, 2008.[16] B. A. Pearlmutter and J. M. Siskind. Reverse-mode ad in a functional framework: Lambda the ultimatebackpropagator. ACM Transactions on Programming Languages and Systems (TOPLAS) , 30(2):7, 2008.[17] J. Revels, M. Lubin, and T. Papamarkou. Forward-mode automatic differentiation in julia. arXiv preprintarXiv:1607.07892 , 2016.[18] B. Speelpenning. Compiling fast partial derivatives of functions given by algorithms. Technical report,Illinois Univ., Urbana (USA). Dept. of Computer Science, 1980.[19] B. van Merriënboer, O. Breuleux, A. Bergeron, and P. Lamblin. Automatic differentiation in ml: Wherewe are and where we should be going. In Advances in Neural Information Processing Systems , pages8770–8780, 2018.[20] R. Wei, D. Zheng, et al. Swift for TensorFlow. github.com/tensorflow/swift , 2018. Accessed:2018-09-22.[21] R. E. Wengert. A simple automatic derivative evaluation program. Communications of the ACM , 7(8):463–464, 1964.5
Bke0oRWkOH
Solid implementation of AD in Julia
7: Good paper, accept
The article is a good overview of all the AD features that have been made to work in Zygote, an implementation of AD in Julia. The whole language (including recursion and higher-level functions), excluding macros themselves, is supported. Additional features have also been implemented, such as checkpointing, hooks, and mixed-mode AD. It is interesting to see that simple design decisions on the representations of data (cells and boxes) and derivatives (pullback closures) enabled different extensions. It would be great to learn about how those concepts needed to be specialized or optimized for different use cases, such as sparse arrays, asynchronous execution.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Differentiation of High-Level Language Semantics ### Paper Abstract Though analytic differentiation (AD) is a program transformation, AD tools have typically supported only very limited program representations, consisting of primitive mathematical operations and basic structured control flow. Zygote, an AD for the Julia language, instead operates on Julia code. This presents an interesting challenge for the AD implementor: the program representation now contains not just mathematical operations, but arbitrary control flow, user-defined functions, recursion, data structures, mutation, metaprogramming, foreign function calls, specialised hardware, and even concurrency and parallelism primitives. This paper explains how Zygote handles these high-level features safely and efficiently, making an unusually large set of Julia programs differentiable. ### Paper Keywords ["zygote", "differentiation", "language semantics differentiation", "language semantics", "analytic differentiation", "program transformation", "ad tools", "limited program representations", "primitive mathematical operations"] ### Paper Content Differentiation of High-Level Language SemanticsMichael InnesJulia Computing, Inc.Edinburgh, UKmike.j.innes@gmail.comAbstractThough analytic differentiation (AD) is a program transformation, AD tools havetypically supported only very limited program representations, consisting of prim-itive mathematical operations and basic structured control flow. Zygote, an ADfor the Julia language, instead operates on Julia code. This presents an interestingchallenge for the AD implementor: the program representation now contains notjust mathematical operations, but arbitrary control flow, user-defined functions,recursion, data structures, mutation, metaprogramming, foreign function calls,specialised hardware, and even concurrency and parallelism primitives. This pa-per explains how Zygote handles these high-level features safely and efficiently,making an unusually large set of Julia programs differentiable.1 IntroductionReverse-mode analytic differentiation (AD) is a program transformation [ 18]; AD tools transform aninput program (the primal) into a new program that calculates derivatives (the adjoint)1. However,AD tools have typically only supported simple program representations, consisting only of variableassignments and mathematical operations (in AD terminology, a ‘Wengert list’, [ 21]; in machinelearning a ‘graph’ or ‘trace’). More complex programming languages, such as Python and C++, aresupported by tracing operations in the original language into a simpler representation; recent machinelearning frameworks also augment the trace with basic structured control flow.2Zygote [ 10] achieves performance and flexibility by differentiating Julia’s syntax tree (AST) aheadof time. However, this means that it must handle any language feature that could appear in adifferentiable program, from closures to concurrency. 
Zygote’s core is a program transformation thathandles arbitrary control flow, function calls and user-defined gradients; this paper details how webuild on this foundation to support everything else.Notable related work includes Tapenade [ 5] and Swift for TensorFlow [ 20], which support a usefulsubset of Fortran and Swift’s semantics respectively, alongside Stalin r[16] and Myia [ 19], whichextend AD with support for closures and recursion.2 Custom AdjointsZygote’s core transform is simple and mechanical, and only around 200 lines of code. Almost allof Zygote’s semantics and functionality are provided via its library of custom adjoints, which havesomewhat surprising expressive power. Defining a custom adjoint is similar to defining a normal1In ‘dynamic’ or ‘eager-mode’ ADs, like autograd [ 14], the program transformation is more implicit, sinceit is interleaved with numerical evaluation. The adjoint trace need not be fully realised except when nestingderivatives.2A paradigmatic example of this format is the XLA intermediate representation, [1].33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.Julia function, except that the definition must return both the output of the function and a pullbackclosure, which propagates gradients as in [11].@adjoint a *b = (a*b, dc -> (dc'b, dc'a))Alongside mathematical gradients, we can define utilities such as gradient hooks, which allow anarbitrary function to be applied to the gradient. For example, hook(-, x) reverses the sign of x;more generally it can also be used for gradient clipping and debugging.hook(f, x) = x@adjoint hook(f, x) = (x, dx -> (nothing, f(dx)))The function nestlevel is able to do reflection on the gradient process itself; if called within adifferentiated function it will return the order of differentiation being performed.nestlevel() = 0@adjoint nestlevel() = (nestlevel()+1, _ -> nothing)A simple implementation of checkpointing is similarly straightforward.checkpoint(f, x) = f(x)@adjoint checkpoint(f, x) = (f(x), dy -> J(f, x)[2](dy))All functionality in this paper is similarly implemented via custom adjoints.3 Data Structures & MutationThe most fundamental data structure is the cons cell , a tuple of two values like C= (x1;x2). Ifwe call first(C) to retrieve the first element we must then find the gradient with respect to Cinthe adjoint program. We create an adjoint object C, which mirrors the structure of Cwhile storingthe gradient of each internal element (x1;x2), so the adjoint for y=first(C)iscons(y;0). Thisnaturally generalises across different numbers of fields or names of accessor functions.To handle mutation, consider a one-element ‘box’ structure B. We can get(B)to retrieve the currentstored value, and set(B;x)to erase that value and replace it with x. The adjoint object Bis also abox. The pullback for getaccumulates the gradient in the adjoint box while the pullback for setreturns it, reseting the adjoint to 0. Any data structure can then be modelled as a combination of conscells and boxes, though more efficient or direct representations and adjoints can easily be provided.Since the adjoints for data structures do not capture their inputs by value, later mutations do notinvalidate the pullback.Closures are just objects with a call method [ 4]; the fields of the object represent the closure’senvironment. 
In our compiler all functions actually accept a hidden environment argument—whichmay be empty as a special case—so both closures and higher-order functions are supported with noextra effort.4 Concurrency and ParallelismJulia supports a concurrency model based on communicating sequential processes (CSP, [ 6]). Azero-argument function or closure (a thunk) can be scheduled as a task (orcoroutine ), and executedindependently of the main thread. Tasks communicate with each other through shared queues calledchannels . Typically, the main thread will create a series of tasks and wait for them all to finish beforecontinuing.Zygote makes CSP differentiable by the following transformation. Firstly, when a task is scheduled,its thunkfis replaced byJ(f), producing a pullback. Once the task is complete, we associate it withan adjoint task which will run the pullback. During the reverse pass, we reach the point where theoriginal task was awaited in the primal code, and schedule the adjoint task. The adjoint task executesand communicates with other adjoint tasks as needed, finally producing a gradient of the thunk f.Channels can be differentiated as in §3; for each channel cwe create an empty adjoint channel c.Sending a value to cbecomes receiving a sensitivity from cand vice versa.2Julia supports shared-memory parallelism by multiplexing tasks onto OS threads, so support for tasksmeans that multithreaded code is also differentiable. Julia uses the same concepts, though a slightlydifferent API, for distributed / multi-node parellelism, so the same techniques can be straightforwardlytransferred to differentiation of distributed code. In an experimental setting we were able to achievea1:5speedup when using two cores to get the gradient of a simple function using map-reduceparallelism.Care must be taken that write/accumulate operations in the adjoint are atomic, since there mayotherwise be a race condition due to multiple reads from the same array location in the primal.Differentiation of parallel code at other levels of abstraction, such as the level of parallel forloops ormap-reduce, presents different challenges and opportunities [9, 8, 15, 3].5 Mixed-Mode ADAlternatives to reverse-mode AD have advantages in many situations, even when calculating gradients.For example, Julia’s forward-mode AD [ 17] has constant memory overhead (compared to reversemode’s tape, linear in the number of instructions executed) and has minimal time overhead, makingit ideal for long-running computations with a small number of inputs. Similarly, TaylorSeries.jl[2] can calculate arbitrary-order forward-mode derivatives in one shot. Mixed mode is exposed bywriting forwarddiff(f, x) . This calculates the same result as f(x), but additionally calculates theJacobian via forward mode, stores it, and applies it during the backwards pass using a custom adjoint.Similarly, checkpointed AD is exposed via checkpoint(f, x) (§2). Zygote can be instructed toalways use forward mode (or another AD technique) on a given function, or even to have heuristicsfor the best method, so that for users of a library, differentiation is efficient by default.An equivalent problem is differentiating code in other languages, for example Python code invokedvia PyCall.jl [ 13]. In this case, we can write an adjoint for the low-level pycall function whichinvokes a Python AD, capturing its tape in a pullback. 
To a user, calling imported Python functionsinside a call to gradient then works transparently.6 Complex DifferentiationZygote defines the sensitivity of a complex number z=x+yibyz= x+ yi. This definition isuseful for gradient descent since for small, real ,f(z+z)f(z) +zz, and thus the usualgradient update z:=zzlowers the loss. (This is equivalent to differentiating a pair of tworeals (x;y).) Zygote’s pre-defined rules for numerical operations (like and+) are automaticallyconsistent with this definition, so a only rule for one of real ,imag orconj is needed in addition forfull complex support.This sensitivity is not the true complex derivative@@z=@@x+@@iy=@@xi@@y, which (for holomorphicfunctions) will satisfy f(z+)f(z) +@f@z. By the Cauchy-Riemann equations,@f@zis conjugateto the sensitivity zof<f(z)making it straightforward and efficient to calculate. In the more generalnon-holomorphic case one needs either the equivalent 22real Jacobian or the two Wirtingerderivatives (@f@z;@f@z), both of which are readily derived from the sensitivities of <f(z)and=f(z).7 Staged ProgrammingZygote works well with Julia’s excellent meta-programming facilities. For example, many numericallibraries provide an einsum interface, allowing tensor operations to be expressed with a syntax basedon Einstein notation. The syntax is usually expressed as a string and, in dynamic interfaces likePyTorch, parsing the string incurs an overhead each time the expression is run. In Julia, Einsumcan be implemented as a macro , explicitly parsing the notation at compile time and leaving onlyraw tensor operations behind. Zygote sees only the final matrix multiply and sum operations, sothis has no overhead compared to writing them manually. The same is true of Julia’s other powerfulmetaprogramming and staging tools, such as generated functions.38 Hardware BackendsZygote transforms generic programs and mathematical expressions – written in terms of mathematicaloperators like,+etc. – into new generic programs that calculate a gradient. Thus Zygote iscompletely agnostic to the data types running through the program and how they are implemented orrepresented in memory. A Zygote program written for floating point numbers therefore works equallywell with rational numbers, arbitrary-precision floats and integers, measurements, hardware-specifictypes like BFloat16 , and combinations of these.julia> gradient(x -> x^2 + 3x + 1, 1/3)(3.6666666666666665,)julia> gradient(x -> x^2 + 3x + 1, 1//3)(11//3,)julia> gradient(x -> x^2 + 3x + 1, 1/3 ± 0.01)(3.6666666666666665 ± 0.02,)The same is true for arrays; the program gradient(x -> .(W*x .+ b), x) works equally wellwhetherW,bandxare dense arrays, sparse arrays, arrays backed by GPU memory, or distributedarrays stored over a cluster of hundreds of nodes. The operations , broadcasting and so on are calledon the adjoint arrays and thus launched on the GPU or cluster as appropriate.9 External LibrariesSupport for types and libraries distinguishes frameworks from programming languages, and wesupport these in differentiable programming too. 
For example, the Colors.jl package [7] provides representations of RGB colours (among many other colour spaces), and functions over these colour spaces can be differentiated, even when they comprise hundreds of lines of code and many language features.

julia> a = RGB(1, 0, 0); b = RGB(0, 1, 0);

julia> gradient(a -> a.r^2, a)
((r = 2.0f0, g = nothing, b = nothing),)

julia> colordiff(a, b)
86.60823557376344

julia> gradient(b -> colordiff(a, b), b)
((r = -1.77, g = 28.88, b = -0.04),)

Aside from the correctness benefits of working with types, it is increasingly recognised that incorporating existing knowledge and code into machine learning leads to richer and more powerful models; this is particularly valuable in scientific computing, where powerful explicit models exist for many systems that need not be learned from scratch [12].

10 Conclusion

We have demonstrated how Zygote, an AD for the Julia language, can support features of a high-level programming language. We also show how the same techniques can be used to add advanced features to Zygote, such as checkpointed, mixed-mode and cross-language AD, without changes to Zygote's core source transform. We believe that Zygote's unusual extensibility makes it an appealing target for research into advanced AD techniques.

Languages for differentiable and probabilistic programming need not sacrifice modern programming language design, or be restricted to simple DSLs. We hope that this work can inform the design both of ADs for existing languages, and of new languages and IRs designed to support machine learning from the ground up.

References

[1] XLA overview. tensorflow.org/performance/xla, 2018. Accessed: 2018-09-22.
[2] L. Benet, D. Sanders, et al. TaylorSeries.jl. https://github.com/JuliaDiff/TaylorSeries.jl, 2018. Accessed: 2018-09-22.
[3] H. M. Bücker, B. Lang, C. H. Bischof, et al. Bringing together automatic differentiation and OpenMP. In Proceedings of the 15th International Conference on Supercomputing, pages 246–251. ACM, 2001.
[4] c2 Wiki Contributors. Closures and objects are equivalent. wiki.c2.com/?ClosuresAndObjectsAreEquivalent, 2018. Accessed: 2018-09-22.
[5] L. Hascoet and V. Pascual. The Tapenade automatic differentiation tool: principles, model, and specification. ACM Transactions on Mathematical Software (TOMS), 39(3):20, 2013.
[6] C. A. R. Hoare. Communicating sequential processes. In The Origin of Concurrent Programming, pages 413–443. Springer, 1978.
[7] T. Holy, D. C. Jones, and contributors. Colors.jl. github.com/JuliaGraphics/Colors.jl, 2018. Accessed: 2018-09-22.
[8] P. D. Hovland. Automatic Differentiation of Parallel Programs. University of Illinois at Urbana-Champaign, IL, USA, 1997.
[9] J. Hückelheim, P. Hovland, M. M. Strout, and J.-D. Müller. Reverse-mode algorithmic differentiation of an OpenMP-parallel compressible flow solver. The International Journal of High Performance Computing Applications, 33(1):140–154, 2019.
[10] M. Innes. Don't unroll adjoint: Differentiating SSA-form programs, 2018.
[11] M. Innes. Flux: Elegant machine learning with Julia. Journal of Open Source Software, 2018.
[12] M. Innes, A. Edelman, K. Fischer, C. Rackauckas, E. Saba, V. B. Shah, and W. Tebbutt. A differentiable programming system to bridge machine learning and scientific computing. CoRR, abs/1907.07587, 2019.
[13] S. G. Johnson et al. PyCall.jl. https://github.com/JuliaPy/PyCall.jl, 2018. Accessed: 2018-09-22.
[14] D. Maclaurin, D. Duvenaud, and R. P. Adams. Autograd: Effortless gradients in NumPy. In ICML 2015 AutoML Workshop, 2015.
[15] U. Naumann, L. Hascoët, C. Hill, P. Hovland, J. Riehme, and J. Utke. A framework for proving correctness of adjoint message-passing programs. In European Parallel Virtual Machine/Message Passing Interface Users' Group Meeting, pages 316–321. Springer, 2008.
[16] B. A. Pearlmutter and J. M. Siskind. Reverse-mode AD in a functional framework: Lambda the ultimate backpropagator. ACM Transactions on Programming Languages and Systems (TOPLAS), 30(2):7, 2008.
[17] J. Revels, M. Lubin, and T. Papamarkou. Forward-mode automatic differentiation in Julia. arXiv preprint arXiv:1607.07892, 2016.
[18] B. Speelpenning. Compiling fast partial derivatives of functions given by algorithms. Technical report, Illinois Univ., Urbana (USA), Dept. of Computer Science, 1980.
[19] B. van Merriënboer, O. Breuleux, A. Bergeron, and P. Lamblin. Automatic differentiation in ML: Where we are and where we should be going. In Advances in Neural Information Processing Systems, pages 8770–8780, 2018.
[20] R. Wei, D. Zheng, et al. Swift for TensorFlow. github.com/tensorflow/swift, 2018. Accessed: 2018-09-22.
[21] R. E. Wengert. A simple automatic derivative evaluation program. Communications of the ACM, 7(8):463–464, 1964.

### Review Title
Solid implementation of AD in Julia

### Review Text
The article is a good overview of all the AD features that have been made to work in Zygote, an implementation of AD in Julia. The whole language (including recursion and higher-order functions), excluding macros themselves, is supported. Additional features have also been implemented, such as checkpointing, hooks, and mixed-mode AD. It is interesting to see that simple design decisions on the representations of data (cells and boxes) and derivatives (pullback closures) enabled different extensions. It would be great to learn about how those concepts needed to be specialized or optimized for different use cases, such as sparse arrays and asynchronous execution.

### Review Rating
7: Good paper, accept

### Review Confidence
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
thhdrl4IdMm
ICLR.cc/2021/Conference
2021
A Chain Graph Interpretation of Real-World Neural Networks
["Yuesong Shen", "Daniel Cremers"]
The last decade has witnessed a boom of deep learning research and applications achieving state-of-the-art results in various domains. However, most advances have been established empirically, and their theoretical analysis remains lacking. One major issue is that our current interpretation of neural networks (NNs) as function approximators is too generic to support in-depth analysis. In this paper, we remedy this by proposing an alternative interpretation that identifies NNs as chain graphs (CGs) and feed-forward as an approximate inference procedure. The CG interpretation specifies the nature of each NN component within the rich theoretical framework of probabilistic graphical models, while at the same time remains general enough to cover real-world NNs with arbitrary depth, multi-branching and varied activations, as well as common structures including convolution / recurrent layers, residual block and dropout. We demonstrate with concrete examples that the CG interpretation can provide novel theoretical support and insights for various NN techniques, as well as derive new deep learning approaches such as the concept of partially collapsed feed-forward inference. It is thus a promising framework that deepens our understanding of neural networks and provides a coherent theoretical formulation for future deep learning research.
["neural network interpretation", "chain graph", "deep learning theory", "probabilistic graphical model"]
ABSTRACT

The last decade has witnessed a boom of deep learning research and applications achieving state-of-the-art results in various domains. However, most advances have been established empirically, and their theoretical analysis remains lacking. One major issue is that our current interpretation of neural networks (NNs) as function approximators is too generic to support in-depth analysis. In this paper, we remedy this by proposing an alternative interpretation that identifies NNs as chain graphs (CGs) and feed-forward as an approximate inference procedure. The CG interpretation specifies the nature of each NN component within the rich theoretical framework of probabilistic graphical models, while at the same time remains general enough to cover real-world NNs with arbitrary depth, multi-branching and varied activations, as well as common structures including convolution / recurrent layers, residual block and dropout. We demonstrate with concrete examples that the CG interpretation can provide novel theoretical support and insights for various NN techniques, as well as derive new deep learning approaches such as the concept of partially collapsed feed-forward inference. It is thus a promising framework that deepens our understanding of neural networks and provides a coherent theoretical formulation for future deep learning research.

1 INTRODUCTION

During the last decade, deep learning (Goodfellow et al., 2016), the study of neural networks (NNs), has achieved ground-breaking results in diverse areas such as computer vision (Krizhevsky et al., 2012; He et al., 2016; Long et al., 2015; Chen et al., 2018), natural language processing (Hinton et al., 2012; Vaswani et al., 2017; Devlin et al., 2019), generative modeling (Kingma & Welling, 2014; Goodfellow et al., 2014) and reinforcement learning (Mnih et al., 2015; Silver et al., 2016), and various network designs have been proposed. However, neural networks have been treated largely as "black-box" function approximators, and their designs have chiefly been found via trial-and-error, with little or no theoretical justification. A major cause that hinders the theoretical analysis is the current overly generic modeling of neural networks as function approximators: simply interpreting a neural network as a composition of parametrized functions provides little insight to decipher the nature of its components or its behavior during the learning process.

In this paper, we show that a neural network can actually be interpreted as a probabilistic graphical model (PGM) called chain graph (CG) (Koller & Friedman, 2009), and feed-forward as an efficient approximate probabilistic inference on it. This offers specific interpretations for various neural network components, allowing for in-depth theoretical analysis and derivation of new approaches.

1.1 RELATED WORK

In terms of theoretical understanding of neural networks, a well known result based on the function approximator view is the universal approximation theorem (Goodfellow et al., 2016), however it only establishes the representational power of NNs. Also, there have been many efforts on alternative NN interpretations. One prominent approach identifies infinite width NNs as Gaussian processes (Neal, 1996; Lee et al., 2018), enabling kernel method analysis (Jacot et al., 2018). Other works also employ theories such as optimal transport (Genevay et al., 2017; Chizat & Bach, 2018) or mean field (Mei et al., 2019).
These approaches lead to interesting findings, however they tend to only hold under limited or unrealistic settings and have difficulties interpreting practical real-world NNs.

Figure 1: Neural networks can be interpreted as layered chain graphs where activation functions are determined by node distributions. Left: An example neural network interpreted as a chain graph with three chain components which represent its layers; Right: A variety of activation functions (softplus, ReLU, leaky ReLU) approximated by nodes following rectified Gaussian distributions (e, q as in Eq. (7)). We visualize the approximations stochastically by averaging over 200 samples.

Alternatively, some existing works study the post-hoc interpretability (Lipton, 2018), proposing methods to analyze the empirical behavior of trained neural networks: activation maximization (Erhan et al., 2009), typical input synthesis (Nguyen et al., 2016), deconvolution (Zeiler & Fergus, 2014), layer-wise relevance propagation (Bach et al., 2015), etc. These methods can offer valuable insights to the practical behavior of neural networks, however they represent distinct approaches and focuses, and are all limited within the function approximator view.

Our work links neural networks to probabilistic graphical models (Koller & Friedman, 2009), a rich theoretical framework that models and visualizes probabilistic systems composed of random variables (RVs) and their interdependencies. There are several types of graphical models. The chain graph model (also referred to as the LWF chain graph model) (Koller & Friedman, 2009; Lauritzen & Wermuth, 1989; Frydenberg, 1990) used in our work is a general form that unites directed and undirected variants, visualized as a partially directed acyclic graph (PDAG). Interestingly, there exists a series of works on constructing hierarchical graphical models for data-driven learning problems, such as sigmoid belief network (Neal, 1992), deep belief network (Hinton et al., 2006), deep Boltzmann machine (Salakhutdinov & Hinton, 2012) and sum product network (Poon & Domingos, 2011). As alternatives to neural networks, these models have shown promising potentials for generative modeling and unsupervised learning. Nevertheless, they are yet to demonstrate competitive performances over neural networks for discriminative learning.

Neural networks and graphical models have so far been treated as two distinct approaches in general. Existing works that combine them (Zheng et al., 2015; Chen et al., 2018; Lample et al., 2016) mainly treat either neural networks as function approximators for amortized inference, or graphical models as post-processing steps. Tang & Salakhutdinov (2013) create a hybrid model, the stochastic feed-forward neural network (SFNN), by concatenating deterministic neurons with stochastic Bernoulli random variables, in order to represent multimodal distributions. Some also consider neural networks as graphical models with deterministic hidden nodes (Buntine, 1994). However this is an atypical degenerate regime.
To the best of our knowledge, our work provides the first rigorous and comprehensive formulation of a (non-degenerate) graphical model interpretation for neural networks in practical use.

1.2 OUR CONTRIBUTIONS

The main contributions of our work are summarized as follows:

- We propose a layered chain graph representation of neural networks, interpret feed-forward as an approximate probabilistic inference procedure, and show that this interpretation provides an extensive coverage of practical NN components (Section 2);
- To illustrate its advantages, we show with concrete examples (residual block, RNN, dropout) that the chain graph interpretation enables coherent and in-depth theoretical support, and provides additional insights to various empirically established network structures (Section 3);
- Furthermore, we demonstrate the potential of the chain graph interpretation for discovering new approaches by using it to derive a novel stochastic inference method named partially collapsed feed-forward, and establish experimentally its empirical effectiveness (Section 4).

2 CHAIN GRAPH INTERPRETATION OF NEURAL NETWORKS

Without further delay, we derive the chain graph interpretation of neural networks in this section. We will state and discuss the main results here and leave the proofs in the appendix.

2.1 THE LAYERED CHAIN GRAPH REPRESENTATION

We start by formulating the so called layered chain graph that corresponds to neural networks we use in practice: Consider a system represented by $L$ layers of random variables $(X^1,\dots,X^L)$, where $X^l_i$ is the $i$-th variable node in the $l$-th layer, and denote $N_l$ the number of nodes in layer $l$. We assume that nodes $X^l_i$ in the same layer $l$ have the same distribution type characterized by a feature function $T^l$ that can be multidimensional. Also, we assume that the layers are ordered topologically and denote $Pa(X^l)$ the parent layers of $X^l$. To ease our discussion, we assume that $X^1$ is the input layer and $X^L$ the output layer (our formulation can easily extend to multi-input/output cases). A layered chain graph is then defined as follows:

Definition 1. A layered chain graph that involves $L$ layers of random variables $(X^1,\dots,X^L)$ is a chain graph that encodes the overall distribution $P(X^2,\dots,X^L \mid X^1)$ such that:

1. It can be factored into layerwise chain components $P(X^l \mid Pa(X^l))$ following the topological order, and nodes $X^l_i$ within each chain component $P(X^l \mid Pa(X^l))$ are conditionally independent given their parents (this results in bipartite chain components), thus allowing for further decomposition into nodewise conditional distributions $P(X^l_i \mid Pa(X^l))$. This means we have

$$P(X^2,\dots,X^L \mid X^1) = \prod_{l=2}^{L} P(X^l \mid Pa(X^l)) = \prod_{l=2}^{L} \prod_{i=1}^{N_l} P(X^l_i \mid Pa(X^l)); \qquad (1)$$

2. For each layer $l$ with parent layers $Pa(X^l) = \{X^{p_1},\dots,X^{p_n}\}$, $p_1,\dots,p_n \in \{1,\dots,l-1\}$, its nodewise conditional distributions $P(X^l_i \mid Pa(X^l))$ are modeled by pairwise conditional random fields (CRFs) with unary ($b^l_i$) and pairwise ($W^{p,l}_{j,i}$) weights (as we will see, they actually correspond to biases and weights in NN layers):

$$P(X^l_i \mid Pa(X^l)) = f^l\big(T^l(X^l_i),\ e^l_i(T^{p_1}(X^{p_1}),\dots,T^{p_n}(X^{p_n}))\big) \qquad (2)$$

with

$$e^l_i\big(T^{p_1}(X^{p_1}),\dots,T^{p_n}(X^{p_n})\big) = b^l_i + \sum_{p=p_1}^{p_n} \sum_{j=1}^{N_p} W^{p,l}_{j,i}\, T^p(X^p_j). \qquad (3)$$

Figure 1 Left illustrates an example three-layer network as layered chain graph and its chain component factorization. In Eq. (2), $f^l$ is an arbitrary function that represents a probability distribution.
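Note that Eq. (3) is just an affine function of the parent features, i.e. the usual neural network preactivation. The following minimal NumPy sketch makes this explicit; the helper name and array shapes are assumptions of this illustration, not part of the paper.

import numpy as np

def preactivation(parent_feats, weights, bias):
    """Eq. (3): e^l = b^l + sum over parents p of W^{p,l}^T T^p(x^p).

    parent_feats: list of parent feature vectors T^p(x^p), shape (N_p,).
    weights:      list of matrices W^{p,l}, shape (N_p, N_l).
    bias:         unary weights b^l, a NumPy array of shape (N_l,).
    """
    e = bias.copy()
    for T_p, W in zip(parent_feats, weights):
        e += W.T @ T_p  # pairwise terms sum_j W^{p,l}_{j,i} T^p(x^p_j)
    return e

With a single parent layer this reduces to the standard dense-layer preactivation W^T q + b.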
For exponential family distributions (Koller & Friedman, 2009), Eq. (2) simply becomes $P(X^l_i \mid Pa(X^l)) \propto \exp\big(T^l(X^l_i)^\top e^l_i(T^{p_1}(X^{p_1}),\dots,T^{p_n}(X^{p_n}))\big)$.

Note that a layered chain graph has a globally directed graph structure and has an equivalent modeling based on a directed graphical model (Bayesian network) (Koller & Friedman, 2009); we elaborate on this point for interested readers in Appendix A.

2.2 FEED-FORWARD AS APPROXIMATE PROBABILISTIC INFERENCE

To identify layered chain graphs with real-world neural networks, we need to show that they can behave the same way during inference and learning. For this, we establish the fact that feed-forward can actually be seen as performing an approximate probabilistic inference on a layered chain graph: Given an input sample $\tilde{x}^1$, we consider the problem of inferring the marginal distribution $Q^l_i$ of a node $X^l_i$ and its expected features $q^l_i$, defined as

$$Q^l_i(x^l_i \mid \tilde{x}^1) = P(X^l_i = x^l_i \mid X^1 = \tilde{x}^1), \qquad q^l_i = \mathbb{E}_{Q^l_i}[T^l(X^l_i)] \quad (q^1 = \tilde{x}^1). \qquad (4)$$

Consider a non-input layer $l$ with parent layers $p_1,\dots,p_n$; the independence assumptions encoded by the layered chain graph lead to the following recursive expression for marginal distributions $Q$:

$$Q^l_i(x^l_i \mid \tilde{x}^1) = \mathbb{E}_{Q^{p_1},\dots,Q^{p_n}}[P(x^l_i \mid Pa(X^l))]. \qquad (5)$$

However, the above expression is in general intractable, as it integrates over the entire admissible states of all parent nodes in $Pa(X^l)$. To proceed further, simplifying approximations are needed. Interestingly, by using linear approximations, we can obtain the following results (in case of discrete random variables the integration in Eq. (7) is replaced by summation):

Proposition 1. If we make the assumptions that the corresponding expressions are approximately linear w.r.t. parent features $T^{p_1}(X^{p_1}),\dots,T^{p_n}(X^{p_n})$, we obtain the following approximations:

$$Q^l_i(x^l_i \mid \tilde{x}^1) \approx f^l\big(T^l(x^l_i),\ e^l_i(q^{p_1},\dots,q^{p_n})\big); \qquad (6)$$

$$q^l_i \approx \int_{x^l_i} T^l(x^l_i)\, f^l\big(T^l(x^l_i),\ e^l_i(q^{p_1},\dots,q^{p_n})\big)\, dx^l_i := g^l\big(e^l_i(q^{p_1},\dots,q^{p_n})\big). \qquad (7)$$

Especially, Eq. (7) is a feed-forward expression for expected features $q^l_i$ with activation function $g^l$ determined by $T^l$ and $f^l$, i.e. the distribution type of random variable nodes in layer $l$.

The proof is provided in Appendix B.1. This allows us to identify feed-forward as an approximate probabilistic inference procedure for layered chain graphs. For learning, the loss function is typically a function of $(Q^L, q^L)$ obtainable via feed-forward, and we can follow the same classical neural network parameter update using stochastic gradient descent and backpropagation. Thus we are able to replicate the exact neural network training process with this layered chain graph framework.
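To connect Eq. (7) with standard deep learning code, here is a minimal sketch of the resulting inference for a single-parent chain of layers; the function names and the choice of tanh (binary {-1, +1} nodes, see Corollary 2 below) are illustrative assumptions.

import numpy as np

def feed_forward(x1, layers):
    """Propagate expected features q^l through the chain (Eq. (7)).

    layers: list of (W, b, g) triples, one per non-input layer, where g
    is the activation determined by the layer's node distribution type.
    """
    q = x1  # q^1 = input sample
    for W, b, g in layers:
        q = g(b + W.T @ q)  # q^l = g^l(e^l(q^{l-1}))
    return q

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(3), np.tanh),
          (rng.normal(size=(3, 2)), np.zeros(2), np.tanh)]
print(feed_forward(np.ones(4), layers))  # a standard NN forward pass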
The following corollary provides concrete examples of some common activation functions $g$ (we emphasize their names in bold; detailed formulations and proofs are given in Appendix B.2):

Corollary 2. We have the following node distribution - activation function correspondences:

1. Binary nodes taking values $\{\alpha, \beta\}$ result in sigmoidal activations; especially, we obtain sigmoid with $\alpha = 0, \beta = 1$ and tanh with $\alpha = -1, \beta = 1$ ($\alpha, \beta$ are interchangeable);
2. Multilabel nodes characterized by label indicator features result in the softmax activation;
3. Variants of (leaky) rectified Gaussian distributions ($T^l(X^l_i) = X^l_i = \max(\alpha Y^l_i, Y^l_i)$ with $Y^l_i \sim \mathcal{N}(e^l_i, (s^l_i(e^l_i))^2)$) can approximate activations such as softplus ($\alpha = 0$, $s^l_i \approx 1.7761$) and $\alpha$-leaky rectified linear unit (ReLU) ($s^l_i = \tanh(e^l_i)$), including ReLU ($\alpha = 0$) and identity ($\alpha = 1$).

Figure 1 Right illustrates activation functions approximated by various rectified Gaussian variants (a Monte Carlo sketch of this construction is given at the end of this section). We also plotted (in orange) an alternative approximation of ReLU with sigmoid-modulated standard deviation proposed by Nair & Hinton (2010), which is less accurate around the kink at the origin.

The linear approximations, needed for feed-forward, are coarse and only accurate for small pairwise weights ($\|W\| \ll 1$) or already linear regions. This might justify weight decay beyond the general "anti-overfit" argument and the empirical superiority of piecewise linear activations like ReLU (Nair & Hinton, 2010). Conversely, as a source of error, it might explain some "failure cases" of neural networks such as their vulnerability against adversarial samples, see e.g., Goodfellow et al. (2015).

2.3 GENERALITY OF THE CHAIN GRAPH INTERPRETATION

The chain graph interpretation formulated in Sections 2.1 and 2.2 is a general framework that can describe many practical network structures. To demonstrate this, we list here a wide range of neural network designs (marked in bold) that are chain graph interpretable.

- In terms of network architecture, it is clear that the chain graph interpretation can model networks of arbitrary depth, and with general multi-branched structures such as inception modules (Szegedy et al., 2015) or residual blocks (He et al., 2016) discussed in Section 3.1. Also, it is possible to build up recurrent neural networks (RNNs) for sequential data learning, as we will see in Section 3.2. Furthermore, the modularity of chain components justifies transfer learning via partial reuse of pre-trained networks, e.g., backbones trained for image classification can be reused for segmentation (Chen et al., 2018).
- In terms of layer structure, we are free to employ sparse connection patterns and shared/fixed weights, so that we can obtain not only dense connections, but also connections like convolution, average pooling or skip connections. Moreover, as shown in Section 3.3, dropout can be reproduced by introducing and sampling from auxiliary random variables, and normalization layers like batch normalization (Ioffe & Szegedy, 2015) can be seen as reparametrizations of node distributions and fall within the general form (Eq. (2)). Finally, we can extend the layered chain graph model to allow for intra-layer connections, which enables non-bipartite CRF layers which are typically used on output layers for structured prediction tasks like image segmentation (Zheng et al., 2015; Chen et al., 2018) or named entity recognition (Lample et al., 2016). However, feed-forward is no longer applicable through these intra-connected layers.
- Node distributions can be chosen freely, leading to a variety of nonlinearities (e.g., Corollary 2).
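As a concrete illustration of Corollary 2 (item 3) and Figure 1 Right, the following sketch estimates the expected feature of a rectified Gaussian node by Monte Carlo; the sample count and the constant-s softplus case follow the paper, while treating s as a constant in the ReLU limit is an assumption of this illustration.

import numpy as np
rng = np.random.default_rng(0)

def rectified_gaussian_activation(e, alpha=0.0, s=1.7761, n_samples=200):
    """Estimate q = E[max(alpha * Y, Y)] with Y ~ N(e, s^2) by sampling.

    With alpha = 0 and s ~= 1.7761 this approximates softplus; letting
    s -> 0 recovers the (leaky) ReLU g(e) = max(alpha * e, e) exactly.
    """
    y = rng.normal(loc=e, scale=s, size=(n_samples,) + np.shape(e))
    return np.maximum(alpha * y, y).mean(axis=0)

e = np.linspace(-4.0, 4.0, 5)
print(rectified_gaussian_activation(e))           # ~ softplus(e)
print(rectified_gaussian_activation(e, s=1e-6))   # ~ relu(e)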
3 SELECTED CASE STUDIES OF EXISTING NEURAL NETWORK DESIGNS

The proposed chain graph interpretation offers a detailed description of the underlying mechanism of neural networks. This allows us to obtain novel theoretical support and insights for various network designs which are consistent within a unified framework. We illustrate this with the following concrete examples where we perform in-depth analysis based on the chain graph formulation.

3.1 RESIDUAL BLOCK AS REFINEMENT MODULE

The residual block, proposed originally in He et al. (2016) and improved later (He et al., 2016) with the preactivation form, is an effective design for building up very deep networks. Here we show that a preactivation residual block corresponds to a refinement module within a chain graph. We use modules to refer to encapsulations of layered chain subgraphs as input–output mappings without specifying their internal structures. A refinement module is defined as follows:

Definition 2. Given a base submodule from layer $X^{l-1}$ to layer $X^l$, a refinement module augments this base submodule with a side branch that chains a copy of the base submodule (sharing weights with its original) from $X^{l-1}$ to a duplicated layer $\tilde{X}^l$, and then a refining submodule from $\tilde{X}^l$ to $X^l$.

Figure 2: Example of a refinement module (left) and its corresponding computational graph (right), composed of a base submodule $X^{l-1} \to X^l$ (blue background) and a refining submodule $\tilde{X}^l \to Z^l \to X^l$ (red background). In the computational graph each $W, b$ represents a linear connection (Eq. (3)) and $\sigma$ an activation function. Same color identifies corresponding parts in the two graphs.

We see that this refinement module corresponds exactly to a preactivation residual block.

Proposition 3. A refinement module corresponds to a preactivation residual block.

We provide a proof in Appendix B.3 and illustrate this correspondence in Figure 2. An interesting remark is that the refinement process can be recursive: the base submodule of a refinement module can be a refinement module itself. This results in a sequence of consecutive residual blocks.

While a vanilla layered chain component encodes a generalized linear model during feed-forward (c.f. Eqs. (7), (3)), the refinement process introduces a nonlinear extension term to the previously linear output preactivation, effectively increasing the representational power. This provides a possible explanation to the empirical improvement generally observed when using residual blocks.

Note that it is also possible to interpret the original postactivation residual blocks, however in a somewhat artificial manner, as it requires defining identity connections with manually fixed weights.
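The following sketch spells out one plausible reading of the computational graph in Figure 2: the base preactivation is augmented by a nonlinear refining term computed from a shared-weight copy of the base output. The exact wiring is given in Appendix B.3 (not reproduced here), so treat this as an illustrative assumption rather than the paper's precise construction.

import numpy as np

def act(e):
    return np.tanh(e)  # activation g determined by the node distribution

def refinement_forward(q_prev, W, b, W_r, b_r):
    """Feed-forward through a refinement module (cf. Figure 2).

    The side branch reuses (W, b) on a duplicated layer X~, passes it
    through a refining submodule (W_r, b_r), and the result is added to
    the base preactivation, the shape of a preactivation residual block.
    """
    e_base = b + W.T @ q_prev           # base submodule preactivation
    q_tilde = act(e_base)               # duplicated layer, shared weights
    refinement = b_r + W_r.T @ q_tilde  # refining submodule X~ -> Z -> X
    return act(e_base + refinement)     # nonlinear extension of e_base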
3.2 RECURRENT NEURAL NETWORKS

Recurrent neural networks (RNNs) (Goodfellow et al., 2016) are widely used for handling sequential data. An unrolled recurrent neural network can be interpreted as a dynamic layered chain graph constructed as follows: a given base layered chain graph is copied for each time step, then these copies are connected together through recurrent chain components following the Markov assumption (Koller & Friedman, 2009): each recurrent layer $X^{l,t}$ at time $t$ is connected by its corresponding layer $X^{l,t-1}$ from the previous time step $t-1$. Especially, denoting $Pa_t(X^{l,t})$ the non-recurrent parent layers of $X^{l,t}$ in the base chain graph, we can easily interpret the following two variants:

Proposition 4. Given a recurrent chain component that encodes $P(X^{l,t} \mid Pa_t(X^{l,t}), X^{l,t-1})$,

1. It corresponds to a simple (or vanilla / Elman) recurrent layer (Goodfellow et al., 2016) if the connection from $X^{l,t-1}$ to $X^{l,t}$ is dense;
2. It corresponds to an independently RNN (IndRNN) (Li et al., 2018) layer if the conditional independence assumptions among the nodes $X^{l,t}_i$ within layer $l$ are kept through time:

$$\forall i \in \{1,\dots,N_l\}, \quad P(X^{l,t}_i \mid Pa_t(X^{l,t}), X^{l,t-1}) = P(X^{l,t}_i \mid Pa_t(X^{l,t}), X^{l,t-1}_i). \qquad (8)$$

We provide a proof in Appendix B.4 and illustrate both variants in Figure 3.

Figure 3: Comparison of a simple recurrent layer (left) vs. an IndRNN (right) recurrent layer. IndRNN, the better variant, enforces the intra-layer conditional independence through time.

The simple recurrent layer, despite its exhaustive dense recurrent connection, is known to suffer from vanishing/exploding gradients and cannot handle long sequences. The commonly used long-short term memory (Hochreiter & Schmidhuber, 1997) and gated recurrent unit (Cho et al., 2014) alleviate this issue via long term memory cells and gating. However, they tend to result in bloated structures, and still cannot handle very long sequences (Li et al., 2018). On the other hand, IndRNNs can process much longer sequences and significantly outperform not only simple RNNs, but also LSTM-based variants (Li et al., 2018; 2019). This indicates that the assumption of intra-layer conditional independence through time, analogue to the local receptive fields of convolutional neural networks, could be an essential sparse network design tailored for sequential modeling.

3.3 DROPOUT

Dropout (Srivastava et al., 2014) is a practical stochastic regularization method commonly used especially for regularizing fully connected layers. As we see in the following proposition, from the chain graph point of view, dropout corresponds to introducing Bernoulli auxiliary random variables that serve as noise generators for feed-forward during training:

Proposition 5. Adding dropout with drop rate $1 - p^l$ to layer $l$ corresponds to the following chain graph construction: for each node $X^l_i$ in layer $l$ we introduce an auxiliary Bernoulli random variable $D^l_i \sim \mathrm{Bernoulli}(p^l)$ and multiply it with the pairwise interaction terms in all preactivations (Eq. (3)) involving $X^l_i$ as parent (this makes $D^l_i$ a parent of all child nodes of $X^l_i$ and extends their pairwise interactions with $X^l_i$ to ternary ones). The behavior of dropout is reproduced exactly if:

- During training, we sample auxiliary nodes $D^l_i$ during each feed-forward. This results in dropping each activation $q^l_i$ of node $X^l_i$ with probability $1 - p^l$;
- At test time, we marginalize auxiliary nodes $D^l_i$ during each feed-forward. This leads to deterministic evaluations with a constant scaling of $p^l$ for the node activations $q^l_i$.

We provide a proof in Appendix B.5; a small sketch of the resulting train/test behavior follows. Note that among other things, this chain graph interpretation of dropout provides a theoretical justification of the constant scaling at test time. This was originally proposed as a heuristic in Srivastava et al. (2014) to maintain consistent behavior after training.
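A minimal sketch of Proposition 5's two regimes, assuming the paper's scaling convention (scale by p at test time rather than inverted dropout); the function name is illustrative.

import numpy as np
rng = np.random.default_rng(0)

def dropout_activations(q, p, train):
    """Apply the auxiliary Bernoulli parents D^l_i ~ Bernoulli(p).

    Training samples D and zeroes the corresponding activations q^l_i;
    test time marginalizes D, which collapses to a constant scaling by
    p since E[D^l_i] = p.
    """
    if train:
        d = rng.binomial(1, p, size=q.shape)  # sampled auxiliary nodes
        return q * d
    return q * p  # marginalization over D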
4 PARTIALLY COLLAPSED FEED-FORWARD

The theoretical formulation provided by the chain graph interpretation can also be used to derive new approaches for neural networks. It allows us to create new deep learning methods following a coherent framework that provides specific semantics to the building blocks of neural networks. Moreover, we can make use of the abundant existing work from the PGM field, which also serves as a rich source of inspiration. As a concrete example, we derive in this section a new stochastic inference procedure called partially collapsed feed-forward (PCFF) using the chain graph formulation.

4.1 PCFF: CHAIN GRAPH FORMULATION

A layered chain graph, which can represent a neural network, is itself a probabilistic graphical model that encodes an overall distribution conditioned on the input. This means that, to achieve stochastic behavior, we can directly draw samples from this distribution, instead of introducing additional "noise generators" like in dropout. In fact, given the globally directed structure of layered chain graphs, and the fact that the conditioned input nodes are ancestral nodes without parents, it is a well-known PGM result that we can apply forward sampling (or ancestral sampling) (Koller & Friedman, 2009) to efficiently generate samples: given an input sample $\tilde{x}^1$, we follow the topological order and sample each non-input node $X^l_i$ using its nodewise distribution (Eq. (2)) conditioned on the samples $(x^{p_1},\dots,x^{p_n})$ of its parents. Compared to feed-forward, forward sampling also performs a single forward pass, but generates instead an unbiased stochastic sample estimate.

While in general an unbiased estimate is preferable and the stochastic behavior can also introduce regularization during training (Srivastava et al., 2014), forward sampling cannot directly replace feed-forward, since the sampling operation is not differentiable and will jeopardize the gradient flow during backpropagation. To tackle this, one idea is to apply the reparametrization trick (Kingma & Welling, 2014) on continuous random variables (for discrete RVs the Gumbel softmax trick (Jang et al., 2017) can be used but requires additional continuous relaxation). An alternative solution is to only sample part of the nodes, as in the case of dropout.

The proposed partially collapsed feed-forward follows the second idea: we simply "mix up" feed-forward and forward sampling, so that for each forward inference during training, we randomly select a portion of nodes to sample and the rest to compute deterministically with feed-forward. Thus for a node $X^l_i$ with parents $(X^{p_1},\dots,X^{p_n})$, its forward inference update becomes

$$q^l_i \leftarrow \begin{cases} g^l\big(e^l_i(q^{p_1},\dots,q^{p_n})\big) & \text{if collapsed (feed-forward)}, \\ T^l(x^l_i),\ x^l_i \sim f^l\big(T^l(X^l_i),\ e^l_i(q^{p_1},\dots,q^{p_n})\big) & \text{if uncollapsed (forward sampling)}. \end{cases} \qquad (9)$$

Following the collapsed sampling (Koller & Friedman, 2009) terminology, we call this method the partially collapsed feed-forward (PCFF). PCFF is a generalization over feed-forward and forward sampling, which can be seen as its fully collapsed / uncollapsed extremes. Furthermore, it offers a bias–variance trade-off, and can be combined with the reparametrization trick to achieve unbiased estimates with full sampling, while simultaneously maintaining the gradient flow.
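A minimal sketch of the per-layer PCFF update in Eq. (9), assuming rectified-Gaussian (ReLU-like) nodes and a fixed noise scale s; both assumptions, and the layer interface, are illustrative rather than the paper's exact configuration.

import numpy as np
rng = np.random.default_rng(0)

def pcff_layer(q_prev, W, b, sample_rate, s=0.1):
    """One layer of partially collapsed feed-forward (Eq. (9)).

    Each node is independently collapsed (deterministic feed-forward) or
    uncollapsed (forward sampling); sampling is written in reparametrized
    form (e + s * eps) so that, ported to an autodiff framework, the
    gradient flow would be maintained.
    """
    e = b + W.T @ q_prev
    uncollapsed = rng.random(e.shape) < sample_rate
    q_det = np.maximum(e, 0.0)             # g(e): collapsed nodes
    eps = rng.standard_normal(e.shape)     # reparametrized noise
    q_smp = np.maximum(e + s * eps, 0.0)   # sampled rectified Gaussian
    return np.where(uncollapsed, q_smp, q_det)

With sample_rate = 0.0 this reduces to plain feed-forward, and with sample_rate = 1.0 to full forward sampling.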
Relation to stochastic feedforward neural networks. While PCFF can also be seen as a stochastic generalization of the feed-forward inference, it represents a substantially distinct approach compared to SFNN: apart from the clear difference that PCFF uses forward sampling and SFNN uses importance sampling, a major dissimilarity is that SFNN makes a clear distinction between deterministic neurons and stochastic random variables, whereas PCFF identifies neurons with random variables thanks to the layered chain graph interpretation. This is why PCFF can freely choose a different subset of nodes to sample during each forward pass. From the chain graph interpretation perspective, SFNN can be seen as a layered chain graph having a fixed subset of nodes with stochastic behavior, and it performs a hybrid of feed-forward and importance sampling for inference.

4.2 PCFF: EXPERIMENTAL VALIDATION

In the previous sections, we have been discussing existing approaches whose empirical evaluations have been thoroughly covered by prior work. The novel PCFF approach proposed in this section, however, requires experiments to check its practical effectiveness. For this we conduct here a series of experiments¹. Our emphasis is to understand the behavior of PCFF under various contexts and not to achieve the best result for any specific task. We only use chain graph interpretable components, and we adopt the reparameterization trick (Kingma & Welling, 2014) for ReLU PCFF samples.

The following experiments show that PCFF is overall an effective stochastic regularization method. Compared to dropout, it tends to produce more consistent performance improvements, and can sometimes outperform dropout. This confirms that our chain graph based reasoning has successfully found an interesting novel deep learning method.

Figure 4: Comparison of stochastic methods (None/Dropout/PCFF) in terms of image classification test errors (lower is better) under various settings. Left: MNIST/FashionMNIST datasets with a simple dense network and tanh/ReLU activation functions; Right: CIFAR-10 dataset with ResNet20 and varying drop/sample rates. All reported results are average values of three runs. Compared to dropout, PCFF can achieve comparable results, and tends to deliver more consistent improvements.

Simple dense network. We start with a simple network with two dense hidden layers of 1024 nodes to classify MNIST (Lecun et al., 1998) and FashionMNIST (Xiao et al., 2017) images. We use PyTorch (Paszke et al., 2017), train with stochastic gradient descent (learning rate 0.01, momentum 0.9), and set up 20% of the training data as a validation set for performance monitoring and early stopping. We set the drop rate to 0.5 for dropout, and for PCFF we set the sample rate to 0.4 for tanh and 1.0 (full sampling) for ReLU. Figure 4 Left reports the test errors with different activation functions and stochastic regularizations.

We see that dropout and PCFF are overall comparable, and both improve the results in most cases. Also, the ReLU activation consistently produces better results than tanh. Additional experiments show that PCFF and dropout can be used together, which sometimes yields improved performance.

Convolutional residual network. To figure out the applicability of PCFF in convolutional residual networks, we experiment on CIFAR-10 (Krizhevsky, 2009) image classification. For this we adapt an existing implementation (Idelbayev) to use the preactivation variant. We focus on the ResNet20 structure, and follow the original learning rate schedule except for setting up a validation set of 10% of the training data to monitor training performance.
Figure 4 Right summarizes the test errors under different drop/sample rates.

We observe that in this case PCFF can improve the performance over a wide range of sample rates, whereas dropout is only effective with drop rate 0.1, and large drop rates in this case significantly deteriorate the performance. We also observe a clear trade-off for the PCFF sample rate, where a partial sampling of 0.3 yields the best result.

Independently RNN. We complete our empirical evaluations of PCFF with an RNN test case. For this we used IndRNNs with 6 layers to solve the sequential/permuted MNIST classification problems based on an existing implementation² provided by the authors of IndRNN (Li et al., 2018; 2019). We tested dropout with drop rate 0.1 and PCFF with sample rate 0.1, and report the average test accuracy of three runs. We notice that, while in the permuted MNIST case both dropout (0.9203) and PCFF (0.9145) improve the result (0.9045), in the sequential MNIST case dropout (0.9830) seems to worsen the performance (0.9841) whereas PCFF (0.9842) delivers a comparable result.

¹ Implementation available at: (Github link placeholder, provided as supplementary material.)
² https://github.com/Sunnydreamrain/IndRNN_pytorch

5 CONCLUSIONS AND DISCUSSIONS

In this work, we show that neural networks can be interpreted as layered chain graphs, and that feed-forward can be viewed as an approximate inference procedure for these models. This chain graph interpretation provides a unified theoretical framework that elucidates the underlying mechanism of real-world neural networks and provides coherent and in-depth theoretical support for a wide range of empirically established network designs. Furthermore, it also offers a solid foundation to derive new deep learning approaches, with additional help from the rich existing work on PGMs. It is thus a promising alternative neural network interpretation that deepens our theoretical understanding and unveils a new perspective for future deep learning research.

In the future, we plan to investigate a number of open questions that stem from this work, especially:

- Is the current chain graph interpretation sufficient to capture the full essence of neural networks? Based on the current results, we are reasonably optimistic that the proposed interpretation can cover an essential part of the neural network mechanism. However, compared to the function approximator view, it only covers a subset of existing techniques. Is this subset good enough?
- On a related note: can we find chain graph interpretations for other important network designs (or otherwise some chain graph interpretable alternatives with comparable or better performance)? The current work provides a good start, but it is by no means an exhaustive study.
- Finally, what other new deep learning models and procedures can we build up based on the chain graph framework? The partially collapsed feed-forward inference proposed in this work is just a simple illustrative example, and we believe that many other promising deep learning techniques can be derived from the proposed chain graph interpretation.
9A3BfoaoUYc
alternative interpretation based on chain graph
4: Ok but not good enough - rejection
This paper tries to interpret neural networks with chain graphs, which provides theoretical analysis of various neural network components. Furthermore, this chain graph interpretation has been used to propose a new approach (architecture), the partially collapsed feed-forward. A layered chain graph representation is adopted to formulate neural networks as layered chain graphs. This further allows interpreting feed-forward as an approximate probabilistic inference using linear approximations. Some concrete examples are analyzed based on the chain graph formulation. The overall analysis seems straightforward for interpreting neural networks with chain graphs, but it is hard to extract meaningful information from this new interpretation to improve current neural network models in terms of learning procedure or optimization. The proposed partially collapsed feed-forward is a good example of deriving a new approach based on the chain graph interpretation. However, in terms of performance and complexity, it does not show impressive improvements compared to the baseline methods in practice. Moreover, it seems quite similar to previous works as far as I remember, and one similar work is 'stochastic feedforward neural networks'. I fully agree with the future works (open questions) in the conclusion and discussion section: this work still needs more investigation, although it is a good initial effort.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title A Chain Graph Interpretation of Real-World Neural Networks ### Paper Abstract The last decade has witnessed a boom of deep learning research and applications achieving state-of-the-art results in various domains. However, most advances have been established empirically, and their theoretical analysis remains lacking. One major issue is that our current interpretation of neural networks (NNs) as function approximators is too generic to support in-depth analysis. In this paper, we remedy this by proposing an alternative interpretation that identifies NNs as chain graphs (CGs) and feed-forward as an approximate inference procedure. The CG interpretation specifies the nature of each NN component within the rich theoretical framework of probabilistic graphical models, while at the same time remains general enough to cover real-world NNs with arbitrary depth, multi-branching and varied activations, as well as common structures including convolution / recurrent layers, residual block and dropout. We demonstrate with concrete examples that the CG interpretation can provide novel theoretical support and insights for various NN techniques, as well as derive new deep learning approaches such as the concept of partially collapsed feed-forward inference. It is thus a promising framework that deepens our understanding of neural networks and provides a coherent theoretical formulation for future deep learning research. ### Paper Keywords ["neural network interpretation", "chain graph", "deep learning theory", "probabilistic graphical model"] ### Paper Content ABSTRACTThe last decade has witnessed a boom of deep learning research and applicationsachieving state-of-the-art results in various domains. However, most advanceshave been established empirically, and their theoretical analysis remains lacking.One major issue is that our current interpretation of neural networks (NNs) asfunction approximators is too generic to support in-depth analysis. In this pa-per, we remedy this by proposing an alternative interpretation that identifies NNsas chain graphs (CGs) and feed-forward as an approximate inference procedure.The CG interpretation specifies the nature of each NN component within therich theoretical framework of probabilistic graphical models, while at the sametime remains general enough to cover real-world NNs with arbitrary depth, multi-branching and varied activations, as well as common structures including convo-lution / recurrent layers, residual block and dropout. We demonstrate with con-crete examples that the CG interpretation can provide novel theoretical supportand insights for various NN techniques, as well as derive new deep learning ap-proaches such as the concept of partially collapsed feed-forward inference. 
It isthus a promising framework that deepens our understanding of neural networksand provides a coherent theoretical formulation for future deep learning research.1 I NTRODUCTIONDuring the last decade, deep learning (Goodfellow et al., 2016), the study of neural networks (NNs),has achieved ground-breaking results in diverse areas such as computer vision (Krizhevsky et al.,2012; He et al., 2016; Long et al., 2015; Chen et al., 2018), natural language processing (Hintonet al., 2012; Vaswani et al., 2017; Devlin et al., 2019), generative modeling (Kingma & Welling,2014; Goodfellow et al., 2014) and reinforcement learning (Mnih et al., 2015; Silver et al., 2016),and various network designs have been proposed. However, neural networks have been treatedlargely as “black-box” function approximators, and their designs have chiefly been found via trial-and-error, with little or no theoretical justification. A major cause that hinders the theoretical anal-ysis is the current overly generic modeling of neural networks as function approximators: simplyinterpreting a neural network as a composition of parametrized functions provides little insight todecipher the nature of its components or its behavior during the learning process.In this paper, we show that a neural network can actually be interpreted as a probabilistic graphicalmodel (PGM) called chain graph (CG) (Koller & Friedman, 2009), and feed-forward as an efficientapproximate probabilistic inference on it. This offers specific interpretations for various neuralnetwork components, allowing for in-depth theoretical analysis and derivation of new approaches.1.1 R ELATED WORKIn terms of theoretical understanding of neural networks, a well known result based on the functionapproximator view is the universal approximation theorem (Goodfellow et al., 2016), however it onlyestablishes the representational power of NNs. Also, there have been many efforts on alternative NNinterpretations. One prominent approach identifies infinite width NNs as Gaussian processes (Neal,1996; Lee et al., 2018), enabling kernel method analysis (Jacot et al., 2018). Other works alsoemploy theories such as optimal transport (Genevay et al., 2017; Chizat & Bach, 2018) or meanfield (Mei et al., 2019). These approaches lead to interesting findings, however they tend to onlyhold under limited or unrealistic settings and have difficulties interpreting practical real-world NNs.1Under review as a conference paper at ICLR 2021Figure 1: Neural networks can be interpreted as layered chain graphs where activation functions aredetermined by node distributions. Left: An example neural network interpreted as a chain graphwith three chain components which represent its layers; Right : A variety of activation functions(softplus, ReLU, leaky ReLU) approximated by nodes following rectified Gaussian distributions(e;qas in Eq. (7)). We visualize the approximations stochastically by averaging over 200 samples.Alternatively, some existing works study the post-hoc interpretability (Lipton, 2018), proposingmethods to analyze the empirical behavior of trained neural networks: activation maximization(Erhan et al., 2009), typical input synthesis (Nguyen et al., 2016), deconvolution (Zeiler & Fergus,2014), layer-wise relevance propagation (Bach et al., 2015), etc. 
These methods can offer valuableinsights to the practical behavior of neural networks, however they represent distinct approaches andfocuses, and are all limited within the function approximator view.Our work links neural networks to probabilistic graphical models (Koller & Friedman, 2009), a richtheoretical framework that models and visualizes probabilistic systems composed of random vari-ables (RVs) and their interdependencies. There are several types of graphical models. The chaingraph model (also referred to as the LWF chain graph model) (Koller & Friedman, 2009; Lauritzen& Wermuth, 1989; Frydenberg, 1990) used in our work is a general form that unites directed andundirected variants, visualized as a partially directed acyclic graph (PDAG). Interestingly, thereexists a series of works on constructing hierarchical graphical models for data-driven learning prob-lems, such as sigmoid belief network (Neal, 1992), deep belief network (Hinton et al., 2006), deepBoltzmann machine (Salakhutdinov & Hinton, 2012) and sum product network (Poon & Domingos,2011). As alternatives to neural networks, these models have shown promising potentials for gen-erative modeling and unsupervised learning. Nevertheless, they are yet to demonstrate competitiveperformances over neural network for discriminative learning.Neural networks and graphical models have so far been treated as two distinct approaches in general.Existing works that combine them (Zheng et al., 2015; Chen et al., 2018; Lample et al., 2016) mainlytreat either neural networks as function approximators for amortized inference, or graphical modelsas post-processing steps. Tang & Salakhutdinov (2013) create a hybrid model, the stochastic feed-forward neural network (SFNN), by concatenating deterministic neurons with stochastic Bernoullirandom variables, in order to represent multimodal distributions. Some also consider neural net-works as graphical models with deterministic hidden nodes (Buntine, 1994). However this is anatypical degenerate regime. 
To the best of our knowledge, our work provides the first rigorous andcomprehensive formulation of a (non-degenerate) graphical model interpretation for neural networksin practical use.1.2 O UR CONTRIBUTIONSThe main contributions of our work are summarized as follows:We propose a layered chain graph representation of neural networks, interpret feed-forward asan approximate probabilistic inference procedure, and show that this interpretation provides anextensive coverage of practical NN components (Section 2);2Under review as a conference paper at ICLR 2021To illustrate its advantages, we show with concrete examples (residual block, RNN, dropout) thatthe chain graph interpretation enables coherent and in-depth theoretical support, and providesadditional insights to various empirically established network structures (Section 3);Furthermore, we demonstrate the potential of the chain graph interpretation for discovering newapproaches by using it to derive a novel stochastic inference method named partially collapsedfeed-forward, and establish experimentally its empirical effectiveness (Section 4).2 C HAIN GRAPH INTERPRETATION OF NEURAL NETWORKSWithout further delay, we derive the chain graph interpretation of neural networks in this section.We will state and discuss the main results here and leave the proofs in the appendix.2.1 T HE LAYERED CHAIN GRAPH REPRESENTATIONWe start by formulating the so called layered chain graph that corresponds to neural networks we usein practice: Consider a system represented by Llayers of random variables (X1;:::;XL), whereXliis thei-th variable node in the l-th layer, and denote Nlthe number of nodes in layer l. Weassume that nodes Xliin the same layer lhave the same distribution type characterized by a featurefunction Tlthat can be multidimensional. Also, we assume that the layers are ordered topologicallyand denotePa(Xl)the parent layers of Xl. To ease our discussion, we assume that X1is the inputlayer and XLthe output layer (our formulation can easily extend to multi-input/output cases). Alayered chain graph is then defined as follows:Definition 1. Alayered chain graph that involves Llayers of random variables (X1;:::;XL)is achain graph that encodes the overall distribution P(X2;:::;XLjX1)such that:1. It can be factored into layerwise chain components P(XljPa(Xl))following the topological or-der, and nodes Xliwithin each chain component P(XljPa(Xl))are conditionally independentgiven their parents (this results in bipartite chain components), thus allowing for further decom-position into nodewise conditional distributions P(XlijPa(Xl)). This means we haveP(X2;:::;XLjX1) =LYl=2P(XljPa(Xl)) =LYl=2NlYi=1P(XlijPa(Xl)); (1)2. For each layer lwith parent layers Pa(Xl) =fXp1;:::Xpng;p1;:::;pn2f1;:::;l1g, itsnodewise conditional distributions P(XlijPa(Xl))are modeled by pairwise conditional randomfields (CRFs) with with unary ( bli) and pairwise ( Wp;lj;i) weights (as we will see, they actuallycorrespond to biases and weights in NN layers):P(XlijPa(Xl)) =flTl(Xli);eliTp1(Xp1);:::;Tpn(Xpn)(2)with eliTp1(Xp1);:::;Tpn(Xpn)=bli+pnXp=p1NpXj=1Wp;lj;iTp(Xpj): (3)Figure 1 Left illustrates an example three-layer network as layered chain graph and its chain com-ponent factorization. In Eq. (2), flis an arbitrary function that represents a probability distri-bution. For exponential family distributions (Koller & Friedman, 2009), Eq. 
(2) simply becomesP(XlijPa(Xl))/expTl(Xli)eliTp1(Xp1);:::;Tpn(Xpn).Note that layered chain graph has a globally directed graph structure and has an equivalent modelingbased on directed graphical model (Bayesian network) (Koller & Friedman, 2009), we elaborate onthis point for interested readers in Appendix A.2.2 F EED-FORWARD AS APPROXIMATE PROBABILISTIC INFERENCETo identify layered chain graphs with real-world neural networks, we need to show that they canbehave the same way during inference and learning. For this, we establish the fact that feed-forwardcan actually be seen as performing an approximate probabilistic inference on a layered chain graph:3Under review as a conference paper at ICLR 2021Given an input sample ~x1, we consider the problem of inferring the marginal distribution Qliof anodeXliand its expected features qli, defined asQli(xlij~x1) =P(Xli=xlijX1=~x1);qli=EQli[Tl(Xli)] (q1=~x1): (4)Consider a non-input layer lwith parent layers p1;:::;pn, the independence assumptions encodedby the layered chain graph lead to the following recursive expression for marginal distributions Q:Qli(xlij~x1) =EQp1;:::;Qpn[P(xlijPa(Xl))]: (5)However, the above expression is in general intractable, as it integrates over the entire admissiblestates of all parents nodes in Pa(Xl). To proceed further, simplifying approximations are needed.Interestingly, by using linear approximations, we can obtain the following results (in case of discreterandom variable the integration in Eq. 7 is replaced by summation):Proposition 1. If we make the assumptions that the corresponding expressions are approximatelylinear w.r.t. parent features Tp1(Xp1);:::;Tpn(Xpn), we obtain the following approximations:Qli(xlij~x1)flTl(xli);eli(qp1;:::;qpn); (6)qliZxliTl(xli)flTl(xli);eli(qp1;:::;qpn)dxli:=gl(eli(qp1;:::;qpn)): (7)Especially, Eq. (7)is a feed-forward expression for expected features qliwith activation function gldetermined by Tlandfl, i.e. the distribution type of random variable nodes in layer l.The proof is provided in Appendix B.1. This allows us to identify feed-forward as an approximateprobabilistic inference procedure for layered chain graphs. For learning, the loss function is typicallya function of (QL;qL)obtainable via feed-forward, and we can follow the same classical neuralnetwork parameter update using stochastic gradient descent and backpropagation. Thus we are ableto replicate the exact neural network training process with this layered chain graph framework.The following corollary provides concrete examples of some common activation functions g(weemphasize their names in bold, detailed formulations and proofs are given in Appendix B.2):Corollary 2. We have the following node distribution - activation function correspondences:1. Binary nodes taking values f;gresults in sigmoidal activations, especially, we obtain sigmoidwith= 0;= 1andtanh with=1;= 1(; are interchangeable);2. Multilabel nodes characterized by label indicator features result in the softmax activation;3. 
Variants of (leaky) rectified Gaussian distributions ( Tli(Xli) =Xli= max(Yli;Yli)withYliNeli;(sli(eli))2) can approximate activations such as softplus (= 0;sli1:7761 ) and-leakyrectified linear unit (ReLU) (sli= tanh(eli)) including ReLU (= 0) and identity (= 1).Figure 1 Right illustrates activation functions approximated by various rectified Gaussian variants.We also plotted (in orange) an alternative approximation of ReLU with sigmoid-modulated standarddeviation proposed by Nair & Hinton (2010) which is less accurate around the kink at the origin.The linear approximations, needed for feed-forward, is coarse and only accurate for small pairwiseweights (kWk 1) or already linear regions. This might justify weight decay beyond the general“anti-overfit” argument and the empirical superiority of piecewise linear activations like ReLU (Nair& Hinton, 2010). Conversely, as a source of error, it might explain some “failure cases” of neuralnetworks such as their vulnerability against adversarial samples, see e.g., Goodfellow et al. (2015).2.3 G ENERALITY OF THE CHAIN GRAPH INTERPRETATIONThe chain graph interpretation formulated in Sections 2.1 and 2.2 is a general framework that candescribe many practical network structures. To demonstrate this, we list here a wide range of neuralnetwork designs (marked in bold) that are chain graph interpretable.In terms of network architecture, it is clear that the chain graph interpretation can model net-works of arbitrary depth, and with general multi-branched structures such as inception modules(Szegedy et al., 2015) or residual blocks (He et al., 2016; He et al., 2016) discussed in Sec-tion 3.1. Also, it is possible to built up recurrent neural networks (RNNs) for sequential data4Under review as a conference paper at ICLR 2021learning, as we will see in Section 3.2. Furthermore, the modularity of chain components justifiestransfer learning via partial reuse of pre-trained networks , e.g., backbones trained for imageclassification can be reused for segmentation (Chen et al., 2018).In terms of layer structure, we are free to employ sparse connection patterns and shared/fixedweight, so that we can obtain not only dense connections , but also connections like convolu-tion,average pooling orskip connections . Moreover, as shown in Section 3.3, dropout canbe reproduced by introducing and sampling from auxiliary random variables, and normalizationlayers like batch normalization (Ioffe & Szegedy, 2015) can be seen as reparametrizations ofnode distributions and fall within the general form (Eq. (2)). Finally, we can extend the layeredchain graph model to allow for intra-layer connections, which enables non-bipartite CRF layerswhich are typically used on output layers for structured prediction tasks like image segmenta-tion (Zheng et al., 2015; Chen et al., 2018) or named entity recognition (Lample et al., 2016).However, feed-forward is no longer applicable through these intra-connected layers.Node distributions can be chosen freely, leading to a variety of nonlinearities (e.g., Corollary 2).3 S ELECTED CASE STUDIES OF EXISTING NEURAL NETWORK DESIGNSThe proposed chain graph interpretation offers a detailed description of the underlying mechanismof neural networks. This allows us to obtain novel theoretical support and insights for various net-work designs which are consistent within a unified framework. 
2.3 GENERALITY OF THE CHAIN GRAPH INTERPRETATION
The chain graph interpretation formulated in Sections 2.1 and 2.2 is a general framework that can describe many practical network structures. To demonstrate this, we list here a wide range of neural network designs (marked in bold) that are chain graph interpretable.
- In terms of network architecture, it is clear that the chain graph interpretation can model networks of arbitrary depth, and with general multi-branched structures such as inception modules (Szegedy et al., 2015) or residual blocks (He et al., 2016a;b) discussed in Section 3.1. Also, it is possible to build up recurrent neural networks (RNNs) for sequential data learning, as we will see in Section 3.2. Furthermore, the modularity of chain components justifies transfer learning via partial reuse of pre-trained networks, e.g., backbones trained for image classification can be reused for segmentation (Chen et al., 2018).
- In terms of layer structure, we are free to employ sparse connection patterns and shared/fixed weights, so that we can obtain not only dense connections, but also connections like convolution, average pooling or skip connections. Moreover, as shown in Section 3.3, dropout can be reproduced by introducing and sampling from auxiliary random variables, and normalization layers like batch normalization (Ioffe & Szegedy, 2015) can be seen as reparametrizations of node distributions and fall within the general form (Eq. (2)). Finally, we can extend the layered chain graph model to allow for intra-layer connections, which enables non-bipartite CRF layers, typically used on output layers for structured prediction tasks like image segmentation (Zheng et al., 2015; Chen et al., 2018) or named entity recognition (Lample et al., 2016). However, feed-forward is no longer applicable through these intra-connected layers.
- Node distributions can be chosen freely, leading to a variety of nonlinearities (e.g., Corollary 2).

3 SELECTED CASE STUDIES OF EXISTING NEURAL NETWORK DESIGNS
The proposed chain graph interpretation offers a detailed description of the underlying mechanism of neural networks. This allows us to obtain novel theoretical support and insights for various network designs, all consistent within a unified framework. We illustrate this with the following concrete examples, where we perform in-depth analysis based on the chain graph formulation.

3.1 RESIDUAL BLOCK AS REFINEMENT MODULE
The residual block, proposed originally in He et al. (2016a) and improved later (He et al., 2016b) with the preactivation form, is an effective design for building up very deep networks. Here we show that a preactivation residual block corresponds to a refinement module within a chain graph. We use modules to refer to encapsulations of layered chain subgraphs as input-output mappings without specifying their internal structures. A refinement module is defined as follows:
Definition 2. Given a base submodule from layer $X^{l-1}$ to layer $X^l$, a refinement module augments this base submodule with a side branch that chains a copy of the base submodule (sharing weights with its original) from $X^{l-1}$ to a duplicated layer $\tilde{X}^l$, and then a refining submodule from $\tilde{X}^l$ to $X^l$.
[Figure 2: Example of a refinement module (left) and its corresponding computational graph (right), composed of a base submodule $X^{l-1} \to X^l$ (blue background) and a refining submodule $\tilde{X}^l \to Z^l \to X^l$ (red background). In the computational graph, each $W, b$ represents a linear connection (Eq. (3)) and $\sigma$ an activation function. The same color identifies corresponding parts in the two graphs.]
We see that this refinement module corresponds exactly to a preactivation residual block.
Proposition 3. A refinement module corresponds to a preactivation residual block.
We provide a proof in Appendix B.3 and illustrate this correspondence in Figure 2. An interesting remark is that the refinement process can be recursive: the base submodule of a refinement module can be a refinement module itself. This results in a sequence of consecutive residual blocks.
While a vanilla layered chain component encodes a generalized linear model during feed-forward (c.f. Eqs. (7), (3)), the refinement process introduces a nonlinear extension term to the previously linear output preactivation, effectively increasing the representational power. This provides a possible explanation for the empirical improvement generally observed when using residual blocks.
Note that it is also possible to interpret the original postactivation residual blocks, although in a somewhat artificial manner, as this requires defining identity connections with manually fixed weights.
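For reference, here is a minimal dense preactivation residual block in PyTorch (a generic sketch of the standard form that Proposition 3 identifies with a refinement module; it does not reproduce the weight sharing of Figure 2, and the layer width is illustrative):

```python
import torch
import torch.nn as nn

class PreActResidualBlock(nn.Module):
    """Preactivation residual block: the skip path carries the base
    output, the side branch adds a nonlinear refinement term."""
    def __init__(self, dim):
        super().__init__()
        self.w1 = nn.Linear(dim, dim)
        self.w2 = nn.Linear(dim, dim)
        self.act = nn.ReLU()

    def forward(self, x):
        # preactivation ordering: activation comes before each linear map
        return x + self.w2(self.act(self.w1(self.act(x))))

block = PreActResidualBlock(16)
print(block(torch.randn(4, 16)).shape)   # torch.Size([4, 16])
```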
3.2 RECURRENT NEURAL NETWORKS
Recurrent neural networks (RNNs) (Goodfellow et al., 2016) are widely used for handling sequential data. An unrolled recurrent neural network can be interpreted as a dynamic layered chain graph constructed as follows: a given base layered chain graph is copied for each time step, and these copies are connected together through recurrent chain components following the Markov assumption (Koller & Friedman, 2009): each recurrent layer $X^{l,t}$ at time $t$ is connected to its corresponding layer $X^{l,t-1}$ from the previous time step $t-1$. Denoting by $Pa_t(X^{l,t})$ the non-recurrent parent layers of $X^{l,t}$ in the base chain graph, we can easily interpret the following two variants:
Proposition 4. Given a recurrent chain component that encodes $P(X^{l,t} \mid Pa_t(X^{l,t}), X^{l,t-1})$:
1. It corresponds to a simple (or vanilla / Elman) recurrent layer (Goodfellow et al., 2016) if the connection from $X^{l,t-1}$ to $X^{l,t}$ is dense;
2. It corresponds to an independently RNN (IndRNN) (Li et al., 2018) layer if the conditional independence assumptions among the nodes $X_i^{l,t}$ within layer $l$ are kept through time:
$$\forall i \in \{1, \dots, N_l\}, \quad P(X_i^{l,t} \mid Pa_t(X^{l,t}), X^{l,t-1}) = P(X_i^{l,t} \mid Pa_t(X^{l,t}), X_i^{l,t-1}). \qquad (8)$$
We provide a proof in Appendix B.4 and illustrate both variants in Figure 3.
[Figure 3: Comparison of a simple recurrent layer (left) vs. an IndRNN recurrent layer (right). IndRNN, the better variant, enforces the intra-layer conditional independence through time.]
The simple recurrent layer, despite its exhaustive dense recurrent connection, is known to suffer from vanishing/exploding gradients and cannot handle long sequences. The commonly used long short-term memory (Hochreiter & Schmidhuber, 1997) and gated recurrent unit (Cho et al., 2014) alleviate this issue via long-term memory cells and gating. However, they tend to result in bloated structures, and still cannot handle very long sequences (Li et al., 2018). On the other hand, IndRNNs can process much longer sequences and significantly outperform not only simple RNNs, but also LSTM-based variants (Li et al., 2018; 2019). This indicates that the assumption of intra-layer conditional independence through time, analogous to the local receptive fields of convolutional neural networks, could be an essential sparse network design tailored for sequential modeling.

3.3 DROPOUT
Dropout (Srivastava et al., 2014) is a practical stochastic regularization method, commonly used especially for regularizing fully connected layers. As the following proposition shows, from the chain graph point of view dropout corresponds to introducing Bernoulli auxiliary random variables that serve as noise generators for feed-forward during training:
Proposition 5. Adding dropout with drop rate $1 - p^l$ to layer $l$ corresponds to the following chain graph construction: for each node $X_i^l$ in layer $l$ we introduce an auxiliary Bernoulli random variable $D_i^l \sim \mathrm{Bernoulli}(p^l)$ and multiply it with the pairwise interaction terms in all preactivations (Eq. (3)) involving $X_i^l$ as parent (this makes $D_i^l$ a parent of all child nodes of $X_i^l$ and extends their pairwise interactions with $X_i^l$ to ternary ones). The behavior of dropout is reproduced exactly if:
- During training, we sample the auxiliary nodes $D_i^l$ during each feed-forward. This results in dropping each activation $q_i^l$ of node $X_i^l$ with probability $1 - p^l$;
- At test time, we marginalize the auxiliary nodes $D_i^l$ during each feed-forward. This leads to deterministic evaluations with a constant scaling of $p^l$ for the node activations $q_i^l$.
We provide a proof in Appendix B.5. Note that, among other things, this chain graph interpretation of dropout provides a theoretical justification for the constant scaling at test time, which was originally proposed as a heuristic in Srivastava et al. (2014) to maintain consistent behavior after training.
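A minimal sketch of Proposition 5's two regimes (ours, not the paper's code; this is classic, non-inverted dropout applied directly to the activations, with keep probability p_keep):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(q, p_keep, training):
    """Dropout via Bernoulli auxiliaries D_i ~ Bernoulli(p_keep): sample
    them at training time, marginalise (E[D_i] = p_keep) at test time."""
    if training:
        return q * rng.binomial(1, p_keep, size=q.shape)
    return q * p_keep

q = np.ones(8)
print(dropout(q, 0.5, training=True))    # random {0, 1} mask applied
print(dropout(q, 0.5, training=False))   # deterministic 0.5 scaling
```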
4 PARTIALLY COLLAPSED FEED-FORWARD
The theoretical formulation provided by the chain graph interpretation can also be used to derive new approaches for neural networks. It allows us to create new deep learning methods following a coherent framework that provides specific semantics to the building blocks of neural networks. Moreover, we can make use of the abundant existing work from the PGM field, which also serves as a rich source of inspiration. As a concrete example, we derive in this section a new stochastic inference procedure called partially collapsed feed-forward (PCFF) using the chain graph formulation.

4.1 PCFF: CHAIN GRAPH FORMULATION
A layered chain graph, which can represent a neural network, is itself a probabilistic graphical model that encodes an overall distribution conditioned on the input. This means that, to achieve stochastic behavior, we can directly draw samples from this distribution, instead of introducing additional "noise generators" as in dropout. In fact, given the globally directed structure of a layered chain graph, and the fact that the conditioned input nodes are ancestral nodes without parents, it is a well-known PGM result that we can apply forward sampling (or ancestral sampling) (Koller & Friedman, 2009) to efficiently generate samples: given an input sample $\tilde{x}^1$, we follow the topological order and sample each non-input node $X_i^l$ using its nodewise distribution (Eq. (2)) conditioned on the samples $(x^{p_1}, \dots, x^{p_n})$ of its parents. Compared to feed-forward, forward sampling also performs a single forward pass, but generates an unbiased stochastic sample estimate instead.
While in general an unbiased estimate is preferable and the stochastic behavior can also introduce regularization during training (Srivastava et al., 2014), forward sampling cannot directly replace feed-forward, since the sampling operation is not differentiable and would jeopardize the gradient flow during backpropagation. To tackle this, one idea is to apply the reparametrization trick (Kingma & Welling, 2014) to continuous random variables (for discrete RVs the Gumbel-softmax trick (Jang et al., 2017) can be used, but requires an additional continuous relaxation). An alternative solution is to sample only part of the nodes, as in the case of dropout.
The proposed partially collapsed feed-forward follows the second idea: we simply "mix up" feed-forward and forward sampling, so that for each forward inference during training, we randomly select a portion of the nodes to sample and compute the rest deterministically with feed-forward. Thus for a node $X_i^l$ with parents $(X^{p_1}, \dots, X^{p_n})$, its forward inference update becomes
$$q_i^l \leftarrow \begin{cases} g^l\big(e_i^l(q^{p_1}, \dots, q^{p_n})\big) & \text{if collapsed (feed-forward)}, \\ T^l(x_i^l),\; x_i^l \sim f^l\big(T^l(X_i^l), e_i^l(q^{p_1}, \dots, q^{p_n})\big) & \text{if uncollapsed (forward sampling)}. \end{cases} \qquad (9)$$
Following the collapsed sampling (Koller & Friedman, 2009) terminology, we call this method the partially collapsed feed-forward (PCFF). PCFF is a generalization over feed-forward and forward sampling, which can be seen as its fully collapsed / uncollapsed extremes. Furthermore, it offers a bias-variance trade-off, and can be combined with the reparametrization trick to achieve unbiased estimates with full sampling, while simultaneously maintaining the gradient flow.
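To illustrate Eq. (9), here is a minimal NumPy sketch (ours, not the released implementation) of a PCFF update for a layer of {0, 1} binary nodes, for which feed-forward reduces to the sigmoid of the preactivation (cf. Corollary 2); the reparametrization trick mentioned above is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def pcff_binary_layer(preact, sample_rate):
    """Partially collapsed feed-forward for {0, 1} nodes: each node is
    either collapsed (q = sigmoid(e)) or uncollapsed (q = sample from
    Bernoulli(sigmoid(e))), chosen at random per forward pass."""
    q = 1.0 / (1.0 + np.exp(-preact))                 # feed-forward branch
    uncollapsed = rng.random(preact.shape) < sample_rate
    samples = (rng.random(preact.shape) < q).astype(preact.dtype)
    return np.where(uncollapsed, samples, q)

e = np.linspace(-2.0, 2.0, 5)
print(pcff_binary_layer(e, sample_rate=0.0))   # pure feed-forward
print(pcff_binary_layer(e, sample_rate=1.0))   # pure forward sampling
```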
Relation to stochastic feedforward neural networks: While PCFF can also be seen as a stochastic generalization of feed-forward inference, it represents a substantially distinct approach compared to SFNN. Apart from the clear difference that PCFF uses forward sampling whereas SFNN uses importance sampling, a major dissimilarity is that SFNN makes a clear distinction between deterministic neurons and stochastic random variables, whereas PCFF identifies neurons with random variables thanks to the layered chain graph interpretation. This is why PCFF can freely choose a different subset of nodes to sample during each forward pass. From the chain graph interpretation perspective, SFNN can be seen as a layered chain graph having a fixed subset of nodes with stochastic behavior, performing a hybrid of feed-forward and importance sampling for inference.

4.2 PCFF: EXPERIMENTAL VALIDATION
In the previous sections, we have been discussing existing approaches whose empirical evaluations have been thoroughly covered by prior work. The novel PCFF approach proposed in this section, however, requires experiments to check its practical effectiveness. For this we conduct a series of experiments [1]. Our emphasis is to understand the behavior of PCFF under various contexts, not to achieve the best result for any specific task. We only use chain graph interpretable components, and we adopt the reparameterization trick (Kingma & Welling, 2014) for ReLU PCFF samples.
The following experiments show that PCFF is overall an effective stochastic regularization method. Compared to dropout, it tends to produce more consistent performance improvements, and can sometimes outperform dropout. This confirms that our chain-graph-based reasoning has successfully led to an interesting novel deep learning method.
[Figure 4: Comparison of stochastic methods (None/Dropout/PCFF) in terms of image classification test errors (lower is better) under various settings. Left: MNIST/FashionMNIST datasets with a simple dense network and tanh/ReLU activation functions; Right: CIFAR-10 dataset with ResNet20 and varying drop/sample rates. All reported results are average values of three runs. Compared to dropout, PCFF can achieve comparable results, and tends to deliver more consistent improvements.]
Simple dense network: We start with a simple network with two dense hidden layers of 1024 nodes to classify MNIST (Lecun et al., 1998) and FashionMNIST (Xiao et al., 2017) images. We use PyTorch (Paszke et al., 2017), train with stochastic gradient descent (learning rate 0.01, momentum 0.9), and set aside 20% of the training data as a validation set for performance monitoring and early stopping. We set the drop rate to 0.5 for dropout; for PCFF we set the sample rate to 0.4 for tanh and 1.0 (full sampling) for ReLU. Figure 4 Left reports the test errors with different activation functions and stochastic regularizations.
We see that dropout and PCFF are overall comparable, and both improve the results in most cases. Also, the ReLU activation consistently produces better results than tanh. Additional experiments show that PCFF and dropout can be used together, which sometimes yields improved performance.
Convolutional residual network: To assess the applicability of PCFF in convolutional residual networks, we experiment on CIFAR-10 (Krizhevsky, 2009) image classification. For this we adapt an existing implementation (Idelbayev) to use the preactivation variant. We focus on the ResNet20 structure, and follow the original learning rate schedule, except for setting up a validation set of 10% of the training data to monitor training performance.
Figure 4 Right summarizes the test errors under different drop/sample rates.
We observe that in this case PCFF can improve the performance over a wide range of sample rates, whereas dropout is only effective with drop rate 0.1; large drop rates in this case significantly deteriorate the performance. We also observe a clear trade-off in the PCFF sample rate, where a partial sampling of 0.3 yields the best result.
Independently RNN: We complete our empirical evaluation of PCFF with an RNN test case. For this we used IndRNNs with 6 layers to solve the sequential/permuted MNIST classification problems, based on an existing implementation [2] provided by the authors of IndRNN (Li et al., 2018; 2019). We tested dropout with drop rate 0.1 and PCFF with sample rate 0.1 and report the average test accuracy of three runs. We notice that, while in the permuted MNIST case both dropout (0.9203) and PCFF (0.9145) improve the result (0.9045), in the sequential MNIST case dropout (0.9830) seems to worsen the performance (0.9841) whereas PCFF (0.9842) delivers a comparable result.
[1] Implementation available at: (Github link placeholder, provided as supplementary material.)
[2] https://github.com/Sunnydreamrain/IndRNN_pytorch

5 CONCLUSIONS AND DISCUSSIONS
In this work, we show that neural networks can be interpreted as layered chain graphs, and that feed-forward can be viewed as an approximate inference procedure for these models. This chain graph interpretation provides a unified theoretical framework that elucidates the underlying mechanism of real-world neural networks and provides coherent and in-depth theoretical support for a wide range of empirically established network designs. Furthermore, it also offers a solid foundation from which to derive new deep learning approaches, with additional help from the rich existing work on PGMs. It is thus a promising alternative neural network interpretation that deepens our theoretical understanding and unveils a new perspective for future deep learning research.
In the future, we plan to investigate a number of open questions that stem from this work, especially:
- Is the current chain graph interpretation sufficient to capture the full essence of neural networks? Based on the current results, we are reasonably optimistic that the proposed interpretation can cover an essential part of the neural network mechanism. However, compared to the function approximator view, it only covers a subset of existing techniques. Is this subset good enough?
- On a related note: can we find chain graph interpretations for other important network designs (or otherwise some chain graph interpretable alternatives with comparable or better performance)? The current work provides a good start, but it is by no means an exhaustive study.
- Finally, what other new deep learning models and procedures can we build upon the chain graph framework? The partially collapsed feed-forward inference proposed in this work is just a simple illustrative example, and we believe that many other promising deep learning techniques can be derived from the proposed chain graph interpretation.<|im_end|> <|im_start|>assistant ### Review Title alternative interpretation based on chain graph ### Review Text This paper tries to interpret neural networks with chain graphs, providing theoretical analysis of various neural network components. Furthermore, this chain graph interpretation has been used to propose a new approach (architecture), the partially collapsed feed-forward.
A layered chain graph representation is adopted to formulate neural networks as layered chain graphs. This further establishes that feed-forward can be interpreted as approximate probabilistic inference using linear approximations. Some concrete examples are analyzed based on the chain graph formulation. The overall analysis is a straightforward way to interpret neural networks with chain graphs, but it is hard to extract meaningful insights from this new interpretation that would improve current neural network models in terms of learning procedure or optimization. The proposed partially collapsed feed-forward is a good example of deriving a new approach from the chain graph interpretation. However, in terms of performance and complexity, it does not show impressive practical improvements compared to the baseline methods. Moreover, it seems quite similar to previous works as far as I remember; one similar work is 'stochastic feedforward neural networks'. I fully agree with the future works (open questions) in the conclusion and discussion section that this work still needs more investigation, although it is a good initial work. ### Review Rating 4: Ok but not good enough - rejection ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
rkeMj3VtvB
ICLR.cc/2020/Conference
2020
Multi-Task Adapters for On-Device Audio Inference
["M. Tagliasacchi", "F. de Chaumont Quitry", "D. Roblek"]
The deployment of deep networks on mobile devices requires to efficiently use the scarce computational resources, expressed as either available memory or computing cost. When addressing multiple tasks simultaneously, it is extremely important to share resources across tasks, especially when they all consume the same input data, e.g., audio samples captured by the on-board microphones. In this paper we propose a multi-task model architecture that consists of a shared encoder and multiple task-specific adapters. During training, we learn the model parameters as well as the allocation of the task-specific additional resources across both tasks and layers. A global tuning parameter can be used to obtain different multi-task network configurations finding the desired trade-off between cost and the level of accuracy across tasks. Our results show that this solution significantly outperforms a multi-head model baseline. Interestingly, we observe that the optimal resource allocation depends on both the task intrinsic characteristics as well as on the targeted cost measure (e.g., memory or computing cost).
["Audio", "multi-task learning"]
ABSTRACT
The deployment of deep networks on mobile devices requires to efficiently use the scarce computational resources, expressed as either available memory or computing cost. When addressing multiple tasks simultaneously, it is extremely important to share resources across tasks, especially when they all consume the same input data, e.g., audio samples captured by the on-board microphones. In this paper we propose a multi-task model architecture that consists of a shared encoder and multiple task-specific adapters. During training, we learn the model parameters as well as the allocation of the task-specific additional resources across both tasks and layers. A global tuning parameter can be used to obtain different multi-task network configurations finding the desired trade-off between cost and the level of accuracy across tasks. Our results show that this solution significantly outperforms a multi-head model baseline. Interestingly, we observe that the optimal resource allocation depends on both the task intrinsic characteristics as well as on the targeted cost measure (e.g., memory or computing cost).

1 INTRODUCTION
The availability of large annotated audio datasets (e.g., AudioSet (Gemmeke et al., 2017)) has enabled the training of models that are able to target a large number of audio classes, ranging from speech and music to coughing and baby crying. However, to achieve a good level of accuracy, it is necessary to use complex network architectures, characterized by a large number of parameters and floating point operations (FLOPs) (Hershey et al., 2017). For this reason, when models need to be deployed on mobile devices, it is customary to train more focused detectors, each targeting a handful of classes. This approach has two main advantages: i) the resulting model is typically significantly less complex, thus lending itself to be deployed on device; ii) training can leverage task-specific datasets and data augmentation strategies, thus leading to a higher level of accuracy when deployed in-the-wild.
On the other side, training and deploying independent models for each task fails to leverage the fact that such models might be extracting common features, given that they all consume the same input. Indeed, it might be argued that, especially in the early layers of the network architecture, models might be learning low-level features that are not task-specific. As a consequence, such independent models do not make optimal use of the scarce computational resources.
A common solution to this problem is to deploy a multi-head model (Georgiev et al., 2017; Lee et al., 2019), in which a shared common encoder computes general-purpose audio embeddings, and task-specific fully connected heads are added to target each task. However, when the number and heterogeneity of tasks increases, the audio embeddings might fail to capture all the information needed to solve all tasks. In this paper we propose a multi-task model that overcomes the aforementioned issue by adding task adapter networks in parallel to a shared encoder, as illustrated in Figure 1. The goal of such adapters is to learn task-specific features, possibly at different depths. The adapters have the same architecture as that of the shared encoder, but with a smaller number of channels in each layer. We designed the architecture in such a way that each layer in a task adapter receives as input the concatenation of the activations at the layer below, computed by both the shared encoder and the task adapter itself.
As a consequence, there are no inter-dependencies across tasks, and during inference one can decide to compute simultaneously either all tasks or a subset of them, depending on the available resource budget.
[Figure 1: Overview of the proposed model architecture with task adapters. Each parallelogram denotes a 2-dimensional channel (time and frequency). Arrows with square endings denote gating variables that control which subset of the channels contribute to the input of the layer above.]
Generally, tasks might be characterized by a different level of intrinsic difficulty, and require adaptation at different layers in the network. A fixed allocation of extra channels is likely to be suboptimal when costs are explicitly taken into account. Thus, our key contribution is to let the network learn which additional channels to use in each layer of each task adapter network, subject to a global cost constraint. Note that cost can be expressed either in terms of number of parameters or FLOPs, depending on the requirements imposed by the deployment on device (due to memory or battery constraints, respectively).
Our solution consists of introducing an explicit gating mechanism, controlled by a small set of trainable variables that determine whether each channel of the task adapters is used as input to the layer above. By learning such gating variables, the model can effectively decide to turn off some of the channels in the task adapter networks, thus learning how to allocate the available budget to tasks and layers. In summary, we propose the following main contributions:
- We propose a model that addresses multiple audio tasks simultaneously, sharing representations via a common encoder network and learning task-specific adapters at different depths, which are able to augment the common representation, achieving higher accuracy.
- We propose a learnable gating mechanism that allows one to sweep different trade-offs between accuracy and overall cost, by selectively turning off some of the channels in the task adapter networks.
- We evaluate the proposed model simultaneously on eight different audio tasks, ranging from keyword spotting to audio scene recognition, speaker identification, etc. Our empirical results show that it is possible to significantly improve the level of accuracy of several tasks with respect to a multi-head model by only marginally increasing the cost.
2 RELATED WORK
Learning representations that can be re-used for multiple tasks has received a great deal of attention in the recent literature. Domain adaptation and transfer learning (Yosinski et al., 2014; Donahue et al., 2014) are common methods used to fine-tune a linear classifier on top of the embeddings produced by a pre-trained network to address multiple tasks. An alternative approach consists of full fine-tuning (Cui et al., 2018), in which a pre-trained network is used as the starting point for the training process. However, when multiple tasks need to be addressed, neither solution is particularly suitable. In the first case, task adaptation is limited to the output layer of the network, which might not be sufficient when tasks require heterogeneous representations. In the second case, full fine-tuning might lead to very different models for each task, due to catastrophic forgetting. To overcome these limitations, Rebuffi et al. (2017) address the problem of adapting a common representation to different visual domains. They propose to use residual adapter modules, i.e., parametric modules that can steer the internal network representation from one domain to another. This approach was later extended in (Rebuffi et al., 2018), introducing a form of adapter that can be added in parallel to the main network architecture, and successfully applied to the NLP domain in (Houlsby et al., 2019). An alternative approach is proposed in (Mudrakarta et al., 2019), in which a task-specific model patch is learned to produce different embeddings for different downstream tasks. The patch can take the form of either batch-norm parameters or a subset of the weights of spatial convolution filters. All these methods allow adapting the network by changing a small number of weights. At the same time, during inference the whole network has to be reevaluated from scratch when moving from one task to the other, due to the dependencies introduced in the computation graph. This is in contrast with our model, which is able to target multiple tasks simultaneously.
Most of the works above deal with vision-related tasks. Multi-task learning in the context of general-purpose audio has been less explored. The prevailing approach is to train a single model addressing multiple classes at once (Hershey et al., 2017). However, this approach does not benefit from the availability of task-specific datasets, and model capacity might not be tailored to the subset of classes of interest. Recently, Lee et al. (2019) proposed a model architecture that addresses three tasks simultaneously. Unlike our approach, they start directly from time-domain waveforms. In addition, the task adaptation only occurs in the last layer of a multi-head model architecture. Similarly to our work, Georgiev et al. (2017) address multi-task audio learning for deployment on embedded devices. Depending on their characteristics, tasks can be processed by a multi-head model, in which only the last layer is task-specific, or have their own task-specific network. Conversely, our model can accommodate task adaptation at different depths and in a task-specific manner.
In our work we deliberately keep the task adapters of individual tasks separate from each other, so that it is possible to select the subset of tasks to evaluate depending on the available budget. This is in contrast to the approach explored by Cheung et al. (2019), which superimposes multiple models into a single entangled model, from which task-specific models can be later retrieved. At the same time, this approach seems to be more suitable for server-side inference, where the overall model complexity is less critical. In addition, while in our work we focus on a single modality, we recognize the importance of handling multiple modalities at once. For example, Kaiser et al. (2017) explored the case in which multiple tasks belong to different modalities (e.g., image, speech, text), showing that they might still benefit from sharing part of the network architecture when training is performed concurrently on all tasks. Also in this case the resulting model is large and not specifically tailored to be deployed on mobile devices.
Finally, the proposed method for determining how to size the adapters based on the available budget is related to the MorphNet solution that previously appeared in (Gordon et al., 2018).
However, our approach differs from multiple angles: i) a single-task learning model is discussed in (Gordon et al., 2018), while we focus on multi-task learning, thus investigating how allocation is performed across tasks; ii) we introduce explicit gating variables instead of re-using batch-norm scaling variables. This has the advantage of applying the solution also to layers in which batch norm might not be used (e.g., fully connected layers); iii) we adopt a different relaxation of the discrete cost allocation problem (further discussed in Section 3); iv) we evaluate the model in the context of audio tasks, while (Gordon et al., 2018) is mostly concerned with vision tasks.

3 METHODS
We consider a model architecture that receives one audio recording as input and produces as output predictions for $K$ downstream tasks simultaneously. The architecture consists of a shared encoder and $K$ task-adapter encoders. The underlying idea is that the shared encoder provides a general-purpose representation for the audio inputs, which might be suitable for different downstream tasks. However, a higher level of accuracy might be achieved by refining the representations computed at different depths, adding task-specific adapters in the form of additional channels.
The overall architecture of the model is illustrated in Figure 1. Both the shared encoder and each of the task adapters consist of the same number of convolutional layers, followed by a global max-pooling layer and a fully connected layer, for a total of $L$ layers. Let $f_{k,i}(\cdot)$, $i = 1, \dots, L$, denote the function computed by the generic layer at depth $i$. To simplify the notation, we denote with $k = 0$ the shared encoder and with $k = 1, \dots, K$ the task-specific encoders. The function $f_{k,i}(\cdot)$ produces as output a tensor of size $T_i \times F_i \times C_{k,i}$. Note that the number of temporal frames $T_i$ and frequency bins $F_i$ is the same for all values of $k$. For the task-specific encoders, we include a number of task-specific channels $C_{k,i} = \max(1, \lfloor \alpha_i C_{0,i} \rfloor)$, where $C_{0,i}$ and $\alpha_i$ are hyperparameters that determine the maximum achievable complexity of the model. Although it is possible to use a different value of $\alpha_i$ at each layer, throughout the rest of this paper we assume $\alpha_i = \alpha$, $i = 1, \dots, L$.
In the shared encoder, $f_{0,i}$ receives as input only the output of the previous layer. However, in each task-adapter encoder, $f_{k,i}$, $k \neq 0$, receives as input the concatenation of the outputs of $f_{0,i-1}$ and $f_{k,i-1}$ along the channel dimension. Therefore, the cost of computing $f_{k,i}$, $k \neq 0$, can be expressed as:
$$\mathrm{cost}_{k,i} = \beta_{i,k}\, C_{k,i} (C_{0,i-1} + C_{k,i-1}) \qquad (1)$$
(with $C_{0,0} = 1$, and $C_{k,0} = 0$ for $k \neq 0$). That is, the cost is proportional to the number of output channels $C_{k,i}$ multiplied by the number of input channels $(C_{0,i-1} + C_{k,i-1})$. The cost scaling factor $\beta_{i,k}$ is a constant value that can be computed based on: i) the intrinsic architecture of the layer; ii) the known sizes $T_i \times F_i$; iii) the target cost measure, i.e., FLOPs or number of parameters.
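As an illustration of this wiring and of Eq. (1), here is a sketch in PyTorch (ours, not the authors' code; the channel counts and spatial sizes are illustrative placeholders):

```python
import torch
import torch.nn as nn

class TaskAdapterLayer(nn.Module):
    """One adapter layer f_{k,i}: consumes the channel-wise concatenation
    of the shared encoder's and the adapter's previous activations."""
    def __init__(self, c0_prev, ck_prev, ck_out):
        super().__init__()
        self.conv = nn.Conv2d(c0_prev + ck_prev, ck_out, kernel_size=3, padding=1)

    def forward(self, shared_prev, adapter_prev):
        x = torch.cat([shared_prev, adapter_prev], dim=1)   # channel concat
        return torch.relu(self.conv(x))

c0_prev, ck_prev, ck_out = 24, 4, 9
layer = TaskAdapterLayer(c0_prev, ck_prev, ck_out)
out = layer(torch.randn(1, c0_prev, 24, 16), torch.randn(1, ck_prev, 24, 16))
print(out.shape)                         # torch.Size([1, 9, 24, 16])
print(ck_out * (c0_prev + ck_prev))      # cost of Eq. (1), up to the factor beta
```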
The proposed method aims at learning how to scale the number of channels used in each layer of the task-adapter encoders, i.e., to determine $c_{k,i} \leq C_{k,i}$, subject to a constraint on the total cost. To this end, we introduce a gating mechanism that controls the flow of the activations in the task-adapter encoders. Namely, for each layer of the task adapters we introduce $C_{k,i}$ additional trainable variables $a_{k,i} = [a_{k,i,1}, \dots, a_{k,i,C_{k,i}}]$, which modulate the output of each channel:
$$\tilde{f}_{k,i,c}(x) = \sigma(a_{k,i,c})\, f_{k,i,c}(x), \qquad (2)$$
where $\sigma(\cdot)$ is a non-linear function that maps its input to non-negative real numbers, i.e., $\mathbb{R} \to \mathbb{R}^+$. In our work, we use a clipped ReLU nonlinearity defined as follows:
$$\sigma(a; s) = \min(1, \mathrm{ReLU}(s \cdot a + 0.5)). \qquad (3)$$
The slope $s$ of the non-linearity is progressively increased during training, in such a way that, as $s \to \infty$, (3) acts as a gating function. Note that when the gating non-linearity is driven to be either 0 or 1, it is locked at this value, as the gradients are equal to zero. Therefore, it performs a hard selection of those channels that contribute to the network output and those that can be discarded. The number of active channels in the $i$-th layer of the $k$-th task adapter is equal to:
$$c_{k,i} = \sum_{c=1}^{C_{k,i}} 1[\sigma(a_{k,i,c}) > 0]. \qquad (4)$$
During training, we jointly learn both the parameters of the network and the gating variables. This is achieved by optimizing the following loss function:
$$\mathcal{L} = \sum_{k=1}^{K} w_k \left[ \mathcal{L}^{XE}_k + \lambda\, C^{adapters}_k \right], \qquad (5)$$
where $\mathcal{L}^{XE}_k$ is the cross-entropy loss for the $k$-th task, $w_k$ is an optional weighting term, and $C^{adapters}_k$ is a penalty term that captures the cost of the $k$-th task adapter for a given configuration of the gating variables:
$$C^{adapters}_k = \sum_{i=1}^{L} \beta_{i,k}\, \|a_{k,i}\|_1 (C_{0,i-1} + \|a_{k,i-1}\|_1). \qquad (6)$$
The Lagrange multiplier $\lambda$ indirectly controls the target cost: when $\lambda = 0$ the optimizer minimizes the cross-entropy loss $\mathcal{L}^{XE}_k$ only, thus potentially using all the available capacity, both of the shared encoder and of the task-adapter channels (i.e., $c_{k,i} = C_{k,i}$). Conversely, when increasing $\lambda$, the use of additional channels is penalized, thus inducing the network to use fewer channels. Note that $\|a_{k,i-1}\|_1$ is upper bounded by $\lfloor \alpha C_{0,i-1} \rfloor$; therefore when $\alpha \ll 1$, the second term in equation (6) is dominated by the constant $C_{0,i-1}$, and $C^{adapters}_k$ is proportional to the l-1 norm of the gating variable vector, thus promoting a sparse solution in which only a subset of the channels are used.
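The following sketch (ours) implements the clipped-ReLU gate of Eq. (3) and the penalty of Eq. (6), reading the norms in Eq. (6) as l-1 norms of the gated values $\sigma(a)$ (our assumption); the layer constants are placeholders:

```python
import torch

def gate(a, s):
    """Clipped ReLU of Eq. (3); hardens to a 0/1 gate as the slope s grows."""
    return torch.clamp(torch.relu(s * a + 0.5), max=1.0)

def adapter_penalty(a_layers, c0_prev_layers, beta_layers, s):
    """Cost penalty of Eq. (6) for one task adapter. Gate outputs are
    non-negative, so their sum equals the l-1 norm."""
    cost, prev_l1 = 0.0, 0.0        # no adapter channels feed the first layer
    for a_i, c0_prev, beta_i in zip(a_layers, c0_prev_layers, beta_layers):
        g = gate(a_i, s)
        cost = cost + beta_i * g.sum() * (c0_prev + prev_l1)
        prev_l1 = g.sum()
    return cost

a_layers = [torch.zeros(2, requires_grad=True) for _ in range(3)]
penalty = adapter_penalty(a_layers, c0_prev_layers=[1, 6, 12],
                          beta_layers=[1.0, 1.0, 1.0], s=1.0)
penalty.backward()                   # gradients flow into the gate variables
```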
4 EXPERIMENTS
Audio front-end: In our work we consistently use the same audio front-end, which processes input sequences sampled at 16 kHz, with a window size of 25 ms and a hop size equal to 10 ms, to compute the short-time Fourier transform (STFT), and then computes $F = 64$ mel-spaced frequency bins in the range 60-7800 Hz. Finally, we take the logarithm of the resulting spectrogram.
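A sketch of an equivalent front-end in torchaudio (ours; the authors do not specify their implementation, and the FFT size and log offset here are our assumptions):

```python
import torch
import torchaudio

# 25 ms window and 10 ms hop at 16 kHz -> 400 and 160 samples
frontend = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000, n_fft=400, win_length=400, hop_length=160,
    f_min=60.0, f_max=7800.0, n_mels=64)

waveform = torch.randn(1, 16000)              # 1 s of dummy audio
log_mel = torch.log(frontend(waveform) + 1e-6)
print(log_mel.shape)                          # torch.Size([1, 64, 101])
```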
Audio tasks: We evaluate the proposed multi-task adapters architecture addressing 8 different audio-based tasks simultaneously, covering both speech and non-speech related tasks. In all cases, the model receives as input a spectrogram slice of size 96x64, so that the receptive field is equal to T = 975 ms. We use the Speech Commands (SPC) dataset (Warden, 2018) to evaluate keyword spotting on 35 distinct keywords. LibriSpeech (LSP) (Panayotov et al., 2015) contains audio books read by 251 different speakers. We use the 100 hours training set to evaluate a speaker identification task. The Spoken Language Identification (LID) dataset (Tomasz, 2018) contains samples that belong to three different languages: English, Spanish and German, while the MUSAN (MUS) dataset (Snyder et al., 2015) distinguishes across three classes, namely music, speech and noise. We also use two datasets released in the context of the recent DCASE2018 Challenge: Bird Audio Detection (Stowell et al., 2018) (BSD), and TUT Urban Acoustic Scenes 2018 (Mesaros et al., 2018) (TUT), which contains labeled audio samples from 10 different urban environments. Finally, we consider two tasks based on the NSynth dataset (Engel et al., 2017). NSynthPitch (NPI) contains notes played by different musical instruments at 128 different pitch levels, while NSynthInstrument (NIF) distinguishes 11 different families of musical instruments. For all datasets, we consider the default train/test split, and provide results on slices extracted from the test set only. Note that the choice of the tasks used for the evaluation is consistent with the selected temporal granularity. As such, we do not consider speech recognition tasks, which generally require a much finer temporal granularity.
Model architecture: For both the shared encoder and the task-adapter networks, we use a convolutional neural network with L = 5 layers. Each convolutional layer is followed by max-pooling, to reduce the time-frequency dimensions by a factor of two at each layer, a ReLU non-linearity and batch normalization. Finally, a global max-pooling layer is followed by a fully connected layer. For each task, the output softmax layer receives as input the embeddings produced by the encoder, concatenated with the embeddings produced by the task-adapter network.
We consider two scenarios for the shared encoder architecture. In the multi-task learning scenario, the encoder is trained together with the task adapters. In this case, the number of channels in each layer is equal to [6, 12, 24, 48, 96], for a total of 65k parameters and 6M FLOPs. In the transfer learning scenario, we consider embeddings produced by an encoder pre-trained using a self-supervised learning method (Audio2Vec - CBoW), as described in (Tagliasacchi et al., 2019). In this case we use [8, 16, 32, 64, 128] channels in each layer and the encoder weights are frozen during training, for a total of 125k parameters and 18M FLOPs. The maximum number of channels in the task-adapter networks is determined by setting alpha = 0.2.
The loss function is minimized with stochastic gradient descent using the Adam optimizer with a learning rate equal to 10^-3. The batch size is set equal to 256 samples, that is, 256 / 8 = 32 samples from each task in a batch. Training is stopped after 1 million batch iterations, when the level of accuracy of all tasks has saturated.
Baselines: As a baseline, we consider a multi-head architecture which consists of a shared encoder and 8 different fully connected layers, one for each task. We also include results obtained by training a task-specific model. In this case, we use a model with [8, 16, 32, 64, 128] channels in each layer, for a total of 125k parameters and 18M FLOPs up to the embedding layer. The number of parameters of the output softmax layer depends on the number of output classes. For example, the LibriSpeech head requires an additional 251 x 128 ~ 32k parameters, the MUSAN head only 3 x 128 = 384 parameters. The FLOPs cost of this layer is negligible when compared to the rest of the network.
Results: We evaluate the proposed model architecture by computing the classification accuracy of each of the 8 tasks. We consider cost expressed either as the number of task-specific parameters, or task-specific FLOPs. Figure 2 shows the results for the multi-task learning scenario. We let the parameter lambda vary, so as to target different cost levels, and report the task accuracy in each case. The leftmost point in each curve represents the multi-head baseline, and the other two points are obtained by setting lambda = 10^-2, 10^-4, when expressing the number of parameters in thousands and FLOPs in millions.
[Figure 2: Accuracy vs. cost for the multi-task learning scenario. (a) Accuracy vs. cost:
Task | Multi-head | lambda=10^-2 (params) | lambda=10^-4 (params) | lambda=10^-2 (FLOPs) | lambda=10^-4 (FLOPs) | Single-task
MUS | 0.94 | 0.94 (+0.5%) | 0.93 (-1.1%) | 0.95 (+0.4%) | 0.95 (+0.3%) | 0.98
LSP | 0.91 | 0.91 (+0.8%) | 0.95 (+5.2%) | 0.94 (+4.2%) | 0.95 (+5.0%) | 0.98
BSD | 0.74 | 0.75 (+0.8%) | 0.76 (+2.6%) | 0.76 (+2.3%) | 0.76 (+2.1%) | 0.73
TUT | 0.71 | 0.72 (+1.8%) | 0.76 (+7.8%) | 0.75 (+5.0%) | 0.77 (+6.7%) | 0.82
SPC | 0.66 | 0.65 (-0.3%) | 0.65 (-0.4%) | 0.66 (+2.2%) | 0.67 (+3.1%) | 0.75
LID | 0.59 | 0.65 (+11.9%) | 0.67 (+14.7%) | 0.63 (+5.3%) | 0.65 (+8.9%) | 0.64
NPI | 0.62 | 0.65 (+4.6%) | 0.70 (+12.8%) | 0.66 (+6.8%) | 0.71 (+15.0%) | 0.79
NIF | 0.55 | 0.57 (+2.7%) | 0.59 (+6.6%) | 0.59 (+8.1%) | 0.59 (+7.8%) | 0.63
Mean | 0.71 | 0.73 (+2.9%) | 0.75 (+6.0%) | 0.74 (+4.3%) | 0.75 (+6.1%) | 0.79
(b) Accuracy vs. number of parameters (shared encoder: trained); (c) Accuracy vs. FLOPs (shared encoder: trained).]
Note that the x-axis in both figures represents the task-specific cost only, i.e., it does not include the cost of computing the shared encoder. For this reason the curves do not start at zero, because we include the task-specific cost of the softmax layer of each head, which is particularly noticeable when considering cost based on the number of parameters.
Overall, the average accuracy across tasks grows from 0.71 to 0.75 by adding task-specific adapters (+6% in relative terms). However, there are significant differences across tasks. For example, MUSAN starts from a higher level of accuracy in the multi-head model, and no improvement is observed when adding task-specific adapters. Conversely, NSynthPitch is quite different from all other tasks, and the shared encoder is unable to capture the features necessary to solve this task. As a result, a relative +15% / +13% is observed when cost is measured in terms of number of parameters and FLOPs, respectively. For 6 out of the 8 tasks, the proposed model achieves a level of accuracy which is in-between the multi-head baseline and independent single-task models.
[Figure 3: Accuracy vs. cost for the transfer learning scenario. (a) Accuracy vs. cost:
Task | Multi-head | lambda=10^-2 (params) | lambda=10^-4 (params) | lambda=10^-2 (FLOPs) | lambda=10^-4 (FLOPs) | Single-task
MUS | 0.93 | 0.94 (+1.2%) | 0.94 (+0.4%) | 0.95 (+1.9%) | 0.94 (+0.8%) | 0.98
LSP | 0.62 | 0.73 (+17.5%) | 0.83 (+33.8%) | 0.82 (+32.4%) | 0.85 (+37.8%) | 0.98
BSD | 0.67 | 0.72 (+6.8%) | 0.75 (+11.2%) | 0.73 (+7.9%) | 0.74 (+8.4%) | 0.73
TUT | 0.47 | 0.51 (+8.6%) | 0.54 (+15.3%) | 0.55 (+16.7%) | 0.56 (+18.3%) | 0.82
SPC | 0.19 | 0.49 (+160%) | 0.61 (+223%) | 0.59 (+209%) | 0.63 (+228%) | 0.75
LID | 0.52 | 0.68 (+32.0%) | 0.66 (+28.1%) | 0.69 (+38.4%) | 0.72 (+44.0%) | 0.64
NPI | 0.46 | 0.50 (+9.2%) | 0.59 (+28.6%) | 0.58 (+27.2%) | 0.60 (+30.8%) | 0.79
NIF | 0.40 | 0.47 (+18.4%) | 0.51 (+27.9%) | 0.54 (+35.6%) | 0.51 (+30.2%) | 0.63
Mean | 0.53 | 0.63 (+31.6%) | 0.68 (+46.0%) | 0.68 (+46.2%) | 0.69 (+49.8%) | 0.79
(b) Accuracy vs. number of parameters (shared encoder: frozen); (c) Accuracy vs. FLOPs (shared encoder: frozen).]
When comparing with the latter, one needs to bear in mind that the overall complexity of the single-task models is significantly larger than the architecture evaluated in the multi-task learning scenario, also when lambda -> 0 and all gates are open. To further bridge the gap, one could increase alpha, thus allocating additional task-specific channels in exchange for additional complexity. For two tasks, namely Bird Audio Detection and Spoken Language Identification, the proposed multi-task architecture outperforms the corresponding single-task baselines. This can be explained by the fact that both tasks have a relatively small dataset and the single-task model is likely to overfit. Conversely, training jointly with other tasks acts as a form of regularization, thus leading to higher accuracy on the test set.
Figure 3 shows similar results for the transfer learning scenario. In this case, the pre-trained encoder is frozen and, as already observed in (Tagliasacchi et al., 2019), it provides good representations only for some of the tasks. For example, a simple multi-head model achieves a level of accuracy equal to 0.19 on the Speech Commands task. By adding task-specific adapters, the accuracy grows above 0.60, regardless of the adopted cost measure. In general, much larger improvements are observed in this case, with an average relative increase in accuracy above 40%.
It is also interesting to observe how the model decides to allocate the additional budget available for task adapters, by inspecting the status of the gates upon training convergence. Figure 4 illustrates this aspect for two tasks, namely Bird Audio Detection and NSynthPitch. Each sub-figure shows how the gates evolve when decreasing the value of lambda, thus relaxing the cost constraint. First, we observe that different tasks require a different level of adaptation. In this example, NSynthPitch uses a larger number of additional channels. Second, the status of the gates depends on the selected cost measure. When considering FLOPs, the last fully connected layer is relatively inexpensive, thus most of its gates are kept open. Conversely, considering the number of parameters requires a more parsimonious use of the fully connected layer, as it accounts for a large fraction of the total cost.
[Figure 4: Gate values for Bird Audio Detection ((a), (b)) and NSynthPitch ((c), (d)). Cost measure: number of parameters in (a) and (c); FLOPs in (b) and (d). Crosses represent closed gates corresponding to unused channels. Within each sub-figure, the Lagrange multiplier lambda increases going top-to-bottom, allowing more gates to stay open.]

5 CONCLUSION
In this paper we propose a multi-task learning model that is able to address a wide variety of audio tasks. Unlike previously proposed approaches, our model can compute simultaneously either all tasks at once, or a subset of them, depending on the available computational resource budget. The allocation of the task-specific resources is handled jointly with training, by learning which additional channels should be used for each task and layer.
Experimental results show that the proposed model outperforms a multi-head architecture baseline and approaches the accuracy achievable when using separate task-specific models. Our future research will pursue two different directions: i) on the one hand, we will explore how tasks can be grouped together, so as to share a common representation within each group; ii) on the other hand, we will relax the constraint on the input audio front-end, investigating how different tasks may benefit from task-specific time-to-frequency transformations.
BJgWXCrpFS
Official Blind Review #2
3: Weak Reject
The authors present a method for the audio classification problem, using a pre-trained network (shared encoder) and creating adapters for each new task. This is an interesting alternative to transfer learning, and it would be great if minimal adapters can be designed to work with a baseline network. The adapter network is of the same depth as the shared encoder network, and uses both its own output and the encoder's output from the previous layer for its next layer. Using learnable gates, the channels of its previous layers can be selected to be on/off. The idea is to create a network that can be tuned for accuracy and cost (params and FLOPs) for on-device inference. Since the motivation behind the paper is networks that are suitable for on-device inference and multi-task adapters, I find the paper lacking in the following respects: 1. Describing how exactly they will implement this architecture on-device. The channels that have been turned off by the gates can have their parameters trimmed, and the next layer can ignore them. However, it would be great to get actual numbers in terms of latency, size, etc. from a model that is running on-device, with an explanation of the work done to translate the training model graph to an efficient inference graph. 2. Comparison with other on-device model compression techniques like quantization, distillation, pruning, etc. 3. Comparison with the transfer learning variant, where only the last few layers are trainable. While this is an interesting approach, it would be more convincing to know that this approach: a) Translates to actual improvements in on-device inference, along with implementation details. b) Is better than / competitive with other techniques for compression and transfer learning. With the paper as it currently stands, I would recommend consideration for a dedicated workshop, but weak reject for the main conference. I would be willing to be convinced otherwise, if the authors can fill in more details as mentioned.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Multi-Task Adapters for On-Device Audio Inference ### Paper Abstract The deployment of deep networks on mobile devices requires to efficiently use the scarce computational resources, expressed as either available memory or computing cost. When addressing multiple tasks simultaneously, it is extremely important to share resources across tasks, especially when they all consume the same input data, e.g., audio samples captured by the on-board microphones. In this paper we propose a multi-task model architecture that consists of a shared encoder and multiple task-specific adapters. During training, we learn the model parameters as well as the allocation of the task-specific additional resources across both tasks and layers. A global tuning parameter can be used to obtain different multi-task network configurations finding the desired trade-off between cost and the level of accuracy across tasks. Our results show that this solution significantly outperforms a multi-head model baseline. Interestingly, we observe that the optimal resource allocation depends on both the task intrinsic characteristics as well as on the targeted cost measure (e.g., memory or computing cost). ### Paper Keywords ["Audio", "multi-task learning"] ### Paper Content ABSTRACTThe deployment of deep networks on mobile devices requires to efficiently use thescarce computational resources, expressed as either available memory or computingcost. When addressing multiple tasks simultaneously, it is extremely importantto share resources across tasks, especially when they all consume the same inputdata, e.g., audio samples captured by the on-board microphones. In this paperwe propose a multi-task model architecture that consists of a shared encoder andmultiple task-specific adapters. During training, we learn the model parameters aswell as the allocation of the task-specific additional resources across both tasks andlayers. A global tuning parameter can be used to obtain different multi-task networkconfigurations finding the desired trade-off between cost and the level of accuracyacross tasks. Our results show that this solution significantly outperforms a multi-head model baseline. Interestingly, we observe that the optimal resource allocationdepends on both the task intrinsic characteristics as well as on the targeted costmeasure (e.g., memory or computing cost).1 I NTRODUCTIONThe availability of large annotated audio datasets (e.g., AudioSet (Gemmeke et al., 2017)) has enabledto train models that are able to target a large number of audio classes, ranging from speech and music,to coughing and baby crying. However, to achieve a good level of accuracy, it is necessary to usecomplex network architectures, characterized by a large number of parameters and floating pointoperations (FLOPs) (Hershey et al., 2017). 
For this reason, when models need to be deployed onmobile device, it is customary to train more focused detectors, each targeting a handful of classes.This approach has two main advantages: i) the resulting model is typically significantly less complex,thus lending itself to be deployed on device; ii) training can leverage task-specific datasets and dataaugmentation strategies, thus leading to a higher level of accuracy when deployed in-the-wild .On the other side, training and deploying independent models for each task fails to leverage the factthat such models might be extracting common features, given that they all consume the same input.Indeed, it might be argued that especially in the early layers of the network architecture, modelsmight be learning low-level features that are not task-specific. As a consequence, such independentmodels do not make optimal use of the scarce computational resources.A common solution to this problem is to deploy a multi-head model (Georgiev et al., 2017; Leeet al., 2019), in which a shared common encoder computes general-purpose audio embeddings, andtask-specific fully connected heads are added to target each task. However, when the number andheterogeneity of tasks increases, the audio embeddings might fail to capture all the information neededto solve all tasks. In this paper we propose a multi-task model that overcomes the aforementionedissue by adding task adapter networks in parallel to a shared encoder, as illustrated in Figure 1. Thegoal of such adapters is to learn task-specific features, possibly at different depths. The adapters havethe same architecture as that of the shared encoder, but with a smaller number of channels in eachlayer. We designed the architecture in such a way that each layer in a task adapter receives as inputthe concatenation of the activations at the layer below, computed by both the shared encoder andthe task adapter itself. As a consequence, there are no inter-dependencies across tasks, and duringinference one can decide to compute simultaneously either all tasks or a subset of them, dependingon the available resource budget.1Under review as a conference paper at ICLR 2020Task 1 Task 2 Task K Layer L - 2 (Conv) Layer L - 1 (Conv) Layer L - 2 (Conv) Layer L - 1 (FC) Shared Encoder Layer L (softmax) Input Figure 1: Overview of the proposed model architecture with task adapters. Each parallelogramdenotes a 2-dimensional channel (time and frequency). Arrow with square ending denote gatingvariables that control which subset of the channels contribute to the input of the layer above.Generally, tasks might be characterized by a different level of intrinsic difficulty, and require adapta-tion at different layers in the network. A fixed allocation of extra channels is likely to be suboptimalwhen costs are explicitly taken into account. Thus, our key contribution is to let the network learnwhich additional channels to use in each layer of each task adapter network, subject to a globalcost constraint. Note that cost can be expressed either in terms of number of parameters or FLOPs,depending on the requirements imposed by the deployment on device (respectively due to memory orbattery constraints).Our solution consists of introducing an explicit gating mechanism, controlled by a small set oftrainable variables that determine whether each channel of the task adapters is used as input to thelayer above. 
By learning such gating variables, the model can effectively decide to turn off some ofthe channels in the task adapter networks, thus learning how to allocate the available budget to tasksand layers. In summary, we propose the following main contributions:We propose a model that addresses multiple audio tasks simultaneously, sharing representa-tions via a common encoder network and learning task-specific adapters at different depths,which are able to augment the common representation and achieving higher accuracy.We propose a learnable gating mechanism that allows one to sweep different trade-offsbetween accuracy and overall cost, by selectively turning off some of the channels in thetask adapter networks.We evaluate the proposed model simultaneously on eight different audio tasks, rangingfrom keyword spotting to audio scene recognition, speaker identification, etc. Our empiricalresults show that it is possible to significantly improve the level of accuracy of several taskswith respect to a multi-head model by only marginally increasing the cost.2 R ELATED WORKLearning representations that can be re-used for multiple tasks has received a great deal of attentionin the recent literature. Domain adaptation and transfer learning (Yosinski et al., 2014; Donahueet al., 2014) are common methods used to fine-tune a linear classifier on top of the embeddings2Under review as a conference paper at ICLR 2020produced by a pre-trained network to address multiple tasks. An alternative approach consists of fullfine-tuning (Cui et al., 2018), in which a pre-trained network is used as starting point for the trainingprocess. However, when multiple tasks need to be addressed, neither solution is particularly suitable.In the first case, task-adaptation is limited to the output layer of the network, which might not besufficient when tasks require heterogeneous representations. In the second case, full fine-tuningmight lead to very different models for each task, due to catastrophic forgetting. To overcome theselimitations, Rebuffi et al. (2017) address the problem of adapting a common representation to differentvisual domains. They propose to use residual adapter modules, i.e., parametric modules that can steerthe internal network representation from one domain to another. This approach was later extendedin (Rebuffi et al., 2018), introducing a form of adapter that can be added in parallel to the mainnetwork architecture, and successfully applied to the NLP domain in (Houlsby et al., 2019). Analternative approach is proposed in (Mudrakarta et al., 2019), in which a task-specific model patchis learned to produce different embeddings for different downstream tasks. The patch can take theform of either batch-norm parameters or a subset of the weights of spatial convolution filters. Allthese methods allow to adapt the network by changing a small number of weights. At the same time,during inference the whole network has to be reevaluated from scratch when moving from one task tothe other, due to the dependencies introduced in the computation graph. This is in constrast with ourmodel, which is able to target simultaneously multiple tasks at once.Most of the works above deal with vision-related tasks. Multi-task learning in the context of general-purpose audio has been less explored. The prevailing approach is to train a single model addressingmultiple classes at once (Hershey et al., 2017). 
However, this approach does not benefit from the availability of task-specific datasets, and model capacity might not be tailored to the subset of classes of interest. Recently, Lee et al. (2019) proposed a model architecture that addresses three tasks simultaneously. Unlike our approach, they start directly from time-domain waveforms. In addition, the task adaptation only occurs in the last layer of a multi-head model architecture. Similarly to our work, Georgiev et al. (2017) address multi-task audio learning for deployment on embedded devices. Depending on its characteristics, a task can be processed by a multi-head model, in which only the last layer is task-specific, or have its own task-specific network. Conversely, our model can accommodate task adaptation at different depths and in a task-specific manner.

In our work we deliberately keep the task adapters of individual tasks separate from each other, so that it is possible to select the subset of tasks to evaluate depending on the available budget. This is in contrast to the approach explored by Cheung et al. (2019), which superimposes multiple models into a single entangled model, from which task-specific models can be later retrieved. At the same time, this approach seems to be more suitable for server-side inference, where the overall model complexity is less critical. In addition, while in our work we focus on a single modality, we recognize the importance of handling multiple modalities at once. For example, Kaiser et al. (2017) explored the case in which multiple tasks belong to different modalities (e.g., image, speech, text), showing that they might still benefit from sharing part of the network architecture when training is performed concurrently on all tasks. Also in this case the resulting model is large and not specifically tailored to be deployed on mobile devices.

Finally, the proposed method for determining how to size the adapters based on the available budget is related to the MorphNet solution that previously appeared in (Gordon et al., 2018). However, our approach differs from multiple angles: i) a single-task learning model is discussed in (Gordon et al., 2018), while we focus on multi-task learning, thus investigating how allocation is performed across tasks; ii) we introduce explicit gating variables instead of re-using batch-norm scaling variables, which has the advantage of applying the solution also to layers in which batch norm might not be used (e.g., fully connected layers); iii) we adopt a different relaxation of the discrete cost allocation problem (further discussed in Section 3); iv) we evaluate the model in the context of audio tasks, while (Gordon et al., 2018) is mostly concerned with vision tasks.

3 METHODS

We consider a model architecture that receives one audio recording as input and produces as output predictions for K downstream tasks simultaneously. The architecture consists of a shared encoder and K task-adapter encoders. The underlying idea is that the shared encoder provides a general-purpose representation for the audio inputs, which might be suitable for different downstream tasks. However, a higher level of accuracy might be achieved by refining the representations computed at different depths by adding task-specific adapters in the form of additional channels. The overall architecture of the model is illustrated in Figure 1.
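To make the data flow concrete, here is a minimal sketch of one adapter layer consuming the channel-wise concatenation of the shared encoder's and the adapter's previous activations. This is our illustration, not the authors' code; all sizes are example values (with α = 0.2, as introduced in Sections 3–4).

```python
import torch
import torch.nn as nn

class AdapterLayer(nn.Module):
    """One conv layer of a task adapter: its input is the concatenation,
    along the channel axis, of the shared encoder's and the adapter's own
    activations from the layer below (no cross-task dependencies)."""
    def __init__(self, shared_in, adapter_in, adapter_out):
        super().__init__()
        self.conv = nn.Conv2d(shared_in + adapter_in, adapter_out,
                              kernel_size=3, padding=1)
        self.act = nn.ReLU()

    def forward(self, shared_feat, adapter_feat):
        x = torch.cat([shared_feat, adapter_feat], dim=1)  # channel concat
        return self.act(self.conv(x))

# Example: the shared encoder emits 12 channels at this depth; with
# alpha = 0.2 the adapter keeps max(1, floor(0.2 * 12)) = 2 channels.
layer = AdapterLayer(shared_in=12, adapter_in=2, adapter_out=4)
shared = torch.randn(1, 12, 48, 32)   # (batch, channels, time, freq)
adapter = torch.randn(1, 2, 48, 32)
out = layer(shared, adapter)          # -> shape (1, 4, 48, 32)
```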
Both the shared encoder and each of the task adapters consist of the same number of convolutional layers, followed by a global max-pooling layer and a fully connected layer, for a total of L layers. Let f_{k,i}(\cdot), i = 1, \dots, L, denote the function computed by the generic layer at depth i. To simplify the notation, we denote with k = 0 the shared encoder and with k = 1, \dots, K the task-specific encoders. The function f_{k,i}(\cdot) produces as output a tensor of size T_i \times F_i \times C_{k,i}. Note that the number of temporal frames T_i and frequency bins F_i is the same for all values of k. For the task-specific encoders, we include a number of task-specific channels C_{k,i} = \max(1, \lfloor \alpha_i C_{0,i} \rfloor), where C_{0,i} and \alpha_i are hyperparameters that determine the maximum achievable complexity of the model. Although it is possible to use a different value of \alpha_i at each layer, throughout the rest of this paper we assume \alpha_i = \alpha, i = 1, \dots, L.

In the shared encoder, f_{0,i} receives as input only the output of the previous layer. However, in each task-adapter encoder, f_{k,i}, k \neq 0, receives as input the concatenation of the outputs of f_{0,i-1} and f_{k,i-1} along the channel dimension. Therefore, the cost of computing f_{k,i}, k \neq 0, can be expressed as:

cost_{k,i} = \beta_{i,k} \, C_{k,i} \, (C_{0,i-1} + C_{k,i-1})    (1)

(with C_{0,0} = 1 and C_{k,0} = 0 for k \neq 0). That is, the cost is proportional to the number of output channels C_{k,i} multiplied by the number of input channels (C_{0,i-1} + C_{k,i-1}). The cost scaling factor \beta_{i,k} is a constant value that can be computed based on: i) the intrinsic architecture of the layer; ii) the known sizes T_i \times F_i; iii) the target cost measure, i.e., FLOPs or number of parameters.

The proposed method aims at learning how to scale the number of channels to be used in each layer of the task-adapter encoders, i.e., to determine c_{k,i} \leq C_{k,i}, subject to a constraint on the total cost. To this end, we introduce a gating mechanism that controls the flow of the activations in the task-adapter encoders. Namely, for each layer of the task adapters we introduce C_{k,i} additional trainable variables a_{k,i} = [a_{k,i,1}, \dots, a_{k,i,C_{k,i}}], which modulate the output of each channel:

\tilde{f}_{k,i,c}(x) = \sigma(a_{k,i,c}) \, f_{k,i,c}(x)    (2)

where \sigma(\cdot) is a non-linear function that maps its input to non-negative real numbers, i.e., \mathbb{R} \to \mathbb{R}^+. In our work, we use a clipped ReLU nonlinearity defined as follows:

\sigma(a; s) = \min(1, \mathrm{ReLU}(s \cdot a + 0.5))    (3)

The slope of the non-linearity s is progressively increased during training, in such a way that, as s \to \infty, (3) acts as a gating function. Note that when the gating non-linearity is driven to be either 0 or 1, it is locked at this value, as the gradients are equal to zero. Therefore, it performs a hard selection of those channels that are contributing to the network output and those that can be discarded. The number of active channels in the i-th layer of the k-th task adapter is equal to:

c_{k,i} = \sum_{c=1}^{C_{k,i}} \mathbb{1}[\sigma(a_{k,i,c}) > 0]    (4)
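A minimal sketch of the gating mechanism of Eqs. (1)–(4) follows. This is our illustration under example shapes and an example cost constant, not the authors' code.

```python
import torch

def gate(a, s):
    """Clipped-ReLU gate of Eq. (3): sigma(a; s) = min(1, ReLU(s*a + 0.5)).
    As the slope s grows during training it approaches a hard 0/1 gate; a
    gate saturated at 0 or 1 receives zero gradient and stays locked."""
    return torch.clamp(torch.relu(s * a + 0.5), max=1.0)

a_ki = torch.randn(4, requires_grad=True)   # one gate per adapter channel
f_ki = torch.randn(1, 4, 48, 32)            # adapter activations at layer i
f_gated = gate(a_ki, s=2.0).view(1, -1, 1, 1) * f_ki   # Eq. (2)

c_ki = int((gate(a_ki, s=2.0) > 0).sum())   # active channels, Eq. (4)
beta_ik = 1.0                               # example per-layer cost constant
cost_ki = beta_ik * c_ki * (12 + 2)         # Eq. (1): C_out * (C0 + Ck inputs)
```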
During training, we jointly learn both the parameters of the network and the gating variables. This is achieved by optimizing the following loss function:

\mathcal{L} = \sum_{k=1}^{K} w_k \left[ \mathcal{L}^{XE}_k + \lambda \, C^{adapters}_k \right]    (5)

where \mathcal{L}^{XE}_k is the cross-entropy loss for the k-th task, w_k is an optional weighting term, and C^{adapters}_k is a penalty term that captures the cost of the k-th task adapter for a given configuration of the gating variables:

C^{adapters}_k = \sum_{i=1}^{L} \beta_{i,k} \, \|\sigma(a_{k,i})\|_1 \, (C_{0,i-1} + \|\sigma(a_{k,i-1})\|_1)    (6)

The Lagrange multiplier \lambda indirectly controls the target cost, i.e., when \lambda = 0 the optimizer minimizes the cross-entropy loss \mathcal{L}^{XE}_k only, thus potentially using all the available capacity, both of the shared encoder and of the task-adapter channels (i.e., c_{k,i} = C_{k,i}). Conversely, when \lambda increases, the use of additional channels is penalized, thus inducing the network to use fewer channels. Note that \|\sigma(a_{k,i-1})\|_1 is upper bounded by \lfloor \alpha C_{0,i-1} \rfloor, therefore when \alpha \ll 1 the second term in equation (6) is dominated by the constant C_{0,i-1}, and C^{adapters}_k is proportional to the l1 norm of the gating variable vector, thus promoting a sparse solution in which only a subset of the channels are used.

[Figure 2 (three panels): (a) accuracy vs. cost table, reproduced below; (b) accuracy vs. number of parameters per task; (c) accuracy vs. FLOPs per task.] Figure 2: Accuracy vs. cost for the multi-task learning scenario.

Task | Multi-head | Num. params, λ = 10^-2 | Num. params, λ = 10^-4 | FLOPs, λ = 10^-2 | FLOPs, λ = 10^-4 | Single-task
MUS | 0.94 | 0.94 (+0.5%) | 0.93 (-1.1%) | 0.95 (+0.4%) | 0.95 (+0.3%) | 0.98
LSP | 0.91 | 0.91 (+0.8%) | 0.95 (+5.2%) | 0.94 (+4.2%) | 0.95 (+5.0%) | 0.98
BSD | 0.74 | 0.75 (+0.8%) | 0.76 (+2.6%) | 0.76 (+2.3%) | 0.76 (+2.1%) | 0.73
TUT | 0.71 | 0.72 (+1.8%) | 0.76 (+7.8%) | 0.75 (+5.0%) | 0.77 (+6.7%) | 0.82
SPC | 0.66 | 0.65 (-0.3%) | 0.65 (-0.4%) | 0.66 (+2.2%) | 0.67 (+3.1%) | 0.75
LID | 0.59 | 0.65 (+11.9%) | 0.67 (+14.7%) | 0.63 (+5.3%) | 0.65 (+8.9%) | 0.64
NPI | 0.62 | 0.65 (+4.6%) | 0.70 (+12.8%) | 0.66 (+6.8%) | 0.71 (+15.0%) | 0.79
NIF | 0.55 | 0.57 (+2.7%) | 0.59 (+6.6%) | 0.59 (+8.1%) | 0.59 (+7.8%) | 0.63
Mean | 0.71 | 0.73 (+2.9%) | 0.75 (+6.0%) | 0.74 (+4.3%) | 0.75 (+6.1%) | 0.79

4 EXPERIMENTS

Audio front-end: In our work we consistently use the same audio front-end, which processes input sequences sampled at 16 kHz, with a window size of 25 ms and a hop size equal to 10 ms to compute the short-time Fourier transform (STFT), and then computes F = 64 mel-spaced frequency bins in the range 60–7800 Hz. Finally, we take the logarithm of the resulting spectrogram.

Audio tasks: We evaluate the proposed multi-task adapters architecture addressing simultaneously 8 different audio-based tasks, covering both speech and non-speech related tasks. In all cases, the model receives as input a spectrogram slice of size 96 × 64, so that the receptive field is equal to T = 975 ms. We use the Speech Commands (SPC) dataset (Warden, 2018) to evaluate keyword spotting on 35 distinct keywords. LibriSpeech (LSP) (Panayotov et al., 2015) contains audio books read by 251 different speakers. We use the 100 hours training set to evaluate a speaker identification task. The Spoken Language Identification (LID) dataset (Tomasz, 2018) contains samples that belong to three different languages: English, Spanish and German, while the MUSAN (MUS) dataset (Snyder et al., 2015) distinguishes across three classes, namely music, speech and noise. We also use two datasets released in the context of the recent DCASE2018 Challenge: Bird Audio Detection (Stowell et al., 2018) (BSD), and TUT Urban Acoustic Scenes 2018 (Mesaros et al., 2018) (TUT), which contains labeled audio samples from 10 different urban environments. Finally, we consider two tasks based on the NSynth dataset (Engel et al., 2017).
NSynthPitch (NPI) contains notes played by different musical instruments at 128 different pitch levels, while NSynthInstrument (NIF) distinguishes 11 different families of musical instruments. For all datasets, we consider the default train/test split, and provide results on slices extracted from the test set only. Note that the choice of the tasks used for the evaluation is consistent with the selected temporal granularity. As such, we do not consider speech recognition tasks, which generally require a much finer temporal granularity.

Model architecture: For both the shared encoder and the task-adapter networks, we use a convolutional neural network with L = 5 layers. Each convolutional layer is followed by max-pooling (to reduce the time-frequency dimensions by a factor of two at each layer), a ReLU non-linearity and batch normalization. Finally, a global max-pooling layer is followed by a fully-connected layer. For each task, the output softmax layer receives as input the embeddings produced by the encoder, concatenated with the embeddings produced by the task-adapter network.

We consider two scenarios for the shared encoder architecture. In the multi-task learning scenario, the encoder is trained together with the task adapters. In this case, the number of channels in each layer is equal to [6, 12, 24, 48, 96], for a total of 65k parameters and 6M FLOPs. In the transfer learning scenario, we consider embeddings produced by an encoder pre-trained using a self-supervised learning method (Audio2Vec - CBoW), as described in (Tagliasacchi et al., 2019). In this case we use [8, 16, 32, 64, 128] channels in each layer and the encoder weights are frozen during training, for a total of 125k parameters and 18M FLOPs. The maximum number of channels in the task-adapter networks is determined by setting α = 0.2.

The loss function is minimized with stochastic gradient descent using the Adam optimizer with a learning rate equal to 10^-3. The batch size is set equal to 256 samples, that is, 256 / 8 = 32 samples from each task in a batch. Training is stopped after 1 million batch iterations, when the level of accuracy of all tasks is saturated.

Baselines: As a baseline, we consider a multi-head architecture which consists of a shared encoder and 8 different fully connected layers, one for each task. We also include results obtained by training a task-specific model. In this case, we use a model with [8, 16, 32, 64, 128] channels in each layer, for a total of 125k parameters and 18M FLOPs up to the embedding layer. The number of parameters of the output softmax layer depends on the number of output classes. For example, the LibriSpeech head requires an additional 251 × 128 ≈ 32k parameters, the MUSAN head only 3 × 128 = 384 parameters. The FLOPs cost of this layer is negligible when compared to the rest of the network.

Results: We evaluate the proposed model architecture by computing the classification accuracy of each of the 8 tasks. We consider cost expressed either as number of task-specific parameters, or task-specific FLOPs. Figure 2 shows the results for the multi-task learning scenario. We let the parameter λ vary, so as to target different cost levels, and report the task accuracy in each case. The leftmost point in each curve represents the multi-head baseline, and the other two points are obtained by setting λ = 10^-2 and λ = 10^-4, when expressing number of parameters in thousands and FLOPs in millions. Note that the x-axis in both figures represents the task-specific cost only, i.e., it does not include the cost of computing the shared encoder.
For this reason the curves do not start at zero: we include the task-specific cost of the softmax layer of each head, which is particularly noticeable when considering cost based on the number of parameters.

Overall, the average accuracy across tasks grows from 0.71 to 0.75 by adding task-specific adapters (+6% in relative terms). However, there are significant differences across tasks. For example, MUSAN starts from a higher level of accuracy in the multi-head model, and no improvement is observed when adding task-specific adapters. Conversely, NSynthPitch is quite different from all other tasks, and the shared encoder is unable to capture the features necessary to solve this task. As a result, a relative +15% / +13% is observed when cost is measured in terms of number of parameters and FLOPs, respectively. For 6 out of the 8 tasks, the proposed model achieves a level of accuracy which is in-between the multi-head baseline and independent single-task models. When comparing with the latter, one needs to bear in mind that the overall complexity of the single-task models is significantly larger than the architecture evaluated in the multi-task learning scenario, also when λ → 0 and all gates are open. To further bridge the gap, one could increase α, thus allocating additional task-specific channels, in exchange for additional complexity. For two tasks, namely Bird Audio Detection and Spoken Language Identification, the proposed multi-task architecture outperforms the corresponding single-task baselines. This can be explained by the fact that both tasks have a relatively small dataset and the single-task model is likely to overfit. Conversely, when trained jointly with other tasks, this acts as a form of regularization, thus leading to higher accuracy on the test set.

[Figure 3 (three panels): (a) accuracy vs. cost table, reproduced below; (b) accuracy vs. number of parameters per task; (c) accuracy vs. FLOPs per task.] Figure 3: Accuracy vs. cost for the transfer learning scenario.

Task | Multi-head | Num. params, λ = 10^-2 | Num. params, λ = 10^-4 | FLOPs, λ = 10^-2 | FLOPs, λ = 10^-4 | Single-task
MUS | 0.93 | 0.94 (+1.2%) | 0.94 (+0.4%) | 0.95 (+1.9%) | 0.94 (+0.8%) | 0.98
LSP | 0.62 | 0.73 (+17.5%) | 0.83 (+33.8%) | 0.82 (+32.4%) | 0.85 (+37.8%) | 0.98
BSD | 0.67 | 0.72 (+6.8%) | 0.75 (+11.2%) | 0.73 (+7.9%) | 0.74 (+8.4%) | 0.73
TUT | 0.47 | 0.51 (+8.6%) | 0.54 (+15.3%) | 0.55 (+16.7%) | 0.56 (+18.3%) | 0.82
SPC | 0.19 | 0.49 (+160%) | 0.61 (+223%) | 0.59 (+209%) | 0.63 (+228%) | 0.75
LID | 0.52 | 0.68 (+32.0%) | 0.66 (+28.1%) | 0.69 (+38.4%) | 0.72 (+44.0%) | 0.64
NPI | 0.46 | 0.50 (+9.2%) | 0.59 (+28.6%) | 0.58 (+27.2%) | 0.60 (+30.8%) | 0.79
NIF | 0.40 | 0.47 (+18.4%) | 0.51 (+27.9%) | 0.54 (+35.6%) | 0.51 (+30.2%) | 0.63
Mean | 0.53 | 0.63 (+31.6%) | 0.68 (+46.0%) | 0.68 (+46.2%) | 0.69 (+49.8%) | 0.79

Figure 3 shows similar results for the transfer learning scenario. In this case, the pre-trained encoder is frozen and, as already observed in (Tagliasacchi et al., 2019), it provides good representations only for some of the tasks. For example, a simple multi-head model achieves a level of accuracy equal to 0.19 on the Speech Commands task. By adding task-specific adapters, the accuracy grows above 0.60, regardless of the adopted cost measure. In general, much larger improvements are observed in this case, with an average relative increase in accuracy above 40%.

It is also interesting to observe how the model decides to allocate the additional budget available for task adapters, inspecting the status of the gates upon training convergence.
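A possible way to perform this inspection is sketched below; this is our illustration, where `gates` is a hypothetical mapping from (task, layer) pairs to the learned gate variables.

```python
import torch

def open_gates_per_layer(gates, s=100.0):
    """Count channels whose gate is still open after training, i.e. the
    c_{k,i} of Eq. (4), for every (task, layer) pair."""
    alloc = {}
    for (task, layer), a in gates.items():
        g = torch.clamp(torch.relu(s * a + 0.5), max=1.0)  # Eq. (3), large s
        alloc[(task, layer)] = int((g > 0).sum())
    return alloc

# Example with dummy gate values for one task and two layers:
gates = {("BSD", 1): torch.tensor([-0.9, 0.4]),
         ("BSD", 2): torch.tensor([0.2, -0.6, 0.7])}
print(open_gates_per_layer(gates))  # {('BSD', 1): 1, ('BSD', 2): 2}
```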
Figure 4a illustrates this aspect for two tasks, namely Bird Audio Detection and NSynthPitch. Each sub-figure shows how the gates evolve by decreasing the value of λ, thus relaxing the cost constraint. First, we observe that different tasks require a different level of adaptation. In this example, NSynthPitch uses a larger number of additional channels. Second, the status of the gates depends on the selected cost measure. When considering FLOPs, the last fully connected layer is relatively inexpensive, so most of its gates are kept open. Conversely, considering the number of parameters requires a more parsimonious use of the fully connected layer, as this accounts for a large fraction of the total cost.

[Figure 4 (four panels of per-layer gate values in [0, 1] over layers 0–5): (a), (b) birdsong_detection and (c), (d) nsynth_pitch.] Figure 4: Gate values - Cost measure: Number of parameters (a) and (c); FLOPs (b) and (d). Crosses represent closed gates corresponding to unused channels. Within each sub-figure, the Lagrange multiplier λ decreases going top-to-bottom, allowing more gates to stay open.

5 CONCLUSION

In this paper we propose a multi-task learning model that is able to address a wide variety of audio tasks. Unlike previously proposed approaches, our model can compute either all tasks at once, or a subset of them, depending on the available computational resource budget. The allocation of the task-specific resources is handled jointly with training, by learning which additional channels should be used for each task and layer. Experimental results show that the proposed model outperforms a multi-head architecture baseline and approaches the accuracy achievable when using separate task-specific models. Our future research will pursue two different directions: i) on the one hand, we will explore how tasks can be grouped together, so as to share a common representation within each group; ii) on the other hand, we will relax the constraint on the input audio front-end, investigating how different tasks may benefit from task-specific time-to-frequency transformations.

### Review Title
Official Blind Review #2

### Review Text
The authors present a method for the audio classification problem, using a pre-trained network (shared encoder) and creating adapters for each new task. This is an interesting alternative to transfer learning, and it would be great if minimal adapters can be designed to work with a baseline network. The adapter network is of the same depth as the shared encoder network, and uses both its own output and the encoder's output from the previous layer for its next layer. Using learnable gates, the channels of its previous layers can be selected to be on / off. The idea is to create a network that can be tuned for accuracy and cost (params and FLOPs) for on-device inference. Since the motivation behind the paper is networks that are suitable for on-device inference and multi-task adapters, I find the paper lacking in the following respects: 1. Describing how exactly they will implement this architecture on-device. The channels that have been turned off by the gates can have their parameters trimmed, and the next layer can ignore them.
However, it would be great to get actual numbers in terms of latency, size, etc. from a model that is running on-device, with an explanation of the work done to translate the training model graph to an efficient inference graph. 2. Comparison with other on-device model compression techniques like quantization, distillation, pruning, etc. 3. Comparison with the transfer learning variant, where only the last few layers are trainable. While this is an interesting approach, it would be more convincing to know that this approach: a) Translates to actual improvements in on-device inference, along with implementation details. b) Is better than / competitive with other techniques for compression and transfer learning. With the paper as it currently stands, I would recommend consideration for a dedicated workshop, but weak reject for the main conference. I would be willing to be convinced otherwise if the authors can fill in more details as mentioned.

### Review Rating
3: Weak Reject

### Review Confidence
kLyLW3RRqU
ICLR.cc/2021/Conference
2021
Representation Quality Of Neural Networks Links To Adversarial Attacks and Defences
["Shashank Kotyan", "Moe Matsuki", "Danilo Vasconcellos Vargas"]
Neural networks have been shown vulnerable to a variety of adversarial algorithms. A crucial step for understanding the rationale behind this lack of robustness is to assess the potential of the neural networks' representation to encode the existing features. Here, we propose a method to understand the representation quality of the neural networks using a novel test based on Zero-Shot Learning, entitled Raw Zero-Shot. The principal idea is that if an algorithm learns rich features, such features should be able to interpret 'new or unknown' classes as a combination of previously learned features. This is because unknown classes usually share several regular features with recognised (learned) classes, given that the features learned are general enough. We further introduce two metrics to assess this learned representation which interprets unknown classes. One is based on an inter-cluster validation technique, while the other is based on the difference in the representation between the case when the class is unknown and the case when it is known to the classifier. Experiments suggest that several adversarial defences not only decrease the attack accuracy of some attacks but also improve the representation quality of the classifiers. Further, a low p-value of the paired-samples t-test suggests that several adversarial defences, in general, change the representation quality significantly. Moreover, experiments also reveal a relationship between the proposed metrics and adversarial attacks (a high Pearson Correlation Coefficient (PCC) and a low p-value).
["Understanding Neural Networks", "Representation Metrics", "Adversarial Machine Learning", "Adversarial Attacks", "Adversarial Defences"]
ABSTRACT

Neural networks have been shown vulnerable to a variety of adversarial algorithms. A crucial step for understanding the rationale behind this lack of robustness is to assess the potential of the neural networks' representation to encode the existing features. Here, we propose a method to understand the representation quality of the neural networks using a novel test based on Zero-Shot Learning, entitled Raw Zero-Shot. The principal idea is that if an algorithm learns rich features, such features should be able to interpret 'new or unknown' classes as a combination of previously learned features. This is because unknown classes usually share several regular features with recognised (learned) classes, given that the features learned are general enough. We further introduce two metrics to assess this learned representation which interprets unknown classes. One is based on an inter-cluster validation technique, while the other is based on the difference in the representation between the case when the class is unknown and the case when it is known to the classifier. Experiments suggest that several adversarial defences not only decrease the attack accuracy of some attacks but also improve the representation quality of the classifiers. Further, a low p-value of the paired-samples t-test suggests that several adversarial defences, in general, change the representation quality significantly. Moreover, experiments also reveal a relationship between the proposed metrics and adversarial attacks (a high Pearson correlation coefficient and a low p-value).

1 INTRODUCTION

Adversarial samples are noise-perturbed samples that can fail neural networks for tasks like image classification. Since they were discovered some years ago by Szegedy (2014), both the quality and variety of adversarial samples have grown. These adversarial samples can be generated by a specific class of algorithms known as adversarial attacks (Nguyen et al., 2015; Brown et al., 2017; Moosavi-Dezfooli et al., 2017; Su et al., 2019). Most of these adversarial attacks can also be transformed into real-world attacks (Sharif et al., 2016; Kurakin et al., 2016; Athalye & Sutskever, 2018), which constitutes a serious issue as well as a security risk for current neural networks' applications. Despite the existence of many variants of defences to these adversarial attacks (Goodfellow et al., 2014; Huang et al., 2015; Papernot et al., 2016; Dziugaite et al., 2016; Hazan et al., 2016; Das et al., 2017; Guo et al., 2018; Song et al., 2018; Xu et al., 2017; Madry et al., 2018; Ma et al., 2018; Buckman et al., 2018), 'no known learning algorithm or procedure can defend consistently' (Carlini & Wagner, 2017; Tramèr et al., 2017; Athalye et al., 2018; Uesato et al., 2018; Vargas & Kotyan, 2019; Tramer et al., 2020). This shows that a more profound understanding of the adversarial algorithms is needed to enable the formulation of consistent and robust defences.

Several works have focused on understanding the reasoning behind such a lack of robust performance. It is hypothesised in Goodfellow et al. (2014) that neural networks' linearity is one of the main reasons for failure. Another investigation, by Thesing et al. (2019), shows that with deep learning, neural networks learn false structures that are simpler to learn rather than the ones expected. Moreover, research by Vargas & Su (2019) unveils that adversarial attacks are altering where the algorithm is paying attention. In Sabour et al.
(2015), it is discussed that an adversarial sample may have a different internal representation than the benign sample. The authors show that internal representations of adversarial samples are remarkably similar to those of different images of a different true class, and link adversarial robustness to the representations learned by deep neural networks.

[Figure 1 (diagram): a classifier is trained on known classes Bear [1, 0, 0], Zebra [0, 1, 0] and Bird [0, 0, 1]; for input samples it predicts soft-labels such as [0.98, 0.01, 0.01], [0.19, 0.8, 0.01] and [0.1, 0.03, 0.87]; for an unknown class (Giant Panda) the predicted soft-labels, e.g. [0.6, 0.39, 0.01], form the Amalgam Proportion — 60% similar to Bear, 39% similar to Zebra.] Figure 1: Raw Zero-Shot Illustration

Contributions: In this article, we try to open up a new perspective on understanding adversarial algorithms based on evaluating the representation quality of unseen classes based on learned classes. We do this by verifying that the representation quality of neural networks is indeed linked with the adversarial attacks and defences. Specifically, we propose a methodology based on Zero-Shot Learning, entitled Raw Zero-Shot (Section 3), for evaluating the representation quality of the neural networks.

We conducted experiments over the soft-labels of an unfamiliar class to assess the representation quality of the classifiers. This is based on the hypothesis that, if the classifier is capable of learning useful features, an unfamiliar class would also be associated with some of these learned features (Amalgam Proportion) (Figure 1). We call this type of inspection over an unfamiliar class Raw Zero-Shot (Section 3). Furthermore, we also introduce two associated metrics to evaluate the representation quality of neural networks. One is based upon the Clustering Hypothesis (Section 3.1), while the other is based on the Amalgam Hypothesis (Section 3.2).

We evaluated our Raw Zero-Shot test over a wide assortment of datasets (and classifiers) such as Fashion MNIST, CIFAR-10, and a customised Imagenet to assess the representation quality of the vanilla classifiers (Section 4). We also evaluated different adversarial defences to show that when an adversarial defence is applied to a classifier, it yields better representation quality than the vanilla classifier. We also conducted a paired samples t-test to determine the statistical relevance of the effect of adversarial defences on the representation quality (Section 5). We then reveal a link between the representation quality and attack susceptibility by verifying that the proposed metrics have a high Pearson correlation coefficient with the adversarial attacks (Section 6).

2 RELATED WORKS

Understanding Adversarial Attacks: Since the discovery of adversarial samples in Szegedy (2014), many researchers have tried to understand the adversarial attacks. It is hypothesised in Goodfellow et al. (2014) that neural networks' linearity is one of the principal reasons for failure against an adversary. A geometric perspective is analysed in Moosavi-Dezfooli et al. (2018), where it is shown that adversarial samples lie in a shared subspace, along which the decision boundary of a classifier is positively curved. Further, in Fawzi et al. (2018), a relationship between sensitivity to additive perturbations of the inputs, and the curvature of the decision boundary of deep networks, is shown. Another aspect of robustness is discussed in Madry et al. (2018), where the authors suggest that the capacity of the neural networks' architecture is relevant to the robustness. It is also stated in Ilyas et al.
(2019) that the adversarial vulnerability is a significant consequence of the dominant supervised learning paradigm and a classifier's sensitivity to well-generalising features in the known input distribution. Also, research by Tao et al. (2018) argues that adversarial attacks are entangled with the interpretability of neural networks, as results on adversarial samples can hardly be explained. Further, the existence of different internal representations learned by neural networks for an adversarial sample compared to a benign sample is shown in Sabour et al. (2015). In this article, we explore a new perspective to understand adversarial attacks and defences based on the representation quality of the neural networks evaluated using Amalgam Proportion.

Zero-Shot learning: Zero-Shot learning is a method to estimate unfamiliar classes which do not appear in the training data. The motivation of Zero-Shot learning is to transfer knowledge from recognised (learned) classes to unfamiliar classes. Existing methods address the problem by estimating unfamiliar classes from an attribute vector defined manually for all classes. For each class, whether such an attribute (like colour, shape) relates to the class or not is represented by one or zero. Lampert et al. (2009) introduced the Direct Attribute Prediction (DAP) model, which learns parameters for estimating the attributes of an input sample from the generated feature vector. Based on this research, other zero-shot learning methods have been proposed which use an embedded representation generated using a natural language processing algorithm instead of a manually created attribute vector (Norouzi et al., 2013; Fu et al., 2015; Akata et al., 2015; Zhang & Saligrama, 2016; Bucher et al., 2016). Zhang & Saligrama (2015) proposed a different strategy, constructing the histogram of the known-class distribution for an unknown class to estimate unknown classes. They assume that the unknown classes are the same if these histograms generated in the prediction domain and the source domain are similar. Our Raw Zero-Shot test is distinguished from other zero-shot learning algorithms in that, in Raw Zero-Shot, the neural network has no access to features (attribute vector) or additional supplementary knowledge.

3 RAW ZERO-SHOT

[Figure 2 (diagram): single-class samples are fed to a Raw Zero-Shot Classifier (N-1 classes; the class is unknown to the classifier) and a Standard Classifier (N classes; the class is known); the target class is deleted from the standard soft-label distribution and renormalised; the Davies–Bouldin Metric (Clustering Hypothesis) is computed on the (N-1)-dimensional soft-label cluster, while the Amalgam Metric (Amalgam Hypothesis) compares the histograms of soft-labels H and H'.] Figure 2: Illustration of proposed metrics.

Raw Zero-Shot is a learning test in which only N-1 of the N classes in the dataset are presented to the classifier during training; in other words, all the samples of one specific class are removed from the standard training dataset. Such a classifier trained on only N-1 of the N classes is called a 'Raw Zero-Shot Classifier'. Please note that a 'Standard Classifier', trained on all N classes, has N soft-label dimensions in the soft-label space. In contrast, a Raw Zero-Shot Classifier has only N-1 soft-label dimensions in the soft-label space due to the forced exclusion of a class.
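A schematic of this leave-one-class-out protocol is sketched below; this is our illustration, where `build_classifier` and `train` are hypothetical helpers, not the paper's code.

```python
import numpy as np

def raw_zero_shot_soft_labels(X, y, num_classes):
    """For each class c: train a classifier on the remaining N-1 classes
    (labels re-indexed to 0..N-2) and record its soft-labels -- the Amalgam
    Proportions -- on the held-out samples of class c."""
    amalgams = {}
    for c in range(num_classes):
        keep = y != c
        y_kept = np.where(y[keep] > c, y[keep] - 1, y[keep])   # re-index labels
        model = build_classifier(num_outputs=num_classes - 1)  # hypothetical
        train(model, X[keep], y_kept)                          # hypothetical
        amalgams[c] = model.predict_proba(X[y == c])           # shape (n_c, N-1)
    return amalgams
```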
The excluded unknown class can then be predicted as a combination of the remaining N-1 soft-label dimensions of the known (learned) classes. We call this combination the 'Amalgam Proportion' (Figure 1). During testing, only the unknown class (the class excluded from N) is provided to the classifier, and the Amalgam Proportion for the given unknown class is recorded. This process is iterated for all potential (N) classes, excluding a different class each time.

The soft-labels of a classifier compose a space in which a given image would be categorised as a weighted vector involving the previously learned classes. If neural networks can learn the features existing in the classes, it would be reasonable to consider that the Amalgam Proportion also describes a given image as a combination of the previously learned classes (Figure 1). Similar to a vector space in linear algebra, the soft-labels can be combined to describe unknown objects in this space. In our example (Figure 1), the unknown class (Giant Panda) is represented as a combination of previously recognised (learned) classes (Bear, Zebra, Bird), where 60% of the features of Bear (like body shape) and 39% of the features of Zebra (like the stripe pattern) are 'associated' with the Giant Panda. This is analogous to how children associate unseen objects (Giant Panda) as a combination of recognised objects (Bear and Zebra) when they are asked to describe the unseen object with their learned knowledge (Walker & Gopnik, 2014; Walker et al., 2016). Thus, all the images of the class Giant Panda should have a similar Amalgam Proportion, as the hypothetical classifier can associate the Giant Panda with some features of the Zebra and Bear classes.

Metrics are then computed over the Amalgam Proportion of the unknown (excluded) class to assess this representation quality of a classifier (Figure 2). These metrics are each based on a different hypothesis of what defines a feature or a class. In the same way as there are various aspects of robustness, there are also different variations of representation quality. Therefore, our metrics are complementary, each highlighting a different perspective of the whole. The following subsections define them.

3.1 DAVIES–BOULDIN METRIC (DBM) – CLUSTERING HYPOTHESIS

We can use cluster validation techniques to assess the representation (Amalgam Proportion), considering that the cluster of Amalgam Proportions of an unfamiliar class would constitute a class in itself. Here, we choose for simplicity the Davies–Bouldin Index (Davies & Bouldin, 1979), one of the most used metrics in internal cluster validation. Hence, the Davies–Bouldin Metric (DBM) for an unknown class can be defined as follows:

DBM = \left( \frac{1}{n} \sum_{j=1}^{n} |z_j - G|^2 \right)^{1/2}

in which n is the number of samples (samples from the unknown class), G is the centroid of the cluster formed by the soft-labels of all the n samples, and z_j is the soft-label of a single sample of the unknown class. A denser cluster would have a lower DBM score, representing a consistent view taken by the classifier in terms of features learned from the known classes.
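A direct NumPy sketch of this definition (ours, not the authors' code), treating |·| as the Euclidean distance to the centroid:

```python
import numpy as np

def dbm(soft_labels):
    """DBM for one unknown class: root-mean-square distance of the n
    soft-label vectors z_j (each of length N-1) to their centroid G."""
    z = np.asarray(soft_labels)        # shape (n, N-1)
    G = z.mean(axis=0)                 # cluster centroid
    return float(np.sqrt(np.mean(np.sum((z - G) ** 2, axis=1))))
```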
3.2 AMALGAM METRIC (AM) – AMALGAM HYPOTHESIS

Differently from the previous metric, here we establish our metric on the hypothesis that the classes learned by a classifier share some similarity with the unfamiliar class, and that the classifier can associate this similarity in its representation while evaluating these unfamiliar classes. This hypothesis follows from the fact that humans can combine available perceptual information with stored knowledge of experiential regularities, which helps us to describe things that are 'similar' as close and things that are 'dissimilar' as far apart (Casasanto, 2008). However, what would constitute the baseline Amalgam Proportion for a given unfamiliar class still needs to be determined, to assess the extent to which the classifier exploits this similarity between classes.

To calculate the baseline Amalgam Proportion of a given unknown class, we use here the assumption that 'Standard Classifiers should output a good approximation of the Amalgam Proportion, since the class is known to the Standard Classifier in the training phase'. We thus associate the evaluated Amalgam Proportion of the Raw Zero-Shot Classifier and the baseline Amalgam Proportion of the Standard Classifier for a given class with our Amalgam Metric (AM) (Figure 2) as

AM = \frac{\| H' - H \|_1}{N-1}, \quad \text{where} \quad H = \sum_{j=1}^{n} z_j, \quad H' = \sum_{j=1}^{n} z'_j,

in which z' is the normalised soft-labels of the non-target classes from the Standard Classifier, and z is the soft-labels of the known classes from the Raw Zero-Shot Classifier. Note that the given class is 'known' (target) by the Standard Classifier and is 'unknown' to the Raw Zero-Shot Classifier. Hence, the Amalgam Metric captures the existence of some learned features that are unique to a class, which in turn change the Amalgam Proportion between the Raw Zero-Shot Classifier and the Standard Classifier. A higher AM score corresponds to a classifier preferring to learn special features of a class over general features present across the distribution. In other words, a lower AM score corresponds to a classifier preferring to learn general features over special features. A non-zero AM score thus verifies the existence of special features unique to a class, which are learned by training the classifier on that specific class.
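A sketch of the AM computation follows (ours, not the authors' code); the deletion and renormalisation of the target column follows the description of Figure 2.

```python
import numpy as np

def am(z_raw, z_std, target):
    """AM for one class: L1 distance between the soft-label histograms of the
    Raw Zero-Shot classifier (H, from (N-1)-dim soft-labels z) and the
    Standard classifier (H', with the target column deleted, then each row
    renormalised)."""
    H = np.asarray(z_raw).sum(axis=0)                    # (N-1,)
    zp = np.delete(np.asarray(z_std), target, axis=1)    # drop target class
    zp = zp / zp.sum(axis=1, keepdims=True)              # renormalise rows
    H_prime = zp.sum(axis=0)                             # (N-1,)
    return float(np.abs(H_prime - H).sum() / len(H))     # divide by N-1
```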
4 EXPERIMENTAL DESIGN AND RESULTS

Considered Datasets: We conducted experiments on three diverse datasets to evaluate the representation of the neural networks. We used Fashion MNIST (F-MNIST) (Xiao et al., 2017), CIFAR-10 (Krizhevsky et al., 2009) and a customised Sub-Imagenet (Sub) dataset for our evaluations. The details of the customised Sub-Imagenet dataset are mentioned in Appendix B. Note that the number of samples (7000 for Fashion MNIST, 6000 for CIFAR-10, and roughly 13500 samples for the Sub-Imagenet dataset) in the assumed unknown class differs with the dataset. We use the samples from both the training and testing datasets for the 'unknown' class for evaluation, because we exclude these samples in the training process.

Considered Classifiers: We evaluated different architectures for different datasets. For the Fashion MNIST dataset, we chose to evaluate a Multi-Layer Perceptron (MLP) and a shallow Convolutional Neural Network (ConvNet). For the CIFAR-10 dataset, we chose LeNet (a simpler architecture which is a historical mark) (LeCun et al., 1998), VGG (a previous state-of-the-art architecture which is a historical mark) (Simonyan & Zisserman, 2014), All Convolutional Network (AllConv) (an architecture without max pooling and fully-connected layers) (Springenberg et al., 2014), Network in Network (NIN) (an architecture which uses micro neural networks instead of linear filters) (Lin et al., 2013), Residual Networks (ResNet) (an architecture based on skip connections) (He et al., 2016), Wide Residual Networks (WideResNet) (an architecture which also expands in width) (Zagoruyko & Komodakis, 2016), DenseNet (an architecture which is a logical extension of ResNet) (Huang et al., 2017), and Capsule Networks (CapsNet) (a recently proposed, completely different architecture based on dynamic routing and capsules) (Sabour et al., 2017). For our Sub-Imagenet dataset, we chose InceptionV3 (Szegedy et al., 2016) and ResNet-50 (He et al., 2016). Details about the Standard and Raw Zero-Shot Classifiers are mentioned in Appendix C.

Considered Adversarial Defences: We also evaluated the representation quality of some of the adversarial defences for the CIFAR-10 dataset, such as Feature Squeezing (FS) (Xu et al., 2017), Spatial Smoothing (SS) (Xu et al., 2017), Label Smoothing (LS) (Hazan et al., 2016), Thermometer Encoding (TE) (Buckman et al., 2018), and Adversarial Training (AT) (Madry et al., 2018). We also evaluate classifiers trained with an augmented dataset having Gaussian noise of σ = 1.0 (G Aug). Details about the adversarial defences are mentioned in Appendix D. For a discussion about the performance of adversarial defences in general, please refer to Athalye et al. (2018).

Considered Attacks: We also evaluated all our standard vanilla classifiers against well-known adversarial attacks such as the Fast Gradient Method (FGM) (Goodfellow et al., 2014), Basic Iterative Method (BIM) (Kurakin et al., 2016), Projected Gradient Descent Method (PGD) (Madry et al., 2018), DeepFool (DF) (Moosavi-Dezfooli et al., 2016), and NewtonFool (NF) (Jang et al., 2017). Details about the adversarial attacks are mentioned in Appendix E.

Architecture | DBM | AM
For Fashion MNIST Dataset
MLP | 0.51 ± 0.09 | 670.71 ± 81.79
ConvNet | 0.47 ± 0.10 | 683.55 ± 76.39
For Sub-Imagenet Dataset
InceptionV3 | 0.56 ± 0.07 | 1335.65 ± 31.83
ResNet-50 | 0.55 ± 0.15 | 1311.97 ± 37.59
For CIFAR-10 Dataset
LeNet | 0.54 ± 0.04 | 473.97 ± 91.53
VGG | 0.61 ± 0.12 | 645.86 ± 15.19
AllConv | 0.64 ± 0.08 | 634.04 ± 22.01
NIN | 0.63 ± 0.09 | 646.04 ± 16.40
ResNet | 0.64 ± 0.13 | 654.90 ± 6.40
DenseNet | 0.61 ± 0.14 | 658.21 ± 4.05
WideResNet | 0.58 ± 0.15 | 660.00 ± 3.60
CapsNet | 0.43 ± 0.03 | 385.85 ± 83.77

Table 1: Mean and standard deviation of DBM and AM scores for vanilla Raw Zero-Shot Classifiers.

Experimental Results For Vanilla Classifiers: Table 1 shows the results of our metrics (DBM and AM) for the vanilla classifiers. Note that we use the mean across the metric values of all N classes of the dataset as the characteristic metric value for an architecture. To enable the visualisation of DBM, we plot a projection of all the points in the decision space of the unknown class (N-1 dimensions) into two-dimensional space (Appendix F). Similarly, we can also visualise AM, in the form of histograms of soft-labels for the classifiers (Appendix G). Table 1 reveals that for the CIFAR-10 dataset, CapsNet possesses the best representation quality amongst all classifiers examined, as it has the lowest (best) score in both of our metrics.
At the same time, LeNet has the second-best representation quality. Moreover, the other architectures possess similar representation quality. Also for the Sub-Imagenet dataset, both architectures (InceptionV3 and ResNet-50) are equally clustered and predict the Amalgam Proportion similarly. However, ResNet-50 has marginally better representation quality than InceptionV3, as it has better scores for both of our metrics. Similarly, for the Fashion MNIST dataset, both architectures (MLP and ConvNet) have a similar quality of representation. While ConvNet seems marginally superior to the MLP in terms of clustering the unknown classes more tightly (suggested by DBM), MLP seems marginally superior in predicting the Amalgam Proportion (suggested by AM).

5 LINK BETWEEN REPRESENTATION QUALITY AND ADVERSARIAL DEFENCES

Davies–Bouldin Metric (DBM)
Architecture | No Defence | Gaussian Augmentation | Label Smoothing | Adversarial Training
LeNet | 0.54 ± 0.04 | 0.56 ± 0.04 (0.00) | 0.43 ± 0.02 (0.00) | 0.32 ± 0.04 (0.00)
VGG | 0.61 ± 0.12 | 0.63 ± 0.12 (0.07) | 0.55 ± 0.10 (0.00) | 0.47 ± 0.07 (0.00)
AllConv | 0.64 ± 0.08 | 0.66 ± 0.11 (0.27) | 0.48 ± 0.05 (0.00) | 0.50 ± 0.06 (0.00)
NIN | 0.63 ± 0.09 | 0.64 ± 0.11 (0.17) | 0.52 ± 0.08 (0.00) | 0.43 ± 0.06 (0.00)
ResNet | 0.64 ± 0.13 | 0.63 ± 0.14 (0.09) | 0.54 ± 0.11 (0.00) | 0.43 ± 0.07 (0.00)
DenseNet | 0.61 ± 0.14 | 0.60 ± 0.15 (0.05) | 0.55 ± 0.13 (0.00) | 0.50 ± 0.10 (0.02)
WideResNet | 0.58 ± 0.15 | 0.59 ± 0.15 (0.58) | 0.46 ± 0.09 (0.00) | 0.61 ± 0.10 (0.13)
CapsNet | 0.22 ± 0.01 | 0.23 ± 0.01 (0.00) | 0.18 ± 0.01 (0.00) | 0.15 ± 0.02 (0.00)

Architecture | No Defence | Feature Squeezing | Spatial Smoothing | Thermometer Encoding
LeNet | 0.54 ± 0.04 | 0.54 ± 0.04 (0.38) | 0.50 ± 0.03 (0.01) | 0.52 ± 0.04 (0.09)
VGG | 0.61 ± 0.12 | 0.62 ± 0.11 (0.14) | 0.63 ± 0.09 (0.52) | 0.65 ± 0.05 (0.27)
AllConv | 0.64 ± 0.08 | 0.64 ± 0.08 (0.20) | 0.63 ± 0.08 (0.66) | 0.67 ± 0.05 (0.12)
NIN | 0.63 ± 0.09 | 0.63 ± 0.09 (0.13) | 0.65 ± 0.06 (0.39) | 0.65 ± 0.06 (0.14)
ResNet | 0.64 ± 0.13 | 0.65 ± 0.13 (0.20) | 0.66 ± 0.11 (0.61) | 0.71 ± 0.06 (0.02)
DenseNet | 0.61 ± 0.14 | 0.62 ± 0.12 (0.16) | 0.64 ± 0.11 (0.57) | 0.69 ± 0.09 (0.00)
WideResNet | 0.58 ± 0.15 | 0.59 ± 0.14 (0.13) | 0.62 ± 0.11 (0.51) | 0.66 ± 0.08 (0.02)
CapsNet | 0.22 ± 0.01 | 0.22 ± 0.01 (0.00) | 0.21 ± 0.01 (0.09) | 0.20 ± 0.02 (0.03)

Amalgam Metric (AM)
Architecture | No Defence | Gaussian Augmentation | Label Smoothing | Adversarial Training
LeNet | 115.97 ± 36.92 | 84.00 ± 26.39 (0.03) | 177.08 ± 97.77 (0.10) | 29.93 ± 16.06 (0.00)
VGG | 270.76 ± 186.04 | 287.75 ± 122.58 (0.75) | 579.05 ± 121.89 (0.00) | 218.47 ± 100.50 (0.44)
AllConv | 150.35 ± 39.16 | 153.73 ± 65.96 (0.90) | 395.28 ± 143.78 (0.00) | 188.66 ± 67.98 (0.16)
NIN | 186.14 ± 97.41 | 222.68 ± 104.12 (0.03) | 503.32 ± 145.15 (0.00) | 86.45 ± 17.60 (0.01)
ResNet | 233.84 ± 109.08 | 266.61 ± 124.12 (0.17) | 592.57 ± 119.06 (0.00) | 86.71 ± 46.24 (0.00)
DenseNet | 314.93 ± 130.50 | 303.04 ± 120.54 (0.70) | 629.48 ± 131.86 (0.00) | 187.34 ± 71.01 (0.04)
WideResNet | 417.37 ± 180.78 | 443.95 ± 157.46 (0.13) | 586.84 ± 132.92 (0.00) | 365.29 ± 199.90 (0.13)
CapsNet | 96.96 ± 38.59 | 111.46 ± 56.69 (0.07) | 100.01 ± 42.72 (0.54) | 54.48 ± 20.38 (0.00)

Architecture | No Defence | Feature Squeezing | Spatial Smoothing | Thermometer Encoding
LeNet | 115.97 ± 36.92 | 116.85 ± 37.42 (0.37) | 72.13 ± 20.02 (0.00) | 272.03 ± 80.86 (0.00)
VGG | 270.76 ± 186.04 | 271.42 ± 184.10 (0.78) | 183.06 ± 128.16 (0.02) | 510.39 ± 85.82 (0.00)
AllConv | 150.35 ± 39.16 | 149.47 ± 38.17 (0.50) | 179.44 ± 68.03 (0.14) | 537.48 ± 74.51 (0.00)
NIN | 186.14 ± 97.41 | 185.82 ± 100.53 (0.92) | 148.72 ± 100.69 (0.00) | 516.72 ± 92.20 (0.00)
ResNet | 233.84 ± 109.08 | 226.19 ± 105.21 (0.06) | 199.64 ± 99.87 (0.14) | 531.54 ± 80.03 (0.00)
DenseNet | 314.93 ± 130.50 | 319.33 ± 136.19 (0.68) | 246.08 ± 99.05 (0.09) | 585.38 ± 56.48 (0.00)
WideResNet | 417.37 ± 180.78 | 402.62 ± 185.48 (0.04) | 207.62 ± 131.18 (0.00) | 646.85 ± 10.66 (0.00)
CapsNet | 96.96 ± 38.59 | 96.95 ± 38.57 (0.82) | 84.02 ± 31.37 (0.03) | 280.39 ± 58.42 (0.00)

Table 2: Mean and standard deviation of DBM and AM values for different Raw Zero-Shot Classifiers with and without the adversarial defences on CIFAR-10. Values in parentheses are p-values of the paired samples t-test between the metric values with defences and those without defences.
Table 2 shows the results of our metrics (DBM and AM) for vanilla classifiers and for classifiers employed with a variety of adversarial defences for improving the robustness of vanilla classifiers on CIFAR-10. We also analyse the statistical relevance of the change in metric values due to the introduction of adversarial defences. A paired samples t-test (David & Gunnink, 1997) was conducted over our metrics' distributions (DBM and AM) for vanilla classifiers (without adversarial defence) and adversarially defended classifiers (Table 2) to test the significance of the change in metric values due to adversarial defences. The null hypothesis of the paired samples t-test assumes that the true mean difference between the distributions is equal to zero. Based on the results (Table 2), adversarial defences, in general, tend to improve the representation quality of the neural networks evaluated using Amalgam Proportion. They do so either by creating a denser cluster of the soft-labels (suggested by DBM), by learning more general/special features (suggested by AM), or both.

Raw DBM score values for weaker defences such as G Aug, FS, SS and TE lie within the standard deviation of the vanilla classifiers, suggesting that they have minimal effect on the clustering of the Amalgam Proportions of unknown classes. At the same time, DBM score values for defences such as LS and AT are noticeably lower than those of the vanilla classifiers, suggesting that they form a denser cluster of Amalgam Proportions compared to the vanilla classifiers. Thus, a better association of available features is observed for the more robust defences. From the perspective of AM score values, the results suggest that LS favours learning special features belonging to a class, while AT favours learning more general features.
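For completeness, the paired-samples t-test behind the parenthesised p-values can be reproduced along these lines; the per-class values below are placeholders, not the paper's raw numbers.

```python
import numpy as np
from scipy.stats import ttest_rel

# One DBM value per (excluded) CIFAR-10 class, vanilla vs. defended classifier.
dbm_vanilla = np.array([0.52, 0.55, 0.58, 0.49, 0.56, 0.51, 0.57, 0.50, 0.54, 0.58])
dbm_adv_tr  = np.array([0.30, 0.35, 0.33, 0.28, 0.36, 0.31, 0.34, 0.29, 0.32, 0.35])

t, p = ttest_rel(dbm_vanilla, dbm_adv_tr)  # H0: true mean difference is zero
print(f"t = {t:.2f}, p = {p:.4f}")
```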
Interestingly, a generally low p-value for the paired samples t-test is observed for the adversarial defences, which suggests that the underlying representations of adversarially defended classifiers differ from those of the vanilla classifiers with high statistical relevance.

6 LINK BETWEEN REPRESENTATION QUALITY AND ADVERSARIAL ATTACKS

Architecture | DBM with Mean L2 Score: FGM | BIM | PGD | DF | NF || AM with Mean L2 Score: FGM | BIM | PGD | DF | NF
Fashion MNIST
MLP | -0.20 (0.58) | -0.17 (0.64) | -0.17 (0.64) | -0.04 (0.91) | -0.02 (0.97) || 0.82 (0.00) | 0.26 (0.47) | 0.26 (0.47) | 0.83 (0.00) | 0.84 (0.00)
ConvNet | -0.24 (0.50) | -0.30 (0.40) | -0.30 (0.40) | -0.26 (0.46) | -0.22 (0.55) || 0.83 (0.00) | -0.07 (0.84) | -0.09 (0.80) | 0.81 (0.00) | 0.82 (0.00)
CIFAR-10
LeNet | -0.18 (0.61) | -0.70 (0.02) | -0.66 (0.04) | -0.51 (0.13) | -0.36 (0.31) || 0.93 (0.00) | 0.32 (0.36) | 0.25 (0.49) | 0.81 (0.00) | 0.89 (0.00)
VGG | -0.62 (0.06) | -0.21 (0.55) | -0.20 (0.58) | -0.52 (0.13) | -0.63 (0.05) || 0.71 (0.02) | -0.04 (0.91) | -0.07 (0.85) | 0.87 (0.00) | 0.74 (0.01)
AllConv | -0.31 (0.39) | -0.56 (0.09) | -0.54 (0.11) | -0.10 (0.78) | -0.30 (0.41) || 0.67 (0.03) | 0.42 (0.23) | 0.41 (0.24) | 0.94 (0.00) | 0.73 (0.02)
NIN | -0.56 (0.09) | -0.57 (0.08) | -0.57 (0.09) | -0.42 (0.22) | -0.43 (0.21) || 0.78 (0.01) | 0.84 (0.00) | 0.84 (0.00) | 0.96 (0.00) | 0.89 (0.00)
ResNet | -0.52 (0.12) | -0.76 (0.01) | -0.76 (0.01) | -0.47 (0.17) | -0.51 (0.13) || 0.35 (0.32) | 0.57 (0.09) | 0.57 (0.09) | 0.79 (0.01) | 0.83 (0.00)
DenseNet | -0.62 (0.06) | -0.50 (0.14) | -0.49 (0.15) | -0.16 (0.65) | -0.22 (0.55) || 0.53 (0.11) | 0.78 (0.01) | 0.78 (0.01) | 0.78 (0.01) | 0.84 (0.00)
WideResNet | -0.68 (0.03) | -0.75 (0.01) | -0.75 (0.01) | -0.68 (0.03) | -0.75 (0.01) || 0.66 (0.04) | 0.68 (0.03) | 0.68 (0.03) | 0.78 (0.01) | 0.68 (0.03)
CapsNet | -0.71 (0.02) | -0.45 (0.19) | -0.49 (0.15) | -0.39 (0.26) | -0.48 (0.17) || 0.98 (0.00) | 0.69 (0.03) | 0.73 (0.02) | -0.17 (0.63) | 0.47 (0.17)
Sub-Imagenet
InceptionV3 | -0.76 (0.01) | -0.52 (0.13) | -0.52 (0.13) | -0.35 (0.32) | -0.50 (0.14) || 0.75 (0.01) | 0.14 (0.70) | 0.14 (0.70) | 0.28 (0.44) | 0.25 (0.49)
ResNet-50 | -0.34 (0.34) | -0.12 (0.74) | -0.12 (0.74) | -0.54 (0.10) | -0.25 (0.48) || 0.82 (0.00) | 0.31 (0.39) | 0.31 (0.39) | 0.51 (0.13) | 0.50 (0.15)

Table 3: Pearson correlation coefficient of DBM and AM with the Mean L2 Score of adversarial attacks for each vanilla classifier and attack pair. Values in parentheses are p-values of the Pearson correlation test.

Since the results in Table 2 suggest a link between the representation quality and the adversarial defences, as discussed above, it is intuitive to assume that there also exists a link between the representation quality and the adversarial attacks. To evaluate the statistical relevance of this link between the representation quality evaluated using Amalgam Proportion and adversarial attacks, we conducted a Pearson correlation coefficient test (Freedman et al., 2007) of our metrics (DBM and AM) for the vanilla classifiers with adversarial attacks. The Pearson correlation analysis of our metrics suggests a relationship between our metrics and the adversarial attacks in general.

We use the analysis of adversarial attacks in the form of the Mean L2 Score (the L2 difference between the original sample and the adversarial one) to compute the correlation (Moosavi-Dezfooli et al., 2016). The Pearson correlation coefficients of our metrics (DBM and AM) with the Mean L2 Score are shown in Table 3 for every architecture and attack. Moreover, these Pearson relationships between our metrics and the Mean L2 Score can also be visualised (Appendix H).
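Each entry of Table 3 amounts to one such test per (classifier, attack) pair; a sketch with placeholder values (not the paper's raw data):

```python
import numpy as np
from scipy.stats import pearsonr

# One AM score and one Mean L2 Score per (excluded) class -- placeholder data.
am_scores = np.array([310.0, 280.5, 150.2, 95.8, 210.4,
                      330.1, 270.9, 120.3, 180.7, 250.0])
mean_l2   = np.array([1.90, 1.75, 1.10, 0.85, 1.40,
                      2.05, 1.70, 0.95, 1.25, 1.60])

r, p = pearsonr(am_scores, mean_l2)
print(f"PCC = {r:.2f}, p = {p:.4f}")
```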
We also analyse the impact of adversarial attacks on the correct-class soft-label (Appendix I). We do observe some anomalies in the Pearson correlation coefficient of AM with the BIM and PGD attacks for the ConvNet and VGG networks, and with DeepFool for CapsNet. These anomalies are studied in detail (Appendix H) to understand their existence. Our extended analysis suggests that these anomalies exist due to the abnormal behaviour of some classes. On careful study, we note that for all the classes of the VGG network, BIM and PGD have similar AM scores, while at the same time the Mean L2 Score differs across classes. We observe that for the CapsNet, the Airplane class had an abnormally low Mean L2 score, suggesting less perturbation, which was abnormal compared to the other classes in the same setting. These anomalies further suggest that the baseline Amalgam Proportions for some of the classes differ. However, the study of the representation quality of 'individual' classes and its effect on the overall representation quality is beyond the scope of the current article and is hence left as future work.

7 GENERAL DISCUSSION ON REPRESENTATION QUALITY

On carefully observing the metric values (Tables 1, 2, and 3), we found that our assessment of representation quality using Amalgam Proportion also explains some of the propositions made by other researchers. We highlight some of our key findings below.

Does a model with high capacity have a better representation quality? Our results reveal that a deeper network, which generally has a higher capacity (Madry et al., 2018), does not necessarily have a better representation quality of the input features: CapsNet and LeNet, which are much shallower than the other deeper networks, are shown to have superior representation quality (Table 2).

Why does CapsNet have better representation quality than other deeper networks? We observe that Capsule Networks (CapsNet) have the best representation amongst the neural networks evaluated (Table 2). Our results suggest that CapsNet not only produces a denser cluster for the Amalgam Proportion but also learns more general features; this might be because of the dynamical nature (routing) of the CapsNet. Our results thus call for a more in-depth investigation of Capsule Networks and their representation quality.

How does augmenting the dataset with Gaussian Noise affect the representation quality? We observe that Gaussian Augmentation degrades the representation quality of all the classifiers (Table 2). This supports our intuition (Section 3), as adding Gaussian noise to the images subdues the features of the image by blurring, making it harder for the classifier to interpret these features. Consequently, a weaker association of the representation with these features is observed through the perspective of Amalgam Proportion.

How does Label Smoothing improve the representation quality? Our results corroborate the analysis in Müller et al. (2019) that Label Smoothing (LS) encourages the representations to group in tight, equally distant clusters. The raw metric values from our experiments for LS suggest that classifiers employed with LS do form a tighter cluster in soft-label space (as suggested by DBM) (Table 2). At the same time, LS also favours classifiers learning special features belonging to a class (as suggested by AM).

Do adversarial defences which work on the principle of obfuscated gradients affect representation quality?
Since some adversarial defences, such as Feature Squeezing, Spatial Smoothing, and Thermometer Encoding, rely on obfuscating gradients (Athalye et al., 2018), they fail to improve the representation quality of the classifiers (suggested by DBM). At the same time, more robust adversarial defences like Adversarial Training, which do not rely on obfuscating gradients, have better representation quality. Hence, adversarial defences can be evaluated using our metrics to analyse whether an adversarial defence improves the robustness of the classifier by improving its representation quality, or relies on some other criterion.

8 CONCLUSIONS

In this article, we propose a novel Zero-Shot learning-based method, entitled Raw Zero-Shot, to assess the representation of several neural networks. In order to assess the representation, two associated metrics are formally defined based on different hypotheses of representation quality. Results from the experiments reveal that classifiers employed with adversarial defences not only decrease the attack accuracy, as presumed, but also improve the representation quality of the classifiers, as evaluated by our proposed metrics (DBM and AM). Further, adversarial defences have a low p-value in the paired samples t-test when compared to vanilla classifiers in general, suggesting that the representation quality is significantly affected by various adversarial defences. Moreover, a high Pearson correlation coefficient and low p-value of the Pearson correlation test between the proposed metrics and the adversarial attacks suggest a link between the representation quality and the adversarial attacks. Our experimental results suggest that CapsNet (a dynamic routing network) has the best representation quality amongst the classifiers, which calls for a more in-depth investigation of Capsule Networks. Hence, the proposed Raw Zero-Shot was able to assess the representation quality of different neural network architectures from the perspective of unknown classes, along with the adversarial defences, and to link this representation quality of the neural networks with adversarial attacks and defences. It also opens up new possibilities of using representation quality for both the evaluation (i.e. as a quality assessment) and the development (e.g. as a loss function) of neural networks.
EF-GGQnDpu2
Further development needed?
4: Ok but not good enough - rejection
This paper is concerned with the assessment of the vulnerability of neural networks to adversarial attack, according to their ability to interpret new classes using previously-learned features. The underlying assumption made is that combinations of features from a sufficiently rich learned collection should be able to help identify members of previously unseen classes. For cases where the representational quality is poor, and the learned features do not suffice for new classes, the authors expect adversarial perturbation to be more successful. Conversely, they aim to show that when adversarial defense is added to a learning framework, it facilitates the identification of new classes from previously-learned features. In order to assess the generalizability of learned features to new classes, the authors propose the use of zero-shot learning. Using a leave-one-out strategy ("Raw Zero-Shot"), features are learned over all but one class, and then tested on the class left out. The results are then combined over all possible choices of class to exclude from the training set. For the evaluation, the authors consider an internal clustering validation measure (Davies-Bouldin metric), and a measure of average L1 deviation between normalized probability vectors (soft label vectors) of the zero-shot classifier (where the new class is excluded from the training set) and the standard classifier (new class included). The authors then proceed to use these measures to evaluate the performances of classifiers in two main scenarios: generalizability of features when adversarial defences are applied vs when not applied; and the correlation of success / failure of adversarial attack vs the generalizability of features.

Pros:
-----
1) The experimental validation does provide evidence that the vulnerability to adversarial attack does correlate with the quality of the features. However, this connection is not unexpected, as adversarial perturbation techniques generally seek to discover nearby regions in which the features are less discriminative.
2) The proposed method is easy to implement, and could be beneficial in practice when some indication is needed as to whether the training data is robust to attack, or can be expected to generalize well to new data classes.
3) The paper is generally readable. However, more intuition could be given in the introduction to support the authors' contention that feature generalizability could be related to the vulnerability to adversarial attack.

Cons:
-----
1) The technical contribution of the paper is not very novel. It essentially boils down to the use of zero-shot learning with cross-validation techniques.
2) As proposed in the paper, Raw Zero-Shot takes a rather simplistic view of the nature of classes or their relative contribution to the training process; for example, it makes no allowance for differences among the class sizes in the training set. Also, Raw Zero-Shot seeks to characterize the overall generalizability of features over the entire training collection. Can any insight be provided at the level of individual test objects, so as to help give a more useful characterization of adversarial examples?
3) This work could be better situated with respect to the relevant literature on the characterization of adversarial attack and of the generalizability / transferability of learned features. For example, other work on adversarial attack has taken the view that adversarial examples can be easier to detect when they lie far from the underlying data manifold.
It seems that this work is implicitly making a related assumption. Discuss?
4: The reviewer is confident but not absolutely certain that the evaluation is correct
wta_8Hx2KD
ICLR.cc/2021/Conference
2021
Incorporating Symmetry into Deep Dynamics Models for Improved Generalization
["Rui Wang", "Robin Walters", "Rose Yu"]
Recent work has shown deep learning can accelerate the prediction of physical dynamics relative to numerical solvers. However, limited physical accuracy and an inability to generalize under distributional shift limit its applicability to the real world. We propose to improve accuracy and generalization by incorporating symmetries into convolutional neural networks. Specifically, we employ a variety of methods, each tailored to enforce a different symmetry. Our models are both theoretically and experimentally robust to distributional shift by symmetry group transformations and enjoy favorable sample complexity. We demonstrate the advantage of our approach on a variety of physical dynamics including Rayleigh–Bénard convection and real-world ocean currents and temperatures. Compared with image or text applications, our work is a significant step towards applying equivariant neural networks to high-dimensional systems with complex dynamics.
["deep sequence model", "equivariant neural network", "physics-guided deep learning", "AI for earth science"]
ABSTRACT

Recent work has shown deep learning can accelerate the prediction of physical dynamics relative to numerical solvers. However, limited physical accuracy and an inability to generalize under distributional shift limit its applicability to the real world. We propose to improve accuracy and generalization by incorporating symmetries into convolutional neural networks. Specifically, we employ a variety of methods, each tailored to enforce a different symmetry. Our models are both theoretically and experimentally robust to distributional shift by symmetry group transformations and enjoy favorable sample complexity. We demonstrate the advantage of our approach on a variety of physical dynamics including Rayleigh–Bénard convection and real-world ocean currents and temperatures. Compared with image or text applications, our work is a significant step towards applying equivariant neural networks to high-dimensional systems with complex dynamics. We open-source our simulation, data and code at https://github.com/Rose-STL-Lab/Equivariant-Net.

1 INTRODUCTION

Modeling dynamical systems in order to forecast the future is of critical importance in a wide range of fields including, e.g., fluid dynamics, epidemiology, economics, and neuroscience [2; 21; 45; 22; 14]. Many dynamical systems are described by systems of non-linear differential equations that are difficult to simulate numerically. Accurate numerical computation thus requires long run times and manual engineering in each application.

Recently, there has been much work applying deep learning to accelerate solving differential equations [46; 6]. However, current approaches struggle with generalization. The underlying problem is that physical data has no canonical frame of reference to use for data normalization. For example, it is not clear how to rotate samples of fluid flow such that they share a common orientation. Thus real-world out-of-distribution test data is difficult to align with training data. Another limitation of current approaches is low physical accuracy. Even when mean error is low, errors are often spatially correlated, producing a different energy distribution from the ground truth.

We propose to improve the generalization and physical accuracy of deep learning models for physical dynamics by incorporating symmetries into the forecasting model. In physics, Noether's Law gives a correspondence between conserved quantities and groups of symmetries. By building a neural network which inherently respects a given symmetry, we thus make conservation of the associated quantity more likely and consequently the model's prediction more physically accurate.

A function f is equivariant if, when its input x is transformed by a symmetry group element g, the output is transformed by the same symmetry,

f(gx) = g f(x).

See Figure 1 for an illustration. In the setting of forecasting, f approximates the underlying dynamical system.
The set of valid transformations g is called the symmetry group of the system.

Figure 1: Illustration of equivariance of, e.g., f(x) = 2x with respect to T = rot(π/4).

By designing a model that is inherently equivariant to transformations of its input, we can guarantee that our model generalizes automatically across these transformations, making it robust to distributional shift. The symmetries we consider, translation, rotation, uniform motion, and scale, have different properties, and thus we tailor our methods for incorporating each symmetry.

Specifically, for scale equivariance, we replace the convolution operation with group correlation over the group G generated by translations and rescalings. Our method builds on that of Worrall and Welling [51], with significant novel adaptations to the physics domain: scaling affecting time, space, and magnitude; both up and down scaling; and scaling by any real number. For rotational symmetries, we leverage the key insight of Cohen and Welling [9] that the input, output, and hidden layers of the network are all acted upon by the symmetry group and thus should be treated as representations of the symmetry group. Our rotation-equivariant model is built using the flexible E(2)-CNN framework developed by Weiler and Cesa [49]. In the case of a uniform motion, or Galilean transformation, we show the above methods are too constrained. We use the simple but effective technique of convolutions conjugated by averaging operations.

Research into equivariant neural networks has mostly been applied to tasks such as image classification and segmentation [27; 50; 49]. In contrast, we design equivariant networks in a completely different context, that of a time series representing a physical process. Forecasting high-dimensional turbulence is a significant step for equivariant neural networks compared to the low-dimensional physics examples and computer vision problems treated in other works.

We test on a simulated turbulent convection dataset and on real-world ocean current and temperature data. Ocean currents are difficult to predict using numerical methods due to unknown external forces and complex dynamics not fully captured by simplified mathematical models. These domains are chosen as examples, but since the symmetries we focus on are pervasive in almost all physics problems, we expect our techniques will be widely applicable. Our contributions include:

- We study the problem of improving the generalization capability and physical accuracy of deep learning models for learning complex physical dynamics such as turbulence and ocean currents.
- We design tailored methods with theoretical guarantees to incorporate various symmetries, including uniform motion, rotation, and scaling, into convolutional neural networks.
- When evaluated on turbulent convection and ocean current prediction, our models achieve significant improvement on generalization of both predictions and physical consistency.
- For different symmetries, our methods have an average 31% and maximum 78% reduction in energy error when evaluated on turbulent convection with no distributional shift.
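The equivariance in Figure 1 can be checked numerically. In this minimal NumPy sketch (illustrative only), f(x) = 2x commutes with the rotation T = rot(π/4) because a scalar multiple of the identity commutes with every rotation:

```python
import numpy as np

theta = np.pi / 4
g = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # the rotation rot(pi/4)

def f(x):
    return 2 * x                                   # the map from Figure 1

x = np.array([1.0, 0.5])
assert np.allclose(f(g @ x), g @ f(x))             # f(gx) == g f(x)
```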
2 MATHEMATICAL PRELIMINARIES

2.1 SYMMETRY GROUPS AND EQUIVARIANT FUNCTIONS

Formal discussion of symmetry relies on the concept of an abstract symmetry group. We give a brief overview; for a more formal treatment see Appendix A or Lang [28].

A group of symmetries, or simply group, consists of a set G together with a composition map ∘: G × G → G. The composition map is required to be associative and have an identity 1 ∈ G. Most importantly, composition with any element of G is required to be invertible.

Groups are abstract objects, but they become concrete when we let them act. A group G has an action on a set S if there is an action map ρ: G × S → S which is compatible with the composition law. We say further that S is a G-representation if the set S is a vector space and the group acts on S by linear transformations.

Definition 1 (invariant, equivariant). Let f: X → Y be a function and G be a group. Assume G acts on X and Y. The function f is G-equivariant if f(gx) = g f(x) for all x ∈ X and g ∈ G. The function f is G-invariant if f(gx) = f(x) for all x ∈ X and g ∈ G.

2.2 PHYSICAL DYNAMICAL SYSTEMS

We investigate two dynamical systems: Rayleigh–Bénard convection and real-world ocean currents and temperatures. These systems are governed by the Navier-Stokes equations.

2D Navier-Stokes (NS) Equations. Let w(x, t) be the velocity vector field of a flow. The field w has two components (u, v), velocities along the x and y directions. The governing equations for this physical system are the momentum equation, continuity equation, and temperature equation,

$\frac{\partial w}{\partial t} = -(w \cdot \nabla)w - \frac{1}{\rho_0}\nabla p + \nu \nabla^2 w + f, \qquad \nabla \cdot w = 0, \qquad \frac{\partial H}{\partial t} = \kappa \Delta H - (w \cdot \nabla)H,$  (DNS)

where H(x, t) is temperature, p is pressure, κ is the heat conductivity, ρ₀ is the initial density, α is the coefficient of thermal expansion, ν is the kinematic viscosity, and f is the buoyant force.

2.3 SYMMETRIES OF DIFFERENTIAL EQUATIONS

By classifying the symmetries of a system of differential equations, the task of finding solutions is made far simpler, since the space of solutions will exhibit those same symmetries. Let G be a group equipped with an action on 2-dimensional space X = ℝ² and 3-dimensional spacetime X̂ = ℝ³. Let V = ℝᵈ be a G-representation. Denote the set of all V-fields on X̂ as

$\hat{F}_V = \{\, w : \hat{X} \to V : w \text{ smooth} \,\}.$

Define F_V similarly to be the V-fields on X. Then G has an induced action on F̂_V by (g · w)(x, t) = g(w(g⁻¹x, g⁻¹t)) and on F_V analogously.

Consider a system of differential operators D acting on F̂_V. Denote the set of solutions Sol(D) ⊆ F̂_V. We say G is a symmetry group of D if G preserves Sol(D). That is, if φ is a solution of D, then for all g ∈ G, g(φ) is also. In order to forecast the evolution of a system D, we model the forward prediction function f. Let w ∈ Sol(D). The input to f is a collection of k snapshots at times t₋ₖ, ..., t₋₁, denoted w_{t_i} ∈ F_V. The prediction function f: F_V^k → F_V is defined by f(w_{t₋ₖ}, ..., w_{t₋₁}) = w_t. It predicts the solution at a time t based on the solution in the past. Let G be a symmetry group of D. Then for g ∈ G, g(w) is also a solution of D. Thus f(g·w_{t₋ₖ}, ..., g·w_{t₋₁}) = g·w_t. Consequently, f is G-equivariant.

2.4 SYMMETRIES OF NAVIER-STOKES EQUATIONS

The Navier-Stokes equations are invariant under the following five different transformations. Individually, each of these types of transformations generates a group of symmetries of the system. The full list of symmetry groups of the NS equations and the heat equation is given in Appendix B.6.

- Space translation: T^sp_c w(x, t) = w(x − c, t), c ∈ ℝ²,
- Time translation: T^time_τ w(x, t) = w(x, t − τ), τ ∈ ℝ,
- Uniform motion: T^um_c w(x, t) = w(x, t) + c, c ∈ ℝ²,
- Rotation/Reflection: T^rot_R w(x, t) = R w(R⁻¹x, t), R ∈ O(2),
- Scaling: T^sc_λ w(x, t) = λ w(λx, λ²t), λ ∈ ℝ_{>0}.
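On a discretized field these transformations are simple array operations. The NumPy sketch below is our own illustration (the (2, H, W) layout and the step-size bookkeeping are assumptions, not the paper's data format): it applies the uniform motion and scaling transformations, with scaling realized by rescaling magnitude and reinterpreting the grid spacings Δx and Δt, anticipating Section 3.5.

```python
import numpy as np

def uniform_motion(w, c):
    """T^um_c: add a constant velocity c = (cx, cy) to a (2, H, W) field."""
    return w + np.asarray(c, dtype=w.dtype).reshape(2, 1, 1)

def scale(w, lam, dx, dt):
    """T^sc_lambda on a fixed grid: w -> lam * w, with the spatial and
    temporal step sizes reinterpreted as dx / lam and dt / lam**2."""
    return lam * w, dx / lam, dt / lam ** 2

w = np.random.randn(2, 64, 64)                     # discretized velocity field
w_um = uniform_motion(w, c=(0.5, -0.3))
w_sc, dx_sc, dt_sc = scale(w, lam=2.0, dx=1.0, dt=0.01)
```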
3 METHODOLOGY

We prescribe equivariance by training within function classes containing only equivariant functions. Our models can thus be theoretically guaranteed to be equivariant up to discretization error. We incorporate equivariance into two state-of-the-art architectures for dynamics prediction, ResNet and U-net [48]. Below, we describe how we modify the convolution operation in these models for different symmetries G to form four EquG-ResNet and four EquG-Unet models.

3.1 EQUIVARIANT NETWORKS

The key to building equivariant networks is that the composition of equivariant functions is equivariant. Hence, if the maps between layers of a neural network are equivariant, then the whole network will be equivariant. Note that both the linear maps and activation functions must be equivariant. An important consequence of this principle is that the hidden layers must also carry a G-action. Thus, the hidden layers are not collections of scalar channels, but vector-valued G-representations.

Equivariant Convolutions. Consider a convolutional layer F_{ℝ^{d_in}} → F_{ℝ^{d_out}} with kernel K, from an ℝ^{d_in}-field to an ℝ^{d_out}-field. Let ℝ^{d_in} and ℝ^{d_out} be G-representations with action maps ρ_in and ρ_out respectively. Cohen et al. [11, Theorem 3.3] prove the network is G-equivariant if and only if

$K(gv) = \rho_{\mathrm{out}}(g)^{-1}\, K(v)\, \rho_{\mathrm{in}}(g) \quad \text{for all } g \in G.$  (1)

A network composed of such equivariant convolutions is called a steerable CNN.

Equivariant ResNet and U-net. Equivariant ResNet architectures appear in [9; 10], and equivariant transposed convolution, a feature of U-net, is implemented in [49]. We prove in general that adding skip connections to a network does not affect its equivariance with respect to linear actions, and we also give a condition for ResNet or U-net to be equivariant, in Appendix B.2.

Relation to Data Augmentation. To improve generalization, equivariant networks offer a better-performing alternative to the popular technique of data augmentation [13]. Large symmetry groups normally require augmentation with many transformed examples. In contrast, for equivariant models, we have the following proposition. (See Appendix B.1 for proof.)

Proposition 1. G-equivariant models with equivariant loss learn equally (up to sample weight) from any transformation g(s) of a sample s. Thus data augmentation does not help during training.

3.2 TIME AND SPACE TRANSLATION EQUIVARIANCE

CNNs are time translation-equivariant as long as we predict in an autoregressive manner. Convolutional layers are also naturally space translation-equivariant (if cropping is ignored). Any activation function which acts identically pixel-by-pixel is equivariant.

3.3 ROTATIONAL EQUIVARIANCE

To incorporate rotational symmetry, we model f using SO(2)-equivariant convolutions and activations within the E(2)-CNN framework of Weiler and Cesa [49]. In practice, we use the cyclic group G = C_n instead of G = SO(2), as for large enough n the difference is practically indistinguishable due to space discretization. We use powers of the regular representation ρ = ℝ[C_n]^m for hidden layers. The representation ℝ[C_n] has basis given by the elements of C_n and C_n-action by permutation matrices. It has good descriptivity since it contains all irreducible representations of C_n, and it is compatible with any activation function applied channel-wise.
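For concreteness, such a rotation-equivariant layer can be assembled with the e2cnn library that accompanies [49]. The sketch below is a minimal example of our own (the width of 16 regular fields and the kernel size are placeholder choices, not the paper's hyperparameters): it maps a velocity field, modeled as a frequency-1 vector field, to regular-representation hidden features for G = C_8.

```python
import torch
from e2cnn import gspaces
from e2cnn import nn as enn

gs = gspaces.Rot2dOnR2(N=8)                           # C_8 acting on the plane
in_type = enn.FieldType(gs, [gs.irrep(1)])            # one 2D vector (velocity) field
hid_type = enn.FieldType(gs, 16 * [gs.regular_repr])  # rho = R[C_8]^16

layer = enn.SequentialModule(
    enn.R2Conv(in_type, hid_type, kernel_size=5, padding=2),
    enn.ReLU(hid_type),
)

x = enn.GeometricTensor(torch.randn(4, 2, 64, 64), in_type)
y = layer(x)          # features transform equivariantly under 45-degree rotations
```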
3.4 Uniform Motion Equivariance

Uniform motion is part of Galilean invariance and is relevant to all non-relativistic physics modeling. For a vector field $X : \mathbb{R}^2 \to \mathbb{R}^2$ and a vector $c \in \mathbb{R}^2$, the uniform motion transformation adds a constant vector field to the field $X$: $T^{um}_c(X)(v) = X(v) + c$, $c \in \mathbb{R}^2$. By the following corollary, proved in Appendix B.3, enforcing uniform motion equivariance by requiring all layers of the CNN to be equivariant severely limits the model.

Corollary 2. If $f$ is a CNN alternating between convolutions $f_i$ and channel-wise activations $\sigma_i$, and the combined layers $\sigma_i f_i$ are uniform motion equivariant, then $f$ is affine.

To overcome this limitation, we relax the requirement by conjugating the model with a shifted input distribution. For each sliding local block in each convolutional layer, we shift the mean of the input tensor to zero and shift the output back after the convolution and activation function, per sample. In other words, if the input is $P \in \mathbb{R}^{b \times d_{in} \times s \times s}$ and the output is $Q \in \mathbb{R}^{b \times d_{out}} = \sigma(P \ast K)$ for one sliding local block, where $b$ is the batch size, $d$ is the number of channels, $s$ is the kernel size, and $K$ is the kernel, then

$$\mu_i = \mathrm{Mean}_{jkl}(P_{ijkl}), \qquad P_{ijkl} \mapsto P_{ijkl} - \mu_i, \qquad Q_{ij} \mapsto Q_{ij} + \mu_i. \qquad (2)$$

This allows the convolution layer to be equivariant with respect to uniform motion. If the input is a vector field, we apply this operation to each component.

Proposition 3. A residual block $f(x) + x$ is uniform motion equivariant if the residual connection $f$ is uniform motion invariant.

By Proposition 3 above, which is proved in Appendix B.3, within ResNet the residual mappings should be invariant, not equivariant, to uniform motion. That is, the skip connection $f^{(i,i+2)} = I$ is equivariant and the residual function $f^{(i,i+1)}$ should be invariant. Hence, for the first layer in each residual block, we omit adding the mean back to the output $Q_{ij}$. In the case of U-net, when upscaling, we pad with the mean to preserve the overall mean.
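A minimal sketch of the conjugation in equation 2, written as a single unfold-based convolution in PyTorch, is given below. The odd kernel size, "same" padding, and the joint mean over all channels of a block are illustrative assumptions (as noted above, for vector fields the shift is applied per component). One can verify from the code that shifting the input by a constant $c$ shifts the output by the same $c$.

```python
import torch
import torch.nn.functional as F

def shifted_conv2d(x, weight, activation=torch.relu, add_mean_back=True):
    # x: (b, d_in, h, w); weight: (d_out, d_in, s, s) with s odd.
    b, d_in, h, w = x.shape
    d_out, _, s, _ = weight.shape
    # Sliding local blocks: (b, d_in*s*s, L), one column per spatial position.
    blocks = F.unfold(x, kernel_size=s, padding=s // 2)
    mu = blocks.mean(dim=1, keepdim=True)              # per-sample, per-block mean
    blocks = blocks - mu                               # shift input mean to zero
    out = activation(weight.view(d_out, -1) @ blocks)  # convolve, then activate
    if add_mean_back:                                  # shift the output back;
        out = out + mu                                 # omitted for the invariant
    return out.view(b, d_out, h, w)                    # residual layers (Prop. 3)
```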
3.5 Scale Equivariance

Scale equivariance in dynamics is unique, as the physical law dictates the scaling of magnitude, space, and time simultaneously. This is very different from scaling in images with respect to resolution [51]. For example, the Navier-Stokes equations are preserved under a specific scaling ratio of time, space, and velocity given by the transformation

$$T_\lambda : w(x,t) \mapsto \lambda\, w(\lambda x, \lambda^2 t), \qquad (3)$$

where $\lambda \in \mathbb{R}_{>0}$. We implement two different approaches for scale equivariance, depending on whether we tie the physical scale to the resolution of the data.

Resolution Independent Scaling. We fix the resolution and scale the magnitude of the input by varying the discretization step size. An input $w \in \mathcal{F}^k_{\mathbb{R}^2}$ with step sizes $\Delta x(w)$ and $\Delta t(w)$ can be scaled $w' = T^{sc}_\lambda(w) = \lambda w$ by scaling the magnitude of the vector alone, provided the discretization constants are now assumed to be $\Delta x(w') = (1/\lambda)\, \Delta x(w)$ and $\Delta t(w') = (1/\lambda^2)\, \Delta t(w)$. We refer to this as magnitude equivariance hereafter.

To obtain magnitude equivariance, we divide the input tensor by the MinMax scaler (the maximum of the tensor minus the minimum) and scale the output back after convolution and activation, per sliding block. We found that the standard deviation and mean L2 norm may work as well, but are not as stable as the MinMax scaler. Specifically, using the same notation as in Section 3.4,

$$\sigma_i = \mathrm{MinMax}_{jkl}(P_{ijkl}), \qquad P_{ijkl} \mapsto P_{ijkl}/\sigma_i, \qquad Q_{ij} \mapsto Q_{ij} \cdot \sigma_i. \qquad (4)$$

Resolution Dependent Scaling. If the physical scale of the data is fixed, then scaling corresponds to a change in resolution and time step size. To achieve this, we replace the convolution layers with group correlation layers over the group $G = (\mathbb{R}_{>0}, \cdot) \ltimes (\mathbb{R}^2, +)$ of scalings and translations. In convolution, we translate a kernel $K$ across an input $w$, as $v(p) = \sum_{q \in \mathbb{Z}^2} w(p+q) K(q)$. The $G$-correlation upgrades this operation by both translating and scaling the kernel relative to the input,

$$v(p, s, \mu) = \sum_{\lambda \in \mathbb{R}_{>0},\; t \in \mathbb{R},\; q \in \mathbb{Z}^2} \lambda\, w(\lambda(p+q), \lambda^2 t, \lambda\mu)\, K(q, s, t, \mu), \qquad (5)$$

where $s$ and $t$ denote the indices of the output and input channels respectively. We add an axis to the tensors corresponding to the scale factor $\mu$. Note that we treat the channel as a time dimension, both with respect to our input and to the scaling action. As a consequence, as the number of channels increases in the lower layers of U-net and ResNet, the temporal resolution increases, which is analogous to temporal refinement in numerical methods [24; 31]. For the input $\tilde{w}$ of the first layer, where $\tilde{w}$ has no $\mu$ levels originally, $w(p, s, \mu) = \tilde{w}(p, \mu^2 s)$.

Our model builds on the methods of Worrall and Welling [51], but with important adaptations for the physical domain. Our implementation of the group correlation in equation 5 directly incorporates the physical scaling law, equation 3, of the system (DNS). This affects time, space, and magnitude. (For heat, we drop the magnitude scaling.) The physical scaling law dictates that our model should be equivariant to both up and down scaling, by any $\lambda \in \mathbb{R}_{>0}$. Practically, the sum is truncated to 7 different $\lambda$, $1/3 \leq \lambda \leq 3$, and discrete data is continuously indexed using interpolation. Note that equation 3 demands we scale anisotropically, i.e. differently across time and space.
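The resolution-independent variant in equation 4 admits the same kind of unfold-based sketch as the uniform-motion case; the epsilon guard against division by zero is an added assumption. One can check from the code that $f(\lambda x) = \lambda f(x)$, since the per-block MinMax statistic scales linearly with the input.

```python
import torch
import torch.nn.functional as F

def scaled_conv2d(x, weight, activation=torch.relu, eps=1e-8):
    # x: (b, d_in, h, w); weight: (d_out, d_in, s, s) with s odd.
    b, d_in, h, w = x.shape
    d_out, _, s, _ = weight.shape
    blocks = F.unfold(x, kernel_size=s, padding=s // 2)   # (b, d_in*s*s, L)
    sigma = (blocks.max(dim=1, keepdim=True).values
             - blocks.min(dim=1, keepdim=True).values + eps)
    blocks = blocks / sigma                               # normalize magnitude
    out = activation(weight.view(d_out, -1) @ blocks)     # convolve, activate
    out = out * sigma                                     # scale back: Q -> Q*sigma
    return out.view(b, d_out, h, w)
```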
4 Related Work

Equivariance and Invariance. Developing neural nets that preserve symmetries has been a fundamental task in image recognition [12; 49; 9; 7; 29; 27; 3; 52; 10; 19; 50; 16; 42]. But these models have never been applied to forecasting physical dynamics. Jaiswal et al. [23] and Moyer et al. [37] proposed approaches to find representations of data that are invariant to changes in specified factors, which is different from our physical symmetries. Ling et al. [30] and Fang et al. [17] studied tensor invariant neural networks to learn the Reynolds stress tensor while preserving Galilean invariance, and Mattheakis et al. [34] embedded even/odd symmetry of a function and energy conservation into neural networks to solve differential equations. But these two papers are limited to fully connected neural networks. Sosnovik et al. [44] extend Worrall and Welling [51] to group correlation convolution. But these two papers are limited to 2D images and are not magnitude equivariant, which is still inadequate for fluid dynamics. Bekkers [4] describes principles for endowing a neural architecture with invariance with respect to a Lie group.

Physics-informed Deep Learning. Deep learning models have often been used to model physical dynamics. For example, Wang et al. [48] unified the CFD technique and U-net to generate predictions with higher accuracy and better physical consistency. Kim and Lee [25] studied unsupervised generative modeling of turbulent flows, but the model is not able to make real-time future predictions given the historic data. Anderson et al. [1] designed a rotationally covariant neural network for learning molecular systems. Raissi et al. [40; 41] applied deep neural networks to solve PDEs automatically, but these approaches require explicit input of boundary conditions during inference, which are generally not available in real time. Mohan et al. [35] proposed a purely data-driven DL model for turbulence, but the model lacks physical constraints and interpretability. Wu et al. [53] and Beucler et al. [5] introduced statistical and physical constraints in the loss function to regularize the predictions of the model. However, their studies only focused on spatial modeling without temporal dynamics. Morton et al. [36] incorporated Koopman theory into an encoder-decoder architecture but did not study the symmetry of fluid dynamics.

Video Prediction. Our work is related to future video prediction. Conditioning on the observed frames, video prediction models are trained to predict future frames, e.g., [33; 18; 54; 47; 39]. Many of these models are trained on natural videos with complex noisy data from unknown physical processes. Therefore, it is difficult to explicitly incorporate physical principles into these models. Our work is substantially different because we do not attempt to predict object or camera motions.

5 Experiments

We test our models on Rayleigh-Bénard convection and real-world ocean currents. We also evaluate on heat diffusion systems; see Appendix C for more results. The implementation details and a detailed description of the energy spectrum error can be found in Appendices D and B.7.

Evaluation Metrics. Our goal is to show that adding symmetry improves both the accuracy and the physical consistency of predictions. For accuracy, we use the Root Mean Square Error (RMSE) between the forward predictions and the ground truth over all pixels. For physical consistency, we calculate the Energy Spectrum Error (ESE), which is the RMSE of the log of the energy spectrum. ESE can indicate whether the predictions preserve the correct statistical distributions of the fluids and obey the energy conservation law, which is a critical metric for physical consistency.

Experimental Setup. ResNet [20] and U-net [43] are the best-performing models for our tasks [48] and are well suited to them. Thus, we implemented these two convolutional architectures equipped with four different symmetries, which we name Equ-ResNet (U-net). We use a rolling window approach to generate sequences with step size 1 for the RBC data and step size 3 for the ocean data. All models predict raw velocity and temperature fields up to 10 steps ahead auto-regressively. We use an MSE loss function that accumulates the forecasting errors. We split the data 60%-20%-20% for training-validation-test across time and report mean errors over five random runs.

Table 2: The RMSE and ESE of the ResNet (U-net) and four Equ-ResNets (U-nets) predictions on the original and four transformed test sets of Rayleigh-Bénard convection. Augm is ResNet (U-net) trained on the augmented training set with additional samples applied with random transformations from the relevant symmetry group. Each column contains all models' prediction errors on the original test set and the four different transformed test sets.

           Root Mean Square Error (x10^3)                           Energy Spectrum Errors
           Orig       UM         Mag        Rot        Scale        Orig       UM         Mag        Rot        Scale
ResNet     0.67±0.24  2.94±0.84  4.30±1.27  3.46±0.39  1.96±0.16    0.46±0.19  0.56±0.29  0.26±0.14  1.59±0.42  4.32±2.33
Augm       -          1.10±0.20  1.54±0.12  0.92±0.09  1.01±0.11    -          1.37±0.02  1.14±0.32  1.92±0.21  1.55±0.14
Equ_UM     0.71±0.26  0.71±0.26  -          -          -            0.33±0.11  0.33±0.11  -          -          -
Equ_Mag    0.69±0.24  -          0.67±0.14  -          -            0.34±0.09  -          0.19±0.02  -          -
Equ_Rot    0.65±0.26  -          -          0.76±0.02  -            0.31±0.06  -          -          1.23±0.04  -
Equ_Scal   0.70±0.02  -          -          -          0.85±0.09    0.44±0.22  -          -          -          0.68±0.26
U-net      0.64±0.24  2.27±0.82  3.59±1.04  2.78±0.83  1.65±0.17    0.50±0.04  0.34±0.10  0.55±0.05  0.91±0.27  4.25±0.57
Augm       -          0.75±0.28  1.33±0.33  0.86±0.04  1.11±0.07    -          0.96±0.23  0.44±0.21  1.24±0.04  1.47±0.11
Equ_UM     0.68±0.26  0.71±0.24  -          -          -            0.23±0.06  0.14±0.05  -          -          -
Equ_Mag    0.67±0.11  -          0.68±0.14  -          -            0.42±0.04  -          0.34±0.06  -          -
Equ_Rot    0.68±0.25  -          -          0.74±0.01  -            0.11±0.02  -          -          1.16±0.05  -
Equ_Scal   0.69±0.13  -          -          -          0.90±0.25    0.45±0.32  -          -          -          0.89±0.29

5.1 Equivariance Errors

The equivariance error can be defined as $EE_T(x) = |T(f(x)) - f(T(x))|$, where $x$ is an input, $f$ is a neural net, and $T$ is a transformation from a symmetry group. We empirically measure the equivariance errors of all the equivariant models we have designed. Table 1 shows the equivariance errors of ResNet (U-net) and Equ-ResNet (U-net). The transformation $T$ is sampled in the same way as we generated the transformed Rayleigh-Bénard convection test sets. See more details in Appendix B.5.

Table 1: Equivariance errors of ResNet (U-net) and Equ-ResNet (U-net).

EE_T (x10^3)   UM     Mag    Rot    Scale
ResNet         2.010  1.885  5.895  1.658
Equ-ResNet     0.0    0.0    1.190  0.579
U-net          1.070  0.200  1.548  1.809
Equ-U-net      0.0    0.0    0.794  0.481
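Both metrics are easy to state in code. The sketch below assumes (2, H, W) torch tensors; the radial binning of the energy spectrum is a common recipe and an assumption here, since the paper's exact ESE definition is given in its Appendix B.7.

```python
import torch

def equivariance_error(f, T, x):
    """EE_T(x) = |T(f(x)) - f(T(x))|, averaged over pixels."""
    return (T(f(x)) - f(T(x))).abs().mean()

def energy_spectrum(w):
    """Radially averaged kinetic energy spectrum of a (2, H, W) field."""
    e = sum(torch.fft.fft2(c).abs() ** 2 for c in w) / 2
    h, wd = e.shape
    ky, kx = torch.meshgrid(torch.fft.fftfreq(h) * h,
                            torch.fft.fftfreq(wd) * wd, indexing="ij")
    k = (kx ** 2 + ky ** 2).sqrt().round().long().flatten()
    return torch.zeros(int(k.max()) + 1).scatter_add_(0, k, e.flatten())

def ese(pred, target, eps=1e-12):
    """Energy Spectrum Error: RMSE of the log energy spectra."""
    lp = torch.log(energy_spectrum(pred) + eps)
    lt = torch.log(energy_spectrum(target) + eps)
    return ((lp - lt) ** 2).mean().sqrt()
```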
5.2 Experiments on Simulated Rayleigh-Bénard Convection Dynamics

Data Description. Rayleigh-Bénard convection occurs in a horizontal layer of fluid heated from below and is a major feature of the El Niño dynamics. The dataset comes from two-dimensional turbulent flow simulated using the Lattice Boltzmann Method [8] with Rayleigh number $2.5 \times 10^8$. We divide each $1792 \times 256$ image into 7 square subregions of size $256 \times 256$, then downsample to $64 \times 64$ pixels. To test the models' generalization ability, we generate four additional test sets: 1) UM: added random vectors drawn from $U(-1, 1)$; 2) Mag: multiplied by random values sampled from $U(0, 2)$; 3) Rot: randomly rotated by multiples of $\pi/2$; 4) Scale: scaled by $\lambda$ sampled from $U(1/5, 2)$. Due to the lack of a fixed reference frame, real-world data would be transformed relative to the training data. We use transformed data to mimic this scenario.
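For concreteness, the first three transformed test sets could be generated roughly as below; the samplers mirror the stated ranges, the seed is arbitrary, and the scaled set is omitted since it also requires regridding space and time.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_um(w):                      # add a random constant vector from U(-1, 1)^2
    c = rng.uniform(-1, 1, size=2)
    return w + c.reshape(2, 1, 1)

def make_mag(w):                     # multiply magnitudes by a value from U(0, 2)
    return rng.uniform(0, 2) * w

def make_rot(w):                     # rotate grid and vectors by k * 90 degrees
    k = int(rng.integers(0, 4))
    out = np.rot90(w, k=k, axes=(1, 2)).copy()
    for _ in range(k):               # rotate the (u, v) components as well
        out = np.stack([-out[1], out[0]])
    return out
```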
Prediction Performance. Table 2 shows the prediction RMSE and ESE on the original and four transformed test sets for the non-equivariant ResNet (U-net) and the four Equ-ResNets (U-nets). Augm is ResNet (U-net) trained on the augmented training set with additional samples with random transformations applied from the relevant symmetry group. The augmented training set contains additional transformed samples and is three times the size of the original training set. Each column contains the prediction errors by the non-equivariant and equivariant models on each test set. On the original test set, all models have similar RMSE, yet the equivariant models have lower ESE. This demonstrates that incorporating symmetries preserves the representation power of CNNs and even improves the models' physical consistency.

On the transformed test sets, we can see that ResNet (U-net) fails, while Equ-ResNets (U-nets) perform even much better than Augm-ResNets (U-nets). This demonstrates the value of equivariant models over data augmentation for improving generalization. Figure 2 shows the ground truth and the predicted velocity fields at time steps 1, 5 and 10 by the ResNet and the four Equ-ResNets on the four transformed test samples.

Figure 2: The ground truth and the predicted velocity norm fields $\|w\|_2$ at time steps 1, 5 and 10 by the ResNet and four Equ-ResNets on the four transformed test samples. The first column is the target, the second is ResNet predictions, and the third is predictions by Equ-ResNets.

Generalization. In order to evaluate the models' generalization ability with respect to the extent of distributional shift, we created additional test sets with different scale factors, from 1/5 to 1. Figure 3 shows ResNet and Equ_Scal-ResNet prediction RMSEs (left) and ESEs (right) on the test sets upscaled by different factors. We observed that Equ_Scal-ResNet is very robust across various scaling factors, while ResNet does not generalize.

We also compare ResNet and Equ-ResNet when both the train and test sets have random transformations from the relevant symmetry group applied to each sample. This mimics real-world data in which each sample has an unknown reference frame. As shown in Table 3, Equ-ResNet outperforms ResNet on average by 34% on RMSE and 40% on ESE.

Table 3: Performance comparison on transformed train and test sets.

            RMSE       ESE
ResNet      1.03±0.05  0.96±0.10
Equ_UM      0.69±0.01  0.35±0.13
ResNet      1.50±0.02  0.55±0.11
Equ_Mag     0.75±0.04  0.39±0.02
ResNet      1.18±0.05  1.21±0.04
Equ_Rot     0.77±0.01  0.68±0.01
ResNet      0.92±0.01  1.34±0.07
Equ_Scal    0.74±0.03  1.02±0.02

Figure 3: Left: Prediction RMSE and ESE over five runs of ResNet and Equ_Scal-ResNet on the Rayleigh-Bénard convection test set upscaled by different factors. Right: The ground truth and predicted ocean currents $\|w\|_2$ by ResNet and four Equ-ResNets on the test set of future time.

5.3 Experiments on Real-World Ocean Dynamics

Data Description. We use the reanalysis ocean current velocity data generated by the NEMO ocean engine [32].¹ We selected an area from each of the Atlantic, Indian and North Pacific Oceans from 01/01/2016 to 08/18/2017 and extracted 64x64 sub-regions for our experiments. The corresponding latitude and longitude ranges for the selected regions are (-44~-23, 25~46), (55~76, -39~-18) and (-174~-153, 5~26) respectively. We not only test all models on the future data, but also on a different domain (-180~-159, -40~-59) in the South Pacific Ocean from 01/01/2016 to 12/15/2016.

¹The data are available at https://resources.marine.copernicus.eu/?option=com_csw&view=details&product_id=GLOBAL_ANALYSIS_FORECAST_PHY_001_024

Prediction Performance. Table 4 shows the RMSE and ESE of ResNets (U-nets) and the equivariant Equ-ResNets (U-nets) on the test sets with a different time range and spatial domain from the training set. All the equivariant models outperform the non-equivariant baseline on RMSE, and Equ_Scal-ResNet achieves the lowest RMSE. For ESE, only Equ_Mag-ResNet (U-net) is worse than the baseline. Also, it is remarkable that the Equ_Rot models have significantly lower ESE than the others, suggesting that they correctly learn the statistical distribution of ocean currents.
By comparison, equivariant architectures do not have this issue.Table 4: Prediction RMSE and ESE com-parison on the two ocean currents test sets.RMSE ESETest time Test domain Test time Test domainResNet 0.710.07 0.720.04 0.830.06 0.750.11Augm UM0.700.01 0.700.07 1.060.06 1.060.04Augm Mag0.760.02 0.710.01 1.080.08 1.050.8Augm Rot0.730.01 0.690.01 0.940.01 0.860.01Augm Scal0.970.06 0.920.04 0.850.03 0.950.11Equ UM 0.680.06 0.680.16 0.750.06 0.730.08Equ Mag 0.660.14 0.680.11 0.840.04 0.850.14Equ Rot 0.690.01 0.700.08 0.430.15 0.280.20Equ Scal 0.630.02 0.680.21 0.440.05 0.420.12U-net 0.700.13 0.730.10 0.770.12 0.730.07Augm UM0.680.02 0.680.01 0.850.04 0.830.04Augm Mag0.690.02 0.670.10 0.780.03 0.860.02Augm Rot0.790.01 0.700.01 0.790.01 0.780.02Augm Scal0.710.01 0.770.02 0.840.01 0.770.02Equ UM 0.660.10 0.670.03 0.730.03 0.820.13Equ Mag 0.630.08 0.660.09 0.740.05 0.790.04Equ Rot 0.680.05 0.690.02 0.420.02 0.470.07Equ Scal 0.650.09 0.690.05 0.450.13 0.430.05Figure 3 shows the ground truth and the predictedocean currents at time step 1;5;10by different mod-els. We can see that equivariant models’ predictionsare more accurate and contain more details than thebaselines. Thus, incorporating symmetry into deeplearning models can improve the prediction accuracyof ocean currents. The most recent work on this datasetis de Bezenac et al. [15], which combines a warpingscheme and a U-net to predict temperature. Sinceour models can also be applied to advection-diffusionsystems, we also investigated the task of ocean temper-ature field predictions. We observe that Equ UM-Unetperforms slightly better than de Bezenac et al. [15]. Foradditional results, see Appendix E.6 C ONCLUSION AND FUTURE WORKWe develop methods to improve the generalization ofdeep sequence models for learning physical dynamics.We incorporate various symmetries by designing equiv-ariant neural networks and demonstrate their superiorperformance on 2D time series prediction both theoreti-cally and experimentally. Our designs obtain improved physical consistency for predictions. In thecase of transformed test data, our models generalize significantly better than their non-equivariantcounterparts. Importantly, all of our equivariant models can be combined and can be extendedto 3D cases. The group Galso acts on the boundary conditions and external forces of a systemD. If these are G-invariant, then the system Dis strictly invariant as in Section 2.3. If not, onemust consider a family of solutions [g2GSol(gD)to retain equivariance. To the best of our bestknowledge, there does not exist a single model with equivariance to the full symmetry group ofthe Navier-Stokes equations. It is possible but non-trivial, and we continue to work on combiningdifferent equivariances. Future work also includes speeding up the the scale-equivariant models andincorporating other symmetries into DL models.ACKNOWLEDGMENTSThis work was supported in part by Google Faculty Research Award, NSF Grant #2037745, and theU. S. Army Research Office under Grant W911NF-20-1-0334. The Titan Xp used for this researchwas donated by the NVIDIA Corporation. This research used resources of the National EnergyResearch Scientific Computing Center, a DOE Office of Science User Facility supported by the Officeof Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. We alsothank Dragos Bogdan Chirila for providing the turbulent flow data.9Published as a conference paper at ICLR 2021
lO8ZHqy_NFp
Novelty in modeling physical dynamics with symmetry
6: Marginally above acceptance threshold
This paper studies improving the modeling of physical dynamics with equivariant neural networks. In particular, this paper focuses on a new type of data governed by physical models. Several special symmetry groups are considered to better characterize the system, including uniform motion equivariance, resolution-independent scaling, and resolution-dependent scaling, etc. Simulation results show that the proposed equivariant model yields better accuracy and physical consistency than the non-equivariant models even with data augmentation, given that the type of distributional shift is known. Results on the real-world data show some of the equivariant models can generalize better than the non-equivariant models. Pros * The idea of using equivariant networks in physical dynamics seems well-motivated. In cases where global alignment is difficult and the distributional shift is unknown, improving generalization by incorporating known symmetries seems to be a natural idea. * Although the idea of equivariant networks has been proposed before, the proposed treatments tailored to modeling physical dynamics are new. Cons * It is claimed the data is governed by the differential equation, which has several symmetry properties. However, how the "ResNet and U-net" networks are used to solve the dynamics prediction problem is missing from the main text. Maybe due to the same reason, the connections to the differential equations are unclear. This paper is not quite self-contained. * The content is targeted to a narrow audience. Questions: - Is data augmentation available as a baseline for experiments in Table 3? - It seems different kinds of symmetries are incorporated separately - not sure if this is a limitation. If a system is known to satisfy multiple symmetries, is it possible to incorporate all of them together in a network?
2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper
pdsec2YIOCx
ICLR.cc/2021/Conference
2021
Untangle: Critiquing Disentangled Recommendations
["Preksha Nema", "Alexandros Karatzoglou", "Filip Radlinski"]
The core principle behind most collaborative filtering methods is to embed users and items in latent spaces, where individual dimensions are learned independently of any particular item attributes. It is thus difficult for users to control their recommendations based on particular aspects (critiquing). In this work, we propose Untangle: a recommendation model that gives users control over the recommendation list with respect to specific item attributes (e.g., less violent, funnier movies) that have a causal relationship in user preferences. Untangle uses a refined training procedure with (i) a (partially) supervised β-VAE that disentangles the item representations and (ii) a second phase which is optimized to generate recommendations for users. Untangle gives users control to critique recommendations based on their preferences, without sacrificing recommendation accuracy. Moreover, only a tiny fraction of labeled items is needed to create disentangled preference representations over attributes.
["Disentangling", "Recommender Systems", "VAE", "Critiquing", "Explainability"]
Under review as a conference paper at ICLR 2021

Untangle: Critiquing Disentangled Recommendations

Anonymous authors
Paper under double-blind review

Abstract

The core principle behind most collaborative filtering methods is to embed users and items in latent spaces, where individual dimensions are learned independently of any particular item attributes. It is thus difficult for users to control their recommendations based on particular aspects (critiquing). In this work, we propose Untangle: a recommendation model that gives users control over the recommendation list with respect to specific item attributes (e.g., less violent, funnier movies) that have a causal relationship in user preferences. Untangle uses a refined training procedure with (i) a (partially) supervised β-VAE that disentangles the item representations and (ii) a second phase which is optimized to generate recommendations for users. Untangle gives users control to critique recommendations based on their preferences, without sacrificing recommendation accuracy. Moreover, only a tiny fraction of labeled items is needed to create disentangled preference representations over attributes.

1 Introduction

Figure 1: The Untangle model is trained in two phases. Disentangling phase: the input to the encoder is a one-hot representation of an item (green dotted line); the obtained representation is disentangled across A attributes. Recommendation phase: the input to the encoder is the set of items the user interacted with (solid red line), and the model recommends new items.

User and item representations form the basis of typical collaborative filtering recommendation models. These representations can be learned through various techniques such as Matrix Factorization (1; 2), or are constructed dynamically during inference, e.g. the hidden state of RNNs in session-based recommendations (3; 4).

As most standard recommendation models solely aim at increasing the performance of the system, no special care is taken to ensure interpretability of the user and item representations. These representations do not explicitly encode user preferences over item attributes. Hence, they cannot be easily used by users to change, a.k.a. critique (5), the recommendations. For instance, a user in a recipe recommendation system cannot ask for recommendations for a set of less spicy recipes, as the spiciness is not explicitly encoded in the latent space. Moreover, the explainability of the recommendations that are provided by such systems is very limited.

In this work, we enrich a state-of-the-art recommendation model to explicitly encode preferences over item attributes in the user latent space while simultaneously optimizing for recommendation performance. Our work is motivated by disentangled representations in other domains, e.g., manipulating generative models of images with specific characteristics (6) or text with certain attributes (7). Variational Autoencoders (VAEs), particularly β-VAEs (8) (which we adapt here), are generally used to learn these disentangled representations. Intuitively, they optimize embeddings to capture meaningful aspects of users and items independently. Consequently, such embeddings will be more usable for critiquing.

There are two types of disentangling β-VAEs: unsupervised and supervised. In the former, the representations are disentangled to explanatory factors of variation in an unsupervised manner, i.e., without assuming additional information on the existence (or not) of specific aspects.
Used in the original β-VAE (8) approach, a lack of supervision often results in inconsistency and instability in disentangled representations (9). In contrast, in supervised disentangling, a small subset of data is assumed to have side-information (i.e. a label or a tag). This small subset is then used to disentangle into meaningful factors (10; 9). As critiquing requires user control using familiar terms/attributes, we incorporate supervised disentanglement in a β-VAE architecture in this work.

To achieve the explicit encoding of preferences over item attributes in embedding space, we refine the training strategy of the Untangle model. We essentially train in two phases: i) Disentangling phase: we explicitly disentangle item representations, using very few supervised labels. ii) Recommendation phase: we encode the user, using the bag-of-words representation of the items interacted with, and then generate the list of recommended items. Untangle gives fine-grained control over the recommendations across various item attributes, as compared to the baseline. We achieve this with a tiny fraction of attribute labels over items, and moreover achieve comparable recommendation performance compared to state-of-the-art baselines.

2 Related Work

Deep learning based autoencoder architectures are routinely used in collaborative filtering and recommendation models (11; 12; 13). In particular, (11; 12) adopt denoising autoencoder architectures, whereas (13) uses variational autoencoders. The internal (hidden) representations generated by the encoders in these models are not interpretable and hence cannot be used for critiquing or explanations in recommendations.

Recent work on Variational Autoencoders across domains has focused on the task of generating disentangled representations. One of the first approaches used to that end was the β-VAE (8; 14; 15), which essentially enforced a stronger KL divergence constraint on the VAE objective (multiplying that term with β > 1). Such representations are more controllable and interpretable as compared to VAEs.

One of the drawbacks of the β-VAE is that the disentanglement of the factors cannot be controlled, and that the factors are relatively unstable and not easy to reproduce, particularly when the factors of variance are subtle (9; 8; 14; 16; 17). This has motivated methods that explicitly supervise the disentangling (10), which rely either on selecting a good set of disentangled representations using multiple runs and the label information (18), or on adding a supervised loss function to the β-VAE objective (10). As supervised disentangling methods are better in explainability and can provide control over desired attributes, we motivate our model from (19) for better critiquing in VAE-based recommendation systems.

In recommender systems, similar methods that utilize side information have also been used recently to build models that enable critiquing of recommendations. These models allow users to tune the recommendations across some provided attributes/dimensions. Notable examples are (20; 21), where the models are augmented with a classifier of the features over which to control the recommendation. Adjusting the features at the output of the classifier modifies the internal hidden state of the model and leads to recommendations that exhibit or do not exhibit the requested attribute. Note that this method of critiquing is quite different from our approach, which allows for a gradual adjustment of the attributes.
Moreover, the models in (20; 21) require a fully labeled dataset with respect to the attributes, while our approach only requires a small fraction of labeled data.

Unsupervised disentanglement was also recently used to identify and potentially use factors of variation from purely collaborative data, i.e., data generated by user interactions with items (22). Note, though, that this method's focus was mainly on the performance of the recommendations, and it does not allow for seamless critiquing, as it is not clear which aspects of the data get disentangled.

3 Untangle

The aim of the Untangle model is to obtain controllable user (and item) representations for better critiquing, while also optimizing for recommendation performance. To this end, we incorporate a simple supervised disentanglement technique to disentangle across the item attributes/characteristics over which we want to provide explicit control to the users.

We index users with u ∈ {1, ..., n}, and items with i ∈ {1, ..., m}. X ∈ R^{n×m} is a matrix of user-item interactions (x_ui = 1 if user u interacted with item i, and 0 otherwise). A subset of items is assumed to have binary labels for attributes A.

Our model is a modified β-VAE architecture, with a feed-forward network based encoder and decoder. In Figure 1, user u is represented by [z:c]. Note that : stands for concatenation; the z part of the representation is non-interpretable by default, while onto the c part of the representation we map (through a refined learning step) the attributes of the items over which we would like the user to have control. Each dimension in c is mapped to only one attribute a. Across the paper, we refer to the dimension associated with the attribute a as c_a. The user representation is sampled from the distribution parameterized by the encoder (q_φ): q_φ(x_u) = N(μ_φ(x_u), diag(σ_φ(x_u))). The input to the encoder is the bag-of-words representation of the items u interacted with, i.e. the u-th row of matrix X, x_u. The decoder generates the probability distribution over the m items given the user representation [z:c], π([z:c]) ∝ exp(f_dec([z:c])). The likelihood function used in recommender system settings (3; 23; 24; 25) is typically the multinomial likelihood:

log p(x_u | [z:c]) = Σ_i x_ui log π_i([z:c])
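To make the architecture concrete, here is a minimal sketch of the encoder/decoder and the multinomial log-likelihood described above, assuming a PyTorch implementation. Layer sizes and the 32-dimensional latent follow Appendix B; all class and variable names are illustrative, not the authors' code.

```python
# Sketch of the Untangle encoder/decoder: hidden sizes [600, 200] / [200, 600]
# and a 32-dim latent [z:c] as in Appendix B; names are our assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Untangle(nn.Module):
    def __init__(self, n_items, latent_dim=32, n_attrs=4):
        super().__init__()
        self.n_attrs = n_attrs  # size of c: one dimension per attribute a
        self.encoder = nn.Sequential(
            nn.Linear(n_items, 600), nn.ReLU(),
            nn.Linear(600, 200), nn.ReLU(),
        )
        self.to_mu = nn.Linear(200, latent_dim)
        self.to_logvar = nn.Linear(200, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 200), nn.ReLU(),
            nn.Linear(200, 600), nn.ReLU(),
            nn.Linear(600, n_items),  # f_dec; a softmax over its output gives pi([z:c])
        )

    def forward(self, x):
        # x: a bag-of-words row x_u (or a one-hot item 1_i), shape (batch, n_items)
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        std = torch.exp(0.5 * logvar)
        zc = mu + std * torch.randn_like(std)  # reparameterized sample of [z:c]
        return self.decoder(zc), mu, logvar, zc

def multinomial_ll(logits, x):
    # log p(x_u | [z:c]) = sum_i x_ui * log pi_i([z:c])
    return (F.log_softmax(logits, dim=-1) * x).sum(-1)
```

Here the c block is taken to be the last `n_attrs` coordinates of the sampled code, matching the [z:c] concatenation above.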
3.1 Learning

Training is conducted in two phases, a Recommendation phase and a Disentangle phase, as described in Algorithm 1.

Recommendation Phase: The objective in this phase is to optimize the encoder, parameterized by φ, and the decoder, parameterized by θ, to generate personalized recommendations. We train our model with the following objective:

L(x_u; φ, θ) ≡ E_{q_φ([z:c] | x_u)}[log p_θ(x_u | [z:c])] − β · KL(q_φ([z:c] | x_u) || p([z:c]))   (1)

Intuitively, this is the negative reconstruction error minus the Kullback-Leibler divergence enforcing the posterior distribution of z to be close to the Gaussian prior p(z). The KL divergence in the β-VAE is computed between the representation sampled from the encoder and the normal distribution p(z) = N(0, I_d). The diagonal covariance matrix enforces a degree of independence among the individual factors of the representation. Consequently, increasing the weight of the KL divergence term with β > 1 boosts the feature-independence criterion, leading to disentangled representations. This ensures that even in the recommendation phase, the learnt user representations are nudged towards disentanglement.

Disentanglement Phase: Attribute information is commonly available for items rather than users. In this phase, we therefore first obtain the item representation in the user latent space (as depicted in the highlighted green box in Figure 1): we pass the one-hot encoding of an item through the encoder and obtain its representation in the latent user space. We then disentangle the obtained representation using the following objective:

L(1_i; φ, θ) ≡ E_{q_φ([z:c] | 1_i)}[log p_θ(1_i | [z:c])] − β · KL(q_φ([z:c] | 1_i) || p([z:c])) + E_{q_φ(c | 1_i)} l(q_φ(c | 1_i), a)   (2)

Algorithm 1: Untangle: Training
Data: X ∈ R^{n×m} containing user-item interactions, with a subset of items having labels for A attributes
1: initialize model params.: Encoder(φ), Decoder(θ)
2: do
3:   if is_disentangle then   // Disentangle representations
4:     1_i ← random mini-batch from the set of items that are labelled with attributes A
5:     [z:c] ← sample from N(μ_φ(1_i), diag(σ_φ(1_i)))
6:     x̃_i ← Decoder_θ([z:c])
7:     compute gradients ∇_φ L, ∇_θ L using Objective 2
8:     φ ← φ + ∇_φ L
9:     θ ← θ + ∇_θ L
10:  end
11:  if is_recommend then   // Recommend items
12:    x_u ← random mini-batch from the dataset
13:    [z:c] ← sample from N(μ_φ(x_u), diag(σ_φ(x_u)))
14:    x̃_u ← Decoder_θ([z:c])
15:    compute gradients ∇_φ L, ∇_θ L using Objective 1
16:    φ ← φ + ∇_φ L
17:    θ ← θ + ∇_θ L
18:  end
19: while the model has not converged

As in (10), we modify the β-VAE objective (Objective 1) to incorporate a classification loss over the factors c over which we disentangle. This loss penalizes discrepancies between the attribute-label prediction for factor c_a and the label a of interest, nudging the disentanglement for each attribute to happen over the corresponding factor c_a.
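As a rough illustration of Algorithm 1's alternating updates, the following sketch reuses the `Untangle` module and `multinomial_ll` helper from the previous snippet. The binary-cross-entropy instantiation of the classification loss l(q(c|1_i), a) and the `cls_weight` scaling it are our assumptions (the extracted text does not name that weight), and `beta` is simply one value from the grid in Appendix B.

```python
# Sketch of one alternating training step (Algorithm 1); builds on the Untangle
# module and multinomial_ll defined above. Loss weights are assumptions.
import torch
import torch.nn.functional as F

def kl_term(mu, logvar):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dimensions
    return -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)

def train_step(model, opt, x_u, one_hot_i, attr_labels, beta=10.0, cls_weight=50.0):
    # Disentangling phase (Objective 2): labelled items as one-hot inputs.
    logits, mu, logvar, zc = model(one_hot_i)
    c = zc[:, -model.n_attrs:]              # assumed layout: c = last |A| dims of [z:c]
    cls_loss = F.binary_cross_entropy_with_logits(
        c, attr_labels, reduction="none").sum(-1)
    loss_d = (-multinomial_ll(logits, one_hot_i)
              + beta * kl_term(mu, logvar)
              + cls_weight * cls_loss).mean()
    opt.zero_grad(); loss_d.backward(); opt.step()

    # Recommendation phase (Objective 1): user histories as bag-of-words inputs.
    logits, mu, logvar, _ = model(x_u)
    loss_r = (-multinomial_ll(logits, x_u) + beta * kl_term(mu, logvar)).mean()
    opt.zero_grad(); loss_r.backward(); opt.step()
```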
We select attributes with the corresponding number of books where the attribute waspresent {horror:1080, humor:9318, mystery:3589, and romance:1399} and {adventure:8162,horror:5518, humor:8314, mystery:5194, romance:7508, sci-fi:7928}, for Goodreads-(Childrenand Comics) respectively.4Under review as a conference paper at ICLR 2021Cluster LabelTaggedTags included in clustermoviesaction 1,167 action, fight-scenes, special-effectsfunny 1,219 comedy, funny, goofy, very funnyromantic 975destiny, feel-good, love story, romanticsad 1,488 bleak, intimate, loneliness, melancholic, reflective, sadsuspense 1,070 betrayal, murder, secrets, suspense, tense, twist-and-turnsviolence 1,297 brutality, cult classic, vengeance, violence, violentTable 1: Each cluster was manually assigned a human-readable label. Some of the tagspresent in each cluster are listed in column 3. Column 2 lists the number of movies that hadhigh relevance score for tags in each cluster.5 Evaluation MetricsWe evaluate Untangle on these criteria: i) quality of items recommended, ii) extent ofdisentanglement, iii) control/critiquing based on the disentangled representations.Ranking Based Metrics: We evaluate the quality of items recommended using tworanking-based metrics: Recall@k and normalized discounted cumulative gain NDCG@k. Thelatter is rank sensitive, whereas Recall@k considers each relevant item in the top-k equally.Recall @k:=Pki=1I[item[i]2S]min(k;jSj)DCG @k:=kXi=12I[item[i]2S]1log(i+ 1)NDCG is normalized DCG by dividing it by the largest possible DCG@k.Disentanglement Metrics: We use the Disentanglement , andCompleteness metricsintroduced in ( 28).Disentanglement measures the extent to which each dimension capturesat most one attribute. E.g., if a dimension captures all attributes, the Disentanglement scorewill be 0. We compute importance pajofathattribute on jthdimension of [z:c]2Rd, withGradient Boosted Trees as given in ( 9). Using the pajscores, the disentanglement score isdefined as:HjAj(Pj) =jAj1Xa=0pajlogjAjpaj; Dj= (1HjAj(Pj))D=d1Xj=0jDj; j=PjAj1a=0pajPd1j=0PjAj1a=0pajWe compute entropy HjAj(Pj)) forjthdimension. Disentanglement score for dimension jisthen 1entropy. The final disentanglement score of the system is weighted average of Djacross all the dimensions d, wherejthe dimension’s relative importance. Completeness :Measures the extent to which one attribute ais encoded in a single dimension of [z:c]. Fora latent representation of 16 dimensions and 2 attributes, if 8 dimensions encode attributea1and the other 8 encode a2, then the Disentanglement will be 1butCompleteness will be0:25. Completeness is defined as:Hd(Pa) =d1Xj=0pajlogdpaj; Ca= (1Hd(Pa))C=jAj1Xa=0aCa; a=Pd1j=0pajPjAj1a=0Pd1j=0pajController Metric: We propose a simple metric to quantify the extent of controldisen-tangled dimension cahas on recommendations by critiquing attribute a. With supervised-disentanglement, the mapping between dimensions cin the latent representations, and theattributes across which we disentangled is known. The features in these dimensions in callow the user to control/critique the respective attribute in the generated recommendations.For instance, less violence can be achieved by reducing the corresponding dimension value5Under review as a conference paper at ICLR 2021Dataset ModelRecommendation Performance Disentanglement PerformanceN@100 R@20 R@50 Disent. Comp. 
Controller Metric: We propose a simple metric to quantify the extent of control that a disentangled dimension c_a has on the recommendations when critiquing attribute a. With supervised disentanglement, the mapping between the dimensions c of the latent representation and the attributes across which we disentangled is known. The features in these dimensions of c allow the user to control/critique the respective attribute in the generated recommendations. For instance, less violence can be achieved by reducing the corresponding dimension value (violence) in c. We evaluate this by probing whether the items where the attribute is present (S_a) are ranked higher when the dimension value c_a is increased by a factor of g in the user representation. We extract the items recommended by the decoder (I_a(g)) for the new user representation where only c_a is rescaled, c_a ← g · c_a. We compare I_a(g) against S_a using any ranking-based metric described above. We further vary g over a given range [−G, G] and study whether the ranking of S_a improves. The Controller-Metric is defined as follows:

Controller_Metric(k, g) := | Recall@k(I_a(G), S_a) − Recall@k(I_a(−G), S_a) | / Recall@k(I_a(−G), S_a)   (3)

To compute the Controller-Metric for a system, we take the median across all the attributes disentangled in c. Note that the metric value depends on k and the range chosen.

6 Results and Discussions

Recommendation and Disentanglement Performance: We train the Untangle model with the parameter settings mentioned in Appendix B. We compare Untangle with the Multi-DAE and Multi-VAE models (13). We also compare our model with a stronger baseline for disentanglement, the β-VAE, which disentangles the representation in an unsupervised way. We present our results in Table 2. Note that the supervised disentanglement in Table 2 has been trained with 300 (1%), 1030 (5%), 1500 (5%) and 1550 (5%) labelled items for Movielens-(1m, 20m) and Goodreads-(Children, Comics), respectively.

Table 2: Recommendation and disentanglement performance on the Movielens-(1m, 20m) and Goodreads-(Comics, Children) datasets, on the corresponding test splits.

Dataset       Model       N@100    R@20     R@50     Disent.  Comp.    Controller Metric
ML-1m         Multi-DAE   0.38782  0.31636  0.43404  0.317    0.214    0.961
              Multi-VAE   0.39252  0.32515  0.44757  0.306    0.200    0.947
              β-VAE       0.38658  0.31216  0.43032  0.313    0.0211   0.924
              Untangle    0.37833  0.30079  0.42532  0.543    0.393    19.27
ML-20m        Multi-DAE   0.39738  0.37071  0.50847  0.265    0.182    0.88
              Multi-VAE   0.39827  0.37212  0.50946  0.246    0.167    3.53
              β-VAE       0.38724  0.35617  0.48976  0.211    0.142    3.27
              Untangle    0.40320  0.37367  0.51303  0.677    0.529    75.11
GR-Comics     Multi-DAE   0.42593  0.42602  0.52610  0.243    0.175    0.963
              Multi-VAE   0.45159  0.45697  0.55598  0.173    0.137    0.872
              β-VAE       0.44366  0.44949  0.55226  0.192    0.146    0.847
              Untangle    0.43597  0.43981  0.54218  0.733    0.536    73.41
GR-Children   Multi-DAE   0.40030  0.43240  0.56473  0.145    0.132    2.37
              Multi-VAE   0.40219  0.43057  0.56695  0.164    0.132    0.86
              β-VAE       0.40219  0.43057  0.56695  0.139    0.103    0.92
              Untangle    0.41255  0.44490  0.58473  0.517    0.574    14.37

We observe that our proposed model's performance on the ranking-based metrics (Recall@k and NDCG@k) is comparable to the baselines across all datasets. Thus we show that disentangling the latent representation does not hurt recommendation performance. We also quantify the disentanglement using the Disentanglement and Completeness metrics discussed in Section 5. We infer from Table 2 that the disentanglement achieved by Untangle is significantly higher than that of the baselines across all datasets. Disentangling with a tiny fraction of labeled items leads to a significant gain in disentanglement compared to the β-VAE.

We also evaluate the extent of the controllability of the disentangled representations. To this end, we compute the Controller Metric, which measures the control exerted by varying the attribute dimension c_a. We use the multiplicative range [−150, +150] to amplify c_a, and measure the ranking performance using recall@10 across this range; the rest of the representation remains unchanged. We observe significantly higher controllability for the Untangle model compared to the baseline approaches, especially for the Movielens-20m and Goodreads-Comics datasets. By reducing c_a we can diminish the presence of items with attribute a in the recommendation list, and by gradually increasing the magnitude of c_a we can increase the presence of items with this attribute, up to saturation.
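A sketch of the Controller-Metric of Eq. (3) for a single user code and attribute, reusing the model sketched earlier; ranking by raw decoder logits and the recall helper are our simplifications, not the paper's exact pipeline.

```python
# Controller_Metric(k, G) = |R@k(I_a(G)) - R@k(I_a(-G))| / R@k(I_a(-G))  (Eq. 3)
import torch

def recall_at_k(ranked_items, relevant, k):
    topk = ranked_items[:k]
    return len(set(topk.tolist()) & set(relevant)) / min(k, len(relevant))

def controller_metric(model, zc, attr_dim, relevant_items, k=10, G=150.0):
    # zc: a single user's latent code [z:c]; relevant_items: the set S_a
    def recall_after_scaling(g):
        zc_g = zc.clone()
        zc_g[attr_dim] = zc_g[attr_dim] * g  # scale only c_a; the rest stays fixed
        ranked = model.decoder(zc_g).argsort(descending=True)
        return recall_at_k(ranked, relevant_items, k)
    r_pos, r_neg = recall_after_scaling(+G), recall_after_scaling(-G)
    return abs(r_pos - r_neg) / max(r_neg, 1e-12)
```

A full evaluation would take the median of this quantity over all disentangled attributes, as the text specifies.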
Figure 2: Control over the recommendations when the factor value c_a is adjusted by a multiplicative factor g ∈ [−150, 150]. Recommendation lists are evaluated by recall@(5, 10, 20); relevance is determined by the presence of attribute a in the retrieved items. We compare Multi-VAE (top, panels a-d) with the Untangle model (bottom, panels e-h) for the sad, romantic, suspense and violence attributes on ML-20m.

Figure 3: The same analysis as in Figure 2 for the adventure, sci-fi, mystery and humor attributes on Goodreads-Comics, comparing Multi-VAE (top) with the Untangle model (bottom).

Critiquing Recommendations: The primary aim of our model is to obtain controllable representations for critiquing. With the Controller Metric we quantified controllability; here we further analyze the incremental impact of changing the attribute dimension. In this analysis, we visualize the effect on the recommendations of adjusting the disentangled factor c_a for each attribute a. We multiply the factor by g in Figure 2 and Figure 3 for the baseline model Multi-VAE and for Untangle. Note that for the baseline (Multi-VAE), we adjust the dimension that has the highest feature-importance score for attribute a, computed using a Gradient Boosting Classifier.

For the movies domain (Figure 2), we observe that for Multi-VAE (row 1) the variation in c_a has no clear correlation with the recommendation performance, in terms of the presence or absence of items with this attribute. In contrast to Multi-VAE, in the Untangle model we consistently observe a significant and gradual variation across all the explicitly disentangled attributes A. Even for subtle attributes like suspense, we obtain a complete range of recall@10 from 0.0 to 1.0. We observe similar results for the Goodreads-Comics dataset (Figure 3), where we again get a gradual and significant change (of approximately 1) across all the disentangled attributes.

Figure 4: Correlation between the learnt dimension value c_a and the true relevance score across 500 movies for Movielens-20m.

Correlation between Relevance Scores and c_a: We have observed that disentangling item representations leads to fine-grained control for critiquing. We further verify whether the achieved controllability is an outcome of a high correlation between the factor c_a and the true relevance score across movies for attribute a, on the Movielens-20m dataset. We randomly sample 500 movies and obtain their latent representations from the encoder. In Figure 4, we plot the obtained c_a value against the true relevance score for the attribute action. We can infer from Figure 4 that the representations obtained from Untangle have a high Pearson correlation of 0.53, as compared to the Multi-VAE model (Pearson correlation: −0.03). The graphs for the other attributes/tags are presented in Appendix C.
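The critiquing flow analyzed above can be illustrated as follows, again against the earlier model sketch: encode a user, rescale only the chosen coordinate of c, and re-rank. The attribute-to-index mapping, the masking of already-seen items, and the default g are assumptions for the example.

```python
# Illustrative critiquing flow: dampen one attribute coordinate and re-rank.
def critique(model, x_u, attr_dim, g=0.25, k=20):
    _, mu, _, _ = model(x_u.unsqueeze(0))   # use the posterior mean as the user's [z:c]
    zc = mu.squeeze(0).clone()
    zc[attr_dim] = zc[attr_dim] * g         # g < 1 plays the attribute down, g > 1 up
    scores = model.decoder(zc)
    scores[x_u > 0] = -float("inf")         # drop items the user already interacted with
    return scores.topk(k).indices           # revised top-k after the critique
```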
Figure 5: Variation in the Disentanglement and Completeness metrics when the model is trained with fewer labels, for Movielens-20m and GoodReads-Comics.

Fewer Labels for Disentanglement: One of the advantages of Untangle is that it disentangles with very few labels. We train Untangle with fewer labeled items; each point in Figure 5 is an average across 5 different runs with different random seeds. For Movielens-20m, just 1% of attribute labels yields a disentanglement score of 0.51, which gradually increases up to 0.92 when trained with all labels. For Goodreads-Comics, with 1% of the books labelled we are able to achieve 0.52 disentanglement, which gradually increases to 0.93 when the model is trained with all the labels. Note that even with 1% labelled items, the disentanglement and completeness scores obtained are significantly higher than those of the β-VAE model: 0.21 and 0.19 on Movielens-20m and Goodreads-Comics, respectively.

Controllable Attributes: With the above analysis, we have established that Untangle leads to controllable representations. In this experiment, we check whether the controllability is restricted to the chosen set of attributes. We therefore apply Untangle to a larger set of tags for the Movielens-20m dataset. We cluster all the 1,181 tags present in the dataset into 50 clusters using K-Means clustering; the clustering strategy is similar to the one described in Section 4. We then evaluate the controllability for each clustered tag b. We explicitly encode the corresponding clustered tag b using Untangle, using 5% of labelled items. The controller-metric score is obtained for each tag, across 5 runs. In each run, we sub-sample four clustered tags out of 40 to be disentangled along with the corresponding clustered tag b. This is done to model the impact of disentangling a given attribute alongside other attributes present in the dataset. We find that, across the 40 clustered tags, we obtain a controller-metric score of > 11.0 for over 21 tags. Some of the attributes which do not have a high controller-metric score include: 80s, crappy, philosophical, etc. These attributes are also unlikely to be critiqued by a user. Some of the most controllable and least controllable tags are listed in Appendix D.

7 Conclusion

Untangle achieves the goals we set: it provides control and critiquing over the user recommendations across a set of predefined item attributes. It does so without sacrificing recommendation quality, and it only needs a small fraction of labeled items.

References

[1] Y. Koren, "Factorization meets the neighborhood: A multifaceted collaborative filtering model," in Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '08, (New York, NY, USA), p. 426–434, Association for Computing Machinery, 2008.
[2] Y. Koren, R. Bell, and C. Volinsky, "Matrix factorization techniques for recommender systems," Computer, vol. 42, pp. 30–37, Aug 2009.
[3] B. Hidasi and A. Karatzoglou, "Recurrent neural networks with top-k gains for session-based recommendations," in Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM '18, (New York, NY, USA), p. 843–852, Association for Computing Machinery, 2018.
[4] C.-Y. Wu, A. Ahmed, A. Beutel, A. J. Smola, and H. Jing, "Recurrent recommender networks," in Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, WSDM '17, (New York, NY, USA), p.
495–503, Association for Computing Machinery, 2017.
[5] L. Chen and P. Pu, "Critiquing-based recommenders: survey and emerging trends," User Modeling and User-Adapted Interaction, vol. 22, no. 1, pp. 125–150, 2012.
[6] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel, "InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets," in NIPS, pp. 2172–2180, 2016.
[7] Z. Hu, Z. Yang, X. Liang, R. Salakhutdinov, and E. P. Xing, "Toward controlled generation of text," in ICML, vol. 70 of Proceedings of Machine Learning Research, pp. 1587–1596, PMLR, 2017.
[8] I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner, "beta-VAE: Learning basic visual concepts with a constrained variational framework," in ICLR (Poster), OpenReview.net, 2017.
[9] F. Locatello, S. Bauer, M. Lucic, G. Raetsch, S. Gelly, B. Schölkopf, and O. Bachem, "Challenging common assumptions in the unsupervised learning of disentangled representations," in Proceedings of the 36th International Conference on Machine Learning (ICML), vol. 97 of Proceedings of Machine Learning Research, pp. 4114–4124, PMLR, June 2019.
[10] F. Locatello, M. Tschannen, S. Bauer, G. Rätsch, B. Schölkopf, and O. Bachem, "Disentangling factors of variation using few labels," 2019.
[11] S. Sedhain, A. K. Menon, S. Sanner, and L. Xie, "AutoRec: Autoencoders meet collaborative filtering," in Proceedings of the 24th International Conference on World Wide Web, WWW '15 Companion, (New York, NY, USA), p. 111–112, Association for Computing Machinery, 2015.
[12] Y. Wu, C. DuBois, A. X. Zheng, and M. Ester, "Collaborative denoising auto-encoders for top-n recommender systems," in Proceedings of the Ninth ACM International Conference on Web Search and Data Mining, WSDM '16, (New York, NY, USA), p. 153–162, Association for Computing Machinery, 2016.
[13] D. Liang, R. G. Krishnan, M. D. Hoffman, and T. Jebara, "Variational autoencoders for collaborative filtering," in Proceedings of the 2018 World Wide Web Conference, WWW '18, (Republic and Canton of Geneva, CHE), p. 689–698, International World Wide Web Conferences Steering Committee, 2018.
[14] C. P. Burgess, I. Higgins, A. Pal, L. Matthey, N. Watters, G. Desjardins, and A. Lerchner, "Understanding disentangling in β-VAE," ArXiv, vol. abs/1804.03599, 2018.
[15] S. Van Steenkiste, F. Locatello, J. Schmidhuber, and O. Bachem, "Are disentangled representations helpful for abstract visual reasoning?," in Advances in Neural Information Processing Systems 32 (H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, eds.), pp. 14222–14235, Curran Associates, Inc., 2019.
[16] H. Kim and A. Mnih, "Disentangling by factorising," in Proceedings of the 35th International Conference on Machine Learning (J. Dy and A. Krause, eds.), vol. 80 of Proceedings of Machine Learning Research, (Stockholmsmässan, Stockholm, Sweden), pp. 2649–2658, PMLR, 10–15 Jul 2018.
[17] P. K. Rubenstein, B. Schölkopf, and I. Tolstikhin, "Learning disentangled representations with Wasserstein auto-encoders," in Workshop at the 6th International Conference on Learning Representations (ICLR), May 2018.
[18] S. Duan, L. Matthey, A. Saraiva, N. Watters, C. Burgess, A. Lerchner, and I. Higgins, "Unsupervised model selection for variational disentangled representation learning," in International Conference on Learning Representations, 2020.
[19] G. Lample, N. Zeghidour, N. Usunier, A. Bordes, L. Denoyer, and M. A.
Ranzato, "Fader networks: manipulating images by sliding attributes," in Advances in Neural Information Processing Systems 30 (I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, eds.), pp. 5967–5976, Curran Associates, Inc., 2017.
[20] G. Wu, K. Luo, S. Sanner, and H. Soh, "Deep language-based critiquing for recommender systems," in Proceedings of the 13th ACM Conference on Recommender Systems, RecSys '19, (New York, NY, USA), p. 137–145, Association for Computing Machinery, 2019.
[21] K. Luo, S. Sanner, G. Wu, H. Li, and H. Yang, "Latent linear critiquing for conversational recommender systems," in Proceedings of The Web Conference 2020, WWW '20, (New York, NY, USA), p. 2535–2541, Association for Computing Machinery, 2020.
[22] J. Ma, C. Zhou, P. Cui, H. Yang, and W. Zhu, "Learning disentangled representations for recommendation," in Advances in Neural Information Processing Systems 32 (H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, eds.), pp. 5711–5722, Curran Associates, Inc., 2019.
[23] B. Hidasi, A. Karatzoglou, L. Baltrunas, and D. Tikk, "Session-based recommendations with recurrent neural networks," in International Conference on Learning Representations, ICLR '16, 2016.
[24] H. Steck, "Gaussian ranking by matrix factorization," in Proceedings of the 9th ACM Conference on Recommender Systems, RecSys '15, (New York, NY, USA), p. 115–122, Association for Computing Machinery, 2015.
[25] E. Smirnova and F. Vasile, "Contextual sequence modeling for recommendation with recurrent neural networks," in Proceedings of the 2nd Workshop on Deep Learning for Recommender Systems, DLRS 2017, (New York, NY, USA), p. 2–9, Association for Computing Machinery, 2017.
[26] F. M. Harper and J. A. Konstan, "The MovieLens datasets: History and context," ACM Transactions on Interactive Intelligent Systems (TiiS), vol. 5, no. 4, pp. 1–19, 2015.
[27] M. Wan and J. J. McAuley, "Item recommendation on monotonic behavior chains," in RecSys, pp. 86–94, ACM, 2018.
[28] C. Eastwood and C. K. Williams, "A framework for the quantitative evaluation of disentangled representations," 2018.

A Dataset Statistics

We list the number of interactions, users, and items for the Movielens and Goodreads datasets in Table 3.

Table 3: Dataset statistics (after performing all filtering). The sparsity rate indicates the fraction of cells in the complete user-item matrix with a known value.

Dataset              Number of Interactions   Number of Users   Number of Items   Sparsity Rate
Movielens-1m         1,000,209                6,040             3,706             4.468 %
Movielens-20m        9,990,682                136,677           20,720            0.353 %
Goodreads-Children   3,371,518                92,993            33,635            0.108 %
Goodreads-Comics     2,705,538                57,405            32,541            0.145 %

B Implementation Details

We divide the set of users into train, validation and test splits. The validation and test splits each consist of 10% of the users, across all datasets. For each user in the validation and test splits, we use only 80% of the items rated by them to learn the user representation; the remaining 20% are used to evaluate the model's performance. This strategy is similar to the one used by (13). For all the experiments, the user's latent representation is restricted to 32 dimensions. The encoder and decoder consist of two layers with [600, 200] and [200, 600] hidden units respectively, each with ReLU activation. We conduct hyper-parameter tuning to identify the β value and the weight of the classification loss from [5, 10, 50] and [5, 10, 50, 500], respectively. The threshold M used to identify movies where an attribute is present is taken as 0.5 for Movielens-20m and 0.4 for MovieLens-1m. All the models are run for up to 50 epochs. We select the best model based on its performance on the validation dataset, for both NDCG@100 and the Disentanglement score. We select less than 5% of items for the supervised β-VAE using stratified sampling.
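A minimal sketch of the per-user fold-in evaluation split described in Appendix B; the dict-of-item-lists input format, the seed handling and the function name are assumptions.

```python
# Per held-out user: 80% of their items feed the encoder (fold-in), the
# remaining 20% are held out for computing Recall@k / NDCG@k.
import numpy as np

def fold_in_split(user_items, frac=0.8, seed=0):
    rng = np.random.default_rng(seed)
    fold_in, held_out = {}, {}
    for u, items in user_items.items():
        items = rng.permutation(items)
        cut = int(frac * len(items))
        fold_in[u], held_out[u] = items[:cut].tolist(), items[cut:].tolist()
    return fold_in, held_out
```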
C Correlation between the dimension value c_a and true relevance scores across items

We compare the dimension value c_a associated with an attribute a to the true relevance scores present in the Movielens-20m dataset. We show in Figure 6 that, across all the tags, the correlation is consistently higher for Untangle when compared to Multi-VAE.

D Controllable Attributes

Using Untangle, we identify the clustered tags which are more controllable for revising user recommendations. We list some of the most controllable and least controllable tags in Table 4, together with the absolute recall difference obtained for each cluster.

Table 4: Most controllable and least controllable tags obtained from Untangle.

Recall Difference   Tags in the cluster

10 Most Controllable Attributes
0.75933   action packed, adventure, big budget, cool, dynamic cgi action, exciting, fast paced, fighting, franchise, plot holes, series
0.75924   atmospheric, bleak, character study, downbeat, forceful, grim, masterpiece, movielens top pick, powerful ending, tense, visceral
0.75461   corruption, intense, murder, police investigation, secrets, suspense, suspenseful, thriller, twists & turns
0.75246   beautiful scenery, betrayal, childhood, earnest, excellent, excellent script, exceptional acting, friendship, good acting, great movie, honest, idealism, justice, light, moral ambiguity, original plot, oscar, oscar winner, sacrifice, unlikely friendships, very good, witty
0.72529   classic, cult classic, gunfight, highly quotable, quotable
0.72285   comedy, funny, hilarious, humorous, very funny
0.7144    afi 100 (movie quotes), oscar (best actor), oscar (best cinematography), oscar (best picture), oscar (best supporting actor)
0.70965   adapted from:book, based on a book, based on book, books
0.61973   future, futuristic, sci fi, sci-fi, science fiction, scifi, special effects, technology
0.59895   goofy, silly, silly fun

10 Least Controllable Attributes
0.24986   erotic, sex, sexual, sexuality
0.24014   adolescence, bullying, coming of age, coming-of-age, high school, school, teacher, teen, teen movie, teenager, teenagers, teens
0.23056   anti-semitism, anti-war, best war films, bombs, civil war, fascism, genocide, german, germany, historical, history, holocaust, jewish, jews, military, nazi, nazis, poland, russian, war, war movie, wartime, world war i, world war ii, wwii
0.17843   broadway, dance, dancing, great music, hip hop, lyrical, music, music business, musical, musicians, rock and roll
0.17675   adapted from:comic, based on a comic, based on comic, comic, comic book, comics, graphic novel, mutants, super hero, super-hero, superhero, superheroes, vigilante
0.1112    business, capitalism, controversial, documentary, factual, freedom, islam, journalism, oil, political, politics, propaganda, revolution, us history, world politics
0.08376   1970s, anti-hero, awesome soundtrack, california, crime, drugs, gangs, good music, great soundtrack, gritty, nostalgic, small town
0.06328   assassination, black comedy, brainwashing, censorship, cynical, distopia, fighting the system, guilt, hotel, identity, intellectual, intelligent, ironic, manipulation, morality, off-beat comedy, oscar (best writing - screenplay written directly for the screen), paranoid, philosophical, philosophy, surveillance, thought-provoking
0.0432    mentor, original
0.00691   80s, awful, bad, bad acting, bad cgi, boring, camp, campy, cheesy, disaster, dumb, dumb but funny, horrible, idiotic, lame, mad scientist, nudity (topless), remake, ridiculous, stupid, stupid as hell, stupidity
Figure 6: We compare Multi-VAE (top) with the Untangle model (bottom) for the correlation between the factor c_a and the true relevance scores, for the romantic, sad, suspense and violence attributes.
rrGBiv4NUZN
An interesting paper but below the bar
4: Ok but not good enough - rejection
Summary: The paper proposes a framework to learn disentangled representations for collaborative filtering systems. To model the user-item interactions, the authors adopt the likelihood model proposed by β-Multi-VAE. The auxiliary task of predicting item labels is considered to increase the disentanglement and the ability to control the recommendations. Practical performance and the properties of disentanglement are demonstrated in experiments on real data sets. Pros: 1. The paper focuses on a novel and important area for recommender systems. Learning disentangled representations might help obtain a model that is useful yet interpretable and controllable. 2. Consistent and superior disentanglement performance on all datasets. 3. The choice of evaluation metrics is complete and the results are presented coherently. Cons: 1. While neglected by most of the literature, using disentangled representations for recommendation is not entirely new. Therefore this paper needs a stronger baseline (e.g. [https://papers.nips.cc/paper/7174-learning-disentangled-representations-with-semi-supervised-deep-generative-models]) for disentanglement learning models besides the β-Multi-VAE. 2. The idea of utilizing external knowledge or labels in recommendation is not new, and it's not uncommon for such models to tolerate missing contextual attributes. [https://dl.acm.org/doi/10.1145/3097983.3098094] How does the proposed model compare with these baselines? 3. The selection of cut-offs for the ranking metrics is not very consistent. The superiority of the VAE models is sensitive to this choice, as pointed out in [https://dl.acm.org/doi/10.1145/3298689.3347058]. 4. The setting of the controller-metric experiment seems to be trivial or unfair. The multiplier is applied to the dimension c_a, which is only defined for models trained with attribute labels. 5. The presentation of the model is confusing. 1) What is 1_i in equation (2)? Is it a bag-of-words representation of randomly sampled items? If it is, does it make sense to ask the model to approximate this random sample? And if it is not, then why is it used in the multinomial likelihood? 2) How does the model train? Does it train each phase at each iteration? Or does it first train the disentangle phase and then the recommendation phase?
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Untangle: Critiquing Disentangled Recommendations ### Paper Abstract The core principle behind most collaborative filtering methods is to embed users and items in latent spaces, where individual dimensions are learned independently of any particular item attributes. It is thus difficult for users to control their recommendations based on particular aspects (critiquing). In this work, we propose Untangle: a recommendation model that gives users control over the recommendation list with respect to specific item attributes (e.g., less violent, funnier movies) that have a causal relationship in user preferences. Untangle uses a refined training procedure with (i) a (partially) supervised β-VAE that disentangles the item representations and (ii) a second phase which is optimized to generate recommendations for users. Untangle gives users control to critique recommendations based on their preferences, without sacrificing recommendation accuracy. Moreover, only a tiny fraction of labeled items is needed to create disentangled preference representations over attributes. ### Paper Keywords ["Disentangling", "Recommender Systems", "VAE", "Critiquing", "Explainability"] ### Paper Content Under review as a conference paper at ICLR 2021

Untangle: Critiquing Disentangled Recommendations

Anonymous authors
Paper under double-blind review

Abstract

The core principle behind most collaborative filtering methods is to embed users and items in latent spaces, where individual dimensions are learned independently of any particular item attributes. It is thus difficult for users to control their recommendations based on particular aspects (critiquing). In this work, we propose Untangle: a recommendation model that gives users control over the recommendation list with respect to specific item attributes (e.g., less violent, funnier movies) that have a causal relationship in user preferences. Untangle uses a refined training procedure with (i) a (partially) supervised β-VAE that disentangles the item representations and (ii) a second phase which is optimized to generate recommendations for users. Untangle gives users control to critique recommendations based on their preferences, without sacrificing recommendation accuracy. Moreover, only a tiny fraction of labeled items is needed to create disentangled preference representations over attributes.

1 Introduction

Figure 1: The Untangle model is trained in two phases. Disentangling phase: the input to the encoder is a one-hot representation of an item (green dotted line); the obtained representation is disentangled across A attributes. Recommendation phase: the input to the encoder is the set of items the user interacted with (solid red line), and the model recommends new items.

User and item representations form the basis of typical collaborative filtering recommendation models. These representations can be learned through various techniques such as Matrix Factorization (1; 2), or are constructed dynamically during inference, e.g. the hidden state of RNNs in session-based recommendations (3; 4).

As most standard recommendation models solely aim at increasing the performance of the system, no special care is taken to ensure interpretability of the user and item representations. These representations do not explicitly encode user preferences over item attributes. Hence, they cannot be easily used by users to change, a.k.a. critique (5), the recommendations.
For instance, a user in a recipe recommendation system cannot ask for recommendations for a set of less spicy recipes, as the spiciness is not explicitly encoded in the latent space. Moreover, the explainability of the recommendations that are provided by such systems is very limited.

In this work, we enrich a state-of-the-art recommendation model to explicitly encode preferences over item attributes in the user latent space while simultaneously optimizing for recommendation performance. Our work is motivated by disentangled representations in other domains, e.g., manipulating generative models of images with specific characteristics (6) or text with certain attributes (7). Variational Autoencoders (VAEs), particularly β-VAEs (8) (which we adapt here), are generally used to learn these disentangled representations. Intuitively, they optimize embeddings to capture meaningful aspects of users and items independently. Consequently, such embeddings will be more usable for critiquing.

There are two types of disentangling β-VAEs: unsupervised and supervised. In the former, the representations are disentangled to explanatory factors of variation in an unsupervised manner, i.e., without assuming additional information on the existence (or not) of specific aspects. Used in the original β-VAE (8) approach, a lack of supervision often results in inconsistency and instability in disentangled representations (9). In contrast, in supervised disentangling, a small subset of data is assumed to have side-information (i.e. a label or a tag). This small subset is then used to disentangle into meaningful factors (10; 9). As critiquing requires user control using familiar terms/attributes, we incorporate supervised disentanglement in a β-VAE architecture in this work.

To achieve the explicit encoding of preferences over item attributes in embedding space, we refine the training strategy of the Untangle model. We essentially train in two phases: i) Disentangling phase: we explicitly disentangle item representations, using very few supervised labels. ii) Recommendation phase: we encode the user, using the bag-of-words representation of the items interacted with, and then generate the list of recommended items. Untangle gives fine-grained control over the recommendations across various item attributes, as compared to the baseline. We achieve this with a tiny fraction of attribute labels over items, and moreover achieve comparable recommendation performance compared to state-of-the-art baselines.

2 Related Work

Deep learning based autoencoder architectures are routinely used in collaborative filtering and recommendation models (11; 12; 13). In particular, (11; 12) adopt denoising autoencoder architectures, whereas (13) uses variational autoencoders. The internal (hidden) representations generated by the encoders in these models are not interpretable and hence cannot be used for critiquing or explanations in recommendations.

Recent work on Variational Autoencoders across domains has focused on the task of generating disentangled representations. One of the first approaches used to that end was the β-VAE (8; 14; 15), which essentially enforced a stronger KL divergence constraint on the VAE objective (multiplying that term with β > 1).
Such representations are more controllable and interpretable as compared to VAEs.

One of the drawbacks of the β-VAE is that the disentanglement of the factors cannot be controlled, and that the factors are relatively unstable and not easy to reproduce, particularly when the factors of variance are subtle (9; 8; 14; 16; 17). This has motivated methods that explicitly supervise the disentangling (10), which rely either on selecting a good set of disentangled representations using multiple runs and the label information (18), or on adding a supervised loss function to the β-VAE objective (10). As supervised disentangling methods are better in explainability and can provide control over desired attributes, we motivate our model from (19) for better critiquing in VAE-based recommendation systems.

In recommender systems, similar methods that utilize side information have also been used recently to build models that enable critiquing of recommendations. These models allow users to tune the recommendations across some provided attributes/dimensions. Notable examples are (20; 21), where the models are augmented with a classifier of the features over which to control the recommendation. Adjusting the features at the output of the classifier modifies the internal hidden state of the model and leads to recommendations that exhibit or do not exhibit the requested attribute. Note that this method of critiquing is quite different from our approach, which allows for a gradual adjustment of the attributes. Moreover, the models in (20; 21) require a fully labeled dataset with respect to the attributes, while our approach only requires a small fraction of labeled data.

Unsupervised disentanglement was also recently used to identify and potentially use factors of variation from purely collaborative data, i.e., data generated by user interactions with items (22). Note, though, that this method's focus was mainly on the performance of the recommendations, and it does not allow for seamless critiquing, as it is not clear which aspects of the data get disentangled.

3 Untangle

The aim of the Untangle model is to obtain controllable user (and item) representations for better critiquing, while also optimizing for recommendation performance. To this end, we incorporate a simple supervised disentanglement technique to disentangle across the item attributes/characteristics over which we want to provide explicit control to the users.

We index users with u ∈ {1, ..., n}, and items with i ∈ {1, ..., m}. X ∈ R^{n×m} is a matrix of user-item interactions (x_ui = 1 if user u interacted with item i, and 0 otherwise). A subset of items is assumed to have binary labels for attributes A.

Our model is a modified β-VAE architecture, with a feed-forward network based encoder and decoder. In Figure 1, user u is represented by [z:c]. Note that : stands for concatenation; the z part of the representation is non-interpretable by default, while onto the c part of the representation we map (through a refined learning step) the attributes of the items over which we would like the user to have control. Each dimension in c is mapped to only one attribute a. Across the paper, we refer to the dimension associated with the attribute a as c_a. The user representation is sampled from the distribution parameterized by the encoder (q_φ): q_φ(x_u) = N(μ_φ(x_u), diag(σ_φ(x_u))). The input to the encoder is the bag-of-words representation of the items u interacted with, i.e. the u-th row of matrix X, x_u.
The decoder generates the probability distribution over the m items given the user representation [z:c], π([z:c]) ∝ exp(f_dec([z:c])). The likelihood function used in recommender system settings (3; 23; 24; 25) is typically the multinomial likelihood:

log p(x_u | [z:c]) = Σ_i x_ui log π_i([z:c])

3.1 Learning

Training is conducted in two phases, a Recommendation phase and a Disentangle phase, as described in Algorithm 1.

Recommendation Phase: The objective in this phase is to optimize the encoder, parameterized by φ, and the decoder, parameterized by θ, to generate personalized recommendations. We train our model with the following objective:

L(x_u; φ, θ) ≡ E_{q_φ([z:c] | x_u)}[log p_θ(x_u | [z:c])] − β · KL(q_φ([z:c] | x_u) || p([z:c]))   (1)

Intuitively, this is the negative reconstruction error minus the Kullback-Leibler divergence enforcing the posterior distribution of z to be close to the Gaussian prior p(z). The KL divergence in the β-VAE is computed between the representation sampled from the encoder and the normal distribution p(z) = N(0, I_d). The diagonal covariance matrix enforces a degree of independence among the individual factors of the representation. Consequently, increasing the weight of the KL divergence term with β > 1 boosts the feature-independence criterion, leading to disentangled representations. This ensures that even in the recommendation phase, the learnt user representations are nudged towards disentanglement.

Disentanglement Phase: Attribute information is commonly available for items rather than users. In this phase, we therefore first obtain the item representation in the user latent space (as depicted in the highlighted green box in Figure 1): we pass the one-hot encoding of an item through the encoder and obtain its representation in the latent user space. We then disentangle the obtained representation using the following objective:

L(1_i; φ, θ) ≡ E_{q_φ([z:c] | 1_i)}[log p_θ(1_i | [z:c])] − β · KL(q_φ([z:c] | 1_i) || p([z:c])) + E_{q_φ(c | 1_i)} l(q_φ(c | 1_i), a)   (2)

Algorithm 1: Untangle: Training
Data: X ∈ R^{n×m} containing user-item interactions, with a subset of items having labels for A attributes
1: initialize model params.: Encoder(φ), Decoder(θ)
2: do
3:   if is_disentangle then   // Disentangle representations
4:     1_i ← random mini-batch from the set of items that are labelled with attributes A
5:     [z:c] ← sample from N(μ_φ(1_i), diag(σ_φ(1_i)))
6:     x̃_i ← Decoder_θ([z:c])
7:     compute gradients ∇_φ L, ∇_θ L using Objective 2
8:     φ ← φ + ∇_φ L
9:     θ ← θ + ∇_θ L
10:  end
11:  if is_recommend then   // Recommend items
12:    x_u ← random mini-batch from the dataset
13:    [z:c] ← sample from N(μ_φ(x_u), diag(σ_φ(x_u)))
14:    x̃_u ← Decoder_θ([z:c])
15:    compute gradients ∇_φ L, ∇_θ L using Objective 1
16:    φ ← φ + ∇_φ L
17:    θ ← θ + ∇_θ L
18:  end
19: while the model has not converged

As in (10), we modify the β-VAE objective (Objective 1) to incorporate a classification loss over the factors c over which we disentangle. This loss penalizes discrepancies between the attribute-label prediction for factor c_a and the label a of interest, nudging the disentanglement for each attribute to happen over the corresponding factor c_a.

4 Datasets

Movielens Dataset: We use the Movielens-1m and Movielens-20m datasets (26), which contain 1 million and 20 million user-movie interactions, respectively. For the latter, we filter out movies with fewer than 5 ratings and users who rated fewer than 10 movies. We utilize the relevance scores given in the Movielens dataset for 10,381 movies across 1,000 different tags to select attributes for disentangling. E.g., the movie Mission Impossible has a high relevance score (0.79) for the action tag. We take the top 100 tags, based on the mean relevance score across all movies. Among these 100 tags, some tag pairs, like (funny and very funny), are by definition entangled.
Therefore, to identify distinct tags, we cluster these 100 tags (each tag is a vector in R^{10381} of per-movie relevance scores) into 20 clusters using K-Means clustering. Finally, we select a subset from these 20 clusters, as given in Table 1, for disentangling. We assign the new clustered tag (as given in Table 1, Column 1) if the average relevance score (the mean of the relevance scores for the tags present in the corresponding cluster) is higher than 0.5.

Goodreads Dataset: The GoodReads dataset (27) contains user-book interactions for different genres. We use the Children and Comics genres to evaluate our model. We filter out items rated fewer than 5 times and users who rated fewer than 10 books. The final statistics are given in Appendix A. We extract the tags for disentangling from the user-generated shelf names, e.g., historical-fiction, to-read. We retrieve the top 100 shelf names. Some tags (like "books-i-have") are not useful to revise recommendations. Therefore, we only consider item attributes that all the authors consider informative for critiquing recommendations. We select a subset for disentangling from this set, as it still contains correlated attributes like historical-fiction and fiction. We select attributes, with the corresponding number of books where the attribute was present, of {horror: 1080, humor: 9318, mystery: 3589, romance: 1399} and {adventure: 8162, horror: 5518, humor: 8314, mystery: 5194, romance: 7508, sci-fi: 7928} for Goodreads-Children and Goodreads-Comics, respectively.

Table 1: Each cluster was manually assigned a human-readable label. Some of the tags present in each cluster are listed in column 3. Column 2 lists the number of movies that had a high relevance score for tags in each cluster.

Cluster Label   Tagged movies   Tags included in cluster
action          1,167           action, fight-scenes, special-effects
funny           1,219           comedy, funny, goofy, very funny
romantic        975             destiny, feel-good, love story, romantic
sad             1,488           bleak, intimate, loneliness, melancholic, reflective, sad
suspense        1,070           betrayal, murder, secrets, suspense, tense, twist-and-turns
violence        1,297           brutality, cult classic, vengeance, violence, violent

5 Evaluation Metrics

We evaluate Untangle on these criteria: i) quality of the items recommended, ii) extent of disentanglement, iii) control/critiquing based on the disentangled representations.

Ranking Based Metrics: We evaluate the quality of the items recommended using two ranking-based metrics: Recall@k and normalized discounted cumulative gain (NDCG@k). The latter is rank sensitive, whereas Recall@k considers each relevant item in the top-k equally.

Recall@k := ( Σ_{i=1}^{k} I[item[i] ∈ S] ) / min(k, |S|)

DCG@k := Σ_{i=1}^{k} ( 2^{I[item[i] ∈ S]} − 1 ) / log(i + 1)

NDCG@k normalizes DCG@k by dividing it by the largest possible DCG@k.

Disentanglement Metrics: We use the Disentanglement and Completeness metrics introduced in (28). Disentanglement measures the extent to which each dimension captures at most one attribute. E.g., if a dimension captures all attributes, the Disentanglement score will be 0. We compute the importance p_aj of the a-th attribute on the j-th dimension of [z:c] ∈ R^d with Gradient Boosted Trees, as in (9). Using the p_aj scores, the disentanglement score is defined as:

H_{|A|}(P_j) = − Σ_{a=0}^{|A|−1} p_aj log_{|A|} p_aj,    D_j = 1 − H_{|A|}(P_j)

D = Σ_{j=0}^{d−1} ρ_j D_j,    ρ_j = ( Σ_{a=0}^{|A|−1} p_aj ) / ( Σ_{j=0}^{d−1} Σ_{a=0}^{|A|−1} p_aj )

We compute the entropy H_{|A|}(P_j) for the j-th dimension; the disentanglement score for dimension j is then 1 − entropy. The final disentanglement score of the system is the weighted average of D_j across all d dimensions, where ρ_j is the dimension's relative importance.

Completeness: Measures the extent to which one attribute a is encoded in a single dimension of [z:c].
Completeness: Measures the extent to which one attribute a is encoded in a single dimension of [z:c]. For a latent representation of 16 dimensions and 2 attributes, if 8 dimensions encode attribute a_1 and the other 8 encode a_2, then the Disentanglement will be 1 but the Completeness will be 0.25. Completeness is defined as:

H_d(P_a) = − Σ_{j=0}^{d−1} p_aj log_d p_aj ;   C_a = 1 − H_d(P_a)
C = Σ_{a=0}^{|A|−1} ρ_a C_a ,   ρ_a = Σ_{j=0}^{d−1} p_aj / Σ_{a=0}^{|A|−1} Σ_{j=0}^{d−1} p_aj

Controller Metric: We propose a simple metric to quantify the extent of control a disentangled dimension c_a has on recommendations by critiquing attribute a. With supervised disentanglement, the mapping between the dimensions c in the latent representation and the attributes across which we disentangled is known. The features in these dimensions in c allow the user to control/critique the respective attribute in the generated recommendations. For instance, less violence can be achieved by reducing the corresponding dimension value (violence) in c. We evaluate this by probing whether the items where the attribute is present (S_a) are ranked higher when the dimension value c_a is increased by a factor of g in the user representation. We extract the items recommended from the decoder, I_a(g), for the new user representation where only c_a is multiplied: c_a ← g · c_a. We compare I_a(g) against S_a using any ranking-based metric described above. We further vary g over a given range [−G, G] and study whether the ranking of S_a improves. The Controller-Metric is defined as follows:

Controller_Metric(k, g) := | Recall@k(I_a(G), S_a) − Recall@k(I_a(−G), S_a) | / Recall@k(I_a(−G), S_a)   (3)

To compute the Controller-Metric for a system, we take the median across all the attributes disentangled in c. Note that the metric value depends on k and the range chosen.
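A sketch of Eq. (3), assuming helper callables recommend(user_repr) returning a ranked item list and recall_at_k(items, S_a, k) as defined above; c_idx is the latent index of factor c_a and S_a the set of items carrying attribute a. These names and the numpy interface are assumptions for illustration.

import numpy as np

def controller_metric(user_repr, c_idx, S_a, recommend, recall_at_k, k=10, G=150.0):
    # scale only the disentangled factor c_a; the rest of [z:c] stays fixed
    def recall_for(g):
        v = np.array(user_repr, copy=True)
        v[c_idx] = g * user_repr[c_idx]
        return recall_at_k(recommend(v), S_a, k)
    r_neg, r_pos = recall_for(-G), recall_for(+G)
    return abs(r_pos - r_neg) / r_neg  # eq. (3); assumes r_neg > 0

The system-level number reported later (e.g. in Table 2) would then be the median of this value over the disentangled attributes.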
6 Results and Discussions

Recommendation and Disentanglement Performance. We train the Untangle model with the parameter settings mentioned in Appendix B. We compare Untangle with the Multi-DAE and Multi-VAE models (13). We also compare our model with a stronger baseline for disentanglement, β-VAE, which disentangles the representation in an unsupervised way. We present our results in Table 2. Note that the supervised disentanglement in Table 2 has been trained with 300 (1%), 1030 (5%), 1500 (5%), and 1550 (5%) labelled items for Movielens-(1m, 20m) and Goodreads-(Children, Comics) respectively. We observe that our proposed model's performance on ranking-based metrics (Recall@k and NDCG@k) is comparable to the baselines across all datasets.

Table 2: Recommendation and Disentanglement performance on the Movielens-(1m, 20m) and Goodreads-(Comics, Children) datasets on the corresponding test splits. N@100 = NDCG@100, R@k = Recall@k, Disent. = Disentanglement, Comp. = Completeness.

Dataset | Model | N@100 | R@20 | R@50 | Disent. | Comp. | Controller Metric
ML-1m | Multi-DAE | 0.38782 | 0.31636 | 0.43404 | 0.317 | 0.214 | 0.961
ML-1m | Multi-VAE | 0.39252 | 0.32515 | 0.44757 | 0.306 | 0.200 | 0.947
ML-1m | β-VAE | 0.38658 | 0.31216 | 0.43032 | 0.313 | 0.0211 | 0.924
ML-1m | Untangle | 0.37833 | 0.30079 | 0.42532 | 0.543 | 0.393 | 19.27
ML-20m | Multi-DAE | 0.39738 | 0.37071 | 0.50847 | 0.265 | 0.182 | 0.88
ML-20m | Multi-VAE | 0.39827 | 0.37212 | 0.50946 | 0.246 | 0.167 | 3.53
ML-20m | β-VAE | 0.38724 | 0.35617 | 0.48976 | 0.211 | 0.142 | 3.27
ML-20m | Untangle | 0.40320 | 0.37367 | 0.51303 | 0.677 | 0.529 | 75.11
GR-Comics | Multi-DAE | 0.42593 | 0.42602 | 0.52610 | 0.243 | 0.175 | 0.963
GR-Comics | Multi-VAE | 0.45159 | 0.45697 | 0.55598 | 0.173 | 0.137 | 0.872
GR-Comics | β-VAE | 0.44366 | 0.44949 | 0.55226 | 0.192 | 0.146 | 0.847
GR-Comics | Untangle | 0.43597 | 0.43981 | 0.54218 | 0.733 | 0.536 | 73.41
GR-Children | Multi-DAE | 0.40030 | 0.43240 | 0.56473 | 0.145 | 0.132 | 2.37
GR-Children | Multi-VAE | 0.40219 | 0.43057 | 0.56695 | 0.164 | 0.132 | 0.86
GR-Children | β-VAE | 0.40219 | 0.43057 | 0.56695 | 0.139 | 0.103 | 0.92
GR-Children | Untangle | 0.41255 | 0.44490 | 0.58473 | 0.517 | 0.574 | 14.37

Thus we show that disentangling the latent representation does not impact the recommendation performance. We also quantify the disentanglement using the Disentanglement and Completeness metrics discussed in Section 5. We infer from Table 2 that the disentanglement achieved is significantly higher than that of the baselines across all datasets. Disentangling with a tiny fraction of labeled items leads to a significant gain in disentanglement compared to β-VAE.

We evaluate the extent of controllability of the disentangled representations. To this end, we compute the Controller Metric, which measures the control that varying the attribute dimension c_a provides. We use the multiplicative range [−150, +150] to amplify c_a, and measure the ranking performance using Recall@10 across this range. Note that the rest of the representation remains unchanged. We observe significantly higher controllability for the Untangle model compared to the baseline approaches, especially for the Movielens-20m and Goodreads-Comics datasets. By reducing c_a we can diminish the presence of items with attribute a in the recommendation list, and by gradually increasing the magnitude of c_a we can increase the presence of items with this attribute in the recommendation list up to saturation.

Figure 2 (panels (a)-(d) MultiVAE and (e)-(h) Untangle, for sad, romantic, suspense, violence): Control over recommendations when the factor value c_a is adjusted by a multiplicative factor g ∈ [−150, 150]. Recommendation lists are evaluated by Recall@(5, 10, 20). Relevance is determined by the presence of attribute a in the retrieved items. We compare Multi-VAE (top) with the Untangle model (bottom) for sad, romantic, suspense and violence on ML-20m.

Figure 3 (panels (a)-(d) MultiVAE and (e)-(h) Untangle, for adventure, sci-fi, mystery, humor): We compare Multi-VAE (top) with the Untangle model (bottom) for the adventure, sci-fi, mystery and humor attributes on Goodreads-Comics, for the same analysis as in Figure 2.

Critiquing Recommendations. The primary aim of our model is to obtain controllable representations for critiquing. With the Controller Metric we quantify controllability; here we further analyze the incremental impact of changing the attribute dimension. In this analysis, we visualize the effect on the recommendations of adjusting the disentangled factor c_a for each attribute a. We multiply the factor by g in Figure 2 and Figure 3 for the baseline model Multi-VAE and for Untangle. Note that for the baseline (Multi-VAE), we adjust the dimension that has the highest feature-importance score, computed using a Gradient Boosting Classifier for attribute a. For the movies domain (Figure 2), we observe that for Multi-VAE (top row) the variation in c_a has no clear correlation with the recommendation performance in terms of the presence or absence of items with this attribute. In contrast to Multi-VAE, in the Untangle model we consistently observe a significant and gradual variation across all the explicitly disentangled attributes A.
Even for subtle attributes like suspense, we obtain a complete range of Recall@10 from 0.0 to 1.0. We observe similar results for the Goodreads-Comics dataset (Figure 3), where we again get a gradual and significant change (of approximately 1) across all the disentangled attributes.

Correlation between Relevance Scores and c_a: We observe that disentangling across item representations leads to fine-grained control for critiquing. We further verify whether the achieved controllability is an outcome of a high correlation between the factor c_a and the true relevance score across movies for attribute a on the Movielens-20m dataset. We randomly sample 500 movies and obtain their latent representations from the encoder. In Figure 4, we plot the obtained c_a value against the true relevance score for the attribute action. We can infer from Figure 4 that the representations obtained from Untangle have a high Pearson correlation of 0.53, as compared to the Multi-VAE model (Pearson correlation: −0.03). The graphs for other attributes/tags are presented in Appendix C.

Figure 4: Correlation between the learnt dimension value c_a and the true relevance score across 500 movies for Movielens-20m.

Fewer Labels for Disentanglement. One of the advantages of Untangle is that it disentangles with very few labels. We train Untangle with fewer labeled items. Each point in Figure 5 is an average across 5 different runs with different random seeds. For Movielens-20m, just 1% attribute labels yields a disentanglement score of 0.51, which gradually increases up to 0.92 when trained with all labels. For Goodreads-Comics, with 1% labelled books we are able to achieve 0.52 disentanglement, which gradually increases to 0.93 when the model is trained with all the labels. Note that even with 1% labelled items, the disentanglement and completeness scores obtained are significantly higher than those of the β-VAE model: 0.21 and 0.19 on Movielens-20m and Goodreads-Comics, respectively.

Figure 5: Variation in the Disentanglement and Completeness metrics when the model is trained with fewer labels, for Movielens-20m and GoodReads-Comics.

Controllable Attributes. With the above analysis, we have established that Untangle leads to controllable representations. In this experiment, we identify whether the controllability is restricted to the chosen set of attributes. Therefore, we apply Untangle to a larger set of tags for the Movielens-20m dataset. We cluster all the 1181 tags present in the dataset into 50 clusters using K-Means clustering. The clustering strategy is similar to the one mentioned in Section 4. We then evaluate the controllability for each clustered tag b. We explicitly encode the corresponding clustered tag b using Untangle, using 5% of labelled items. The controller-metric score is obtained for each tag across 5 runs. In each run, we sub-sample four clustered tags out of 40 to be disentangled along with the corresponding clustered tag b. This is done to model the impact of disentangling a given attribute alongside other attributes present in the dataset. We identify that across the 40 clustered tags, we obtain a controller-metric score of > 11.0 for over 21 tags. Some of the attributes that do not have a higher controller-metric score include: 80s, crappy, philosophical, etc. These attributes are also unlikely to be critiqued by a user. Some of the most controllable and least controllable tags are listed in Appendix D.

7 Conclusion

Untangle achieves the goals we set: it provides control and critiquing over the user recommendations over a set of predefined item attributes.
It does so without sacrificing recommendation quality and only needs a small fraction of labeled items.

References
[1] Y. Koren. Factorization meets the neighborhood: a multifaceted collaborative filtering model. KDD 2008, pp. 426–434. ACM.
[2] Y. Koren, R. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems. Computer, 42:30–37, Aug 2009.
[3] B. Hidasi and A. Karatzoglou. Recurrent neural networks with top-k gains for session-based recommendations. CIKM 2018, pp. 843–852. ACM.
[4] C.-Y. Wu, A. Ahmed, A. Beutel, A. J. Smola, and H. Jing. Recurrent recommender networks. WSDM 2017, pp. 495–503. ACM.
[5] L. Chen and P. Pu. Critiquing-based recommenders: survey and emerging trends. User Modeling and User-Adapted Interaction, 22(1):125–150, 2012.
[6] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: interpretable representation learning by information maximizing generative adversarial nets. NIPS 2016, pp. 2172–2180.
[7] Z. Hu, Z. Yang, X. Liang, R. Salakhutdinov, and E. P. Xing. Toward controlled generation of text. ICML 2017, PMLR 70, pp. 1587–1596.
[8] I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner. beta-VAE: learning basic visual concepts with a constrained variational framework. ICLR 2017 (Poster).
[9] F. Locatello, S. Bauer, M. Lucic, G. Raetsch, S. Gelly, B. Schölkopf, and O. Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. ICML 2019, PMLR 97, pp. 4114–4124.
[10] F. Locatello, M. Tschannen, S. Bauer, G. Rätsch, B. Schölkopf, and O. Bachem. Disentangling factors of variation using few labels. 2019.
[11] S. Sedhain, A. K. Menon, S. Sanner, and L. Xie. AutoRec: autoencoders meet collaborative filtering. WWW 2015 Companion, pp. 111–112. ACM.
[12] Y. Wu, C. DuBois, A. X. Zheng, and M. Ester. Collaborative denoising auto-encoders for top-n recommender systems. WSDM 2016, pp. 153–162. ACM.
[13] D. Liang, R. G. Krishnan, M. D. Hoffman, and T. Jebara. Variational autoencoders for collaborative filtering. WWW 2018, pp. 689–698.
[14] C. P. Burgess, I. Higgins, A. Pal, L. Matthey, N. Watters, G. Desjardins, and A. Lerchner. Understanding disentangling in beta-VAE. arXiv:1804.03599, 2018.
[15] S. Van Steenkiste, F. Locatello, J. Schmidhuber, and O. Bachem. Are disentangled representations helpful for abstract visual reasoning? NeurIPS 2019, pp. 14222–14235.
[16] H. Kim and A. Mnih. Disentangling by factorising. ICML 2018, PMLR 80, pp. 2649–2658.
[17] P. K. Rubenstein, B. Schölkopf, and I. Tolstikhin. Learning disentangled representations with Wasserstein auto-encoders. ICLR 2018 Workshop.
[18] S. Duan, L. Matthey, A. Saraiva, N. Watters, C. Burgess, A. Lerchner, and I. Higgins. Unsupervised model selection for variational disentangled representation learning. ICLR 2020.
[19] G. Lample, N. Zeghidour, N. Usunier, A. Bordes, L. Denoyer, and M. A. Ranzato. Fader networks: manipulating images by sliding attributes. NIPS 2017, pp. 5967–5976.
[20] G. Wu, K. Luo, S. Sanner, and H. Soh. Deep language-based critiquing for recommender systems. RecSys 2019, pp. 137–145. ACM.
[21] K. Luo, S. Sanner, G. Wu, H. Li, and H. Yang. Latent linear critiquing for conversational recommender systems. WWW 2020, pp. 2535–2541. ACM.
[22] J. Ma, C. Zhou, P. Cui, H. Yang, and W. Zhu. Learning disentangled representations for recommendation. NeurIPS 2019, pp. 5711–5722.
[23] B. Hidasi, A. Karatzoglou, L. Baltrunas, and D. Tikk. Session-based recommendations with recurrent neural networks. ICLR 2016.
[24] H. Steck. Gaussian ranking by matrix factorization. RecSys 2015, pp. 115–122. ACM.
[25] E. Smirnova and F. Vasile. Contextual sequence modeling for recommendation with recurrent neural networks. DLRS 2017, pp. 2–9. ACM.
[26] F. M. Harper and J. A. Konstan. The MovieLens datasets: history and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4):1–19, 2015.
[27] M. Wan and J. J. McAuley. Item recommendation on monotonic behavior chains. RecSys 2018, pp. 86–94. ACM.
[28] C. Eastwood and C. K. Williams. A framework for the quantitative evaluation of disentangled representations. 2018.
A Dataset Statistics

We list the number of interactions, users, and items for the Movielens and Goodreads datasets in Table 3.

Table 3: Dataset statistics (after performing all filtering). The sparsity rate indicates the fraction of cells in the complete user-item matrix with a known value.

Dataset | Interactions | Users | Items | Sparsity rate
Movielens-1m | 1,000,209 | 6,040 | 3,706 | 4.468%
Movielens-20m | 9,990,682 | 136,677 | 20,720 | 0.353%
Goodreads-Children | 3,371,518 | 92,993 | 33,635 | 0.108%
Goodreads-Comics | 2,705,538 | 57,405 | 32,541 | 0.145%

B Implementation Details

We divide the set of users into train, validation, and test splits. The validation and test splits consist of 10% of the users across all datasets. For each user in the validation and test splits, we use only 80% of the items rated by them to learn the user representation. The remaining 20% is used to evaluate the model's performance. This strategy is similar to the one used by (13). For all the experiments, the user's latent representation is restricted to 32 dimensions. The encoder and decoder consist of two layers with [600, 200] and [200, 600] hidden units respectively, each with ReLU activation. We conduct hyper-parameter tuning to identify the β and γ values from [5, 10, 50] and [5, 10, 50, 500] respectively. The threshold M to identify movies where the attribute is present is taken as 0.5 and 0.4 for Movielens-20m and MovieLens-1m respectively. All the models are run for up to 50 epochs. We select the best model based on its performance on the validation dataset for both NDCG@100 and the Disentanglement score. We select less than 5% of items for supervised β-VAE using stratified sampling.
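For concreteness, a sketch of the encoder/decoder sizes described above ([600, 200] and [200, 600] hidden units, ReLU activations, a 32-dimensional latent code). PyTorch is an assumption, and the Gaussian head returning (mu, logvar) is one common way to parameterize such an encoder, not necessarily the authors' exact layout.

import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, n_items, latent_dim=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_items, 600), nn.ReLU(),
            nn.Linear(600, 200), nn.ReLU())
        self.mu = nn.Linear(200, latent_dim)      # mean of q([z:c] | x)
        self.logvar = nn.Linear(200, latent_dim)  # log of the diagonal covariance

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

def make_decoder(n_items, latent_dim=32):
    return nn.Sequential(
        nn.Linear(latent_dim, 200), nn.ReLU(),
        nn.Linear(200, 600), nn.ReLU(),
        nn.Linear(600, n_items))  # logits over the m items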
C Correlation between dimension value c_a and true relevance scores across items

We compare the dimension value c_a associated with an attribute a to the true relevance scores present in the Movielens-20m dataset. We show in Figure 6 that across all the tags, the correlation is consistently higher for Untangle when compared to Multi-VAE.

D Controllable Attributes

Using Untangle, we identify the clustered tags which are more controllable for revising user recommendations. We list some of the most controllable and least controllable tags in Table 4, together with the absolute recall difference obtained for each cluster.

Table 4: Most controllable and least controllable tags obtained from Untangle.

Recall difference | Tags in the cluster

10 Most Controllable Attributes
0.75933 | action packed, adventure, big budget, cool, dynamic cgi action, exciting, fast paced, fighting, franchise, plot holes, series
0.75924 | atmospheric, bleak, character study, downbeat, forceful, grim, masterpiece, movielens top pick, powerful ending, tense, visceral
0.75461 | corruption, intense, murder, police investigation, secrets, suspense, suspenseful, thriller, twists & turns
0.75246 | beautiful scenery, betrayal, childhood, earnest, excellent, excellent script, exceptional acting, friendship, good acting, great movie, honest, idealism, justice, light, moral ambiguity, original plot, oscar, oscar winner, sacrifice, unlikely friendships, very good, witty
0.72529 | classic, cult classic, gunfight, highly quotable, quotable
0.72285 | comedy, funny, hilarious, humorous, very funny
0.7144 | afi 100 (movie quotes), oscar (best actor), oscar (best cinematography), oscar (best picture), oscar (best supporting actor)
0.70965 | adapted from:book, based on a book, based on book, books
0.61973 | future, futuristic, sci fi, sci-fi, science fiction, scifi, special effects, technology
0.59895 | goofy, silly, silly fun

10 Least Controllable Attributes
0.24986 | erotic, sex, sexual, sexuality
0.24014 | adolescence, bullying, coming of age, coming-of-age, high school, school, teacher, teen, teen movie, teenager, teenagers, teens
0.23056 | anti-semitism, anti-war, best war films, bombs, civil war, fascism, genocide, german, germany, historical, history, holocaust, jewish, jews, military, nazi, nazis, poland, russian, war, war movie, wartime, world war i, world war ii, wwii
0.17843 | broadway, dance, dancing, great music, hip hop, lyrical, music, music business, musical, musicians, rock and roll
0.17675 | adapted from:comic, based on a comic, based on comic, comic, comic book, comics, graphic novel, mutants, super hero, super-hero, superhero, superheroes, vigilante
0.1112 | business, capitalism, controversial, documentary, factual, freedom, islam, journalism, oil, political, politics, propaganda, revolution, us history, world politics
0.08376 | 1970s, anti-hero, awesome soundtrack, california, crime, drugs, gangs, good music, great soundtrack, gritty, nostalgic, small town
0.06328 | assassination, black comedy, brainwashing, censorship, cynical, distopia, fighting the system, guilt, hotel, identity, intellectual, intelligent, ironic, manipulation, morality, off-beat comedy, oscar (best writing - screenplay written directly for the screen), paranoid, philosophical, philosophy, surveillance, thought-provoking
0.0432 | mentor, original
0.00691 | 80s, awful, bad, bad acting, bad cgi, boring, camp, campy, cheesy, disaster, dumb, dumb but funny, horrible, idiotic, lame, mad scientist, nudity (topless), remake, ridiculous, stupid, stupid as hell, stupidity

Figure 6 (panels (a)-(d) MultiVAE and (e)-(h) Untangle, for romantic, sad, suspense, violence): We compare Multi-VAE (top) with the Untangle model (bottom) for the correlation between the factor c_a and the true relevance scores.<|im_end|> <|im_start|>assistant ### Review Title An interesting paper but below the bar ### Review Text Summary: The paper proposes a framework to learn disentangled representations for collaborative filtering systems.
To model the user-item interactions, the authors adopt the likelihood model proposed by \beta-Multi-VAE. The auxiliary task of predicting item labels is considered to increase the disentanglement and the ability to control the recommendations. Practical performance and the properties of disentanglement are demonstrated in experiments on real data sets. Pros: 1. The paper focuses on a novel and important area for recommender systems. Learning disentangled representations might help obtain a model that is useful yet interpretable and controllable. 2. Consistent and superior disentanglement performance on all datasets. 3. The choice of evaluation metrics is complete and the results are presented coherently. Cons: 1. While neglected by most literature, using disentangled representations for recommendation is not entirely new. Therefore this paper needs a stronger baseline (e.g. [https://papers.nips.cc/paper/7174-learning-disentangled-representations-with-semi-supervised-deep-generative-models]) for disentanglement learning models besides the \beta-Multi-VAE. 2. The idea of utilizing external knowledge or labels in recommendation is not new, and it is not uncommon for such models to tolerate missing contextual attributes [https://dl.acm.org/doi/10.1145/3097983.3098094]. How does the proposed model compare with these baselines? 3. The selection of cut-offs for the ranking metrics is not very consistent. The superiority of the VAE models is sensitive to this choice, as pointed out in [https://dl.acm.org/doi/10.1145/3298689.3347058]. 4. The setting of the controller-metric experiment seems to be trivial or unfair. The multiplier is applied to the dimension c_a, which is only defined for models trained with attribute labels. 5. The presentation of the model is confusing. 1) What is 1_i in equation (2)? Is it a bag-of-words representation of randomly sampled items? If it is, does it make sense to ask the model to approximate this random sample? And if it is not, then why is it used in the multinomial likelihood? 2) How does the model train? Does it train each phase at each iteration, or does it first train the disentangle phase and then the recommendation phase? ### Review Rating 4: Ok but not good enough - rejection ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
R7aFOrR0b2
ICLR.cc/2021/Conference
2021
Dataset Curation Beyond Accuracy
["Johan Bjorck", "Carla P Gomes"]
Neural networks are known to be data-hungry, and collecting large labeled datasets is often a crucial step in deep learning deployment. Researchers have studied dataset aspects such as distributional shift and labeling cost, primarily using downstream prediction accuracy for evaluation. In sensitive real-world applications such as medicine and self-driving cars, not only is the accuracy important, but also the calibration -- the extent to which model uncertainty reflects the actual correctness likelihood. It has recently been shown that modern neural networks are ill-calibrated. In this work, we take a complementary approach -- studying how dataset properties, rather than architecture, affect calibration. For the common issue of dataset imbalance, we show that calibration varies significantly among classes, even when common strategies to mitigate class imbalance are employed. We also study the effects of label quality, showing how label noise dramatically increases calibration error. Furthermore, poor calibration can come from small dataset sizes, which we motivate via results on network expressivity. Our experiments demonstrate that dataset properties can significantly affect calibration and suggest that calibration should be measured during dataset curation.
["crowd-sourcing", "calibration", "dataset", "uncertainty"]
ABSTRACT

Neural networks are known to be data-hungry, and collecting large labeled datasets is often a crucial step in deep learning deployment. Researchers have studied dataset aspects such as distributional shift and labeling cost, primarily using downstream prediction accuracy for evaluation. In sensitive real-world applications such as medicine and self-driving cars, not only is the accuracy important, but also the calibration – the extent to which model uncertainty reflects the actual correctness likelihood. It has recently been shown that modern neural networks are ill-calibrated. In this work, we take a complementary approach – studying how dataset properties, rather than architecture, affect calibration. For the common issue of dataset imbalance, we show that calibration varies significantly among classes, even when common strategies to mitigate class imbalance are employed. We also study the effects of label quality, showing how label noise dramatically increases calibration error. Furthermore, poor calibration can come from small dataset sizes, which we motivate via results on network expressivity. Our experiments demonstrate that dataset properties can significantly affect calibration and suggest that calibration should be measured during dataset curation.

1 INTRODUCTION

Neural networks often require large amounts of labeled data to perform well, making data curation a crucial but costly aspect of deployment. Thus, researchers have studied dataset properties such as distributional shift (Miller et al., 2020) and the bias in crowd-sourced computer vision datasets (Tsipras et al., 2020), among others. Often, the evaluation criterion in such studies is downstream prediction accuracy. However, neural networks are increasingly deployed in sensitive real-world applications such as medicine (Caruana et al., 2015), self-driving cars (Bojarski et al., 2016), and scientific analysis (Attia et al., 2020), where not only accuracy matters but also calibration. Calibration is the extent to which model certainty reflects the actual correctness likelihood. Calibration can be important when the costs of false positives and false negatives are asymmetric; e.g., for a deadly disease with cheap treatment, doctors might initiate treatment when the probability of being sick exceeds 10%. Beyond simple classification, calibration can be important for beam search in NLP (Ott et al., 2018) and algorithmic fairness (Pleiss et al., 2017). Calibration in machine learning has been studied by e.g. Zadrozny & Elkan (2001); Naeini et al. (2015). Niculescu-Mizil & Caruana (2005) have shown that small-scale neural networks can yield well-calibrated predictions. However, it has recently been observed by Guo et al. (2017) that modern neural networks are ill-calibrated, whereas the now primitive Lenet (LeCun et al., 1998) achieves good calibration.

In this work, we take a complementary approach; instead of focusing on network architecture, we study how calibration is influenced by dataset properties. We primarily focus on computer vision and perform extensive experiments across common benchmarks and more exotic datasets such as satellite images (the eurosat dataset (Helber et al., 2019)) and species detection (the iNaturalist dataset (Van Horn et al., 2018)). We consistently find that dataset properties can significantly affect calibration, causing effects comparable to network architecture.
For example, we consider the ubiquitous problem of class-imbalanced datasets, a common issue in practice (Van Horn et al., 2018; Krishna et al., 2017; Thomee et al., 2016). For such datasets, the miscalibration is not uniform but instead varies across the different classes. This problem persists even when common strategies to mitigate class imbalance are employed. Another practical concern is generating high-quality labels via e.g. crowdsourcing (Karger et al., 2011). We demonstrate how labeling quality affects calibration, with noisier labels resulting in worse calibration. Additionally, we show that just the size of the dataset has a strong effect on calibration. This also holds when one artificially increases the dataset size by data augmentation. We motivate our findings by considering the geometry of the cross-entropy loss and utilizing recent results on network expressivity (Yun et al., 2019). If the dataset is sufficiently small compared to the number of parameters, we argue that the lack of a minimizer for the cross-entropy loss biases the network to high confidence and poor calibration. Our results highlight an underappreciated aspect of calibration and suggest that for sensitive applications, one should measure calibration during dataset curation.

Figure 1 (panels for cifar10, cifar100, eurosat, iNaturalist; x-axis: class, y-axis: calibration error (ece)): Calibration error for individual classes under class imbalance. The classes are ordered from the most (left) to the least (right) amount of samples. Fewer samples result in larger calibration errors. Imbalance is injected in CIFAR10/100 and eurosat randomly, removing any correlation with class-specific properties. We do not modify iNaturalist, which already suffers from imbalance; thus classwise calibration is correlated with class-specific properties.

2 BACKGROUND

Calibration. Calibration has a traditional place in machine learning (Zadrozny & Elkan, 2001; Naeini et al., 2015). Before the advent of modern deep learning, Niculescu-Mizil & Caruana (2005) showed that neural networks can yield well-calibrated predictions for classification. However, Guo et al. (2017) showed that modern neural networks are ill-calibrated. Modern neural networks are modeled as e.g. resnets (He et al., 2016) or densenets (Huang et al., 2016). It is important to note that accuracy and calibration do not necessarily follow each other, but can move independently – modern neural networks are ill-calibrated, but still yield excellent accuracy. Beyond image classification, the importance of calibration in NLP has further been studied by Ott et al. (2018) and its relationship to fairness by Pleiss et al. (2017).

Metrics for Calibration. We let {x_i} ∈ R^{n×d_x} be a dataset of n datapoints with d_x features and take {y_i} to be the labels. Following Guo et al. (2017), we assume that a neural network h outputs h(x_i) = (p̂_i, ŷ_i), where ŷ_i is the predicted class and p̂_i is the estimated probability that the prediction is correct. For evaluating calibration, we divide the interval [0, 1] into M equally sized bins and assign predictions to bins based upon p̂. Within each bin B_m we define the accuracy as

acc(B_m) = (1/|B_m|) Σ_{i ∈ B_m} 1(ŷ_i = y_i).

Similarly, we define the confidence as

conf(B_m) = (1/|B_m|) Σ_{i ∈ B_m} p̂_i.

For a well-calibrated model, we would expect the confidence and accuracy of each bin to be close to each other. Calibration error can be measured by their difference, evaluated on the test set. The resulting metric is known as the expected calibration error (Naeini et al., 2015), often abbreviated as ece. Mathematically, it is defined as follows:

ece = Σ_{m=1}^{M} (|B_m| / n) · |acc(B_m) − conf(B_m)|   (1)
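Eq. (1) is straightforward to compute; a minimal numpy sketch with M equal-width bins, where p_hat are the predicted confidences and y_hat the predicted classes (the left-open binning convention is one common choice, an assumption here):

import numpy as np

def expected_calibration_error(p_hat, y_hat, y, M=15):
    bins = np.linspace(0.0, 1.0, M + 1)
    err, n = 0.0, len(y)
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (p_hat > lo) & (p_hat <= hi)   # assign predictions to bins by confidence
        if in_bin.any():
            acc = np.mean(y_hat[in_bin] == y[in_bin])
            conf = np.mean(p_hat[in_bin])
            err += (in_bin.sum() / n) * abs(acc - conf)
    return err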
The2Under review as a conference paper at ICLR 2021resulting metric is known as the expected calibration error (Naeini et al., 2015), often abbreviated asece. Mathematically, it is defined as follows:ece=MXm=1jBmjnacc(Bm)conf(Bm)(1)3 E XPERIMENTSExperimental setup. We consider the following computer vision datasets. Cifar10 & Cifar100(Krizhevsky & Hinton, 2010) which contains 50,000 RGB images spanning ten or hundred classesrespectively. Classes are balanced. Eurosat (Helber et al., 2019), which is a dataset of satelliteimages over continental Europe; there are ten balanced classes and 27,000 images in total. iNatu-ralist (Van Horn et al., 2018) which is a dataset for species detection. We use the FGVC6 version(FGVC6, 2019), compromising over 260,000 images and an imbalanced hierarchical class systemcompromising e.g., species and phylum. We perform classification at the ”class” level, resulting innine classes. Across all datasets, we use the same architecture, Resnet50 (He et al., 2016). We usehyperparameters from the original resnet paper (He et al., 2016): cross-entropy loss optimized withSGD using a learning rate at 0.1 and decreased by a factor 0:1after 50 %and 75 %of the training, abatch size of 128, a weight decay of 0:0001 , and momentum of 0:9. For the cifar/eurosat/inaturalist,networks are trained over 62=30=331103gradient steps, corresponding to 160 epochs for eachdataset. We use randomized cropping and random horizontal flipping for data augmentation, seeAppendix A for data preprocessing. Experiments are repeated five times, with mean and standarddeviations reported. Calibration error is measured in expected calibration error(ece) as in eq. (1),usingM= 15 and evaluated on the test set.Inbalanced dataset. A common problem in practice, not necessarily found in benchmark datasets,is class imbalance (i.e., the number of available samples varies between classes), see Van Horn et al.(2018); Krishna et al. (2017); Thomee et al. (2016). Here we study how imbalanced datasets affectnoisy labels (fraction)calibration error (ece)calibration error (ece)noisy labels (fraction)cifar10cifar100eurosatiNaturalistFigure 2: Calibration error under label noise, simulated by randomly reassigning labels for a fractionof the training labels. Across datasets, label noise degrades network calibration. Thus, label noisefrom e.g. crowd sourcing can affect not only accuracy, but also calibration (Karger et al., 2011).3Under review as a conference paper at ICLR 2021cifar100cifar10SVHNcalibration error (ece)Figure 3: Calibration error under non-uniform label noise. We linearly increase label noise from 0to0:5among classes, and sort them thereafter. Increased noise leads to worse calibration.the calibration error for individual classes. Whereas the iNaturalist dataset is naturally imbalanced,the cifar and eurosat datasets are not. For these datasets, we simulate long-tailed class imbalancefollowing Cao et al. (2019). We randomly reorder the classes from = 1 ton, and only keep ai1fraction of examples for class i. Given some desired ratio between the class with the mostand fewest samples, one picks such thatn1=. Following Buda et al. (2018), we alsoconsider a step-imbalance, where half of the classes are downsampled by a factor . We consider= 100 as done by e.g. Cao et al. (2019) and keep the test set balanced. For cifar/eurosat, werandomly chose what classes to subsample to eliminate class-specific properties. 
Methods for Imbalanced Datasets. As class imbalance is a problem of practical importance, there is ample work on mitigating this issue. One common strategy is to sample the dataset unevenly when generating mini-batches, attempting to obtain a roughly balanced dataset. One can both oversample minority classes (Buda et al., 2018) and undersample the majority class (Japkowicz & Stephen, 2002). Another strategy is instead to weight the objective function to give all classes approximately the same weight. A common strategy is to weight classes inversely proportional to their frequency (Wang et al., 2017). Recently, Cui et al. (2019) have proposed to reweight based upon the "effective" number of samples, which is defined per a mathematical formula. We here investigate whether the calibration issues of an imbalanced dataset persist when using such mitigation strategies. Thus, we consider the standard cross-entropy (original), resampling inversely proportional to the frequency (sampling), reweighting inversely proportional to the frequency (weighted), the weighting scheme of Cui et al. (2019) (CB), and the focal loss of Lin et al. (2017) (focal). Additionally, we consider label smoothing (Szegedy et al., 2016) (label smooth).

Table 1: Calibration error for various mitigation strategies used in imbalanced datasets. We give the calibration error for the class with the most/fewest labels (referred to as min/max), and the ratio of these two errors. Two types of imbalance are considered, exponential and step. While improving in some cases, classwise imbalance remains even when mitigation strategies are used.

exp-imbalance:
method | cifar10 min/max/ratio | cifar100 min/max/ratio | eurosat min/max/ratio
original | 0.12 / 0.48 / 4.24 | 0.34 / 0.61 / 1.82 | 0.05 / 0.30 / 7.06
sampling | 0.16 / 0.62 / 3.99 | 0.39 / 0.66 / 1.68 | 0.05 / 0.22 / 4.85
weighted | 0.10 / 0.35 / 3.67 | 0.22 / 0.40 / 1.79 | 0.08 / 0.14 / 1.75
label smooth | 0.08 / 0.35 / 4.31 | 0.16 / 0.27 / 1.73 | 0.10 / 0.29 / 2.96
focal (Lin et al., 2017) | 0.10 / 0.46 / 4.88 | 0.29 / 0.56 / 1.91 | 0.06 / 0.28 / 4.50
CB (Cui et al., 2019) | 0.09 / 0.34 / 3.98 | 0.20 / 0.32 / 1.57 | 0.08 / 0.14 / 1.84

step-imbalance:
method | cifar10 min/max/ratio | cifar100 min/max/ratio | eurosat min/max/ratio
original | 0.04 / 0.63 / 16.57 | 0.16 / 0.73 / 4.60 | 0.02 / 0.43 / 19.62
sampling | 0.06 / 0.77 / 13.17 | 0.19 / 0.74 / 3.85 | 0.02 / 0.43 / 19.94
weighted | 0.08 / 0.34 / 4.62 | 0.15 / 0.30 / 2.01 | 0.10 / 0.18 / 1.94
label smooth | 0.08 / 0.47 / 5.58 | 0.12 / 0.48 / 3.95 | 0.09 / 0.33 / 3.47
focal (Lin et al., 2017) | 0.03 / 0.56 / 20.54 | 0.14 / 0.66 / 4.77 | 0.03 / 0.37 / 16.72
CB (Cui et al., 2019) | 0.07 / 0.41 / 6.24 | 0.15 / 0.30 / 2.04 | 0.08 / 0.17 / 2.22

Figure 4 (panels for cifar10, cifar100, eurosat, iNaturalist; x-axis: dataset size (fraction), y-axis: calibration error (ece)): Calibration error under different dataset sizes. We subsample the datasets, and give the size as a fraction of the original size. Across all tasks, smaller datasets consistently yield poorer calibration, highlighting how dataset size influences not only accuracy but also calibration.
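The "effective number" formula behind the CB scheme above is simple to state: a class with n samples has effective number E_n = (1 − beta**n) / (1 − beta), and the loss is weighted by its inverse (Cui et al., 2019). A sketch, where beta = 0.9999 is a typical choice rather than the value used in this paper:

import numpy as np

def class_balanced_weights(counts, beta=0.9999):
    counts = np.asarray(counts, dtype=float)
    effective = (1.0 - beta ** counts) / (1.0 - beta)  # effective number of samples per class
    w = 1.0 / effective
    return w * len(counts) / w.sum()                   # normalize to sum to the number of classes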
We construct imbalanced datasets as in the previous section. Due to limited computational resources, we only consider the three smallest datasets for these experiments. Calibration errors are given in Table 1, where we give the largest and smallest calibration error among classes and the average ratio of the two. We see that the issue of imbalanced calibration errors, while sometimes improving, still persists. Standard deviations are given in Table 2 in Appendix A.

Label Quality. When collecting labels, for example via crowdsourcing, a common issue is label quality (Patterson & Hays, 2012; Su et al., 2012; Callison-Burch & Dredze, 2010). For example, workers might have poor incentives to perform well or lack the necessary skills for quality labeling. To study the effects of potentially mislabeled data, we artificially inject symmetric noise into the training set. This is done by selecting a random subset of the training set corresponding to some fixed fraction, and then shuffling the labels of this set. This setup follows conventions in the label noise literature (Patrini et al., 2017; Han et al., 2018). Given these noisy labels, we train the networks and evaluate them on the original test set (which has no noise). We consider five levels of label noise in increments of 0.1, starting at 0.0. The resulting calibration errors for various noise levels are given in Figure 2, where we see that label noise increases the calibration error across all datasets. Additionally, we consider the effects of non-uniform noise, studied by e.g. Crammer et al. (2006). For class i, we linearly increase a noise level p_i from 0.0 to 0.5. Classes are randomly ordered. For each image with original class i, with probability p_i we assign it to a new random class. The reassignment probability to class i is proportional to p_i. Results are given in Figure 3, where noisier classes suffer from worse calibration. This underscores how label quality control in e.g. crowdsourcing (Su et al., 2012) can be not only important for accuracy, but also for calibration of downstream models.

Figure 2 (panels for cifar10, cifar100, eurosat, iNaturalist; x-axis: noisy labels (fraction), y-axis: calibration error (ece)): Calibration error under label noise, simulated by randomly reassigning labels for a fraction of the training labels. Across datasets, label noise degrades network calibration. Thus, label noise from e.g. crowdsourcing can affect not only accuracy, but also calibration (Karger et al., 2011).

Figure 3 (curves for cifar100, cifar10, SVHN; y-axis: calibration error (ece)): Calibration error under non-uniform label noise. We linearly increase the label noise from 0 to 0.5 among classes, and sort them thereafter. Increased noise leads to worse calibration.
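The symmetric noise used for Figure 2 is easy to reproduce; a sketch that selects a random fraction of the training labels and shuffles them, as described above (the function name and numpy interface are assumptions):

import numpy as np

def inject_symmetric_noise(labels, frac, seed=0):
    rng = np.random.default_rng(seed)
    noisy = np.array(labels, copy=True)
    subset = rng.random(len(noisy)) < frac         # pick a random subset of the training set
    noisy[subset] = rng.permutation(noisy[subset]) # shuffle the labels within that subset
    return noisy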
Dataset size. The perhaps most common concern when collecting data is the dataset size, and model accuracy typically grows with this size (Hestness et al., 2017). Crowdsourcing labels and bounding boxes for images is common practice, with many researchers investigating strategies to reduce the needed queries (Su et al., 2012). In practice, dataset size can be limited by the costs of labeling, but also by obtaining the actual data (Suram et al., 2017). Motivated by this, we study the effect of dataset size upon the calibration error. We simply subsample the training sets of the datasets uniformly at random and thereafter train on them, comparing different sizes of the resulting dataset. We consider subsampled sizes, measured in fractions of the original size, from 1.0 to 0.2 in increments of 0.2, and also consider 0.1. The test set is not subsampled. The results of these experiments are given in Figure 4. We see that smaller datasets have substantially larger calibration errors, demonstrating the dramatic effect that dataset size can have not only on accuracy, but also on calibration error.

Augmentations. Beyond the actual dataset size, it is common to artificially increase the size of the dataset by augmenting it, e.g., randomly cropping the images or slightly shifting the color balance (Cubuk et al., 2018). We have seen that the size of the dataset influences calibration, and now consider the effect of increasing the effective dataset size via augmentations. We use both randomized cropping and horizontal flipping for our training and consider removing these components while keeping other training parameters fixed. The outcome of this experiment is shown in Figure 5, and we see that removing data augmentations significantly increases the calibration error. Viewing data augmentation as a strategy of extending the training set, we again see how smaller training sets increase the calibration error, just as in Figure 4.

Figure 5 (panels for cifar10, cifar100, eurosat, iNaturalist; y-axis: calibration error (ece)): Calibration error under combinations of data augmentations. Following He et al. (2016), we consider randomized cropping and flipping. Removing these components, often used to artificially enlarge the dataset, increases the calibration error.

NLP. While we primarily focus on computer vision, we here consider experiments in NLP. Language models often generate text via beam search, and it has been observed that calibration is important for this process (Ott et al., 2018). Here we investigate the effect of dataset size on calibration in NLP. We consider translation on the IWSLT'14 German-to-English dataset (Cettolo et al., 2014) using transformer architectures (Vaswani et al., 2017), with code and default hyperparameters from the publicly available fairseq codebase (Ott et al., 2019). As before, we simply subsample the training set uniformly at random at variable sizes and train the transformer with all other training parameters fixed. Figure 6 shows how the mean calibration error, with standard deviations as error bars, varies during training. Again we see how the dataset size influences the calibration. It is natural to guess that there might be word-level calibration issues too; e.g., rare words might have worse calibration.

Figure 6 (legend: fraction of training size, y-axis: calibration error (ece)): Calibration error of an NLP task during training for different dataset sizes. The dataset is subsampled, and we give the relative size.

4 THEORETICAL MOTIVATION

Figure 4 and Figure 5 show that the size of the dataset affects calibration, with smaller datasets resulting in worse calibration. To explain this, let us consider the cross-entropy loss. We will let l_ij denote the logit for image i and class j. Furthermore, let c_i be the index of the correct class for image i. The soft-max cross-entropy loss function is then defined as

ℓ = Σ_i ℓ_i = − Σ_i log( exp(l_{i c_i}) / Σ_j exp(l_{ij}) ) = Σ_i [ −l_{i c_i} + log Σ_j exp(l_{ij}) ]   (2)

We note that this loss function decreases monotonically as the logit l_{i c_i} increases. This implies that there is no global minimizer, but instead that if the other logits are fixed, we have ℓ_i → 0 as l_{i c_i} → ∞. The logit tending to infinity implies that the confidence of the prediction tends to 1. The lack of a minimizer for soft-max cross-entropy is in stark contrast with e.g. label smoothing, which penalizes large confidence; see Figure 7. Let us imagine that the network has infinite capacity. If we optimize it for a sufficient amount of time, we would expect ℓ to tend to zero, which implies that the logits tend to infinity. This corresponds to the confidence on the training set tending to 100%, which most likely implies overconfidence and poor calibration.
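This is easy to check numerically: once the correct class has the largest logit, multiplying all logits by (1 + eps) strictly decreases the loss while pushing the confidence toward 1. A tiny self-contained numpy illustration (the specific logit values are arbitrary):

import numpy as np

def xent(logits, c):
    # per-example soft-max cross-entropy of eq. (2)
    return -logits[c] + np.log(np.exp(logits).sum())

logits = np.array([2.0, 0.3, -1.0])  # the correct class (index 0) is already the argmax
for eps in [0.0, 0.5, 1.0, 2.0]:
    scaled = (1.0 + eps) * logits
    conf = np.exp(scaled[0]) / np.exp(scaled).sum()
    print(f"eps={eps:.1f}  loss={xent(scaled, 0):.4f}  confidence={conf:.4f}")
# the loss shrinks monotonically in eps while the confidence approaches 1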
We can formalize this observation, but we first need to state some assumptions.

Assumption 1. Let {x_i} ∈ R^{n×d_x} be a dataset of n datapoints with d_x features each. Let {y_i} ∈ {0, 1}^{n×c} be a one-hot label encoding that assigns each image one out of c classes, where c is a constant. We assume that all datapoints {x_i} are distinct, i.e. x_i ≠ x_j for all i ≠ j.

Under such assumptions, recent results in network expressivity say that one can essentially memorize a training set (Yun et al., 2019) if the width is at least of order O(√n). In an idealized setting, where we optimize the function without computational considerations, such expressivity means that the loss function can be optimized towards its infimum 0. This means the train-set confidence grows to 100%, likely translating to poor calibration. We formalize this line of argument in Theorem 1.

Theorem 1. Let Assumption 1 hold. Let f be a Relu network with four or more layers and with width at least Ω(√n) and parameters w. Let ℓ be equal to the loss function in eq. (2). Then (i) min_w ℓ(f(w)) has no global minimum; (ii) the confidence tends to 1 as ℓ → 0.

The formal proof is given in Appendix B; we here provide some intuition. To prove (i) and (ii), it suffices that the network can fit the training set with accuracy 1.0. This is typically the condition in practice (Zhang et al., 2016), and whereas we consider an idealized argument without computational costs, the conclusions agree with our experimental results. For the sake of contradiction, let us assume that we are in a global minimum with parameters w and 100% accuracy. Now consider ℓ_i when we scale the final layer by (1 + ε) for ε > 0. The network output is then (1 + ε) l_{ij}, and ℓ_i is

−log( exp((1 + ε) l_{i c_i}) / Σ_j exp((1 + ε) l_{ij}) ) = log( 1 + Σ_{j ≠ c_i} exp((1 + ε)(l_{ij} − l_{i c_i})) ).

The fact that we have perfect train accuracy means that (l_{ij} − l_{i c_i}) < 0 for all j ≠ c_i. Thus, the loss must shrink, as Σ_{j ≠ c_i} exp((1 + ε)(l_{ij} − l_{i c_i})) decreases with ε and as log is monotone. By contradiction, we are not in such a global minimum. The results of Yun et al. (2019) say that we can always find weights which achieve perfect accuracy using O(√n) parameters, and thus that there are no issues with fitting the dataset that prevent the loss from tending to 0. Thus, if the dataset is small compared to the number of parameters, we expect overconfidence and poor calibration. This conclusion agrees with the observations of Guo et al. (2017), who show that depth and width increase miscalibration.

5 RELATED WORK

There is much recent work on how datasets influence the behavior of neural networks. Tsipras et al. (2020) show how the process used to collect labels for imagenet can introduce bias into the resulting dataset. Miller et al. (2020) study how the shift between different datasets can influence the performance of question-and-answering systems. Recht et al. (2019) construct new test sets for imagenet and cifar10, and observe differences in generalization compared to the original test sets. Imbalanced datasets are a common issue when applying machine learning in practice (Van Horn et al., 2018; Krishna et al., 2017; Thomee et al., 2016), and researchers often describe the "heavy tail" of class labels (Cui et al., 2019). Traditional work on class imbalance includes Japkowicz & Stephen (2002), which investigates different sampling strategies applicable to most machine learning models. For models of empirical risk minimization, one can instead reweight samples. A relatively recent reweighting scheme is proposed by Cui et al.
(2019), where one uses the effective number of samples, which can be calculated from a simple formula.

For generating datasets, a common strategy is to employ crowdsourcing, where one lets ordinary people assign labels in a large-scale automated fashion, commonly via Amazon's Mechanical Turk system (Keith et al., 2017). Typical applications of crowdsourcing include analyzing images and providing bounding boxes (Patterson & Hays, 2012; Su et al., 2012), providing linguistic annotations for natural language (Callison-Burch & Dredze, 2010), or evaluating the relevance of search engine results (Alonso, 2013). Another application is machine learning debugging (Ribeiro et al., 2016). The idea of eliciting and aggregating crowdsourced labels efficiently has inspired much algorithmic work (Khetan & Oh, 2016; Zhang et al., 2014). Common issues include finding tasks that result in high-quality labels, dealing with inconsistent labels (Karger et al., 2011; Zheng et al., 2017) and heterogeneous workers (Ho et al., 2013).

Figure 7: The softmax cross-entropy and label smoothing as a function of the logit of the correct class (other logits are zero). Cross-entropy decreases monotonically, resulting in large logits after optimization.

Calibration in machine learning has been studied for a long time (Zadrozny & Elkan, 2001; Naeini et al., 2015) due to its practical implications. For neural networks, Caruana et al. (2015) demonstrated that shallow neural networks can yield well-calibrated predictions on classification tasks. In contrast, Guo et al. (2017) show how modern neural networks are ill-calibrated, with width and depth resulting in worse calibration scores, and investigate mitigation strategies. Neural network calibration has implications in NLP (Ott et al., 2018), fairness (Pleiss et al., 2017) and reinforcement learning (Kuleshov et al., 2018). For applications such as medicine (Miner et al., 2020), meteorology (Ren et al., 2015) and autonomous vehicles (Bojarski et al., 2016) it can be important for performance. Reliable uncertainty estimates also allow one to integrate DNNs with other probabilistic models, incorporating e.g. camera information (Kendall & Cipolla, 2015).

6 CONCLUSIONS

We have investigated the effects that datasets can have on network calibration. By generating label noise and class imbalance synthetically, we show how calibration error increases with label noise and few samples. We also study how calibration changes with dataset size. Our work points towards the importance of high-quality dataset curation for generating well-calibrated predictions, and highlights issues that are relevant in high-stakes applications such as autonomous vehicles and medical applications. These calibration issues can potentially be mitigated both at dataset curation time and training time; we defer such studies to future work. A practical takeaway from this work is that for sensitive applications, one should evaluate calibration when collecting datasets.
X9QI5g_7BqY
If we were reporting accuracy in the paper experiments, how different would the conclusions be?
4: Ok but not good enough - rejection
The paper is an empirical study looking at how different dataset properties affect model calibration in the context of vision tasks. All experiments use a specific well-known vision model (ResNet 50). In particular, the dataset properties that are investigated are:
- Balanced/unbalanced classes.
- Label quality.
- Dataset size.
- Augmentations.
- NLP.
I briefly present the main conclusions below.
- Balance in classes. Often times some classes have way more datapoints than others. The authors look at four datasets (Cifar 10, Cifar 100, Eurosat, iNaturalist). The last one's classes are unbalanced, whereas the first three require some sampling method to (artificially) make them unbalanced (note in this case by design there is no relationship between balance/unbalance and the class properties). Figure 1 shows the results. For Cifar and Eurosat, those classes with more examples are better calibrated. The trend is somewhat similar for iNaturalist. Then, the authors present a number of approaches people have tried in the past to mitigate the consequences of unbalance in data. They repeat the previous experiment (on Cifar 10, Cifar 100, Eurosat) but, this time, using each of those methods while training the model. Table 1 shows the results. The ratio column offers very mixed results depending on the dataset and method. The authors conclude that overall the imbalance in calibration persists in most cases. Q: How do these results compare to accuracy? One would also expect to do better on classes with more data.
- Label quality. The authors tackle the question of how label noise affects calibration. In order to do that, they artificially inject noise into the "true" labels with increasing probability. Figure 2 summarizes the calibration error for a number of datasets and noise levels. The pattern is clear: the more noise, the worse the calibration. Importantly, the calibration is measured on a test set that is not perturbed with random noise. Accordingly, the results were to be expected: there's a mismatch between training and test distributions, and the further apart they are, the less "meaningful" predicted probabilities one should expect. Again, it would be informative to see how the *accuracy* of the model also degrades under these circumstances. Similarly, Figure 3 shows the effect of non-uniform noise across classes. Those classes "attacked" with more noise are worse calibrated.
- Dataset size. Another important practical aspect to study is dataset size. The authors subsample uniformly at random a fraction of the data points, and measure ECE. Figure 4 shows how models trained on more data are better calibrated. Again, the accuracy of the model should also be shown for context.
- Augmentations. It is common to use data augmentation to train better models; augmentations make the effective data size larger. Figure 5 shows how removing augmentation axes leads to worse calibration. The same probably applies to accuracy (that's the reason why people use this!). This result is probably intimately related to the previous point (dataset size).
- NLP. The conclusions regarding dataset size also hold with a Transformer on an NLP dataset.
Finally, Section 4 provides some theoretical explanation. We can summarize this as: the cross-entropy loss wants to put more and more confidence/probability on the right class for a given example, and when the data is small and the model powerful enough, we can basically memorize it to make cross-entropy happy. This, however, leads to overconfidence and poor calibration.
On one hand, it's recently becoming clear that ECE is not a very robust estimator. Depending on design choices (such as the number of bins, argmax vs. all classes, adaptive versus fixed bins, etc.), the ranking among models and the conclusions can change wildly [1]. On the other, this study fixes a specific model, so one could say that the conclusions are "shown" for the (dataset, model) pairs. Still, I believe the conclusions are true in a more general setting, and the model is fairly reasonable. However, while the paper is titled "Dataset Curation Beyond Accuracy", I do not see how the outcome and conclusions of all these experiments would be different if we were looking at accuracy rather than calibration. The authors should measure, include, and address this, and try to disentangle both aspects, or argue for any correlation / causation relationship between them. [1] - Measuring Calibration in Deep Learning - https://arxiv.org/abs/1904.01685
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Dataset Curation Beyond Accuracy ### Paper Abstract Neural networks are known to be data-hungry, and collecting large labeled datasets is often a crucial step in deep learning deployment. Researchers have studied dataset aspects such as distributional shift and labeling cost, primarily using downstream prediction accuracy for evaluation. In sensitive real-world applications such as medicine and self-driving cars, not only is the accuracy important, but also the calibration -- the extent that model uncertainty reflects the actual correctness likelihood. It has recently been shown that modern neural networks are ill-calibrated. In this work, we take a complementary approach -- studying how dataset properties, rather than architecture, affect calibration. For the common issue of dataset imbalance, we show that calibration varies significantly among classes, even when common strategies to mitigate class imbalance are employed. We also study the effects of label quality, showing how label noise dramatically increases calibration error. Furthermore, poor calibration can come from small dataset sizes, which we motivate via results on network expressivity. Our experiments demonstrate that dataset properties can significantly affect calibration and suggest that calibration should be measured during dataset curation. ### Paper Keywords ["crowd-sourcing", "calibration", "dataset", "uncertainty"] ### Paper Content ABSTRACT
Neural networks are known to be data-hungry, and collecting large labeled datasets is often a crucial step in deep learning deployment. Researchers have studied dataset aspects such as distributional shift and labeling cost, primarily using downstream prediction accuracy for evaluation. In sensitive real-world applications such as medicine and self-driving cars, not only is the accuracy important, but also the calibration – the extent that model uncertainty reflects the actual correctness likelihood. It has recently been shown that modern neural networks are ill-calibrated. In this work, we take a complementary approach – studying how dataset properties, rather than architecture, affect calibration. For the common issue of dataset imbalance, we show that calibration varies significantly among classes, even when common strategies to mitigate class imbalance are employed. We also study the effects of label quality, showing how label noise dramatically increases calibration error. Furthermore, poor calibration can come from small dataset sizes, which we motivate via results on network expressivity. Our experiments demonstrate that dataset properties can significantly affect calibration and suggest that calibration should be measured during dataset curation.
1 INTRODUCTION
Neural networks often require large amounts of labeled data to perform well, making data curation a crucial but costly aspect of deployment. Thus, researchers have studied dataset properties such as distributional shift (Miller et al., 2020) and the bias in crowd-sourced computer vision datasets (Tsipras et al., 2020), among others. Often, the evaluation criterion in such studies is downstream prediction accuracy. However, neural networks are increasingly deployed in sensitive real-world applications such as medicine (Caruana et al., 2015), self-driving cars (Bojarski et al., 2016), and scientific analysis (Attia et al., 2020), where not only accuracy matters but also calibration.
Calibration is the extent to which model certainty reflects the actual correctness likelihood. Calibration can be important when the costs of false positives and false negatives are asymmetric; e.g., for a deadly disease with cheap treatment, doctors might initiate treatment when the probability of being sick exceeds 10%. Beyond simple classification, calibration can be important for beam search in NLP (Ott et al., 2018) and algorithmic fairness (Pleiss et al., 2017). Calibration in machine learning has been studied by e.g. Zadrozny & Elkan (2001); Naeini et al. (2015). Niculescu-Mizil & Caruana (2005) have shown that small-scale neural networks can yield well-calibrated predictions. However, it has recently been observed by Guo et al. (2017) that modern neural networks are ill-calibrated, whereas the now primitive LeNet (LeCun et al., 1998) achieves good calibration. In this work, we take a complementary approach; instead of focusing on network architecture, we study how calibration is influenced by dataset properties. We primarily focus on computer vision and perform extensive experiments across common benchmarks and more exotic datasets such as satellite images (the eurosat dataset (Helber et al., 2019)) and species detection (the iNaturalist dataset (Van Horn et al., 2018)). We consistently find that dataset properties can significantly affect calibration, causing effects comparable to network architecture. For example, we consider the ubiquitous problem of class-imbalanced datasets, a common issue in practice (Van Horn et al., 2018; Krishna et al., 2017; Thomee et al., 2016). For such datasets, the miscalibration is not uniform but instead varies across the different classes. This problem persists even when common strategies to mitigate class imbalance are employed. Another practical concern is generating high-quality labels via e.g. crowdsourcing (Karger et al., 2011). We demonstrate how labeling quality affects calibration, with noisier labels resulting in worse calibration. Additionally, we show that just the size of the dataset has a strong effect on calibration. This also holds when one artificially increases the dataset size by data augmentation. We motivate our findings by considering the geometry of the cross-entropy loss and utilizing recent results on network expressivity (Yun et al., 2019). If the dataset is sufficiently small compared to the number of parameters, we argue that the lack of a minimizer for the cross-entropy loss biases the network to high confidence and poor calibration. Our results highlight an underappreciated aspect of calibration and suggest that for sensitive applications, one should measure calibration during dataset curation.
2 BACKGROUND
Calibration. Calibration has a traditional place in machine learning (Zadrozny & Elkan, 2001; Naeini et al., 2015).
Before the advent of modern deep learning, Niculescu-Mizil & Caruana (2005) showed that neural networks can yield well-calibrated predictions for classification. However, Guo et al. (2017) showed that modern neural networks are ill-calibrated. Modern neural networks are modeled as e.g. resnets (He et al., 2016) or densenets (Huang et al., 2016). It is important to note that accuracy and calibration do not necessarily follow each other, but can move independently – modern neural networks are ill-calibrated, but still yield excellent accuracy. Beyond image classification, the importance of calibration in NLP has further been studied by Ott et al. (2018) and its relationship to fairness by Pleiss et al. (2017).
Metrics for Calibration. We let $\{x_i\} \in \mathbb{R}^{n \times d_x}$ be a dataset of $n$ datapoints with $d_x$ features and take $\{y_i\}$ to be the labels. Following Guo et al. (2017), we assume that a neural network $h$ outputs $h(x_i) = (\hat{p}_i, \hat{y}_i)$, where $\hat{y}_i$ is the predicted class and $\hat{p}_i$ is the estimated probability that the prediction is correct. For evaluating calibration, we divide the interval $[0,1]$ into $M$ equally sized bins and assign predictions to bins based upon $\hat{p}$. Within each bin $B_m$ we define the accuracy as $\mathrm{acc}(B_m) = \frac{1}{|B_m|}\sum_{i \in B_m} \mathbf{1}(\hat{y}_i = y_i)$. Similarly, we define the confidence as $\mathrm{conf}(B_m) = \frac{1}{|B_m|}\sum_{i \in B_m} \hat{p}_i$. For a well-calibrated model, we would expect the confidence and accuracy of each bin to be close to each other. Calibration error can be measured by their difference, evaluated on the test set. The resulting metric is known as the expected calibration error (Naeini et al., 2015), often abbreviated as ECE. Mathematically, it is defined as follows:
$\mathrm{ece} = \sum_{m=1}^{M} \frac{|B_m|}{n}\,\big|\mathrm{acc}(B_m) - \mathrm{conf}(B_m)\big|$  (1)
3 EXPERIMENTS
Experimental setup. We consider the following computer vision datasets. Cifar10 & Cifar100 (Krizhevsky & Hinton, 2010), which contain 50,000 RGB images spanning ten or a hundred classes, respectively. Classes are balanced. Eurosat (Helber et al., 2019), which is a dataset of satellite images over continental Europe; there are ten balanced classes and 27,000 images in total. iNaturalist (Van Horn et al., 2018), which is a dataset for species detection. We use the FGVC6 version (FGVC6, 2019), comprising over 260,000 images and an imbalanced hierarchical class system comprising e.g. species and phylum. We perform classification at the "class" level, resulting in nine classes. Across all datasets, we use the same architecture, Resnet50 (He et al., 2016). We use hyperparameters from the original resnet paper (He et al., 2016): cross-entropy loss optimized with SGD using a learning rate of 0.1, decreased by a factor of 0.1 after 50% and 75% of the training, a batch size of 128, a weight decay of 0.0001, and momentum of 0.9. For cifar/eurosat/inaturalist, networks are trained over $62/30/331 \times 10^3$ gradient steps, corresponding to 160 epochs for each dataset. We use randomized cropping and random horizontal flipping for data augmentation; see Appendix A for data preprocessing. Experiments are repeated five times, with means and standard deviations reported. Calibration error is measured as expected calibration error (ECE) as in eq. (1), using $M = 15$ and evaluated on the test set.
Imbalanced datasets. A common problem in practice, not necessarily found in benchmark datasets, is class imbalance (i.e., the number of available samples varies between classes); see Van Horn et al. (2018); Krishna et al. (2017); Thomee et al. (2016).
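As a reading aid, here is a minimal sketch of the ECE computation in eq. (1); it assumes NumPy arrays of per-example confidences, predicted classes, and true labels, and the function and variable names are illustrative rather than taken from the paper:

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """Expected calibration error (eq. 1) with M equally sized bins on [0, 1]."""
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(labels)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        # Assign each prediction to a bin by its confidence p-hat.
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        acc = np.mean(predictions[in_bin] == labels[in_bin])   # acc(B_m)
        conf = np.mean(confidences[in_bin])                    # conf(B_m)
        ece += (in_bin.sum() / n) * abs(acc - conf)            # |B_m|/n * gap
    return ece
```

For intuition, a model that always predicts with confidence 0.9 but is right only 70% of the time accumulates a gap of roughly 0.2 in the corresponding bin.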
Here we study how imbalanced datasets affectnoisy labels (fraction)calibration error (ece)calibration error (ece)noisy labels (fraction)cifar10cifar100eurosatiNaturalistFigure 2: Calibration error under label noise, simulated by randomly reassigning labels for a fractionof the training labels. Across datasets, label noise degrades network calibration. Thus, label noisefrom e.g. crowd sourcing can affect not only accuracy, but also calibration (Karger et al., 2011).3Under review as a conference paper at ICLR 2021cifar100cifar10SVHNcalibration error (ece)Figure 3: Calibration error under non-uniform label noise. We linearly increase label noise from 0to0:5among classes, and sort them thereafter. Increased noise leads to worse calibration.the calibration error for individual classes. Whereas the iNaturalist dataset is naturally imbalanced,the cifar and eurosat datasets are not. For these datasets, we simulate long-tailed class imbalancefollowing Cao et al. (2019). We randomly reorder the classes from = 1 ton, and only keep ai1fraction of examples for class i. Given some desired ratio between the class with the mostand fewest samples, one picks such thatn1=. Following Buda et al. (2018), we alsoconsider a step-imbalance, where half of the classes are downsampled by a factor . We consider= 100 as done by e.g. Cao et al. (2019) and keep the test set balanced. For cifar/eurosat, werandomly chose what classes to subsample to eliminate class-specific properties. Since iNaturalist isalready imbalanced, we keep it as it is, but note that the class-specific properties are correlated withclass-specific imbalance. After this procedure, we train the DNNs as normal and give the averagecalibration error for individual classes. The results are shown in Figure 1. Generally, classes withfewer examples have significantly higher ECE, showing how imbalance can have a significant effecton model calibration. For the iNaturalist dataset, we have some outlier classes which is likely dueto class-specific effects, e.g., the class with the most labels might be unusually hard to calibrate.Methods for Imbalanced Datasets. As class imbalanced is a problem of practical importance,there is ample work on mitigating this issue. One common strategy is to sample the dataset un-evenly when generating mini-batches, attempting to obtain a roughly balanced dataset. One canboth oversample minority classes (Buda et al., 2018) and undersample the majority class (Japkow-icz & Stephen, 2002). Another strategy is instead to weight the objective function to give all classesTable 1: Calibration error for various mitigation strategies used in imbalanced datasets. We give thecalibration error for the class with the most/fewest labels (referred to as min/max), and the ratio ofthese two errors. Two types of imbalance are considered, exponential and step. 
Methods for Imbalanced Datasets. As class imbalance is a problem of practical importance, there is ample work on mitigating this issue. One common strategy is to sample the dataset unevenly when generating mini-batches, attempting to obtain a roughly balanced dataset. One can both oversample minority classes (Buda et al., 2018) and undersample the majority class (Japkowicz & Stephen, 2002). Another strategy is instead to weight the objective function to give all classes approximately the same weight. A common strategy is to weight classes inversely proportionally to their frequency (Wang et al., 2017). Recently, Cui et al. (2019) have proposed to reweight based upon the "effective" number of samples, which is defined via a mathematical formula. We here investigate whether the calibration issues of an imbalanced dataset persist when using such mitigation strategies. Thus, we consider the standard cross-entropy (original), resampling inversely proportionally to the frequency (sampling), reweighting inversely proportionally to the frequency (weighted), the weighting scheme of Cui et al. (2019) (CB), and the focal loss of Lin et al. (2017) (focal). Additionally, we consider label smoothing (Szegedy et al., 2016) (label smooth). We construct imbalanced datasets as in the previous section. Due to limited computational resources, we only consider the three smallest datasets for these experiments. Calibration errors are given in Table 1, where we give the largest and smallest calibration error among classes and the average ratio of the two. We see that the issues of imbalanced calibration errors, while sometimes improving, still persist. Standard deviations are given in Table 2 in Appendix A.
Table 1: Calibration error for various mitigation strategies used in imbalanced datasets. We give the calibration error for the class with the most/fewest labels (referred to as min/max), and the ratio of these two errors. Two types of imbalance are considered, exponential and step. While improving in some cases, classwise imbalance remains even when mitigation strategies are used.
exp-imbalance             cifar10 (min/max/ratio)   cifar100 (min/max/ratio)   eurosat (min/max/ratio)
original                  0.12 / 0.48 / 4.24        0.34 / 0.61 / 1.82         0.05 / 0.30 / 7.06
sampling                  0.16 / 0.62 / 3.99        0.39 / 0.66 / 1.68         0.05 / 0.22 / 4.85
weighted                  0.10 / 0.35 / 3.67        0.22 / 0.40 / 1.79         0.08 / 0.14 / 1.75
label smooth              0.08 / 0.35 / 4.31        0.16 / 0.27 / 1.73         0.10 / 0.29 / 2.96
focal (Lin et al., 2017)  0.10 / 0.46 / 4.88        0.29 / 0.56 / 1.91         0.06 / 0.28 / 4.50
CB (Cui et al., 2019)     0.09 / 0.34 / 3.98        0.20 / 0.32 / 1.57         0.08 / 0.14 / 1.84
step-imbalance            cifar10 (min/max/ratio)   cifar100 (min/max/ratio)   eurosat (min/max/ratio)
original                  0.04 / 0.63 / 16.57       0.16 / 0.73 / 4.60         0.02 / 0.43 / 19.62
sampling                  0.06 / 0.77 / 13.17       0.19 / 0.74 / 3.85         0.02 / 0.43 / 19.94
weighted                  0.08 / 0.34 / 4.62        0.15 / 0.30 / 2.01         0.10 / 0.18 / 1.94
label smooth              0.08 / 0.47 / 5.58        0.12 / 0.48 / 3.95         0.09 / 0.33 / 3.47
focal (Lin et al., 2017)  0.03 / 0.56 / 20.54       0.14 / 0.66 / 4.77         0.03 / 0.37 / 16.72
CB (Cui et al., 2019)     0.07 / 0.41 / 6.24        0.15 / 0.30 / 2.04         0.08 / 0.17 / 2.22
Label Quality. When collecting labels, for example via crowdsourcing, a common issue is label quality (Patterson & Hays, 2012; Su et al., 2012; Callison-Burch & Dredze, 2010). For example, workers might have poor incentives to perform well or lack the necessary skills for quality labeling. To study the effects of potentially mislabeled data, we artificially inject symmetric noise into the training set. This is done by selecting a random subset of the training set corresponding to some fixed fraction, and then shuffling the labels of this set. This setup follows conventions in the label noise literature (Patrini et al., 2017; Han et al., 2018). Given these noisy labels, we train the networks and evaluate them on the original test set (which has no noise). We consider five levels of label noise in increments of 0.1, starting at 0.0.
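A minimal sketch of the symmetric noise injection described above (shuffle the labels of a randomly chosen fraction of the training set); the names are illustrative:

```python
import numpy as np

def inject_symmetric_noise(labels, noise_fraction, seed=0):
    """Permute the labels within a random subset covering noise_fraction of the data."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    n = len(labels)
    idx = rng.choice(n, size=int(noise_fraction * n), replace=False)
    noisy[idx] = noisy[rng.permutation(idx)]  # shuffle labels of the chosen subset
    return noisy
```

Training proceeds on the noisy labels while the test set stays clean, so the measured ECE reflects how label noise at training time degrades test-time calibration.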
The resulting calibration errors for various noise levels are given in Figure 2, where we see that label noise increases the calibration error across all datasets. Additionally, we consider the effects of non-uniform noise, studied by e.g. Crammer et al. (2006). For class $i$, we linearly increase a noise level $p_i$ from 0.0 to 0.5. Classes are randomly ordered. For each image with original class $i$, with probability $p_i$ we assign it to a new random class. The reassignment probability to class $i$ is proportional to $p_i$. Results are given in Figure 3, where noisier classes suffer from worse calibration. This underscores how label quality control in e.g. crowdsourcing (Su et al., 2012) can be important not only for accuracy, but also for the calibration of downstream models.
Figure 2: Calibration error under label noise, simulated by randomly reassigning labels for a fraction of the training labels. Across datasets, label noise degrades network calibration. Thus, label noise from e.g. crowdsourcing can affect not only accuracy, but also calibration (Karger et al., 2011).
Figure 3: Calibration error under non-uniform label noise. We linearly increase label noise from 0 to 0.5 among classes, and sort them thereafter. Increased noise leads to worse calibration.
Dataset size. The perhaps most common concern when collecting data is the dataset size, and model accuracy typically grows with this size (Hestness et al., 2017). Crowdsourcing labels and bounding boxes for images is common practice, with many researchers investigating strategies to reduce the needed queries (Su et al., 2012). In practice, dataset size can be limited by the costs of labeling, but also by obtaining the actual data (Suram et al., 2017). Motivated by this, we study the effect of dataset size upon the calibration error. We simply subsample the training sets of the datasets uniformly at random and thereafter train on them, comparing different sizes of the resulting dataset. We consider subsampled sizes, measured in fractions of the original size, from 1.0 to 0.2 in increments of 0.2, and also consider 0.1. The test set is not subsampled. The results of these experiments are given in Figure 4. We see that smaller datasets have substantially larger calibration errors, demonstrating the dramatic effect that dataset size can have not only on accuracy, but also on calibration error.
Figure 4: Calibration error under different dataset sizes. We subsample the datasets, and give the size as a fraction of the original size. Across all tasks, smaller datasets consistently yield poorer calibration, highlighting how dataset size influences not only accuracy but also calibration.
Augmentations. Beyond the actual dataset size, it is common to artificially increase the size of the dataset by augmenting it, e.g., randomly cropping the images or slightly shifting the color balance (Cubuk et al., 2018). We have seen that the size of the dataset influences calibration, and now consider the effect of increasing the effective dataset size via augmentations. We use both randomized cropping and horizontal flipping for our training and consider removing these components while keeping other training parameters fixed. The outcome of this experiment is shown in Figure 5, and we see that removing data augmentations significantly increases the calibration error. Viewing data augmentation as a strategy for extending the training set, we again see how smaller training sets increase the calibration error, just as in Figure 4.
Figure 5: Calibration error under combinations of data augmentations. Following He et al. (2016), we consider randomized cropping and flipping. Removing these components, often used to artificially enlarge the dataset, increases the calibration error.
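The dataset-size experiment above reduces to uniform subsampling of the training set; a minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def subsample_uniform(images, labels, fraction, seed=0):
    """Keep a uniformly random fraction of the training set; the test set is untouched."""
    rng = np.random.default_rng(seed)
    n_keep = int(fraction * len(labels))
    idx = rng.choice(len(labels), size=n_keep, replace=False)
    return images[idx], labels[idx]
```

Sweeping `fraction` over 1.0, 0.8, ..., 0.2, 0.1 and retraining with fixed hyperparameters reproduces the setup behind Figure 4; removing augmentations plays a similar role by shrinking the effective training set.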
NLP. While we primarily focus on computer vision, we here consider experiments in NLP. Language models often generate text via beam search, and it has been observed that calibration is important for this process (Ott et al., 2018). Here we investigate the effect of dataset size on calibration in NLP. We consider translation on the IWSLT'14 German-to-English dataset (Cettolo et al., 2014) using transformer architectures (Vaswani et al., 2017) with code and default hyperparameters from the publicly available fairseq codebase (Ott et al., 2019). As before, we simply subsample the training set uniformly at random for variable sizes and train the transformer with all other training parameters fixed. Figure 6 shows how the mean calibration error, with standard deviations as error bars, varies during training. Again we see how the dataset size influences the calibration. It is natural to guess that there might be word-level calibration issues too; e.g., rare words might have worse calibration.
Figure 6: Calibration error of an NLP task during training for different dataset sizes. The dataset is subsampled, and we give the relative size.
4 THEORETICAL MOTIVATION
Figures 4 and 5 show that the size of the dataset affects calibration, with smaller datasets resulting in worse calibration. To explain this, let us consider the cross-entropy loss. We let $l_{ij}$ denote the logit for image $i$ and class $j$. Furthermore, let $c_i$ be the index of the correct class for image $i$. The softmax cross-entropy loss function is then defined as
$\ell = \sum_i \ell_i = -\sum_i \log \frac{\exp(l_{i c_i})}{\sum_j \exp(l_{ij})} = \sum_i \Big( -l_{i c_i} + \log \sum_j \exp(l_{ij}) \Big)$  (2)
We note that this loss function decreases monotonically as the logit $l_{i c_i}$ increases. This implies that there is no global minimizer; instead, if the other logits are fixed, we have $\ell_i \to 0$ as $l_{i c_i} \to \infty$. The logit tending to infinity implies that the confidence of the prediction tends to 1. The lack of a minimizer for softmax cross-entropy is in stark contrast with e.g. label smoothing, which penalizes large confidence; see Figure 7. Let us imagine that the network has infinite capacity. If we optimize it for a sufficient amount of time, we would expect $\ell$ to tend to zero, which implies that the logits tend to infinity. This corresponds to the confidence on the training set tending to 100%, which most likely implies overconfidence and poor calibration. We can formalize this observation, but we first need to state some assumptions.
Assumption 1. Let $\{x_i\} \in \mathbb{R}^{n \times d_x}$ be a dataset of $n$ datapoints with $d_x$ features each. Let $\{y_i\} \in \{0,1\}^{n \times c}$ be a one-hot label encoding that assigns each image one out of $c$ classes, where $c$ is a constant. We assume that all datapoints $\{x_i\}$ are distinct, i.e. $x_i \neq x_j$, $\forall i \neq j$.
Under such assumptions, recent results in network expressivity say that one can essentially memorize a training set (Yun et al., 2019) if the width is at least of order $O(\sqrt{n})$. In an idealized setting, where we optimize the function without computational considerations, such expressivity means that the loss function can be optimized towards its infimum 0. This means train-set confidence growing to 100%, likely translating to poor calibration. We formalize this line of argument in Theorem 1.
Theorem 1. Let Assumption 1 hold. Let $f$ be a ReLU network with four or more layers, with width at least $\Omega(\sqrt{n})$ and parameters $w$. Let $\ell$ be equal to the loss function in eq. (2). Then (i) $\min_w \ell(f(w))$ has no global minimum; (ii) the confidence tends to 1 as $\ell \to 0$.
The formal proof is given in Appendix B; we here provide some intuition. To prove (i) and (ii), it suffices that the network can fit the training set with accuracy 1.0. This is typically the condition in practice (Zhang et al., 2016), and whereas we consider an idealized argument without computational costs, the conclusions agree with our experimental results. For the sake of contradiction, let us assume that we are at a global minimum with parameters $w$ and 100% accuracy. Now consider $\ell_i$ when we scale the final layer by $(1+\epsilon)$ for $\epsilon > 0$. The network output is then $(1+\epsilon) l_{ij}$, and $\ell_i$ is $-\log\big(\exp((1+\epsilon) l_{i c_i}) / \sum_j \exp((1+\epsilon) l_{ij})\big) = \log\big(1 + \sum_{j \neq c_i} \exp((1+\epsilon)(l_{ij} - l_{i c_i}))\big)$. The fact that we have perfect train accuracy means that $(l_{ij} - l_{i c_i}) < 0$ for all $j \neq c_i$. Thus, the loss must shrink, as $\sum_{j \neq c_i} \exp((1+\epsilon)(l_{ij} - l_{i c_i}))$ decreases with $\epsilon$ and as $\log$ is monotone. By contradiction, we are not at such a global minimum. The results of Yun et al. (2019) say that we can always find weights which achieve perfect accuracy using $O(\sqrt{n})$ parameters, and thus that there are no issues with fitting the dataset that prevent the loss from tending to 0. Thus, if the dataset is small compared to the number of parameters, we expect overconfidence and poor calibration. This conclusion agrees with the observations of Guo et al. (2017), who show that depth and width increase miscalibration.
Figure 7: Softmax cross-entropy and label smoothing as a function of the logit of the correct class (other logits are zero). Cross-entropy decreases monotonically, resulting in large logits after optimization.
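A small numerical check of the scaling argument in the proof sketch above (illustrative, not from the paper): with perfect training accuracy, multiplying all logits by $(1+\epsilon)$ strictly decreases the cross-entropy while pushing confidence toward 1, so no finite parameter setting can be a global minimum.

```python
import numpy as np

def softmax_xent(logits, correct):
    """Per-example softmax cross-entropy and confidence of the correct class."""
    z = logits - logits.max()          # stabilize the softmax
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[correct]), p[correct]

logits = np.array([2.0, 0.5, -1.0])    # class 0 is correct and already top-scoring
for eps in [0.0, 0.5, 1.0, 4.0]:
    loss, conf = softmax_xent((1 + eps) * logits, correct=0)
    print(f"eps={eps:.1f}  loss={loss:.4f}  confidence={conf:.4f}")
# The loss decreases monotonically and the confidence tends to 1 as eps grows.
```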
5 RELATED WORK
There is much recent work on how datasets influence the behavior of neural networks. Tsipras et al. (2020) show how the process used to collect labels for ImageNet can introduce bias into the resulting dataset. Miller et al. (2020) study how the shift between different datasets can influence the performance of question-answering systems. Recht et al. (2019) construct new test sets for ImageNet and cifar10, and observe differences in generalization compared to the original test sets. Imbalanced datasets are a common issue when applying machine learning in practice (Van Horn et al., 2018; Krishna et al., 2017; Thomee et al., 2016), and researchers often describe the "heavy tail" of class labels (Cui et al., 2019). Traditional work on class imbalance includes Japkowicz & Stephen (2002), which investigates different sampling strategies applicable to most machine learning models. For models of empirical risk minimization, one can instead reweight samples. A relatively recent reweighting scheme is proposed by Cui et al. (2019), where one uses the effective number of samples, which can be calculated from a simple formula. For generating datasets, a common strategy is to employ crowdsourcing, where one lets ordinary people assign labels in a large-scale automated fashion, commonly via Amazon's Mechanical Turk system (Keith et al., 2017). Typical applications of crowdsourcing include analyzing images and providing bounding boxes (Patterson & Hays, 2012; Su et al., 2012), providing linguistic annotations for natural language (Callison-Burch & Dredze, 2010), or evaluating the relevance of search engine results (Alonso, 2013). Another application is machine learning debugging (Ribeiro et al., 2016). The idea of eliciting and aggregating crowdsourced labels efficiently has inspired much algorithmic work (Khetan & Oh, 2016; Zhang et al., 2014). Common issues include finding tasks that result in high-quality labels, dealing with inconsistent labels (Karger et al., 2011; Zheng et al., 2017) and heterogeneous workers (Ho et al., 2013). Calibration in machine learning has been studied for a long time (Zadrozny & Elkan, 2001; Naeini et al., 2015) due to its practical implications. For neural networks, Caruana et al. (2015) demonstrated that shallow neural networks can yield well-calibrated predictions on classification tasks.
In contrast, Guo et al. (2017) show how modern neural networks are ill-calibrated, with width and depth resulting in worse calibration scores, and investigate mitigation strategies. Neural network calibration has implications in NLP (Ott et al., 2018), fairness (Pleiss et al., 2017) and reinforcement learning (Kuleshov et al., 2018). For applications such as medicine (Miner et al., 2020), meteorology (Ren et al., 2015) and autonomous vehicles (Bojarski et al., 2016) it can be important for performance. Reliable uncertainty estimates also allow one to integrate DNNs with other probabilistic models, incorporating e.g. camera information (Kendall & Cipolla, 2015).
6 CONCLUSIONS
We have investigated the effects that datasets can have on network calibration. By generating label noise and class imbalance synthetically, we show how calibration error increases with label noise and few samples. We also study how calibration changes with dataset size. Our work points towards the importance of high-quality dataset curation for generating well-calibrated predictions, and highlights issues that are relevant in high-stakes applications such as autonomous vehicles and medical applications. These calibration issues can potentially be mitigated both at dataset curation time and training time; we defer such studies to future work. A practical takeaway from this work is that for sensitive applications, one should evaluate calibration when collecting datasets.<|im_end|> <|im_start|>assistant ### Review Title If we were reporting accuracy in the paper experiments, how different would the conclusions be? ### Review Text The paper is an empirical study looking at how different dataset properties affect model calibration in the context of vision tasks. All experiments use a specific well-known vision model (ResNet 50). In particular, the dataset properties that are investigated are: - Balanced/Unbalanced classes. - Label quality. - Dataset size. - Augmentations. - NLP. I briefly present the main conclusions below. - Balance in classes. Oftentimes some classes have way more datapoints than others. The authors look at four datasets (Cifar 10, Cifar 100, Eurosat, iNaturalist). The last one's classes are unbalanced, whereas the first three require some sampling method to (artificially) make them unbalanced (note that in this case, by design, there is no relationship between balance/imbalance and the class properties). Figure 1 shows the results. For Cifar and Eurosat, those classes with more examples are better calibrated. The trend is somewhat similar for iNaturalist. Then, the authors present a number of approaches people have tried in the past to mitigate the consequences of imbalance in the data. They repeat the previous experiment (on Cifar 10, Cifar 100, Eurosat) but, this time, using each of those methods while training the model. Table 1 shows the results. The ratio column offers very mixed results depending on the dataset and method. The authors conclude that overall the imbalance in calibration persists in most cases. Q. How do these results compare to accuracy? One would also expect to do better on classes with more data. - Label quality. The authors tackle the question of how label noise affects calibration. In order to do that, they artificially inject noise into the "true" labels with increasing probability. Figure 2 summarizes the calibration error for a number of datasets and noise levels. The pattern is clear: the more noise, the worse the calibration.
Importantly, the calibration is measured on a test set that is not perturbed with random noise. Accordingly, the results were to be expected: there's a mismatch between the training and test distributions, and the further apart they are, the less "meaningful" the predicted probabilities one should expect. Again, it would be informative to see how the *accuracy* of the model also degrades under these circumstances. Similarly, Figure 3 shows the effect of non-uniform noise across classes. Those classes "attacked" with more noise are worse calibrated. - Dataset size. Another important practical aspect to study is dataset size. The authors subsample uniformly at random a fraction of the data points, and measure ECE. Figure 4 shows how models trained on more data are better calibrated. Again, the accuracy of the model should also be shown for context. - Augmentations. It is common to use data augmentation to train better models; augmentations make the effective dataset size larger. Figure 5 shows how removing augmentation axes leads to worse calibration. The same probably applies to accuracy (that's the reason why people use this!). This result is probably intimately related to the previous point (dataset size). - NLP. The conclusions regarding dataset size also hold with a Transformer on an NLP dataset. Finally, Section 4 provides some theoretical explanation. We can summarize this as: the cross-entropy loss wants to have more and more confidence / probability on the right class for a given example, and when the data is small and the model powerful enough, we can basically memorize it to make cross-entropy happy. This, however, leads to overconfidence and poor calibration. On one hand, it's recently becoming clear that ECE is not a very robust estimator. Depending on design choices (such as the number of bins, argmax vs. all classes, adaptive versus fixed bins, etc.), the ranking among models and the conclusions can change wildly [1]. On the other, this study fixes a specific model, so one could say that the conclusions are "shown" for the (dataset, model) pairs. Still, I believe the conclusions are true in a more general setting, and the model is fairly reasonable. However, while the paper is titled "Dataset Curation Beyond Accuracy", I do not see how the outcome and conclusions of all these experiments would be different if we were looking at accuracy rather than calibration. The authors should measure, include, and address this, and try to disentangle both aspects, or argue for any correlation / causation relationship between them. [1] - Measuring Calibration in Deep Learning - https://arxiv.org/abs/1904.01685 ### Review Rating 4: Ok but not good enough - rejection ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
ByUEelW0-
ICLR.cc/2018/Conference
2018
Modifying memories in a Recurrent Neural Network Unit
["Vlad Velici", "Adam Pr\u00fcgel-Bennett"]
Long Short-Term Memory (LSTM) units have the ability to memorise and use long-term dependencies between inputs to generate predictions on time series data. We introduce the concept of modifying the cell state (memory) of LSTMs using rotation matrices parametrised by a new set of trainable weights. This addition shows significant increases in performance on some of the tasks from the bAbI dataset.
["LSTM", "RNN", "rotation matrix", "long-term memory", "natural language processing"]
ABSTRACT
Long Short-Term Memory (LSTM) units have the ability to memorise and use long-term dependencies between inputs to generate predictions on time series data. We introduce the concept of modifying the cell state (memory) of LSTMs using rotation matrices parametrised by a new set of trainable weights. This addition shows significant increases in performance on some of the tasks from the bAbI dataset.
1 INTRODUCTION
In recent years, Recurrent Neural Networks (RNNs) have been successfully used to tackle problems with data that can be represented in the shape of time series. Application domains include Natural Language Processing (NLP) (translation (Rosca & Breuel, 2016), summarisation (Nallapati et al., 2016), question answering and more), speech recognition (Hannun et al., 2014; Graves et al., 2013), text-to-speech systems (Arik et al., 2017), computer vision tasks (Stewart et al., 2016; Wu et al., 2017), and differentiable programming language interpreters (Riedel et al., 2016; Rocktäschel & Riedel, 2017). An intuitive explanation for the success of RNNs in fields such as natural language understanding is that they allow words at the beginning of a sentence or paragraph to be memorised. This can be crucial to understanding the semantic content. Thus in the phrase "The cat ate the fish" it is important to memorise the subject (cat). However, often later words can change the meaning of a sentence in subtle ways. For example, "The cat ate the fish, didn't it" changes a simple statement into a question. In this paper, we study a mechanism to enhance a standard RNN to enable it to modify its memory, with the hope that this will allow it to capture in the memory cells sequence information using a shorter and more robust representation. One of the most used RNN units is the Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997). The core of the LSTM is that each unit has a cell state that is modified in a gated fashion at every time step. At a high level, the cell state has the role of providing the neural network with memory to hold long-term relationships between inputs. There are many small variations of LSTM units in the literature and most of them yield similar performance (Greff et al., 2017). The memory (cell state) is expected to encode information necessary to make the next prediction. Currently the ability of LSTMs to rotate and swap memory positions is limited to what can be achieved using the available gates. In this work we introduce a new operation on the memory that explicitly enables rotations and swaps of pairwise memory elements. Our preliminary tests show performance improvements on some of the bAbI tasks (Weston et al., 2015) compared with LSTM-based architectures.
2 THE ROTATION GATE
In this section we introduce the idea of adding a new set of parameters for the RNN cell that enable rotation of the cell state. The following subsection shows how this is implemented in the LSTM unit. One of the key innovations of LSTMs was the introduction of gated modified states, so that if the gate neuron $i$ is saturated then the memory $c_i(t-1)$ would be unaltered. That is, $c_i(t-1) \approx c_i(t)$ with high accuracy. The fact that the amplification factor is very close to 1 prevents the memory vanishing or exploding over many epochs. To modify the memory, but retain an amplification factor of 1, we take the output after applying the forget and add gates (we call it $d_t$), and apply a rotation matrix $U$ to obtain a modified memory $c_t = U d_t$.
Note that, for a rotation matrix, $U^T U = I$, so that $\|d_t\| = \|c_t\|$. We parametrise the rotation by a vector of angles
$u = 2\pi\,\sigma(W_{rot} x + b_{rot})$,  (1)
where $W_{rot}$ is a weight matrix and $b_{rot}$ is a bias vector which we learn along with the other parameters, and $x$ is the vector of our concatenated inputs (in LSTMs given by concatenating the input for the current timestep with the output from the previous timestep). A full rotation matrix is parametrisable by $n(n-1)/2$ parameters (angles). Using all of these would introduce a huge number of weights, which is likely to overfit. Instead, we have limited ourselves to considering rotations between pairs of inputs $d_i(t)$ and $d_{i+1}(t)$. Exploring more powerful sets of rotations is currently being investigated. Our rotation matrix is a block-diagonal matrix of 2D rotations
$U(u) = \mathrm{diag}\big(R(u_1), \dots, R(u_{n/2})\big)$, with $R(u_k) = \begin{pmatrix} \cos u_k & -\sin u_k \\ \sin u_k & \cos u_k \end{pmatrix}$,  (2)
where the cell state is of size $n$. Our choice of rotations only needs $n/2$ angles.
2.1 ROTLSTM
In this section we show how to add memory rotation to the LSTM unit. The rotation is applied after the forget and add gates and before using the current cell state to produce an output. The RotLSTM equations are as follows:
$x = [h_{t-1}; x_t]$,  (3)
$f_t = \sigma(W_f x + b_f)$,  (4)
$i_t = \sigma(W_i x + b_i)$,  (5)
$o_t = \sigma(W_o x + b_o)$,  (6)
$u_t = 2\pi\,\sigma(W_{rot} x + b_{rot})$,  (7)
$d_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x + b_c)$,  (8)
$c_t = U(u_t)\, d_t$,  (9)
$h_t = o_t \odot \tanh(c_t)$,  (10)
where $W_{\{f,i,o,rot,c\}}$ are weight matrices, $b_{\{f,i,o,rot,c\}}$ are biases (Ws and bs learned during training), $h_{t-1}$ is the previous cell output, $h_t$ is the output the cell produces for the current timestep, similarly $c_{t-1}$ and $c_t$ are the cell states for the previous and current timestep, $\odot$ is element-wise multiplication and $[\cdot;\cdot]$ is concatenation. $U$ is as defined in Equation 2, parametrised by $u_t$. Figure 1 shows a RotLSTM unit in detail. Assuming cell state size $n$ and input size $m$, the RotLSTM has $n(n+m)/2$ extra parameters, a 12.5% increase (ignoring biases). Our expectation is that we can decrease $n$ without harming performance and the rotations will enforce a better representation for the cell state.
3 EXPERIMENTS AND RESULTS
To empirically evaluate the performance of adding the rotation gate to LSTMs we use the toy NLP dataset bAbI with 1000 samples per task. The bAbI dataset is composed of 20 different tasks of various difficulties, starting from easy questions based on a single supporting fact (for example: John is in the kitchen. Where is John? A: Kitchen) and going to more difficult tasks of reasoning about size (example: The football fits in the suitcase. The box is smaller than the football. Will the box fit in the suitcase? A: yes) and path finding (example: The bathroom is south of the office. The bathroom is north of the hallway. How do you go from the hallway to the office? A: north, north). A summary of all tasks is available in Table 2. We are interested in evaluating the behaviour and performance of rotations on RNN units rather than beating the state of the art.
Figure 1: RotLSTM diagram. x is the concatenation of $h_{t-1}$ and $x_t$ in the diagram (green and blue lines). Note that this differs from a regular LSTM by the introduction of the network producing the angles $u_t$ and the rotation module marked U. In the diagram the input size is 4 and the cell state size is 3.
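A minimal NumPy sketch of the block-diagonal rotation of eqs. (2), (7) and (9), applied to a cell state of even size n (illustrative code, not the authors' implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rotate_cell_state(d, x, W_rot, b_rot):
    """Apply pairwise 2D rotations to d (eq. 9) with angles u = 2*pi*sigmoid(W_rot x + b_rot)."""
    u = 2 * np.pi * sigmoid(W_rot @ x + b_rot)       # n/2 angles, eq. (7)
    pairs = d.reshape(-1, 2)                          # (d_1, d_2), (d_3, d_4), ...
    cos_u, sin_u = np.cos(u), np.sin(u)
    rotated = np.stack([cos_u * pairs[:, 0] - sin_u * pairs[:, 1],
                        sin_u * pairs[:, 0] + cos_u * pairs[:, 1]], axis=1)
    c = rotated.reshape(-1)
    assert np.isclose(np.linalg.norm(c), np.linalg.norm(d))  # norm preserved: U^T U = I
    return c
```

Here `d` has size n, `x` is the concatenated input of size m, and `W_rot` has shape (n/2, m); the assert makes the amplification-factor-of-1 property explicit.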
We compare a model based on RotLSTM with the same model based on the traditional LSTM. All models are trained with the same hyperparameters and we do not perform any hyperparameter tuning apart from using the sensible defaults provided in the Keras library and example code (Chollet et al., 2015). For the first experiment we train LSTM- and RotLSTM-based models 10 times using a fixed cell state size of 50. In the second experiment we train the same models but vary the cell state size from 6 to 50 to assess whether the rotations help our models achieve good performance with smaller state sizes. We only choose even numbers for the cell state size to make all units go through rotations.
3.1 THE MODEL ARCHITECTURE
The model architecture, illustrated in Figure 2, is based on the Keras example implementation (available at https://goo.gl/9wfzr5). This model architecture, empirically, shows better performance than the LSTM baseline published in Weston et al. (2015). The input question and sentences are passed through a word embedding layer (not shared; the embeddings are different for questions and sentences). The question is fed into an RNN which produces a representation of the question. This representation is concatenated to every word vector from the story, which is then used as input to the second RNN. Intuitively, this helps the second RNN (Query) to focus on the important words to answer the question. The output of the second RNN is passed to a fully connected layer with a softmax activation of the size of the dictionary. The answer is the word with the highest activation.
Figure 2: Model architecture for the bAbI dataset. The RNN is either LSTM or RotLSTM.
Table 1: Performance comparison on the bAbI dataset. Values are % average accuracy on the test set ± standard deviation taken from training each model 10 times. For each of the trained models the test accuracy is taken for the epoch with the best validation accuracy (picked from epoch numbers 1, 11, 21, 31, 40 since we only evaluated on the test set for those).
Task:       1          2          3          4          5          6          7           8          9          10
LSTM     49.8±3.3   26.5±3.8   21.1±1.0   63.7±9.4   33.6±6.2   49.2±0.8   62.7±15.5   68.8±8.3   63.9±0.2   45.2±2.0
RotLSTM  52.3±1.1   27.1±1.3   22.4±1.3   65.8±3.6   55.7±5.5   50.1±1.9   76.7±3.0    66.1±6.2   61.5±1.9   48.1±1.1
Task:       11         12         13         14         15         16         17          18         19         20
LSTM     73.6±2.2   74.3±1.7   94.4±0.2   20.7±3.3   21.4±0.5   48.0±2.2   48.0±0.0    70.3±20.8  8.5±0.6    87.7±4.2
RotLSTM  73.9±1.2   76.5±1.1   94.4±0.0   19.9±2.0   28.8±8.8   46.7±2.0   54.2±3.1    90.5±0.9   8.9±1.1    89.9±2.4
The categorical cross-entropy loss function was used for training. All dropout layers are dropping 30% of the nodes. The train-validation dataset split used was 95%-5%. The optimizer used was Adam with learning rate 0.001, no decay, $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$. The training set was randomly shuffled before every epoch. All models were trained for 40 epochs. After every epoch the model performance was evaluated on the validation and training sets, and every 10 epochs on the test set. We set the random seeds to the same number for reproducibility and ran the experiments 10 times with 10 different random seeds. The source code is available at https://goo.gl/Eopz2C (a GitHub repository will be made available after the double-blind review period ends).
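A minimal Keras-style sketch of the architecture described in Section 3.1 above (question RNN, concatenation with story word vectors, second RNN, softmax over the dictionary); all sizes here are illustrative assumptions, not the authors' exact configuration:

```python
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, embed_dim, state_size = 40, 50, 50   # illustrative sizes
story_len, query_len = 68, 4                      # illustrative sequence lengths

story_in = keras.Input(shape=(story_len,))
query_in = keras.Input(shape=(query_len,))

story_emb = layers.Embedding(vocab_size, embed_dim)(story_in)   # embeddings not shared
query_emb = layers.Embedding(vocab_size, embed_dim)(query_in)

query_repr = layers.LSTM(state_size)(query_emb)                  # question representation
query_tiled = layers.RepeatVector(story_len)(query_repr)         # attach it to every story word
merged = layers.concatenate([story_emb, query_tiled])

answer = layers.LSTM(state_size)(merged)                         # second (Query) RNN
answer = layers.Dropout(0.3)(answer)
out = layers.Dense(vocab_size, activation="softmax")(answer)

model = keras.Model([story_in, query_in], out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

Swapping `layers.LSTM` for a custom RotLSTM cell (implementing eqs. 3-10) is the only change needed to obtain the rotated variant.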
3.2 RESULTS
In this subsection we compare the performance of models based on the LSTM and RotLSTM units on the bAbI dataset. Applying rotations on the unit memory of the LSTM cell gives a slight improvement in performance overall, and significant improvements on specific tasks. Results are shown in Table 1. The most significant improvements are faster convergence, as shown in Figure 3, and requiring smaller state sizes, illustrated in Figure 4. On tasks 1 (basic factoid), 11 (basic coreference), 12 (conjunction) and 13 (compound coreference) the RotLSTM model consistently reaches top performance a couple of epochs before the LSTM model. The RotLSTM model also needs a smaller cell state size, reaching top performance at state size 10 to 20 where the LSTM needs 20 to 30. The top performance is, however, similar for both models, with RotLSTM improving the accuracy by up to 2.5%. The effect is observed on task 18 (reasoning about size) at a greater magnitude, where the RotLSTM reaches top performance before epoch 20, after which it plateaus, while the LSTM model takes 40 epochs to fit the data. The training is more stable for RotLSTM and the final accuracy is improved by 20%. The RotLSTM reaches top performance using cell state size 10 while the LSTM needs size 40. A similar performance increase for the RotLSTM (22.1%) is observed on task 5 (three argument relations), reaching top performance around epoch 25 and using a cell state of size 50. Task 7 (counting) shows a similar behaviour with an accuracy increase of 14% for RotLSTM. Tasks 4 (two argument relations) and 20 (agent motivation) show quicker learning (better performance in the early epochs) for the RotLSTM model, but both models reach their top performance after the same amount of training. On task 20 the RotLSTM reaches top accuracy using state size 10 while the LSTM improves incrementally until using state size 40 to 50. Signs of overfitting for the RotLSTM model can be observed more prominently than for the LSTM model on tasks 15 (basic deduction) and 17 (positional reasoning). Our models, both LSTM and RotLSTM, perform poorly on tasks 2 and 3 (factoid questions with 2 and 3 supporting facts, respectively) and 14 (time manipulation). These problem classes are solved very well using models that look over the input data many times and use an attention mechanism that allows the model to focus on the relevant input sentences to answer a question (Sukhbaatar et al., 2015; Kumar et al., 2016). Our models only look at the input data once and we do not filter out irrelevant information.
4 DISCUSSION AND FUTURE WORK
A limitation of the models in our experiments is only applying pairwise 2D rotations. Representations of past input can be larger groups of the cell state vector, thus 2D rotations might not fully exploit the benefits of transformations. In the future we hope to explore rotating groups of elements and multi-dimensional rotations. Rotating groups of elements of the cell state could potentially also force the models to learn a more structured representation of the world, similar to how forcing a model to learn specific representations of scenes, as presented in Higgins et al. (2017), yields semantic representations of the scene. Rotations also need not be fully flexible.
Introducing hard constraints on the rotations and on which groups of parameters can be rotated might lead the model to learn richer memory representations. Future work could explore how adding such constraints impacts learning times and final performance on different datasets, but also look at which constraints can qualitatively improve the representation of long-term dependencies. In this work we presented preliminary tests for adding rotations to simple models, but we only used a toy dataset. The bAbI dataset has certain advantages, such as being small (thus easy to train many models on a single machine), not having noise (as it is generated from a simulation), and having a wide range of tasks of various difficulties. However, it is a toy dataset that has a very limited vocabulary and lacks the complexity of real-world datasets (noise, inconsistencies, larger vocabularies, more complex language constructs, and so on). Another limitation of our evaluation is only using text, specifically question answering. To fully evaluate the idea of adding rotations to memory cells, in the future, we aim to look into incorporating our rotations in different domains and tasks including speech-to-text, translation, language generation, stock prices, and other common problems using real-world datasets. Tuning the hyperparameters of the rotation models might give better insights and performance increases and is something we aim to incorporate in our training pipeline in the future. A brief exploration of the angles produced by $u$ and the weight matrix $W_{rot}$ shows that $u$ does not saturate; thus rotations are in fact applied to our cell states and do not converge to 0 (or 360 degrees). A more in-depth qualitative analysis of the rotation gate is planned for future work. Peeking into the activations of our rotation gates could help understand the behaviour of rotations and to what extent they help better represent long-term memory. A very successful and popular mutation of the LSTM is the Gated Recurrent Unit (GRU) (Cho et al., 2014). The GRU only has an output, as opposed to both a cell state and an output, and uses fewer gates. In the future we hope to explore adding rotations to GRU units and whether we can obtain similar results.
5 CONCLUSION
We have introduced a novel gating mechanism for RNN units that enables applying a parametrised transformation matrix to the cell state. We picked pairwise 2D rotations as the transformation and have shown how this can be added to the popular LSTM units to create what we call RotLSTM.
Figure 3: Accuracy comparison on training, validation (val) and test sets over 40 epochs for LSTM and RotLSTM models. The models were trained 10 times; shown is the average accuracy, with the standard deviation in faded colour. Test set accuracy was computed every 10 epochs.
We trained a simple model using RotLSTM units and compared it with the same model based on LSTM units.
We show that for the LSTM-based architectures adding rotations has a positive impact on most bAbI tasks, making the training require fewer epochs to achieve similar or higher accuracy. On some tasks the RotLSTM model can use a lower-dimensional cell state vector and maintain its performance. Significant accuracy improvements of approximately 20% for the RotLSTM model over the LSTM model are visible on bAbI tasks 5 (three argument relations) and 18 (reasoning about size).
Figure 4: Accuracy on the test set for the LSTM and RotLSTM while varying the cell state size from 6 to 50. The shown numbers are for the epochs with the best validation set accuracy.
SkYqiWteM
Insufficient Justification and Comparison
4: Ok but not good enough - rejection
The paper proposes to add a rotation operation in long short-term memory (LSTM) cells. It performs experiments on bAbI tasks and shows that the results are better than simple baselines with original LSTM cells. There are a few problems with the paper. Firstly, the title and abstract discuss "modifying memories", but the content is only about a rotation operation. Perhaps the title should be "Rotation Operation in Long Short-Term Memory"? Secondly, the motivation for adding the rotation operation is not properly justified. What does it do that a usual LSTM cell could not learn? Does it reduce excess representational power relative to the LSTM cell in a way that could result in better models? Or does it increase the representational capacity so that some pattern is modeled in the new cell structure that was not possible before? This is not clear at all after reading the paper. Besides, the idea of using a rotation operation in recurrent networks has been explored before [3]. Finally, the task (bAbI) and baseline models (LSTM from a Keras tutorial) are too weak. There have been recent works that nearly solved the bAbI tasks to perfection (e.g., [1][2][4][5], and many others). The paper presented a solution that is weak compared to these recent results. In summary, the main idea of adding rotation to LSTM cells is not properly justified in the paper, and the results presented are quite weak for publication in ICLR 2018. [1] Sainbayar Sukhbaatar, Jason Weston, Rob Fergus. End-to-end memory networks, NIPS 2015 [2] Caiming Xiong, Stephen Merity, Richard Socher. Dynamic Memory Networks for Visual and Textual Question Answering, ICML 2016 [3] Mikael Henaff, Arthur Szlam, Yann LeCun, Recurrent Orthogonal Networks and Long-Memory Tasks, ICML 2016 [4] Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, Yoshua Bengio, Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes, ICLR 2017 [5] Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun, Tracking the World State with Recurrent Entity Networks, ICLR 2017
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Modifying memories in a Recurrent Neural Network Unit ### Paper Abstract Long Short-Term Memory (LSTM) units have the ability to memorise and use long-term dependencies between inputs to generate predictions on time series data. We introduce the concept of modifying the cell state (memory) of LSTMs using rotation matrices parametrised by a new set of trainable weights. This addition shows significant increases in performance on some of the tasks from the bAbI dataset. ### Paper Keywords ["LSTM", "RNN", "rotation matrix", "long-term memory", "natural language processing"] ### Paper Content ABSTRACT
Long Short-Term Memory (LSTM) units have the ability to memorise and use long-term dependencies between inputs to generate predictions on time series data. We introduce the concept of modifying the cell state (memory) of LSTMs using rotation matrices parametrised by a new set of trainable weights. This addition shows significant increases in performance on some of the tasks from the bAbI dataset.
1 INTRODUCTION
In recent years, Recurrent Neural Networks (RNNs) have been successfully used to tackle problems with data that can be represented in the shape of time series. Application domains include Natural Language Processing (NLP) (translation (Rosca & Breuel, 2016), summarisation (Nallapati et al., 2016), question answering and more), speech recognition (Hannun et al., 2014; Graves et al., 2013), text-to-speech systems (Arik et al., 2017), computer vision tasks (Stewart et al., 2016; Wu et al., 2017), and differentiable programming language interpreters (Riedel et al., 2016; Rocktäschel & Riedel, 2017). An intuitive explanation for the success of RNNs in fields such as natural language understanding is that they allow words at the beginning of a sentence or paragraph to be memorised. This can be crucial to understanding the semantic content. Thus in the phrase "The cat ate the fish" it is important to memorise the subject (cat). However, often later words can change the meaning of a sentence in subtle ways. For example, "The cat ate the fish, didn't it" changes a simple statement into a question. In this paper, we study a mechanism to enhance a standard RNN to enable it to modify its memory, with the hope that this will allow it to capture in the memory cells sequence information using a shorter and more robust representation. One of the most used RNN units is the Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997). The core of the LSTM is that each unit has a cell state that is modified in a gated fashion at every time step. At a high level, the cell state has the role of providing the neural network with memory to hold long-term relationships between inputs. There are many small variations of LSTM units in the literature and most of them yield similar performance (Greff et al., 2017). The memory (cell state) is expected to encode information necessary to make the next prediction. Currently the ability of LSTMs to rotate and swap memory positions is limited to what can be achieved using the available gates. In this work we introduce a new operation on the memory that explicitly enables rotations and swaps of pairwise memory elements.
Our preliminary tests showperformance improvements on some of the bAbI tasks (Weston et al., 2015) compared with LSTMbased architectures.2 T HE ROTATION GATEIn this section we introduce the idea of adding a new set of parameters for the RNN cell that enablerotation of the cell state. The following subsection shows how this is implemented in the LSTMunit.One of the key innovations of LSTMs was the introduction of gated modified states so that if thegate neuron iis saturated then the memory ci(t1)would be unaltered. That is, ci(t1)ci(t)1Under review as a conference paper at ICLR 2018with high accuracy. The fact that the amplification factor is very close to 1 prevents the memoryvanishing or exploding over many epochs.To modify the memory, but retain an amplification factor of 1 we take the output after appling theforget and add gates (we call it dt), and apply a rotation matrix Uto obtain a modified memoryct=Udt. Note that, for a rotation matrix UTU=Iso thatkdtk=kctk.We parametrise the rotation by a vector of anglesu= 2(Wrotx+brot); (1)whereWrotis a weight matrix and brotis a bias vector which we learn along with the other parame-ters.xis the vector of our concatenated inputs (in LSTMs given by concatenating the input for thecurrent timestep with the output from the previous time step).A full rotation matrix is parametrisable by n(n1)=2parameters (angles). Using all of these wouldintroduce a huge number of weights, which is likely to over-fit. Instead, we have limited ourselvesto considering rotations between pairs of inputs di(t)anddi+1(t). Exploring more powerful sets ofrotations is currently being investigated.Our rotation matrix is a block-diagonal matrix of 2D rotationsU(u) =266664cosu1sinu1sinu1 cosu1...cosun=2sinun=2sinun=2cosun=2377775; (2)where the cell state is of size n. Our choice of rotations only needs n=2angles.2.1 R OTLSTMIn this section we show how to add memory rotation to the LSTM unit. The rotation is applied afterthe forget and add gates and before using the current cell state to produce an output.The RotLSTM equations are as follows:x= [ht1;xt]; (3)ft=(Wfx+bf); (4)it=(Wix+bi); (5)ot=(Wox+bo); (6)ut= 2(Wrotx+brot); (7)dt=ftct1+ittanh(Wcx+bc); (8)ct=U(ut)dt; (9)ht=ottanh( ct); (10)where Wff;i;o;rot;cgare weight matrices, bff;i;o;rot;cgare biases ( Ws and bs learned during training),ht1is the previous cell output, htis the output the cell produces for the current timestep, similarlyct1andctare the cell states for the previous and current timestep, is element-wise multiplicationand[;]is concatenation. Uas defined in Equation 2, parametrised by ut. Figure 1 shows aRotLSTM unit in detail.Assuming cell state size n, input sizem, the RotLSTM has n(n+m)=2extra parameters, a 12.5%increase (ignoring biases). Our expectation is that we can decrease nwithout harming performanceand the rotations will enforce a better representation for the cell state.3 E XPERIMENTS AND RESULTSTo empirically evaluate the performance of adding the rotation gate to LSTMs we use the toy NLPdataset bAbI with 1000 samples per task. The bAbI dataset is composed of 20 different tasks ofvarious difficulties, starting from easy questions based on a single supporting fact (for example:2Under review as a conference paper at ICLR 2018URotLSTMct−1/braceleftBig /bracerightBigctht−1/braceleftBig x/braceleftbiggxtftσ⊗itσ tanh⊗⊕dtut2πσotσ/bracerightBight⊗tanhFigure 1: RotLSTM diagram. xis the concatenation of ht1andxtin the diagram (green and bluelines). 
Note that this differs from a regular LSTM by the introduction of the network producingangles utand the rotation module marked U. In the diagram input size is 4 and cell state size is 3.John is in the kitchen. Where is John? A: Kitchen ) and going to more difficult tasks of reasoningabout size (example: The football fits in the suitcase. The box is smaller than the football. Will thebox fit in the suitcase? A: yes ) and path finding (example: The bathroom is south of the office. Thebathroom is north of the hallway. How do you go from the hallway to the office? A: north, north ).A summary of all tasks is available in Table 2. We are interested in evaluating the behaviour andperformance of rotations on RNN units rather than beating state of the art.We compare a model based on RotLSTM with the same model based on the traditional LSTM. Allmodels are trained with the same hyperparameters and we do not perform any hyperparameter tuningapart from using the sensible defaults provided in the Keras library and example code (Chollet et al.,2015).For the first experiment we train a LSTM and RotLSTM based model 10 times using a fixed cellstate size of 50. In the second experiment we train the same models but vary the cell state size from6 to 50 to assess whether the rotations help our models achieve good performance with smaller statesizes. We only choose even numbers for the cell state size to make all units go through rotations.3.1 T HE MODEL ARCHITECTUREThe model architecture, illustrated in Figure 2, is based on the Keras example implementation1.This model architecture, empirically, shows better performance than the LSTM baseline publishedin Weston et al. (2015). The input question and sentences are passed thorugh a word embeddinglayer (not shared, embeddings are different for questions and sentences). The question is fed intoan RNN which produces a representation of the question. This representation is concatenated toevery word vector from the story, which is then used as input to the second RNN. Intuitively, thishelps the second RNN (Query) to focus on the important words to answer the question. The outputof the second RNN is passed to a fully connected layer with a softmax activation of the size of thedictionary. The answer is the word with the highest activation.1Available at https://goo.gl/9wfzr5 .3Under review as a conference paper at ICLR 2018Figure 2: Model architecture for the bAbI dataset. The RNN is either LSTM or RotLSTM.Table 1: Performance comparison on the bAbI dataset. Values are % average accuracy on test setstandard deviation taken from training each model 10 times. For each of the trained models thetest accuracy is taken for the epoch with the best validation accuracy (picked from epoch numbers1, 11, 21, 31, 40 since we only evaluated on the test set for those).Task: 1 2 3 4 5 6 7 8 9 10LSTM 49.83.3 26.53.8 21.11.0 63.79.4 33.66.2 49.20.8 62.715.5 68.88.3 63.90.2 45.22.0RotLSTM 52.3 1.1 27.11.3 22.41.3 65.83.6 55.75.5 50.11.9 76.73.0 66.16.2 61.51.9 48.11.1Task: 11 12 13 14 15 16 17 18 19 20LSTM 73.62.2 74.31.7 94.40.2 20.73.3 21.40.5 48.02.2 48.00.0 70.320.8 8.50.6 87.74.2RotLSTM 73.9 1.2 76.51.1 94.40.0 19.92.0 28.88.8 46.72.0 54.23.1 90.50.9 8.91.1 89.92.4The categorical cross-entropy loss function was used for training. All dropout layers are dropping30% of the nodes. The train-validation dataset split used was 95%-5%. The optimizer used wasAdam with learning rate 0.001, no decay, 1= 0:9,2= 0:999,= 108. The training set wasrandomly shuffled before every epoch. 
All models were trained for 40 epochs. After every epochthe model performance was evaluated on the validation and training sets, and every 10 epochs onthe test set. We set the random seeds to the same number for reproducibility and ran the experiments10 times with 10 different random seeds. The source code is available at https://goo.gl/Eopz2C2.3.2 R ESULTSIn this subsection we compare the the performance of models based on the LSTM and RotLSTMunits on the bAbI dataset.Applying rotations on the unit memory of the LSTM cell gives a slight improvement in performanceoverall, and significant improvements on specific tasks. Results are shown in Table 1. The mostsignificant improvements are faster convergence, as shown in Figure 3, and requiring smaller statesizes, illustrated in Figure 4.On tasks 1 (basic factoid), 11 (basic coreference), 12 (conjunction) and 13 (compound coreference)the RotLSTM model reaches top performance a couple of epochs before the LSTM model consis-tently. The RotLSTM model also needs a smaller cell state size, reaching top performance at statesize 10 to 20 where the LSTM needs 20 to 30. The top performance is, however, similar for bothmodels, with RotLSTM improving the accuracy with up to 2.5%.The effect is observed on task 18 (reasoning about size) at a greater magnitude where the RotLSTMreaches top performance before epoch 20, after which it plateaus, while the LSTM model takes 40epochs to fit the data. The training is more stable for RotLSTM and the final accuracy is improvedby 20%. The RotLSTM reaches top performance using cell state 10 and the LSTM needs size40. Similar performance increase for the RotLSTM (22.1%) is observed in task 5 (three argumentrelations), reaching top performance around epoch 25 and using a cell state of 50. Task 7 (counting)shows a similar behaviour with an accuracy increase of 14% for RotLSTM.Tasks 4 (two argument relations) and 20 (agent motivation) show quicker learning (better perfor-mance in the early epochs) for the RotLSTM model but both models reach their top performance2A GitHub repository will be made available after the double-blind review period ends.4Under review as a conference paper at ICLR 2018after the same amount of traning. On task 20 the RotLSTM performance reaches top accuracy usingstate size 10 while the LSTM incremetally improves until using state size 40 to 50.Signs of overfitting for the RotLSTM model can be observed more prominently than for the LSTMmodel on tasks 15 (basic deduction) and 17 (positional reasoning).Our models, both LSTM and RotLSTM, perform poorly on tasks 2 and 3 (factoid questions with 2and 3 supporting facts, respectively) and 14 (time manipulation). These problem classes are solvedvery well using models that look over the input data many times and use an attention mechanism thatallows the model to focus on the relevant input sentences to answer a question (Sukhbaatar et al.,2015; Kumar et al., 2016). Our models only look at the input data once and we do not filter outirrelevant information.4 D ISCUSSION AND FUTURE WORKA limitation of the models in our experiments is only applying pairwise 2D rotations. Represen-tations of past input can be larger groups of the cell state vector, thus 2D rotations might not fullyexploit the benefits of transformations. In the future we hope to explore rotating groups of elementsand multi-dimensional rotations. 
Rotating groups of elements of the cell state could potentiallyalso force the models to learn a more structured representation of the world, similar to how forc-ing a model to learn specific representations of scenes, as presented in Higgins et al. (2017), yieldssemantic representations of the scene.Rotations also need not be fully flexible. Introducing hard constraints on the rotations and whatgroups of parameters can be rotated might lead the model to learn richer memory representations.Future work could explore how adding such constraints impacts learning times and final performanceon different datasets, but also look at what constraints can qualitatively improve the representationof long-term dependencies.In this work we presented prelimiary tests for adding rotations to simple models but we only useda toy dataset. The bAbI dataset has certain advantages such as being small thus easy to train manymodels on a single machine, not having noise as it is generated from a simulation, and having a widerange of tasks of various difficulties. However it is a toy dataset that has a very limited vocabularyand lacks the complexity of real world datasets (noise, inconsistencies, larger vocabularies, morecomplex language constructs, and so on). Another limitation of our evaluation is only using text,specifically question answering. To fully evaluate the idea of adding rotations to memory cells, inthe future, we aim to look into incorporating our rotations on different domains and tasks includingspeech to text, translation, language generation, stock prices, and other common problems using realworld datasets.Tuning the hyperparameters of the rotation models might give better insights and performance in-creases and is something we aim to incorporate in our training pipeline in the future.A brief exploration of the angles produced by uand the weight matrix Wrotshow that udoes notsaturate, thus rotations are in fact applied to our cell states and do not converge to 0 (or 360 degress).A more in-depth qualitative analysis of the rotation gate is planned for future work. Peeking into theactivations of our rotation gates could help understand the behaviour of rotations and to what extentthey help better represent long-term memory.A very successful and popular mutation of the LSTM is the Gated Recurrent Unit (GRU) unit (Choet al., 2014). The GRU only has an output as opposed to both a cell state and an output and usesfewer gates. In the future we hope to explore adding rotations to GRU units and whether we canobtain similar results.5 C ONCLUSIONWe have introduced a novel gating mechanism for RNN units that enables applying a parametrisedtransformation matrix to the cell state. We picked pairwise 2D rotations as the transformation andshown how this can be added to the popular LSTM units to create what we call RotLSTM.5Under review as a conference paper at ICLR 20180.00.51.0AccuracybAbI 1train val testbAbI 11train val test0.00.51.0bAbI 2bAbI 120.00.51.0bAbI 3bAbI 130.00.51.0bAbI 4bAbI 140.00.51.0bAbI 5bAbI 150.00.51.0bAbI 6bAbI 160.00.51.0bAbI 7bAbI 170.00.51.0bAbI 8bAbI 180.00.51.0bAbI 9bAbI 190 20 40Epochs0.00.51.0bAbI 100 20 40 0 20 40 0 20 40bAbI 200 20 40 0 20 40LSTMRotLSTMFigure 3: Accuracy comparison on training, validation (val) and test sets over 40 epochs for LSTMand RotLSTM models. The models were trained 10 times and shown is the average accuracy and infaded colour is the standard deviation. 
Test set accuracy was computed every 10 epochs.We trained a simple model using RotLSTM units and compared them with the same model based onLSTM units. We show that for the LSTM-based architetures adding rotations has a positive impacton most bAbI tasks, making the training require fewer epochs to achieve similar or higher accuracy.On some tasks the RotLSTM model can use a lower dimensional cell state vector and maintain itsperformance.6Under review as a conference paper at ICLR 20180.00.20.40.60.81.0AccuracybAbI-1 bAbI-2 bAbI-3 bAbI-4 bAbI-50.00.20.40.60.81.0bAbI-6 bAbI-7 bAbI-8 bAbI-9 bAbI-100.00.20.40.60.81.0bAbI-11 bAbI-12 bAbI-13 bAbI-14 bAbI-1510 20 30 40 50Cell state size0.00.20.40.60.81.0bAbI-1610 20 30 40 50bAbI-1710 20 30 40 50bAbI-1810 20 30 40 50bAbI-1910 20 30 40 50bAbI-20LSTMRotLSTMFigure 4: Accuracy on the test set for the LSTM and RotLSTM while varying the cell state size from6 to 50. The shown numbers are for the epochs with best validation set accuracy.Significant accracy improvements of approximatively 20% for the RotLSTM model over the LSTMmodel are visible on bAbI tasks 5 (three argument relations) and 18 (reasoning about size).<|im_end|> <|im_start|>assistant ### Review Title Insufficient Justification and Comparison ### Review Text The paper proposes to add a rotation operation in long short-term memory (LSTM) cells. It performs experiments on bAbI tasks and showed that the results are better than the simple baselines with original LSTM cells. There are a few problems with the paper. Firstly, the title and abstract discuss "modifying memories", but the content is only about a rotation operation. Perhaps the title should be "Rotation Operation in Long Short-Term Memory"? Secondly, the motivation of adding the rotation operation is not properly justified. What does it do that a usual LSTM cell could not learn? Does it reduce the excess representational power compared to the LSTM cell that could result in better models? Or does it increase its representational capacity so that some pattern is modeled in the new cell structure that was not possible before? This is not clear at all after reading the paper. Besides, the idea of using a rotation operation in recurrent networks has been explored before [3]. Finally, the task (bAbI) and baseline models (LSTM from a Keras tutorial) are too weak. There have been recent works that nearly solved the bAbI tasks to perfection (e.g., [1][2][4][5], and many others). The paper presented a solution that is weak compared to these recent results. In a summary, the main idea of adding rotation to LSTM cells is not properly justified in the paper, and the results presented are quite weak for publication in ICLR 2018. [1] Sainbayar Sukhbaatar, Jason Weston, Rob Fergus. End-to-end memory networks, NIPS 2015 [2] Caiming Xiong, Stephen Merity, Richard Socher. Dynamic Memory Networks for Visual and Textual Question Answering, ICML 2016 [3] Mikael Henaff, Arthur Szlam, Yann LeCun, Recurrent Orthogonal Networks and Long-Memory Tasks, ICML 2016 [4] Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, Yoshua Bengio, Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes, ICLR 2017 [5] Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun, Tracking the World State with Recurrent Entity Networks, ICLR 2017 ### Review Rating 4: Ok but not good enough - rejection ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
ryxUMREYPr
ICLR.cc/2020/Conference
2020
Is There Mode Collapse? A Case Study on Face Generation and Its Black-box Calibration
["Zhenyu Wu", "Ye Yuan", "Zhaowen Wang", "Jianming Zhang", "Zhangyang Wang", "Hailin Jin"]
Generative adversarial networks (GANs) nowadays are capable of producing images of incredible realism. One concern raised is whether the state-of-the-art GAN's learned distribution still suffers from mode collapse. Existing evaluation metrics for image synthesis focus on low-level perceptual quality. Diversity tests of samples from GANs are usually conducted qualitatively on a small scale. In this work, we devise a set of statistical tools that are broadly applicable to quantitatively measuring the mode collapse of GANs. Strikingly, we consistently observe strong mode collapse on several state-of-the-art GANs using our toolset. We analyze possible causes, and for the first time present two simple yet effective "black-box" methods to calibrate the GAN learned distribution, without accessing either model parameters or the original training data.
["Generative Adversarial Networks", "Mode Collapse", "Calibration"]
ABSTRACT
Generative adversarial networks (GANs) nowadays are capable of producing images of incredible realism. One concern raised is whether the state-of-the-art GAN's learned distribution still suffers from mode collapse. Existing evaluation metrics for image synthesis focus on low-level perceptual quality. Diversity tests of samples from GANs are usually conducted qualitatively on a small scale. In this work, we devise a set of statistical tools that are broadly applicable to quantitatively measuring the mode collapse of GANs. Strikingly, we consistently observe strong mode collapse on several state-of-the-art GANs using our toolset. We analyze possible causes, and for the first time present two simple yet effective "black-box" methods to calibrate the GAN learned distribution, without accessing either model parameters or the original training data.
1 INTRODUCTION
Generative adversarial networks (GANs) (Goodfellow et al., 2014) have demonstrated unprecedented power for various image generation tasks. However, GANs have also been suffering from generation bias and/or loss of diversity. The underlying reasons could be compound, ranging from data imbalance to the training difficulty of GANs, and more:
• First of all, the training data for GANs, especially for the typical unconditional/unsupervised generation tasks (Karras et al., 2017; 2018), might possess various subject or attribute imbalances. As a result, GANs trained with them might be further biased towards the denser areas, similarly to the classifier bias towards the majority class in imbalanced classification.
• More intrinsically, even when the training dataset "looks" balanced, training GANs is notoriously more unstable (sometimes even uncontrollable) than training classifiers, potentially constituting another source of mode collapse. One of the most common hurdles of GANs is the loss of diversity due to mode collapse (Goodfellow, 2016), wherein the generator concentrates too large a probability mass on a few modes of the true distribution. Another widely reported issue, known as covariate shift (Santurkar et al., 2017), could be viewed as a nuanced version of mode collapse.
This paper seeks to explore: do the state-of-the-art GANs still suffer from mode collapse? Can we have a toolkit to detect that? And if mode collapse happens, is there any "easy and quick" remedy for calibrating the GAN's learned distribution to alleviate it?
Evaluation of Mode Collapse. There are several popular metrics for GAN evaluation, e.g., the Inception Score (IS) (Salimans et al., 2016), the Fréchet Inception Distance (FID) (Heusel et al., 2017), MODE (Che et al., 2016), and the birthday-paradox-based diversity test (Arora & Zhang, 2017). The IS, FID and MODE scores take both visual fidelity and diversity into account. The birthday-paradox-based diversity test gives a rough estimate of the support size under the assumption of uniform sampling. Recently, a classification-based metric (Santurkar et al., 2017) was proposed for a quantitative assessment of the mode distribution learned by GANs. However, their approach hinges on a classifier trained on the original (balanced) GAN training set, with class labels known, available and well-defined (e.g., object classes in CIFAR-10, or face gender in CelebA), making it non-straightforward to extend to data subjects where classes are hard to define and/or are not enumerable (e.g., open-set problems). To tackle this problem, we propose a hypothesis test method by analyzing the clustering pattern of samples.
We exploit a statistical tool from spatial analysis, called Ripley's K function, to quantitatively measure the mode collapse. We demonstrate the application of our toolset in analyzing the bias in unconditional face image generation: a popular benchmark task nowadays for GANs, yet one where it remains rather unclear how to measure mode collapse using existing tools, since every generated identity is expected to be new. The study of face identity generation bias has profound practical value for understanding facial privacy (Filipovych et al., 2011) and fairness (Holstein et al., 2018). Using our tools, we find mode collapse still a prevailing problem in state-of-the-art face generation GANs (Karras et al., 2018; 2017), and further analyze several possible causes.
Calibration Approaches on GANs. Many approaches have been proposed to alleviate the mode collapse problem, ranging from better optimization objectives (Arjovsky et al., 2017; Mao et al., 2017) to specialized building blocks (Durugkar et al., 2016; Ghosh et al., 2018; Liu & Tuzel, 2016). However, they require either tedious (re-)training, or at least access to the training data as well as to the model parameters: we refer to the existing methods as white-box approaches.
In contrast, we are interested in an almost unexplored aspect: assuming some generation bias is known, how can we calibrate the GAN without accessing either the training data or the current model parameters? Such black-box calibration is desirable due to many practical demands: the training data might be protected or no longer available; the GAN model might be provided as a black box and cannot be altered (e.g., as an API); or we simply want to adjust the generated distribution of any GAN with minimal re-training effort. For the first time, we explore two "black-box" approaches to calibrate the GAN learned distribution, i.e., latent space reshaping via Gaussian mixture models, and importance sampling. They are observed to alleviate the mode collapse without re-touching training data, nor even needing any access to model parameters.
2 RELATED WORKS
2.1 EVALUATION METRICS OF MODE COLLAPSE IN GANS
GAN models are often observed to suffer from the mode collapse problem (Salimans et al., 2016; Sutskever et al., 2015), where only small subsets of the distribution's modes are captured by the generator. The problem is especially prevalent for high-dimensional data, e.g., face image generation, where the training samples are low-density w.r.t. the high-dimensional feature space.
Salimans et al. (2016) presented the popular metric of Inception Score (IS) to measure individual sample quality. IS does not directly reflect the population-level generation quality, e.g., overfitting and loss of diversity. It also requires perceptual models pre-trained on ImageNet or other specific datasets (Barratt & Sharma, 2018). Heusel et al. (2017) propose the Fréchet Inception Distance (FID), which models the distribution of image features as a multivariate Gaussian distribution and computes the distance between the distribution of real images and the distribution of fake images. Unlike IS, FID can detect intra-class mode dropping. However, the multivariate Gaussian distribution assumption hardly holds very well on real images, and a low FID score cannot rule out the possibility of the generator simply copying the training data.
Besides the two most popular metrics, Che et al. (2016) develop an assessment for both visual quality and variety of samples, known as the MODE score and later shown to be similar to IS (Zhou et al., 2017). Arora et al. (2018) and Arora & Zhang (2017) proposed a test based upon the birthday paradox for estimating the support size of the generated distribution. Although the test can detect severe cases of mode collapse, it falls short in measuring how well a generator captures the true data distribution. It also heavily relies on human annotation, making it challenging to scale up to larger-scale evaluation.
Santurkar et al. (2017) took a classification-based perspective and viewed loss of diversity as a form of covariate shift. As we discussed above, their approach cannot be straightforwardly extended to data subjects without a pre-known, closed-set class definition, in addition to needing to train an extra classifier on the original labeled training set.
2.2 MODEL CALIBRATION APPROACHES OF GANS
There are many efforts to address the mode collapse problem in GANs. Some focus on discriminators by introducing different divergence metrics (Metz et al., 2016) and optimization losses (Arjovsky et al., 2017; Mao et al., 2017). The minibatch discrimination scheme allows the discriminator to discriminate between whole mini-batches of samples instead of between individual samples. Durugkar et al. (2016) adopted multiple discriminators to alleviate mode collapse. ModeGAN (Che et al., 2016) and VEEGAN (Srivastava et al., 2017) enforce a bijective mapping between the input noise vectors and generated images with additional encoder networks. Multiple generators (Ghosh et al., 2018) and weight-sharing generators (Liu & Tuzel, 2016) are developed to capture more modes of the distribution. However, these approaches are not designed for easily calibrating already-trained GANs.
A handful of existing works attempt to combine GANs with sampling methods to improve generation quality. Turner et al. (2018) introduced the Metropolis-Hastings generative adversarial network (MH-GAN). The MH-GAN uses the learned discriminator from GAN training to build a wrapper around the generator for improved sampling at the generation (inference) stage. With a perfect discriminator, the wrapped generator can sample from the true distribution exactly, even with a deficient generator. Azadi et al. (2018) proposed discriminator rejection sampling (DRS) for GANs, which performs rejection sampling on the outputs of the generator using the probabilities given by the discriminator, to approximately correct errors in the generator's distribution. Yet still, these approaches are white-box calibration, since both require access to trained discriminators (which might be even less available/accessible than the generator after a GAN is trained).
3 METHOD
We intend to study the bias of the most representative features of the generated faces, i.e., the face identity distribution, since almost all face attributes can be derived from this representation. To detect face identity collapse, we aim to detect high-density regions in the feature space caused by any non-diversified attribute.
Or, to put it slightly imprecisely: Santurkar et al. (2017) examined the marginalized distribution through the lens of some discrete categorical attributes, while ours looks at the joint distribution of all possible attributes in the continuous feature space holistically.
Algorithm 1: Identity Clustering Pattern Analysis via Sampling and the Neighboring Function N.
Given: a pre-trained generator G, an identity descriptor f_id, a random distribution N(0, I), a neighbor distance threshold d_0, and a face-embedding-space distance range [d_b, d_e] with step size d_s.
1. S ← {I_S1, ..., I_Sm} // Randomly sample m face images.
2. For each I_Si ∈ S: N_{I_Si} ← N(I_Si, S \ {I_Si}, d_0) // Count neighbors within d_0 distance of each sampled I_Si.
3. R_obs ← {Ĩ_S1, ..., Ĩ_Sp} // Observation region: the top p face images in S with the largest N_{I_Si}.
4. R_ref ← {Î_S1, ..., Î_Sq} // Reference region: q face images randomly selected from S.
5. T ← {I_T1, ..., I_TM} // Randomly sample M face images (M ≫ m).
6. For each d in [d_b, d_e] with step d_s: N^d_{Ĩ_Si} ← N(Ĩ_Si, T, d) for each Ĩ_Si ∈ R_obs, and N^d_{Î_Si} ← N(Î_Si, T, d) for each Î_Si ∈ R_ref.
7. Compute the pointwise confidence regions [N^d_{Î_Si}|_{α/2}, N^d_{Î_Si}|_{1−α/2}] for each d ∈ [d_b, d_e], at confidence level α (default 0.05). The intervals between the upper and lower confidence bounds for all samples in R_ref define the confidence band (Eubank & Speckman, 1993).
8. Reject the hypothesis that the clustering pattern of R_obs is the same as that of R_ref if the curve of N^d_{Ĩ_Si} falls outside of the confidence band.
Given an unconditional face generator G and an identity descriptor f_id, we sample images I = G(z) using a random distribution z ∼ N(0, I). The unit vector f_id(I) describes the identity feature in the face embedding space. The normalized cosine distance between images I_0 and I_1 is defined as:
$d(I_0, I_1) = \frac{1}{\pi} \cos^{-1}\big(\langle f_{id}(I_0), f_{id}(I_1) \rangle\big)$   (1)
For a given anchor face image I_0, a distance threshold d_0 and a collection of randomly sampled face images S, the neighboring function N(I_0, S, d_0) is defined to compute the number of neighbors within d_0 distance of I_0, among all images in S:
$N(I_0, S, d_0) = \sum_{I \in S} \frac{1}{2}\big(1 + \mathrm{sgn}(d_0 - d(I_0, I))\big)$   (2)
We refer to the tool of Ripley's K function (Dixon, 2014), a spatial analysis method used to describe point patterns over a given area of interest. Ripley's K function can be used to determine whether the points of interest appear to be dispersed, clustered, or randomly distributed throughout the area. Our defined neighboring function N(I_0, S, d_0) serves as a surrogate for Ripley's K function K(d).
Hypothesis Testing. Given an observed high-identity-density region R_obs and a reference region R_ref, we want to test the hypothesis that the clustering pattern of R_obs is the same as that of R_ref. We use N to get the clustering pattern for the anchor images in R_obs and R_ref respectively. We can reject the hypothesis if the clustering pattern of R_obs is significantly different from that of R_ref. The detailed algorithm is outlined in Algorithm 1.
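A minimal NumPy sketch of the primitives in Eqs. (1)-(2) (illustrative only; the function names are not from the paper), assuming the identity descriptor already returns unit-norm embeddings:

```python
import numpy as np

def cosine_distance(f0, F):
    """Normalized cosine distance of Eq. (1): arccos(<f0, f>) / pi, in [0, 1].

    f0 : (d,)   unit-norm identity embedding of the anchor image.
    F  : (n, d) unit-norm embeddings of the comparison images.
    """
    sims = np.clip(F @ f0, -1.0, 1.0)   # inner products, clipped for arccos
    return np.arccos(sims) / np.pi

def neighboring_function(f0, F, d0):
    """Eq. (2): number of embeddings in F within distance d0 of the anchor f0."""
    return int(np.sum(cosine_distance(f0, F) <= d0))

# Toy usage: random unit vectors stand in for f_id outputs of sampled images.
rng = np.random.default_rng(0)
F = rng.normal(size=(10_000, 512))
F /= np.linalg.norm(F, axis=1, keepdims=True)
print(neighboring_function(F[0], F[1:], d0=0.45))
```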
4 EMPIRICAL STUDY AND ANALYSIS
We choose two state-of-the-art GANs, PGGAN (Karras et al., 2017) and StyleGAN (Karras et al., 2018), as our model subjects of study. Both are known to be able to produce high-resolution, realistic and diverse images. We find that the observations below, drawn from the two models, also generalize to a few other GAN models. We choose the CelebAHQ benchmark (Karras et al., 2017) and the FFHQ benchmark (Karras et al., 2018) as our data subjects of study. Both benchmarks are composed of diverse and realistic face images. All images are 1024×1024 resolution unless otherwise specified. We use an ensemble of InsightFace (Deng et al., 2019b; Guo et al., 2018; Deng et al., 2018; 2019a), FaceNet (Schroff et al., 2015) and CosFace (Wang et al., 2018) as f_id, serving as the face identity descriptor. We emphasize that due diligence ("sanity checks") has been performed on those classifiers: e.g., their face recognition results were manually inspected one by one and confirmed to be highly reliable on the generated images. q (|R_ref|) is set to 1000. We empirically set d_b, d_e and d_s to 0.1, 0.5 and 0.01 respectively.
4.1 OBSERVATION OF THE MODE COLLAPSE
Mode Collapse Analysis. For both StyleGAN and PGGAN, despite the observed diversity and high quality of their generated images, we empirically find some high-density regions in both learned distributions. Figure 1 shows that the clustering pattern of R_obs is significantly different from that of R_ref, showing that even the learned distributions of the two currently best models have strong dense regions around some specific identities. For simplicity, our study target is the worst-case dense mode, i.e., the identity with the largest number of neighbors within a given distance threshold.
Consistency of the Dense Mode. The dense region R_obs is obtained by selecting the top p images in S with the largest number of neighbors. In order to test the consistency of the worst-case dense mode I_m against sampling, we visualize I_m w.r.t. different sizes of S in Figure 2. We consistently observe roughly the same identity as the sampling size increases. I_m can be reliably obtained even when |S| = 1k. The consistency of I_m demonstrates that the support size of I_m is non-negligible.
4.2 EMPIRICAL STUDY OF THE CAUSE OF MODE COLLAPSE
We hypothesize multiple factors that may potentially lead to the observed dense mode of face identity. We perform additional experiments, aiming to validate them one by one: unfortunately, none of them was observed to reduce the observed mode collapse. That implies the existence of some more intrinsic reason for the mode collapse in GANs, which we leave for future exploration.
Imbalance of Training Data? CelebAHQ is a highly imbalanced dataset: among its 30,000 high-resolution face images of 6,217 different celebrities, the largest identity class has 28 images and the smallest has only 1. Would a balanced dataset alleviate the mode collapse?
[Figure 1 here: four panels plotting log10(N(d)) vs. d, (a) PGGAN-CelebAHQ, (b) StyleGAN-CelebAHQ, (c) PGGAN-FFHQ, (d) StyleGAN-FFHQ, each comparing the R_obs curve against the R_ref band.]
Figure 1: Identity clustering pattern analysis on StyleGAN and PGGAN. The blue region is a confidence band formed by the pointwise intervals between the upper and lower confidence bounds for all identities in R_ref. The red curve is the neighboring function curve for the identity in R_obs, the worst-case dense mode. We empirically set m (|S|) to 100,000 and M (|T|) to 10,000,000. To study the worst-case dense mode, p (|R_obs|) is set to 1.
[Figure 2 here: five face panels, (a) |S| = 1k, (b) |S| = 10k, (c) |S| = 100k, (d) |S| = 1m, (e) |S| = 10m.]
Figure 2: Visualization of the worst-case dense mode I_m w.r.t. different sizes of S. S is a collection of randomly sampled images.
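The band-escape test used for Figure 1 can be sketched as follows (again illustrative, not the authors' released implementation): the confidence band is formed from empirical quantiles of the R_ref curves, and the hypothesis is rejected when the R_obs curve leaves the band at any distance d.

```python
import numpy as np

def confidence_band(ref_curves, alpha=0.05):
    """Pointwise empirical [alpha/2, 1 - alpha/2] band over the R_ref curves.

    ref_curves : (q, D) neighbor counts N^d for q reference anchors at D distances.
    """
    lower = np.quantile(ref_curves, alpha / 2, axis=0)
    upper = np.quantile(ref_curves, 1 - alpha / 2, axis=0)
    return lower, upper

def reject_same_pattern(obs_curve, ref_curves, alpha=0.05):
    """Reject 'R_obs clusters like R_ref' if obs_curve leaves the band anywhere."""
    lower, upper = confidence_band(ref_curves, alpha)
    return bool(np.any((obs_curve < lower) | (obs_curve > upper)))

# Toy usage: a Poisson-like reference population vs. an inflated observed curve.
rng = np.random.default_rng(0)
ref = rng.poisson(lam=np.linspace(1, 50, 40), size=(1000, 40))
obs = np.linspace(1, 50, 40) * 5           # far denser than the references
print(reject_same_pattern(obs, ref))       # -> True
```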
We turn to the Flickr-Faces-HQ dataset (FFHQ), a high-quality human face dataset created in (Karras et al., 2018), consisting of 70,000 high-resolution face images without repeated identities (we manually examined the dataset to ensure so). It is thus "balanced" in terms of identity, in the sense that each identity class has one sample. We train StyleGAN on FFHQ: somewhat surprisingly, the mode collapse persists and seems no less severe than StyleGAN on CelebAHQ, as shown in Figures 1c and 1d.
[Figure 3 here: four panels plotting log10(N(d)) vs. d, (a) StyleGAN-Randomness, (b) StyleGAN-Overfitting/Underfitting, (c) StyleGAN-Architecture, (d) PGGAN-Architecture.]
Figure 3: Empirical study on possible causes of the mode collapse. The shaded areas denote the variances of neighboring statistics for different experiments (caused by re-initialization/training, running different iterations, and varying architectures: see the text for details). We empirically set m (|S|) to 100,000 and M (|T|) to 1,000,000. To study the worst-case dense mode, p (|R_obs|) is set to 1.
Randomness during Initialization/Optimization? We repeat training StyleGAN on CelebAHQ (128×128) 10 times. The experimental results are shown in Figure 3a, with the shaded areas denoting the variances. Despite the variance of the neighboring function curves plotted for repeated experiments, a large gap between the curves of R_obs and R_ref can be consistently observed.
Underfitting/Overfitting in Training? We train StyleGAN on CelebAHQ (128×128) again, and store model checkpoints at iterations 7707 (FID = 7.67, same hereinafter), 8307 (7.02), 8908 (6.89), 9508 (6.63), 10108 (6.41), and 12000 (6.32). We plot their corresponding neighboring function curves in Figure 3b. Similarly, despite the variances, the identity mode collapse persists, as shown by the consistently large gap between the R_obs and R_ref curves.
Model Architecture Differences? Both StyleGAN and PGGAN progressively grow architectures that can generate images of different resolutions: 128, 256, 512 and 1024. Utilizing this property, we train StyleGAN and PGGAN on CelebAHQ-128, CelebAHQ-256, CelebAHQ-512 and CelebAHQ-1024 respectively, and plot the neighboring function curves correspondingly. According to Figures 3c and 3d, varying the architectures does not eliminate the mode collapse either.
5 BLACK-BOX CALIBRATION APPROACHES
Given a pre-trained generator G and a target dense mode to alleviate, the goals of calibration are three-fold: (1) the density of the mode is maximally alleviated; (2) the diversity and quality of the generated images (measured by FID) are minimally sacrificed; and (3) the calibration is black-box, requiring no access to training data or model parameters.
We propose two calibration approaches: reshaping the latent space via Gaussian mixture models, and importance sampling. They operate on the latent codes, and require no modification of the trained model, nor even any access to the model parameters or training data, making them "black-box".
Both approaches are evaluated with StyleGAN trained on CelebAHQ-128. For simplicity, we only target eliminating the worst-case dense mode I_m, i.e., the identity with the largest number of neighbors within a specified distance threshold.
5.1 RESHAPING LATENT SPACE VIA GAUSSIAN MIXTURE MODELS
Since we consistently observe close neighbors of I_m when interpolating near I_m, we hypothesize that the latent codes of a dense mode I_m lie on a smooth manifold. Based on this assumption, we attempt to re-shape the latent distribution into a Gaussian mixture.
5.1.1 METHOD DESCRIPTION
The original latent space distribution $\phi(z; \theta_0)$ can be approximated with a mixture of Gaussian distributions $\sum_{i=1}^{K} w_i \phi(z; \theta_i)$. We randomly sample N latent codes and use K-means to estimate $\theta_i = (\mu_i, \Sigma_i)$. We denote by $p(I_m)$ the probability of sampling the worst-case dense mode I_m:
$p(I_m) = \int p(I_m \mid z)\, \phi(z; \theta_0)\, dz = \sum_{i=1}^{K} w_i \int p(I_m \mid z)\, \phi(z; \theta_i)\, dz$.
If $p(I_m \mid \theta_i)$ is large, we reduce $w_i$ to make the overall $p(I_m)$ small. $p(I_m \mid \theta_i)$ is estimated by the number of neighbors within $d_0$ distance of I_m in cluster C_i, i.e., $N(I_m, C_i, d_0)$.
5.1.2 EXPERIMENTS
[Figure 4 here: log10(N(d)) vs. d, with curves R_obs(I_m, M), R_obs(I_m, M'), R_obs(I'_m, M), R_obs(I'_m, M') and the R_ref confidence bands under M and M'.]
Figure 4: Identity clustering pattern analysis of StyleGAN on CelebA, before/after latent space reshaping.
Starting from a StyleGAN model M pre-trained on CelebAHQ-128, we aim at alleviating the collapse on the worst-case dense mode I_m in R_obs with the largest number of neighbors. We reshape the latent space of M via Gaussian mixture models to get the new model M'. We obtain the new worst-case dense mode I'_m in the new region R'_obs with the largest number of neighbors. We next randomly sample 10^6 images from the original Gaussian distribution and the new GMM distribution, to form T and T' respectively. We then plot the neighboring function curves for I_m in T and T', and for I'_m in T and T' respectively. We expect reshaping the latent space via Gaussian mixture models to alleviate the worst-case dense mode with minimal sacrifice of generated image quality and diversity.
As shown in Figure 4, the latent space reshaping can suppress the clustering of I_m (indicated by a large gap between the two red curves) without intensifying the clustering of I'_m (indicated by a small gap between the two green curves), resulting in a reduction of mode collapse on I_m. Such alleviation is achieved with an unnoticeable degradation of generation quality, with FID increasing from 5.93 (M) to 5.95 (M'). The large overlap between the confidence bands N^M_{R_ref} and N^{M'}_{R_ref} shows that the diversity of generation is not sacrificed either.
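Before moving on, a sketch of the cluster re-weighting step of Section 5.1.1 (our own illustration, not the paper's code; the `generator` and `dist_to_mode` callables and the specific down-weighting rule `1/(1 + n_nbrs)` are assumptions, one simple choice among many):

```python
import numpy as np
from sklearn.cluster import KMeans

def reshape_latent_space(generator, dist_to_mode, dim=512, K=64, N=20_000,
                         d0=0.45, rng=None):
    """Down-weight mixture components that sample too close to the dense mode I_m.

    generator    : black-box z -> image (only queried, never modified).
    dist_to_mode : image -> Eq. (1) distance to the dense mode I_m.
    Returns (means, covs, weights) of the calibrated Gaussian mixture.
    """
    rng = rng or np.random.default_rng(0)
    Z = rng.normal(size=(N, dim))                          # z ~ N(0, I)
    labels = KMeans(n_clusters=K, n_init=4, random_state=0).fit_predict(Z)
    d = np.array([dist_to_mode(generator(z)) for z in Z])
    weights = np.ones(K)
    for i in range(K):
        n_nbrs = np.sum(d[labels == i] <= d0)              # proxy for p(I_m | theta_i)
        weights[i] = 1.0 / (1.0 + n_nbrs)                  # shrink offending clusters
    weights /= weights.sum()
    means = np.stack([Z[labels == i].mean(axis=0) for i in range(K)])
    covs = np.stack([np.cov(Z[labels == i].T) + 1e-3 * np.eye(dim)
                     for i in range(K)])                   # ridge keeps covs sampleable
    return means, covs, weights

def sample_reshaped(means, covs, weights, n, rng=None):
    """Draw calibrated latent codes z from the re-weighted Gaussian mixture."""
    rng = rng or np.random.default_rng(1)
    comps = rng.choice(len(weights), size=n, p=weights)
    return np.stack([rng.multivariate_normal(means[c], covs[c]) for c in comps])
```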
5.2 IMPORTANCE SAMPLING
Under the same smooth-manifold hypothesis of Section 5.1, the high-density region corresponding to the worst-case dense mode I_m can be approximated with a convex hull.
5.2.1 METHOD DESCRIPTION
Importance sampling is a variance reduction strategy in Monte Carlo methods. Let the estimated neighboring function densities for the dense and sparse regions be p_1 and p_2 respectively. We accept the samples from G falling in the high-density region with probability p_2/p_1, so that the calibrated densities match.
We approximate the high-density region with a convex hull formed by the collection of latent codes Z_{I_m} corresponding to identities similar to I_m:
$\mathrm{Conv}(Z_{I_m}) = \Big\{ \sum_{k=1}^{|Z_{I_m}|} \lambda_k z_k \,\Big|\, (\forall k: \lambda_k \ge 0) \wedge \sum_{k=1}^{|Z_{I_m}|} \lambda_k = 1,\ z_k \in Z_{I_m} \Big\}$   (3)
5.2.2 EXPERIMENT
The experimental setting is mostly similar to that of reshaping the latent space via Gaussian mixture models. We integrate importance sampling into the latent code generation stage. Given the dense mode I_m, we can find the collection of latent codes Z_{I_m} via sampling:
$Z_{I_m} = \{ z \mid d(I_m, G(z)) \le d_0,\ z \sim \mathcal{N}(0, I) \}$   (4)
[Figure 5 here: log10(N(d)) vs. d, with the same curves and bands as in Figure 4.]
Figure 5: Identity clustering pattern analysis of StyleGAN on CelebA, before/after importance sampling.
Z_{I_m} is obtained from the top 10^2 latent codes whose corresponding images have the smallest distances (Eq. 1) to I_m, among 10^6 random samples. We randomly sample 10^6 images from M and M' to form T and T' respectively. We plot the neighboring function curves for I_m in T and T', and for I'_m in T and T' respectively. As shown in Figure 5, the mode collapse is again alleviated (indicated by a gap between the two red curves), without intensifying the clustering of I'_m (indicated by a small gap between the two green curves), while FID only marginally increases from 5.93 (M) to 5.94 (M'). The confidence band N^M_{R_ref} overlaps with N^{M'}_{R_ref}, showing no loss of diversity.
Additionally, in the appendix, we show a white-box counterpart to the importance sampling approach, where the latent codes Z_{I_m} are obtained via explicit optimization (accessing and altering model parameters). The white-box approach does not seem to notably outperform the black-box way above, implying the relative effectiveness of the latter.
6 DISCUSSIONS AND FUTURE WORK
This paper is intended as a pilot study exploring the mode collapse issue of GANs. Using face generation as the study subject, we quantify the general mode collapse via statistical tools, discuss and verify possible causes, and propose two black-box calibration approaches for the first time to alleviate the mode collapse. Despite the preliminary success, the current study remains limited in many ways. First, there are inevitably prediction errors in the identity descriptors on generated images, even though we have made our best effort to select the three most accurate descriptors. Moreover, the fundamental causes of GAN mode collapse demand deeper understanding. Besides, the two calibration approaches only handle one worst-case dense mode, leaving much room for improvement in future work.
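To make the rejection rule of Section 5.2 concrete, the sketch below (our own illustration; the sampler interface is an assumption) tests membership in the hull of Eq. (3) as a linear feasibility problem and thins samples inside the hull with probability p_2/p_1. Note that in a high-dimensional latent space an exact-equality hull test accepts almost no random z, so a practical implementation would relax the equality with a tolerance or work in a projected subspace.

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(z, Z_modes):
    """Test z in Conv(Z_modes) (Eq. 3): find lambda >= 0 with sum(lambda) = 1
    and Z_modes.T @ lambda = z, posed as a linear feasibility program."""
    k = Z_modes.shape[0]
    A_eq = np.vstack([Z_modes.T, np.ones((1, k))])   # hull equalities + simplex row
    b_eq = np.concatenate([z, [1.0]])
    res = linprog(c=np.zeros(k), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * k, method="highs")
    return res.status == 0                           # feasible => inside the hull

def calibrated_sampler(Z_modes, accept_prob, dim=512, rng=None):
    """Yield z ~ N(0, I); keep hull samples only with probability p2/p1."""
    rng = rng or np.random.default_rng(0)
    while True:
        z = rng.normal(size=dim)
        if in_convex_hull(z, Z_modes) and rng.random() > accept_prob:
            continue                                  # thin the dense-mode region
        yield z
```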
HJlStebC5r
Official Blind Review #3
1: Reject
This work addresses the important problem of generation bias and lack of diversity in generative models, often called mode collapse. It proposes a new metric to measure the diversity of a generative model's "worst" outputs based on sample clustering patterns. Furthermore, it proposes two black-box approaches to increasing model diversity by resampling the latent z. Unlike most existing works that address the mode collapse problem, a black-box approach does not assume access to model weights or the artifacts produced during model training, making it more widely applicable than white-box approaches. In terms of experimental setup, the authors choose face generation as the area to investigate and measure diversity by detecting the generated face identity. With the proposed methods, the authors show that most state-of-the-art methods have a wide gap between the top-p faces of the most popular face identities and randomly sampled faces. They further show that the proposed black-box approaches improve the proposed diversity metric without sacrificing image quality.
The proposed diversity metric is lacking both in terms of experimental proof and intuitive motivation. While black-box calibration of a GAN model may be attractive under specific settings, the authors did not consider the restrictions in those situations, and their design may be hard to implement as a result. For those reasons, I propose to REJECT this paper.
Missing key experiments that would provide more motivation that (a) the new metric reflects human perception of diversity and (b) the new metric works better than existing ones:
1. Please provide experiments and/or citations for using face identity as a proxy for face image diversity. This is important since all your experiments rely on that assumption.
2. Were there experiments that apply your metric to the training datasets like CelebA and FFHQ? In theory your metric should show no gap between N_R_obs and N_R_ref measured on the training dataset, since that's the sampled ground truth.
Missing assumptions about the black-box calibration approaches:
1. If we do not have access to the model parameters, the training data, or the artifacts produced during model training like the discriminator, what are some real-world situations that fit this description? In those cases, is it too much to assume that we can control the random seed input to G?
2. Is it reasonable to assume some constraints on how much data we can get from the black-box generator? A website that just exposes an image generation API may not allow you to ping its service 100k times to improve generation diversity. If you are allowed to do that, it may be reasonable to assume that you can contact the API provider to get access to the rest of the model.
Minor improvements that did not have a huge impact on the score:
1. I found the argument about FID in Section 2.1 unconvincing. Are there proofs or citations for the claim that real image features do not follow a multivariate Gaussian distribution as FID assumes? Copying is indeed an issue that FID cannot detect, but it may be tangential to mode collapse for real-world concerns like privacy.
2. The statement "IS, FID and MODE score takes both visual fidelity and diversity into account" under "Evaluation of Mode Collapse" contradicts the description in Section 2.1 that IS in fact does not measure diversity.
3. You may want to consider stating that the work is "a pilot study" (Sec. 6) earlier, in the abstract or the introduction, so that the reader knows what to expect.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Is There Mode Collapse? A Case Study on Face Generation and Its Black-box Calibration ### Paper Abstract Generative adversarial networks (GANs) nowadays are capable of producing im-ages of incredible realism. One concern raised is whether the state-of-the-artGAN’s learned distribution still suffers from mode collapse. Existing evaluation metrics for image synthesis focus on low-level perceptual quality. Diversity tests of samples from GANs are usually conducted qualitatively on a small scale. In this work, we devise a set of statistical tools, that are broadly applicable to quantitatively measuring the mode collapse of GANs. Strikingly, we consistently observe strong mode collapse on several state-of-the-art GANs using our toolset. We analyze possible causes, and for the first time present two simple yet effective “black-box” methods to calibrate the GAN learned distribution, without accessing either model parameters or the original training data. ### Paper Keywords ["Generative Adversarial Networks", "Mode Collapse", "Calibration"] ### Paper Content ABSTRACTGenerative adversarial networks (GANs) nowadays are capable of producing im-ages of incredible realism. One concern raised is whether the state-of-the-artGAN’s learned distribution still suffers from mode collapse. Existing evaluationmetrics for image synthesis focus on low-level perceptual quality. Diversity testsof samples from GANs are usually conducted qualitatively on a small scale. In thiswork, we devise a set of statistical tools, that are broadly applicable to quantitativelymeasuring the mode collapse of GANs. Strikingly, we consistently observe strongmode collapse on several state-of-the-art GANs using our toolset. We analyzepossible causes, and for the first time present two simple yet effective “black-box”methods to calibrate the GAN learned distribution, without accessing either modelparameters or the original training data.1 I NTRODUCTIONGenerative adversarial networks (GANs) (Goodfellow et al., 2014) have demonstrated unprecedentedpower for various image generation tasks. However, GANs have also been suffering from generationbias and/or loss of diversity. The underlying reasons could be compound, ranging from the dataimbalance to the training difficulty of GANs, and more:•First of all, the training data for GANs, especially for the typical unconditional/unsupervisedgeneration tasks (Karras et al., 2017; 2018), might possess various subject or attribute imbalances.As a result, GANs trained with them might be further biased towards the denser areas, similarly tothe classifier bias towards the majority class in imbalanced classification.•More intrinsically, even when the training dataset “looks" balanced, training GANs is notoriouslymore unstable (sometimes even uncontrollable) than training classifiers, potentially constitutinganother source of mode collapse. One most common hurdle of GANs is the loss of diversity due tomode collapse (Goodfellow, 2016), wherein the generator concentrates too large a probability masson a few modes of the true distribution. Another widely reported issue, known as co-variate shift(Santurkar et al., 2017), could be viewed as a nuanced version of mode collapse.This paper seeks to explore: do the state-of-the-art GANs still suffer from mode collapse? Can wehave a toolkit to detect that? 
And if the mode collapse happens, is there any “easy and quick" remedyfor calibrating the GAN’s learned distribution to alleviate the mode collapse?Evaluation of Mode Collapse There are several popular metrics for GAN evaluation, e.g.InceptionScore (IS) (Salimans et al., 2016), Fréchet Inception Distance (FID) (Heusel et al., 2017), MODE(Che et al., 2016) and birthday paradox based diversity test (Arora & Zhang, 2017). IS, FID andMODE score takes both visual fidelity and diversity into account. Birthday paradox based diversitytest gives a rough estimation of support size under the assumption of uniform sampling. Recently, aclassification-based metric (Santurkar et al., 2017) was proposed for a quantitative assessment of themode distribution learned by GANs. However, their approach hinge on a classifier trained on theoriginal (balanced) GAN training set, with class labels known, available and well-defined ( e.g., objectclasses in CIFAR-10, or face gender in CelebA), making it non-straightforward to extend to datasubjects where classes are hard to be defined, and/or are not enumerable (e.g, open set problems).To tackle this problem, we propose a hypothesis test method by analyzing the clustering pattern ofsamples. We exploit a statistical tool from spatial analysis, called Ripley’s K function, to quantitatively1Under review as a conference paper at ICLR 2020measure the mode collapse. We demonstrate the application of our tool set in analyzing the bias inunconditional face image generation : a popular benchmark task nowadays for GANs, yet remainingrather unclear how to measure its mode collapse using existing tools since every generated identityis expected to be new. The study of face identity generation bias has profound practical values forunderstanding facial privacy (Filipovych et al., 2011) and fairness (Holstein et al., 2018). Using ourtools, we find the mode collapse still a prevailing problem in state-of-the-art face generation GANs(Karras et al., 2018; 2017), and further analyze several possible causes.Calibration Approaches on GAN Many approaches have been proposed to alleviate mode col-lapse problem, ranging from better optimization objectives (Arjovsky et al., 2017; Mao et al., 2017),to specialized builing blocks (Durugkar et al., 2016; Ghosh et al., 2018; Liu & Tuzel, 2016). However,they require either tedious (re-)training, or at least the access to training data, as well as to modelparameters: we refer to the existing methods as white-box approaches.In contrast, we are interested in an almost unexplored aspect: assuming some generation bias isknown, how can be calibrate the GAN, without accessing either the training data or the current modelparameters? Such black-box calibration is desirable due to many practical demands: the training datamight be protected or no longer available; the GAN model might be provided as a black box andcannot be altered ( e.g., as APIs); or we simply want to adjust the generated distribution of any GANwith minimized re-training efforts. For the first time, we explore two “black-box” approaches tocalibrate the GAN learned distribution, i.e., latent space reshaping via Gaussian mixture models, andimportance sampling. 
They are observed to alleviate the mode collapse without re-touching trainingdata, nor even needing any access to model parameters.2 R ELATED WORKS2.1 E VALUATION METRICS OF MODE COLLAPSE IN GAN SGAN models are often observed to suffer from the mode collapse problem (Salimans et al., 2016);(Sutskever et al., 2015), where only small modes subsets of distribution are characterized by thegenerator. The problem is especially prevalent for high-dimensional data, e.g.face image generation,where the training samples are low-density w.r.t. the high-dimensional feature space.Salimans et al. (2016) presented the popular metric of Inception Score (IS) to measure the individualsample quality. IS does not directly reflect the population-level generation quality, e.g., the overfittingand loss of diversity. It also requires pre-trained perceptual models on ImageNet or other specificdatasets (Barratt & Sharma, 2018). Heusel et al. (2017) propose the Fréchet Inception Distance(FID), which models the distribution of image features as multivariate Gaussian distribution andcomputes the distance between the distribution of real images and the distribution of fakes images.Unlike IS, FID can detect intra-class mode dropping. However, the multivariate Gaussian distributionassumption hardly holds very well on real images, and low FID score cannot rule out the possibilityof the generator’s simply copying the training data. Besides the two most popular metrics, (Cheet al., 2016) develop an assessment for both visual quality and variety of samples, known as theMODE score and later shown to be similar to IS (Zhou et al., 2017). (Arora et al., 2018) and (Arora& Zhang, 2017) proposed a test based upon the birthday paradox for estimating the support size ofthe generated distribution. Although the test can detect severe cases of mode collapse, it falls short inmeasuring how well a generator captures the true data distribution. It also heavily relies on humanannotation, making it challenging to scale up to larger-scale evaluation.(Santurkar et al., 2017) took a classification-based perspective and view loss of diversity as a form ofcovariate shift. As we discussed above, their approach cannot be straightforwardly extended to datasubjects without pre-known and closed-set class definition, in addition to the need of training an extraclassifier on the original labeled training set.2.2 M ODEL CALIBRATION APPROACHES OF GAN SThere are many efforts to address the mode collapse problem in GANs. Some focus on discriminatorsby introducing different divergence metrics (Metz et al., 2016) and optimization losses (Arjovskyet al., 2017; Mao et al., 2017). The minibatch discrimination scheme allows the discriminator2Under review as a conference paper at ICLR 2020to discriminate between whole mini-batches of samples instead of between individual samples.(Durugkar et al., 2016) adopted multiple discriminators to alleviate mode collapse. ModeGAN (Cheet al., 2016) and VEEGAN (Srivastava et al., 2017) enforce the bijection mapping between the inputnoise vectors and generated images with additional encoder networks. Multiple generators (Ghoshet al., 2018) and weight-sharing generators (Liu & Tuzel, 2016) are developed to capture more modesof the distribution. However, these approaches are designed to easily calibrating trained GANs.A handful of existing works attempt to combine GANs with sampling methods to improve generationquality. (Turner et al., 2018) introduced the Metropolis-Hastings generative adversarial network(MH-GAN). 
The MH-GAN uses the learned discriminator from GAN training to build a wrapper around the generator for improved sampling at the generation (inference) stage. With a perfect discriminator, the wrapped generator can sample from the true distribution exactly, even with a deficient generator. (Azadi et al., 2018) proposed discriminator rejection sampling (DRS) for GANs, which performs rejection sampling on the outputs of the generator using the probabilities given by the discriminator, to approximately correct errors in the generator's distribution. Still, these approaches are white-box calibration, since both require access to the trained discriminator (which might be even less available/accessible than the generator after a GAN is trained).

3 METHOD

We intend to study the bias of the most representative feature of the generated faces, i.e., the face identity distribution, since almost all face attributes can be derived from this representation. To detect face identity collapse, we aim to find high-density regions in feature space caused by any possibly non-diversified attribute. Put slightly imprecisely, (Santurkar et al., 2017) examined the marginalized distribution through the lens of some discrete categorical attributes, while ours looks at the joint distribution of all possible attributes in the continuous feature space holistically.

Algorithm 1: Identity Clustering Pattern Analysis via Sampling and the Neighboring Function N.
Input: a pre-trained generator G, an identity descriptor f_id, a random distribution N(0, Σ), a neighbor distance threshold d_0, and a face-embedding distance range [d_b, d_e] with step size d_s.
  1. S ← {I_S1, ..., I_Sm}                        // randomly sample m face images
  2. for each I_Si ∈ S do: N_{I_Si} ← N(I_Si, S \ I_Si, d_0)    // count neighbors within d_0 of each sampled I_Si
  3. R_obs ← {Ĩ_S1, ..., Ĩ_Sp}                    // observation region: the top p images in S with the largest N_{I_Si}
  4. R_ref ← {Î_S1, ..., Î_Sq}                    // reference region: q images randomly selected from S
  5. T ← {I_T1, ..., I_TM}                        // randomly sample M face images (M >> m)
  6. for each d in [d_b : d_s : d_e] do:
         for each Ĩ_Si ∈ R_obs do: N^d_{Ĩ_Si} ← N(Ĩ_Si, T, d)   // neighbors within d for anchors in R_obs
         for each Î_Si ∈ R_ref do: N^d_{Î_Si} ← N(Î_Si, T, d)   // neighbors within d for anchors in R_ref
  7. Compute the pointwise confidence regions [N^d_{Î_Si}|_{α/2}, N^d_{Î_Si}|_{1−α/2}] for each d ∈ [d_b : d_s : d_e] at confidence level α (default 0.05); the intervals between the upper and lower confidence bounds for all samples in R_ref define the confidence band (Eubank & Speckman, 1993).
  8. Reject the hypothesis that the clustering pattern of R_obs is the same as that of R_ref if the curve of N^d_{Ĩ_Si} falls outside of the confidence band.

Given an unconditional face generator G and an identity descriptor f_id, we sample images I = G(z) using a random distribution z ∼ N(0, Σ). The unit vector f_id(I) describes the identity feature in the face embedding space. The normalized cosine distance between images I_0 and I_1 is defined as:

    d(I_0, I_1) = \frac{1}{\pi} \cos^{-1}(\langle f_{id}(I_0), f_{id}(I_1) \rangle)    (1)

For a given anchor face image I_0, a distance threshold d_0 and a collection of randomly sampled face images S, the neighboring function N(I_0, S, d_0) is defined to count the number of neighbors within d_0 distance of I_0 among all images in S:

    N(I_0, S, d_0) = \sum_{I \in S} \frac{1}{2}\left(1 + \mathrm{sgn}(d_0 - d(I_0, I))\right)    (2)
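To make Eqs. (1)-(2) concrete, here is a minimal numpy sketch, assuming identity embeddings are unit-normalized vectors; the function names are illustrative and not from the paper:

```python
import numpy as np

def identity_distance(e0, e1):
    """Normalized cosine distance of Eq. (1) between two unit identity embeddings."""
    cos = float(np.clip(np.dot(e0, e1), -1.0, 1.0))  # guard against rounding outside [-1, 1]
    return np.arccos(cos) / np.pi                    # maps [0, pi] onto [0, 1]

def neighbor_count(anchor, embeddings, d0):
    """Neighboring function N(I0, S, d0) of Eq. (2): samples within distance d0."""
    cos = np.clip(embeddings @ anchor, -1.0, 1.0)    # embeddings: (n, dim) array of unit rows
    dists = np.arccos(cos) / np.pi
    return int(np.sum(dists <= d0))
```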
We refer to the tool of Ripley's K function (Dixon, 2014), a spatial analysis method used to describe point patterns over a given area of interest. Ripley's K function can be used to determine whether the points of interest appear to be dispersed, clustered, or randomly distributed throughout the area. Our neighboring function N(I_0, S, d_0) serves as a surrogate for Ripley's K function K(d).

Hypothesis Testing  Given an observed high-identity-density region R_obs and a reference region R_ref, we want to test the hypothesis that the clustering pattern of R_obs is the same as that of R_ref. We use N to obtain the clustering pattern for the anchor images in R_obs and R_ref respectively, and we can reject the hypothesis if the clustering pattern of R_obs is significantly different from that of R_ref. The detailed algorithm is outlined in Algorithm 1, and a sketch of the rejection rule follows below.
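A hedged sketch of the band construction and rejection rule of Algorithm 1, assuming the neighbor_count helper above; reading the confidence region as pointwise empirical quantiles at level α is our interpretation of the band:

```python
import numpy as np

def pattern_curves(anchors, pool, d_grid):
    """Neighboring-function curve N^d for each anchor over the distance grid."""
    return np.array([[neighbor_count(a, pool, d) for d in d_grid] for a in anchors])

def reject_same_pattern(obs_anchors, ref_anchors, pool, d_grid, alpha=0.05):
    """Reject H0 (R_obs clusters like R_ref) if an observed curve leaves the band."""
    ref = pattern_curves(ref_anchors, pool, d_grid)
    lo = np.quantile(ref, alpha / 2.0, axis=0)        # pointwise lower confidence bound
    hi = np.quantile(ref, 1.0 - alpha / 2.0, axis=0)  # pointwise upper confidence bound
    obs = pattern_curves(obs_anchors, pool, d_grid)
    return bool(np.any((obs < lo) | (obs > hi)))
```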
4 EMPIRICAL STUDY AND ANALYSIS

We choose two state-of-the-art GANs, PGGAN (Karras et al., 2017) and StyleGAN (Karras et al., 2018), as the model subjects of our study. Both are known to produce high-resolution, realistic and diverse images. We find that the observations below drawn from the two models also generalize to a few other GAN models. We choose the CelebAHQ benchmark (Karras et al., 2017) and the FFHQ benchmark (Karras et al., 2018) as the data subjects of our study. Both benchmarks are composed of diverse and realistic face images. All images are of 1024x1024 resolution unless otherwise specified. We use an ensemble of InsightFace (Deng et al., 2019b; Guo et al., 2018; Deng et al., 2018; 2019a), FaceNet (Schroff et al., 2015) and CosFace (Wang et al., 2018) as f_id to serve as the face identity descriptor. We emphasize that due diligence ("sanity checks") has been performed on those classifiers; e.g., their face recognition results were manually inspected one by one and confirmed to be highly reliable on the generated images. q (|R_ref|) is set to 1000. We empirically set d_b, d_e and d_s to 0.1, 0.5 and 0.01, respectively.

4.1 OBSERVATION OF THE MODE COLLAPSE

Mode Collapse Analysis  For both StyleGAN and PGGAN, despite the observed diversity and high quality of their generated images, we empirically find some high-density regions in both learned distributions. Figure 1 shows that the clustering pattern of R_obs is significantly different from that of R_ref, showing that even the learned distributions of the two currently best models have strong dense regions towards some specific identities. For simplicity, our study target is the worst-case dense mode, i.e., the identity with the largest number of neighbors within a given distance threshold.

[Figure 1: Identity clustering pattern analysis on PGGAN and StyleGAN trained on CelebAHQ (panels a, b) and FFHQ (panels c, d); each panel plots log10(N(d)) against the distance d. The blue region is a confidence band formed by the pointwise intervals between the upper and lower confidence bounds for all identities in R_ref. The red curve is the neighboring function curve for the identity in R_obs, the worst-case dense mode. We empirically set m (|S|) to 100,000 and M (|T|) to 10,000,000. To study the worst-case dense mode, p (|R_obs|) is set to 1.]

Consistency of the Dense Mode  The dense region R_obs is obtained by selecting the top p images in S with the largest number of neighbors. In order to test the consistency of the worst-case dense mode I_m against sampling, we visualize I_m w.r.t. different sizes of S in Figure 2. We consistently observe roughly the same identity as the sampling size increases; I_m can be reliably obtained even when |S| = 1k. The consistency of I_m demonstrates that the support size of I_m is non-negligible.

[Figure 2: Visualization of the worst-case dense mode I_m w.r.t. different sizes of S (|S| = 1k, 10k, 100k, 1m, 10m), where S is a collection of randomly sampled images.]

4.2 EMPIRICAL STUDY OF THE CAUSE OF MODE COLLAPSE

We hypothesize multiple factors that may potentially lead to the observed dense mode of face identity. We perform additional experiments, aiming to validate them one by one: unfortunately, none of them was observed to reduce the observed mode collapse. This implies the existence of some more intrinsic reason for the mode collapse in GANs, which we leave for future exploration.

Imbalance of Training Data?  CelebAHQ is a highly imbalanced dataset: among its 30,000 high-resolution face images of 6,217 different celebrities, the largest identity class has 28 images and the smallest one has only 1. Would a balanced dataset alleviate the mode collapse? We turn to the Flickr-Faces-HQ dataset (FFHQ), a high-quality human face dataset created in (Karras et al., 2018), consisting of 70,000 high-resolution face images without repeated identities (we manually examined the dataset to ensure this). It is thus "balanced" in terms of identity, in the sense that each identity class has one sample. We train StyleGAN on FFHQ: somewhat surprisingly, the mode collapse persists and seems no less severe than StyleGAN on CelebAHQ, as shown in Figures 1c and 1d.

[Figure 3: Empirical study on possible causes of the mode collapse; panels: (a) StyleGAN-Randomness, (b) StyleGAN-Overfitting/Underfitting, (c) StyleGAN-Architecture, (d) PGGAN-Architecture; each panel plots log10(N(d)) against d. The shaded areas denote the variances of the neighboring statistics across experiments (caused by re-initialization/training, running different numbers of iterations, and varying architectures; see the text for details). We empirically set m (|S|) to 100,000 and M (|T|) to 1,000,000. To study the worst-case dense mode, p (|R_obs|) is set to 1.]

Randomness during Initialization/Optimization?  We repeat training StyleGAN on CelebAHQ (128x128) 10 times. The experimental results are shown in Figure 3a, with the shaded areas denoting the variances. Despite the variance of the neighboring function curves plotted across the repeated experiments, a large gap between the curves of R_obs and R_ref can be consistently observed.

Underfitting/Overfitting in Training?  We train StyleGAN on CelebAHQ (128x128) again, and store model checkpoints at iteration 7707 (FID = 7.67, same hereinafter), 8307 (7.02), 8908 (6.89), 9508 (6.63), 10108 (6.41), and 12000 (6.32). We plot their corresponding neighboring function curves in Figure 3b. Similarly, despite the variances, the identity mode collapse persists, as seen from the consistent large gap between the R_obs and R_ref curves.
Model Architecture Differences?  Both StyleGAN and PGGAN progressively grow their architectures to generate images of different resolutions: 128, 256, 512 and 1024. Utilizing this property, we train StyleGAN and PGGAN on CelebAHQ-128, CelebAHQ-256, CelebAHQ-512 and CelebAHQ-1024 respectively, and plot the neighboring function curves correspondingly. According to Figures 3c and 3d, varying the architectures does not eliminate the mode collapse either.

5 BLACK-BOX CALIBRATION APPROACHES

Given a pre-trained generator G and a target dense mode to alleviate, the goals of calibration are three-fold: (1) the density of the mode is maximally alleviated; (2) the diversity and quality of the generated images (measured by FID) are minimally sacrificed; and (3) the calibration is black-box, i.e., it does not require access to training data or model parameters.

We propose two calibration approaches: reshaping the latent space via Gaussian mixture models, and importance sampling. They operate on the latent codes and require no modification of the trained model, nor even any access to the model parameters or training data, making them "black-box". Both approaches are evaluated with StyleGAN trained on CelebAHQ-128. For simplicity, we only target eliminating the worst-case dense mode I_m, i.e., the identity with the largest number of neighbors within a specified distance threshold.

5.1 RESHAPING LATENT SPACE VIA GAUSSIAN MIXTURE MODELS

Since we consistently observe close neighbors to I_m when interpolating near I_m, we hypothesize that the latent codes of a dense mode I_m lie on a smooth manifold. Based on this assumption, we attempt to re-shape the latent distribution into a Gaussian mixture.

5.1.1 METHOD DESCRIPTION

The original latent space distribution \phi(z; \Sigma_0) can be approximated with a mixture of Gaussian distributions \sum_{i=1}^{K} w_i \phi(z; \Sigma_i). We randomly sample N latent codes and use K-means to estimate the component parameters \theta_i = (\mu_i, \Sigma_i). We denote by p(I_m) the probability of sampling the worst-case dense mode I_m:

    p(I_m) = \int p(I_m \mid z)\,\phi(z; \Sigma_0)\,dz \approx \sum_{i=1}^{K} w_i \int p(I_m \mid z)\,\phi_i(z; \Sigma_i)\,dz.

If p(I_m \mid \theta_i) is large, we reduce w_i to make the overall p(I_m) small. p(I_m \mid \theta_i) is estimated by the number of neighbors within d_0 distance to I_m in cluster C_i, i.e., N(I_m, C_i, d_0).
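A minimal sketch of the reshaping step under the stated assumptions: K-means is fit to sampled latent codes, each component's contribution to the dense mode is estimated by counting neighbors of I_m inside the cluster, and offending components are down-weighted. The function names, sample sizes, and the specific down-weighting rule (a fixed multiplicative "temper" factor) are all illustrative choices, not the paper's prescription:

```python
import numpy as np
from sklearn.cluster import KMeans

def reshape_latent_space(sample_z, embed_of_z, target_embed, d0,
                         K=64, n=10_000, temper=0.1):
    """Approximate the prior by a K-component mixture (via K-means on sampled
    latent codes) and shrink the weight w_i of every component whose cluster
    contains neighbors of the dense mode I_m (Section 5.1.1)."""
    Z = sample_z(n)                                   # latent codes z ~ N(0, Sigma)
    labels = KMeans(n_clusters=K).fit_predict(Z)
    cos = np.clip(embed_of_z(Z) @ target_embed, -1.0, 1.0)
    is_neighbor = (np.arccos(cos) / np.pi) <= d0      # neighbors of I_m, Eqs. (1)-(2)
    w = np.ones(K)
    for i in range(K):
        if is_neighbor[labels == i].any():            # cluster contributes to p(I_m)
            w[i] *= temper                            # one simple down-weighting rule
    w /= w.sum()
    means = np.stack([Z[labels == i].mean(axis=0) for i in range(K)])
    covs = [np.cov(Z[labels == i], rowvar=False) for i in range(K)]
    return w, means, covs    # sample: i ~ Categorical(w), then z ~ N(means[i], covs[i])
```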
5.1.2 EXPERIMENTS

Starting from a StyleGAN model M pre-trained on CelebAHQ-128, we aim at alleviating the collapse on the worst-case dense mode I_m in R_obs, i.e., the identity with the largest number of neighbors. We reshape the latent space of M via Gaussian mixture models to obtain the new model M', and find the new worst-case dense mode I'_m in the new region R'_obs with the largest number of neighbors. We next randomly sample 10^6 images from the original Gaussian distribution and from the new GMM distribution, forming T and T' respectively, and plot the neighboring function curves for I_m in T and T', and for I'_m in T and T'. We expect reshaping the latent space via Gaussian mixture models to alleviate the worst-case dense mode with minimal sacrifice of generated image quality and diversity.

[Figure 4: Identity clustering pattern analysis of StyleGAN on CelebAHQ-128, before/after latent space reshaping; curves R_obs(I_m, .), R_obs(I_m, .'), R_obs(I'_m, .), R_obs(I'_m, .') and the confidence bands of R_ref and R'_ref, plotted as log10(N(d)) against d.]

As shown in Figure 4, the latent space reshaping suppresses the clustering of I_m (indicated by the large gap between the two red curves) without intensifying the clustering of I'_m (indicated by the small gap between the two green curves), resulting in a reduction of mode collapse on I_m. Such an alleviation is achieved with an unnoticeable degradation of generation quality, with FID increasing from 5.93 (M) to 5.95 (M'). The large overlap between the confidence bands N^M_{R_ref} and N^{M'}_{R_ref} shows that the diversity of generation is not sacrificed either.

5.2 IMPORTANCE SAMPLING

Under the same smooth-manifold hypothesis of Section 5.1, the high-density region corresponding to the worst-case dense mode I_m can be approximated with a convex hull.

5.2.1 METHOD DESCRIPTION

Importance sampling is a variance reduction strategy in the Monte Carlo method. Let the estimated neighboring-function densities for the dense and sparse regions be p_1 and p_2, respectively. We accept samples from G falling in the high-density region with probability p_2 / p_1, so that the calibrated densities match.

We approximate the high-density region with the convex hull formed by the collection of latent codes Z_{I_m} corresponding to identities similar to I_m:

    \mathrm{Conv}(Z_{I_m}) = \left\{ \sum_{k=1}^{|Z_{I_m}|} \lambda_k z_k \;\middle|\; (\forall k: \lambda_k \geq 0) \wedge \sum_{k=1}^{|Z_{I_m}|} \lambda_k = 1,\; z_k \in Z_{I_m} \right\}    (3)
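A hedged sketch of the acceptance rule just described. Membership in the convex hull of Eq. (3) can be checked with a small linear program; as a cheaper stand-in, the region test may also use the embedding-distance criterion of Eq. (4) below. All names are illustrative:

```python
import numpy as np

def calibrated_sampler(sample_z, in_dense_region, p_dense, p_sparse, rng=None):
    """Draw one latent code, thinning the dense region so the two densities match.

    sample_z:         draws a single z from the original prior N(0, Sigma)
    in_dense_region:  predicate for the high-density region, e.g. membership in
                      Conv(Z_Im) of Eq. (3) or d(I_m, G(z)) <= d0 as in Eq. (4)
    p_dense/p_sparse: estimated neighboring-function densities p1 and p2
    """
    rng = rng or np.random.default_rng()
    accept_prob = min(1.0, p_sparse / p_dense)   # the p2 / p1 rule of Section 5.2.1
    while True:
        z = sample_z()
        if not in_dense_region(z) or rng.random() < accept_prob:
            return z
```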
5.2.2 EXPERIMENT

The experimental setting is mostly similar to that of the latent-space reshaping via Gaussian mixture models. We integrate importance sampling into the latent code generation stage. Given the dense mode I_m, we can find the collection of latent codes Z_{I_m} via sampling:

    Z_{I_m} = \{ z \mid d(I_m, G(z)) \leq d_0,\; z \sim N(0, \Sigma) \}    (4)

[Figure 5: Identity clustering pattern analysis of StyleGAN on CelebAHQ-128, before/after importance sampling; the same curves and confidence bands as in Figure 4.]

Z_{I_m} is obtained from the top 10^2 latent codes whose corresponding images have the smallest distances (Eq. 1) to I_m among 10^6 random samples. We randomly sample 10^6 images from M and M' to form T and T' respectively, and plot the neighboring function curves for I_m in T and T', and for I'_m in T and T'. As shown in Figure 5, the mode collapse is again alleviated (indicated by the gap between the two red curves) without intensifying the clustering of I'_m (indicated by the small gap between the two green curves), while FID only marginally increases from 5.93 (M) to 5.94 (M'). The confidence band N^M_{R_ref} overlaps with N^{M'}_{R_ref}, showing no loss of diversity.

Additionally, in the appendix, we show a white-box counterpart to the importance sampling approach, where the latent codes Z_{I_m} are obtained via explicit optimization (accessing and altering model parameters). The white-box approach does not seem to notably outperform our black-box approach, implying the relative effectiveness of the latter.

6 DISCUSSIONS AND FUTURE WORK

This paper is intended as a pilot study exploring the mode collapse issue of GANs. Using face generation as the study subject, we quantify general mode collapse via statistical tools, discuss and verify possible causes, and, for the first time, propose two black-box calibration approaches to alleviate mode collapse. Despite the preliminary success, the current study remains limited in many ways. First, there are inevitably prediction errors in the identity descriptors on generated images, even though we have made our best effort to find the three most accurate descriptors. Moreover, the fundamental causes of GAN mode collapse demand deeper understanding. Besides, the two calibration approaches only handle one worst-case dense mode, leaving much room for improvement open to future work.

### Review Title
Official Blind Review #3

### Review Text
This work addresses the important problem of generation bias and a lack of diversity in generative models, which is often called mode collapse. It proposes a new metric to measure the diversity of the generative model's "worst" outputs based on sample clustering patterns. Furthermore, it proposes two black-box approaches to increasing model diversity through resampling of the latent z. Unlike most existing works that address the mode collapse problem, a black-box approach does not make assumptions about having access to model weights or the artifacts produced during model training, making it more widely applicable than white-box approaches. In terms of experimental setup, the authors choose face generation as the area to investigate and measure diversity by detecting the generated face identity. With the proposed methods, the authors show that most SOTA methods have a wide gap between the top p faces of the most popular face identities and randomly sampled faces. They further show that the proposed black-box approaches improve the proposed diversity metric without sacrificing image quality.

The proposed diversity-measuring metric is lacking both in terms of experimental proof and intuitive motivation. While black-box calibration of a GAN model may be attractive under specific settings, the authors did not consider the restrictions under those situations, and their design may be hard to implement as a result. For those reasons, I propose to REJECT this paper.

Missing key experiments that would provide more motivation that 1. the new metric reflects human perception of diversity, and 2. the new metric works better than existing ones:
1. Please provide experiments and/or citations for using face identity as a proxy for face image diversity. This is important since all your experiments rely on that assumption.
2. Were there experiments that apply your metric to the training datasets like CelebA and FFHQ? In theory, your metric should show no gap between N_R_obs and N_R_ref measured on the training dataset, since that is the sampled ground truth.

Missing assumptions about the black-box calibration approaches:
1. If we do not have access to the model parameters, the training data, or the artifacts produced during training like the discriminator, what are some of the real-world situations that fit this description? In those cases, is it too much to assume that we can control the random seed input to G?
2. Is it reasonable to assume some constraints on how much data we can get from the black-box generator? A website that just exposes an image generation API may not allow you to ping the service 100k times to improve generation diversity. If you are allowed to do that, it may be reasonable to assume that you can contact the API provider to get access to the rest of the model.

Minor improvements that did not have a huge impact on the score:
1. I found the argument about FID in Section 2.1 unconvincing. Are there proofs or citations for the claim that real images do not follow a multivariate Gaussian distribution in the FID feature space? Copying is indeed an issue that FID cannot detect, but it may be tangential to mode collapse for real-world concerns like privacy.
The statement "IS, FID and MODE score takes both visual fidelity and diversity into account." under "Evaluation of Mode Collapse" is contradictory to the description in sec 2.1 that IS in fact does not measure diversity. 3. You may want to consider stating the work as "a pilot study" (sec 6.) earlier in the abstract or in the introduction, so that the reader knows what to expect. ### Review Rating 1: Reject ### Review Confidence <|im_end|> <|im_end|>
HkxSOAEFDB
ICLR.cc/2020/Conference
2020
Octave Graph Convolutional Network
["Heng Chang", "Yu Rong", "Somayeh Sojoudi", "Junzhou Huang", "Wenwu Zhu"]
Many variants of Graph Convolutional Networks (GCNs) for representation learning have been proposed recently and have achieved fruitful results in various domains. Among them, spectral-based GCNs are constructed via the convolution theorem upon a theoretical foundation from the perspective of Graph Signal Processing (GSP). However, although most of them implicitly act as low-pass filters that generate smooth representations for each node, there is limited development on the full usage of the underlying information from low frequencies. Here, we first introduce the octave convolution on graphs in the spectral domain. Accordingly, we present the Octave Graph Convolutional Network (OctGCN), a novel architecture that learns representations for different frequency components with respect to weighted filters and graph wavelet bases. We empirically validate the importance of low-frequency components in graph signals on semi-supervised node classification and demonstrate that our model achieves state-of-the-art performance in comparison with both spectral-based and spatial-based baselines.
["Graph Convolutional Networks", "Octave Convolution", "Graph Mining"]
ABSTRACT

Many variants of Graph Convolutional Networks (GCNs) for representation learning have been proposed recently and have achieved fruitful results in various domains. Among them, spectral-based GCNs are constructed via the convolution theorem upon a theoretical foundation from the perspective of Graph Signal Processing (GSP). However, although most of them implicitly act as low-pass filters that generate smooth representations for each node, there is limited development on the full usage of the underlying information from low-frequency components. Here, we first introduce the octave convolution on graphs in the spectral domain. Accordingly, we present the Octave Graph Convolutional Network (OctGCN), a novel architecture that learns representations for different frequency components with respect to weighted filters and graph wavelet bases. We empirically validate the importance of low-frequency components in graph signals on semi-supervised node classification and demonstrate that our model achieves state-of-the-art performance in comparison with both spectral-based and spatial-based baselines.

1 INTRODUCTION

The family of Graph Convolutional Networks (GCNs) (Zhang et al., 2018), which generalizes traditional Convolutional Neural Networks (CNNs) from Euclidean-structured data to graphs, has achieved remarkable success in various application domains, including but not limited to social networks (Chen et al., 2018), computer vision (Kampffmeyer et al., 2018), text classification (Yao et al., 2019) and applied chemistry (Liao et al., 2019).

Existing methods of GCN design fall into two categories: spatial-based methods and spectral-based methods (Wu et al., 2019). On the surface, spatial-based models directly perform information aggregation through the graph topology. However, this aggregation can be viewed as a simplified convolution operation in the spectral domain, with its theoretical foundation in Graph Signal Processing (GSP). GSP extends the concepts of Discrete Signal Processing (DSP) and focuses on analyzing and processing data points whose relations are modeled as graphs (Shuman et al., 2013; Ortega et al., 2018). In standard signal processing problems, the underlying "real signal" is usually assumed to have low frequencies (Rabiner & Gold, 1975). Recent works (Wu et al., 2019; Maehara, 2019) reveal that spectral-based GCNs can be viewed as an implicit low-pass-filter-based denoising mechanism in the spectral domain. However, there is still a lack of an explicit learning architecture for GCNs that extracts the beneficial information from the low frequencies while making full use of the high frequencies under certain scenarios.

Considering the signal processing problem in computer vision, a natural image can be decomposed into a low spatial frequency component containing the smoothly changing structure, e.g., the background, and a high spatial frequency component describing the rapidly changing fine details, e.g., outlines. To accommodate this phenomenon, (Chen et al., 2019) proposed Octave Convolution (OctConv) to learn octave feature representations, which factorizes convolutional feature maps into two groups at different spatial frequencies and processes them with different convolutions at their corresponding frequencies. The octave mechanism arises even more naturally in graph representation learning: the eigenvectors associated with small eigenvalues carry smoothly varying signal, encouraging nodes that are neighbors to share similar values.
In contrast, the eigenvectors associated with large eigenvalues carry sharply varying signal across edges (Donnat et al., 2018). Accordingly, extending octave convolution from images to graphs sheds light on the explicit learning of GCNs regarding the representation of different frequencies.

[Figure 1: The overview of octave convolutional learning on graphs in the spectral domain. An input graph is split into low- and high-frequency components, which are convolved with filters f(W, α_L) and f(W, α_H); the resulting low- and high-frequency representations are then merged by max pooling.]

Different from the scale-space theory (Lindeberg, 2013) utilized in OctConv (Chen et al., 2019) to define the low- and high-frequency spaces, graph signal processing (GSP) provides us a way to directly divide the low- and high-frequency components based on the ascending ordered eigenvalues of the Laplacian. Inspired by this, we propose to consider octave features in the spectral domain to construct a new graph convolutional model: the Octave Graph Convolutional Network (OctGCN). In OctGCN, with a particular design of filters for different spectra, we allocate different weights to the low and high frequencies. Spectral graph wavelets are chosen as the feature transformation bases due to their local and sparse properties. Two parameters are further introduced to construct the filters, reducing the parameter complexity to the same as (Kipf & Welling, 2017), which is critical when labels of training data are limited. Meanwhile, we employ an attention mechanism to learn the importance of the low- and high-pass filters. Figure 1 provides an overview of the design of OctGCN in the spectral domain. We validate the effectiveness of our model via experiments on semi-supervised node classification tasks, where the expressive power of GCNs is crucial to capture the underlying beneficial information in graph signals. Our results confirm the importance of low frequencies in graphs and bring interpretability to the innate character of GCNs. In addition, empirical results show that our proposed method consistently rivals state-of-the-art methods from both spectral-based and spatial-based baselines on real-world datasets.

2 RELATED WORK

Spectral convolutional networks on graphs. Existing methods of defining a convolution operation on graphs can be broadly divided into two categories: spectral-based and spatial-based methods (Zhang et al., 2018). We focus on spectral graph convolutions in this paper. Spectral CNN (Bruna et al., 2014) first attempts to generalize CNNs to graphs based on the spectrum of the graph Laplacian and defines the convolutional kernel in the spectral domain. (Boscaini et al., 2015) further employs a windowed Fourier transform to define a local spectral CNN approach. ChebyNet (Defferrard et al., 2016) introduces a fast localized convolutional filter on graphs via Chebyshev polynomial approximation. Vanilla GCN (Kipf & Welling, 2017) further extends spectral graph convolutions to networks of significantly larger scale by several simplifications. (Khasanova & Frossard, 2017) learns graph-based features on images that are inherently invariant to isometric transformations. CayleyNets (Levie et al., 2018) alternatively introduce Cayley polynomials, allowing to efficiently compute spectral filters on graphs. The Lanczos algorithm is utilized in LanczosNet (Liao et al., 2019) to construct low-rank approximations of the graph Laplacian for convolution.
SGC (Wu et al., 2019) further reduces the complexity of Vanilla GCN by successively removing the non-linearities and collapsing the weights between consecutive layers. Despite their effective performance, all these convolution-theorem-based methods lack a strategy to explicitly treat low- and high-frequency components with different importance.

Spectral graph wavelets. Theoretically, the lifting scheme was proposed for the construction of wavelets that can be adapted to irregular graphs in (Sweldens, 1998). (Hammond et al., 2011) defines wavelet transforms appropriate for graphs and describes a fast algorithm for their computation via Chebyshev polynomial approximation. On the application side, (Tremblay & Borgnat, 2014) utilizes graph wavelets for multi-scale community mining and obtains a local view of the graph from each node. (Donnat et al., 2018) introduces the property of graph wavelets that describes information diffusion and learns structural node embeddings accordingly. GWNN (Xu et al., 2019a) first attempts to construct graph neural networks with graph wavelets. These works emphasize the local and sparse properties of graph wavelets for graph signal processing, both theoretically and practically.

Octave feature representation. In computer vision, (Chen et al., 2019) first defines octave feature representations based on scale-space theory and reduces the spatial redundancy of vanilla CNN models. (Durall et al., 2019) further leverages octave convolutions for designing stabilized GANs. To our knowledge, this is the first time that octave feature representations are considered in the irregular graph domain and established with graph convolutional neural networks.

3 PROPOSED APPROACH

3.1 PRELIMINARY

We denote by G = {V, E} an undirected graph, where V is the set of n nodes (|V| = n) and E is the set of edges. The adjacency matrix is defined as A, with A_{i,j} = A_{j,i} describing the edge connecting node i and node j. The graph Laplacian matrix L is defined as the difference L = D - A, where D_{i,i} = \sum_j A_{i,j} is the diagonal degree matrix. The normalized graph Laplacian matrix is L = I_n - D^{-1/2} A D^{-1/2}, where I_n is the identity matrix. The graph Laplacian L can be decomposed into its eigenvalue components, L = U \Lambda U^\top, such that for the set of eigenvalues in ascending order \{\lambda_i\}_{i=0}^{n-1}, \lambda_0 \leq \lambda_1 \leq \cdots \leq \lambda_{n-1}, the diagonal eigenvalue matrix is \Lambda = \mathrm{diag}(\lambda_0, \ldots, \lambda_{n-1}) and U = (u_1, u_2, \ldots, u_n) is the eigenvector matrix.

Since L is a real symmetric matrix, it has real, non-negative eigenvalues \{\lambda_i\}_{i=0}^{n-1}, known as the frequencies of the graph. These eigenvalues are associated with a complete set of orthonormal eigenvectors in U, identified as the Laplacian eigenvectors. In Graph Signal Processing (GSP), we denote the frequency components with small/large eigenvalues of the Laplacian as low/high frequencies. Given a signal x and a graph Laplacian L, the Graph Fourier Transform (GFT) of x with respect to L is defined as the signal \tilde{x} = U^\top x, and the inverse GFT (iGFT) of \tilde{x} with respect to L is x = U \tilde{x} (Shuman et al., 2013).

3.2 SPECTRAL GRAPH CONVOLUTION

The spectral convolution on graphs is normally defined as the multiplication of a signal x on every node with a diagonal filter g_\theta = \mathrm{diag}(\theta), parameterized by \theta, in the Fourier domain:

    g_\theta \star x = U g_\theta U^\top x,    (1)

The filters are usually understood as functions of the eigenvalues. Inspired by (Maehara, 2019), we can decompose the spectral convolution process on graphs, from the perspective of GSP, into four steps: 1. compute the graph bases U; 2. apply the graph spectral transform U^\top to the signal x; 3. filter with g_\theta; 4. reconstruct the signal features in the spatial domain with U.
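A minimal numpy sketch of this four-step pipeline for Eq. (1), assuming a dense eigendecomposition is affordable; it mirrors the decomposition above rather than any particular library's API:

```python
import numpy as np

def normalized_laplacian(A):
    """L = I - D^{-1/2} A D^{-1/2} for an undirected adjacency matrix A."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    return np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def spectral_conv(A, x, filter_fn):
    """g ? x = U g(Lambda) U^T x: transform, filter on eigenvalues, reconstruct."""
    lam, U = np.linalg.eigh(normalized_laplacian(A))  # step 1: graph bases
    x_hat = U.T @ x                                   # step 2: spectral transform
    x_hat = filter_fn(lam)[:, None] * x_hat           # step 3: filtering with g
    return U @ x_hat                                  # step 4: reconstruction
```

For instance, filter_fn = lambda lam: np.exp(-s * lam) recovers the heat kernel g_s used by the wavelets in the next subsection.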
In this sense, the design of the filter g_\theta is essential to the performance of the spectral convolution. Broadly, the filter design falls into two categories: the filter is either learned by the neural network (Bruna et al., 2014; Xu et al., 2019a) or directly fixed as a function of the eigenvalues via approximation (Kipf & Welling, 2017; Wu et al., 2019). In this paper, we focus on the first kind.

Spectral CNN (Bruna et al., 2014) generalizes the convolutional net by operating on the spectrum of weights, given by the ordered eigenvectors of the graph Laplacian. The structure of the k-th layer is constructed as:

    X^{k+1}_{[:,j]} = h\left( U \sum_{i=1}^{p} F^k_{i,j} U^\top X^k_{[:,i]} \right), \quad j = 1, \ldots, q,    (2)

where X^k \in \mathbb{R}^{n \times p} is the signal with p input channels and X^{k+1} \in \mathbb{R}^{n \times q} is the convolved signal matrix. X^k_{[:,i]} and X^{k+1}_{[:,j]} are the i-th and j-th columns of X^k and X^{k+1}, respectively. The filter g_\theta of the k-th layer is defined by F^k_{i,j}, a diagonal filter matrix to be learned for each pair of input and output channels in the spectral domain. h is a real-valued nonlinear activation function, e.g., ReLU(x) = max(0, x). Thus, the parameter complexity of Spectral CNN is O(n x p x q), which generally demands a huge amount of training data for parameter learning.

3.3 WHY SPECTRAL GRAPH WAVELETS?

The graph wavelet neural network (GWNN) (Xu et al., 2019a) extends the spectral convolution from the Fourier transform to the wavelet transform. Let g_s(\lambda) = e^{-\lambda s} be a heat kernel filter with scaling parameter s. In GSP (Hammond et al., 2011; Shuman et al., 2013), the spectral graph wavelet \psi_{si} is defined as the signal resulting from the modulation, in the spectral domain, of a signal x centered around the associated node i. The graph wavelet transform is then conducted by employing a set of wavelets \psi_s = (\psi_{s1}, \psi_{s2}, \ldots, \psi_{sn}) as bases. Formally, the spectral graph wavelets are given by:

    \psi_s = U G_s U^\top,    (3)

where U contains the Laplacian eigenvectors of L = D - A or of the normalized Laplacian L = I_n - D^{-1/2} A D^{-1/2}, and G_s = \mathrm{diag}(g_s(\lambda_1), g_s(\lambda_2), \ldots, g_s(\lambda_n)) is the scaling matrix of the heat kernel. The inverse of the graph wavelets, \psi_s^{-1}, is obtained by simply replacing g_s(\lambda) in \psi_s with g_s(-\lambda) corresponding to the heat kernel (Donnat et al., 2018). Similarly, smaller indices in the graph wavelets correspond to low-frequency components, and vice versa.

Similar to the GFT, after replacing the Fourier bases with spectral graph wavelets, the graph wavelet transform of a signal x on the graph is defined as \hat{x} = \psi_s^{-1} x, and the inverse graph wavelet transform is x = \psi_s \hat{x}. Replacing the graph Fourier transform in the spectral convolution (Equation 1) with the graph wavelet transform, the graph wavelet convolution is obtained as:

    g_\theta \star x = \psi_s g_\theta \psi_s^{-1} x    (4)

The benefits that spectral graph wavelet bases have over Fourier bases mainly fall into two aspects. 1. Given that real-world networks are sparse, the graph wavelet bases are usually much sparser than the Fourier bases; e.g., the density of \psi_s is 2.8%, compared with 99.1% for U (Xu et al., 2019a). The sparseness of graph wavelets makes them more computationally efficient to use. 2. In spectral graph wavelets, the signal \psi_s resulting from the heat kernel filter g_s is typically localized on the graph and in the spectral domain (Shuman et al., 2013). By adjusting the scaling parameter s, one can easily constrain the range of the localized neighborhood: smaller values of s generally associate with smaller neighborhoods.
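A sketch of the exact wavelet construction of Eq. (3) and its inverse, reusing normalized_laplacian from the sketch above; the sparsification threshold t is taken from the experimental setup of (Xu et al., 2019a) as an assumption, and Section 3.5 describes the Chebyshev approximation that avoids the eigendecomposition cost:

```python
import numpy as np

def graph_wavelets(A, s, t=1e-4):
    """Exact spectral graph wavelets psi_s = U diag(e^{-s lam}) U^T and inverse.

    Entries with magnitude below t are zeroed, exploiting the locality and
    sparsity of the wavelet bases; s is the heat-kernel scaling parameter.
    """
    lam, U = np.linalg.eigh(normalized_laplacian(A))
    psi = U @ np.diag(np.exp(-s * lam)) @ U.T       # g_s(lam) = e^{-s lam}
    psi_inv = U @ np.diag(np.exp(s * lam)) @ U.T    # g_s(-lam) yields the inverse
    psi[np.abs(psi) < t] = 0.0
    psi_inv[np.abs(psi_inv) < t] = 0.0
    return psi, psi_inv
```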
Employing the same strategy as Spectral CNN (Bruna et al., 2014), GWNN designs the same diagonal filter F^k_{i,j} to be learned for each input channel. The structure of the k-th layer of GWNN is:

    X^{k+1}_{[:,j]} = h\left( \psi_s \sum_{i=1}^{p} F^k_{i,j} \psi_s^{-1} X^k_{[:,i]} \right), \quad j = 1, \ldots, q,    (5)

Note that both Spectral CNN and GWNN employ the same filters for learning the full range of frequency components. As mentioned before, the parameter complexity of Spectral CNN is large, since each pair of input and output channels requires learning an individual diagonal filter matrix F^k_{i,j}. GWNN further reduces the parameter complexity by dividing each layer into two components, feature transformation and graph convolution:

    feature transformation: X^{k'} = X^k W^k,    (6)
    graph convolution: X^{k+1} = h\left( \psi_s F^k \psi_s^{-1} X^{k'} \right).    (7)

where W^k \in \mathbb{R}^{p \times q} is the feature transformation parameter matrix, similar to Vanilla GCN (Kipf & Welling, 2017). In this way, the feature transformation operation is detached from the graph convolution, and the parameter complexity is decreased from O(n x p x q) to O(n + p x q).

3.4 OCTAVE CONVOLUTIONAL LAYER

In contrast to the scale-space-theory-based octave feature representation utilized in computer vision (Lindeberg, 2013), Graph Signal Processing provides us a more principled way in the spectral domain. To better capture the different importance of the low- and high-frequency components, while combining the benefits of graph wavelets, we can naturally construct each layer in an octave convolution manner by learning two different filters:

    feature transformation: X^{k'}_L = X^k W^k_L, \quad X^{k'}_H = X^k W^k_H    (8)
    graph convolution: X^{k+1}_L = \psi_{s,L} F^k_L \psi_{s,L}^{-1} X^{k'}_L, \quad X^{k+1}_H = \psi_{s,H} F^k_H \psi_{s,H}^{-1} X^{k'}_H,    (9)

where F^k_L \in \mathbb{R}^{d \times d} and F^k_H \in \mathbb{R}^{(n-d) \times (n-d)} are the diagonal filter matrices for graph convolution, to be learned with different weights for the low- and high-frequency components, respectively. d is the hyper-parameter that selects the proportion d/n of low-frequency components. \psi_{s,L} and \psi_{s,H} are the corresponding low- and high-frequency graph wavelet bases. Further, with a pooling operation on the outputs and a non-linear activation function, the structure of the k-th layer of our model is defined as

    X^{k+1} = h\left( \mathrm{Pooling}\left( \psi_{s,L} F^k_L \psi_{s,L}^{-1} X^k W^k_L, \; \psi_{s,H} F^k_H \psi_{s,H}^{-1} X^k W^k_H \right) \right)    (10)

We refer to this proposed architecture as the Octave Graph Convolutional Network (OctGCN).

Since the number of parameters in the diagonal convolution filtering kernels could be huge, especially for large graphs, graph-based semi-supervised learning might prohibit the parameter learning due to the limited amount of training data. To mitigate this issue, we further reduce the parameter complexity by constructing the graph convolution kernels from two scalars \alpha_L and \alpha_H, and keeping the same weight matrix W shared between the low- and high-frequency components:

    X^{k+1} = h\left( \mathrm{Pooling}\left( \psi_{s,L}\, \mathrm{diag}(\alpha_L, \ldots, \alpha_L)\, \psi_{s,L}^{-1} X^k W^k, \; \psi_{s,H}\, \mathrm{diag}(\alpha_H, \ldots, \alpha_H)\, \psi_{s,H}^{-1} X^k W^k \right) \right)    (11)

For the learning of the weights of the low- and high-frequency components, \alpha_L and \alpha_H, we adopt an attention strategy to constrain them to the range (0, 1):

    \alpha_\bullet = \mathrm{softmax}(\hat{\alpha})_\bullet = \frac{\exp(\hat{\alpha}_\bullet)}{\sum_{\circ \in \{L,H\}} \exp(\hat{\alpha}_\circ)}, \quad \bullet = L, H

Hence, we introduce three additional quantities: \alpha_L and \alpha_H, which control the importance of the low- and high-frequency components (learned via the softmax above), and the hyper-parameter d, which specifies the ratio of low frequencies we expect to represent the graph. In this way, we reduce the parameter complexity from O(n + p x q) in GWNN (Xu et al., 2019a) to O(p x q), the same as Vanilla GCN (Kipf & Welling, 2017).
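A minimal forward pass for Eq. (11), under the paper's convention that smaller wavelet indices correspond to low frequencies; alpha_hat holds the two learnable logits, and the element-wise maximum is one reading of the Max-pooling across the two branches. Names are illustrative:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def oct_gcn_layer(X, W, psi, psi_inv, d, alpha_hat, h=relu):
    """One OctGCN layer, Eq. (11): shared feature transform, scalar octave
    filters alpha_L / alpha_H on the first d (low-frequency) and remaining
    n - d (high-frequency) wavelet coefficients, max pooling of the branches."""
    alpha = np.exp(alpha_hat) / np.exp(alpha_hat).sum()  # softmax -> (alpha_L, alpha_H)
    H = X @ W                                            # feature transformation, Eq. (8)
    low = psi[:, :d] @ (alpha[0] * (psi_inv[:d, :] @ H))
    high = psi[:, d:] @ (alpha[1] * (psi_inv[d:, :] @ H))
    return h(np.maximum(low, high))                      # element-wise max pooling, then h(.)
```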
3.5 FAST SPECTRAL GRAPH WAVELET APPROXIMATION VIA CHEBYSHEV POLYNOMIALS

Directly computing the transformation according to Equation 3 is intensive for large graphs, since diagonalizing the Laplacian L commonly requires O(n^3) operations. Luckily, (Hammond et al., 2011) provides a method to fast approximate the spectral graph wavelets via Chebyshev polynomials. Let s be the fixed scaling parameter in the heat filter kernel g_s(\lambda) = e^{-\lambda s}, and let M be the degree of the Chebyshev polynomial approximation of the scaled wavelet (a larger value of M yields more accurate approximations, but at higher computational cost); the graph wavelet is given by

    \psi_s = \frac{1}{2} c_{0,s} + \sum_{i=1}^{M} c_{i,s} T_i(\tilde{L}), \quad c_{i,s} = \frac{2}{\pi} \int_0^\pi \cos(i\theta)\, e^{-s(\cos\theta + 1)}\, d\theta = 2 e^{-s} J_i(-s)    (12)

where \tilde{L} = \frac{2}{\lambda_{max}} L - I_n and J_i(\cdot) is the Bessel function of the first kind. The proof can be found in (Hammond et al., 2011). With this Chebyshev polynomial approximation, the computational cost of the spectral graph wavelets is decreased to O(M x |E| + M x n). Since real-world graphs are usually sparse, this computational difference can be very significant.
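A dense sketch of the approximation in Eq. (12) for clarity; the coefficients are computed by numerically integrating the stated integral, sidestepping the Bessel-function identity, and a sparse-matrix implementation is what actually realizes the O(M|E|) cost:

```python
import numpy as np

def chebyshev_wavelets(L, lam_max, s, M=20):
    """Approximate psi_s = (1/2) c_0 T_0 + sum_i c_i T_i(L_tilde), Eq. (12).

    Coefficients c_{i,s} come from numerically integrating
    (2/pi) * int_0^pi cos(i t) exp(-s (cos t + 1)) dt.
    """
    n = L.shape[0]
    L_t = (2.0 / lam_max) * L - np.eye(n)            # rescale the spectrum to [-1, 1]
    t = np.linspace(0.0, np.pi, 1000)
    kernel = np.exp(-s * (np.cos(t) + 1.0))
    c = [(2.0 / np.pi) * np.trapz(np.cos(i * t) * kernel, t) for i in range(M + 1)]
    T_prev, T_curr = np.eye(n), L_t                  # T_0 and T_1
    psi = 0.5 * c[0] * T_prev + c[1] * T_curr
    for i in range(2, M + 1):
        T_prev, T_curr = T_curr, 2.0 * L_t @ T_curr - T_prev   # Chebyshev recurrence
        psi += c[i] * T_curr
    return psi
```

For the normalized Laplacian, lam_max <= 2, so lam_max = 2 is a common choice.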
4 EXPERIMENTS

4.1 DATASETS

We evaluate our proposed OctGCN on the semi-supervised node classification task. The experimental setup closely follows (Yang et al., 2016; Kipf & Welling, 2017). A statistical overview of the datasets is given in Table 1. Three real-world datasets are chosen as benchmarks: Citeseer, Cora and Pubmed (Sen et al., 2008). In these citation networks, nodes are documents with corresponding bag-of-words features and edges are citation links. Label rate denotes the ratio of labeled nodes used in the training process. We keep the label rate consistent with the classic public split, which is 20 labeled nodes per class in each dataset for training. Meanwhile, the test set contains 1000 labeled samples for prediction accuracy evaluation, and the validation set includes 500 labeled samples for determining hyper-parameters.

Table 1: The overview of dataset statistics.

    Dataset    Nodes    Edges    Classes    Features    Label rate
    Citeseer   3,327    4,732    6          3,703       0.036
    Cora       2,708    5,429    7          1,433       0.052
    Pubmed     19,717   44,338   3          500         0.003

4.2 BASELINES

We first compare against traditional baselines, i.e., label propagation (LP) (Zhu et al., 2003), the iterative classification algorithm (ICA) (Lu & Getoor, 2003), manifold regularization (ManiReg) (Belkin et al., 2006), semi-supervised embedding (SemiEmb) (Weston et al., 2012), skip-gram-based graph embeddings (DeepWalk) (Perozzi et al., 2014) and Planetoid (Yang et al., 2016).

We then compare against the most recent, state-of-the-art baselines from both spectral and spatial graph neural networks, since they are shown to be effective in semi-supervised settings. For spectral approaches based on the convolution theorem, we compare our OctGCN with Spectral CNN (Bruna et al., 2014), ChebyNet (Defferrard et al., 2016), Vanilla GCN (Kipf & Welling, 2017), GWNN (Xu et al., 2019a), LNet/AdaLNet (Liao et al., 2019) and SGC (Wu et al., 2019). For spatial-based methods, we select MoNet (Monti et al., 2017), GAT (Veličković et al., 2018), GIN (Xu et al., 2019b) and DGI (Velickovic et al., 2019) as comparisons.

Table 2: Experimental results (in percent) on semi-supervised node classification.

    Model                                  Citeseer     Cora         Pubmed
    LP (Zhu et al., 2003)                  45.3         68.0         63.0
    ICA (Lu & Getoor, 2003)                69.1         75.1         73.9
    ManiReg (Belkin et al., 2006)          60.1         59.5         70.7
    SemiEmb (Weston et al., 2012)          59.6         59.0         71.1
    DeepWalk (Perozzi et al., 2014)        43.2         67.2         65.3
    Planetoid (Yang et al., 2016)          64.7         75.7         77.2
    Spectral CNN (Bruna et al., 2014)      58.9         73.3         73.9
    ChebyNet (Defferrard et al., 2016)     69.8         81.2         74.4
    Vanilla GCN (Kipf & Welling, 2017)     70.3         81.5         79.0
    GWNN (Xu et al., 2019a)                71.7         82.8         79.1
    LNet (Liao et al., 2019)               66.2±1.9     79.5±1.8     78.3±0.3
    AdaLNet (Liao et al., 2019)            68.7±1.0     80.4±1.1     78.1±0.4
    SGC (Wu et al., 2019)                  71.9±0.1     81.0±0.0     78.9±0.0
    MoNet (Monti et al., 2017)             —            81.7±0.5     78.8±0.3
    GAT (Veličković et al., 2018)          72.5±0.7     83.0±0.7     79.0±0.3
    GIN (Xu et al., 2019b)                 66.1±0.9     77.6±1.1     77.0±1.2
    DGI (Velickovic et al., 2019)          71.8±0.7     82.3±0.6     76.8±0.6
    OctGCN (this paper)                    72.1±0.2     83.5±0.2     80.5±0.3

4.3 EXPERIMENTAL SETUP

For all experiments, a 2-layer network of our model is constructed using TensorFlow (Abadi et al., 2015) with 64 hidden units. We train our model using the Adam optimizer (Kingma & Ba, 2014) with an initial learning rate lr = 0.01. We terminate training if the validation accuracy does not improve for 100 consecutive steps, and most runs finish in fewer than 200 steps as expected. We initialize the weight matrix following (Glorot & Bengio, 2010), employ an L2 regularization of 5 x 10^-4 on the weights, and apply dropout to the input and hidden layers to prevent overfitting (Srivastava et al., 2014).

For the hyper-parameters used in constructing the wavelets \psi_s, we adopt the selection of the scaling parameter s and the sparseness threshold t (the elements of \psi_s are set to 0 when smaller than t) as in (Xu et al., 2019a), i.e., s = 0.7, t = 1 x 10^-5 for Citeseer, s = 1.0, t = 1 x 10^-4 for Cora, and s = 0.5, t = 1 x 10^-7 for Pubmed, since small s and t are shown to be insensitive to the datasets. For the only hyper-parameter of OctGCN, the optimal proportion d/n of low-frequency components for each dataset is determined through grid search and studied in the next section. The weights of the low- and high-frequency components \alpha_L and \alpha_H are both initialized with 1 and learned automatically. In the experiments, Max-pooling is chosen to demonstrate the importance of the low-frequency components.

4.4 EXPERIMENTAL RESULTS

4.4.1 PERFORMANCE OF OctGCN ON NODE CLASSIFICATION

In Table 2, we demonstrate how our model performs on the public splits taken from (Yang et al., 2016). The results of the baselines are strictly consistent with the numbers from the literature. With the limited information given in semi-supervised learning, we achieve average test accuracies of 72.1%, 83.5%, and 80.5% on Citeseer, Cora and Pubmed, respectively. As OctGCN learns the octave feature representations of graphs in the spectral domain, it can exploit the meaningful information extracted from the underlying "true signal" in the low frequencies over the high frequencies. This is the main reason why OctGCN outperforms the other baseline methods.

4.4.2 ANALYSIS ON INTERPRETABILITY

[Figure 2: The performance of learned OctGCN w.r.t. the proportion of low-frequency components on Citeseer, Cora and Pubmed; each panel plots classification accuracy against the fraction of low-frequency components, with the best fraction marked by a red vertical line.]

In Figure 2, we study how the proportion d/n of low-frequency components affects the performance. We fine-tune the proportion over the range {0%, 5%, ..., 95%}.
The best proportions of low-frequency components are 15%, 5%, and 10% for Citeseer, Cora and Pubmed, respectively. The learned weights of the low- and high-frequency components \alpha_L and \alpha_H w.r.t. the best proportion for each dataset are shown in Table 3. It is clear that the small proportion of low-frequency components is essential to learning the octave feature representation. The results are in line with the importance of low frequencies in GSP and bring interpretability to the nature of GCNs.

Table 3: Learned weights \alpha_L and \alpha_H of OctGCN for low and high frequency w.r.t. the best fraction of low-frequency components d/n (shown in parentheses after the dataset name).

    Dataset          Citeseer (15%)     Cora (5%)        Pubmed (10%)
    Octave weights   α_L     α_H        α_L     α_H      α_L     α_H
    Learned value    0.838   0.162      0.722   0.278    0.860   0.140

4.4.3 T-SNE VISUALIZATION OF LEARNED EMBEDDINGS

Table 4: The mean Silhouette Coefficient of learned samples. Larger is better.

    Dataset     Citeseer                          Cora                              Pubmed
    Model       Vanilla GCN   GWNN   OctGCN       Vanilla GCN   GWNN   OctGCN       Vanilla GCN   GWNN   OctGCN
    Silhouette  0.038         0.050  0.083        0.119         0.153  0.220        0.110         0.130  0.171

Table 4 presents the mean Silhouette Coefficient (Rousseeuw, 1987) over all learned samples; the larger the silhouette score, the better the clustering. We choose two representative baseline methods, i.e., Vanilla GCN (Kipf & Welling, 2017) and GWNN (Xu et al., 2019a), for comparison. We can see that OctGCN achieves the best quality of embeddings.

[Figure 3: The t-SNE visualization of OctGCN compared with spectral-convolution-based baselines; panels (a)-(i) show Vanilla GCN, GWNN and OctGCN on Citeseer, Cora and Pubmed. Each color corresponds to a different class that the embeddings belong to.]

Figure 3 depicts the t-SNE visualizations (Maaten & Hinton, 2008) of the learned embeddings on all three citation datasets. We can visualize the local and sparse property of the spectral graph wavelets utilized in GWNN and OctGCN. Further, the intersections of different classes are more separated in the results of our OctGCN, since the octave feature embeddings learned by our model tend to capture the important information in the low-frequency components and effectively alleviate the noise from the high frequencies.

5 CONCLUSION

In this paper, we propose OctGCN, a novel spectral-based graph convolutional neural network that learns representations of a graph with respect to different frequency components. By a distinct design of filters for low and high frequencies, our model can effectively capture octave feature representations and enhance the interpretability of GCNs. To the best of our knowledge, this is the first attempt at octave convolution for graphs. An interesting direction for future work is to extend the definition of octave convolution from the spectral domain to the spatial domain, in order to pursue more efficient architectures for learning with graphs.
rygxsZzmqH
Official Blind Review #3
3: Weak Reject
This paper proposes to use octave convolution to learn a representation of a graph. Typically, learning on a graph is done either in a spatial domain or in a spectral domain. A spectral-domain approach uses an eigenvalue decomposition of a graph Laplacian (a symmetric matrix) and learns a filter that acts on the eigenvalues of the graph Laplacian while preserving its eigenvectors. This architecture is called the graph convolutional network in the spectral domain. This paper's main contribution is to adapt the octave convolutional network architecture to the usual graph convolutional network. While I believe that this is the first work applying the idea behind the octave convolutional network architecture (separating low- and high-frequency components in the learning stage) to the graph convolutional network architecture, I cannot see a good motivation for why this architecture is good for learning on a graph. A comprehensive study in the paper shows performance gains over the existing methods, but it would be better if the gains were substantial or the authors presented a good motivation for why this architecture is better in some cases. Overall, I think the paper is well-written, but I would suggest presenting a more meaningful justification of why and when the octave GCN is better than the GCN.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Octave Graph Convolutional Network ### Paper Abstract Many variants of Graph Convolutional Networks (GCNs) for representation learning have been proposed recently and have achieved fruitful results in various domains. Among them, spectral-based GCNs are constructed via convolution theorem upon theoretical foundation from the perspective of Graph Signal Processing (GSP). However, despite most of them implicitly act as low-pass filters that generate smooth representations for each node, there is limited development on the full usage of underlying information from low-frequency. Here, we first introduce the octave convolution on graphs in spectral domain. Accordingly, we present Octave Graph Convolutional Network (OctGCN), a novel architecture that learns representations for different frequency components regarding to weighted filters and graph wavelets bases. We empirically validate the importance of low-frequency components in graph signals on semi-supervised node classification and demonstrate that our model achieves state-of-the-art performance in comparison with both spectral-based and spatial-based baselines. ### Paper Keywords ["Graph Convolutional Networks", "Octave Convolution", "Graph Mining"] ### Paper Content ABSTRACTMany variants of Graph Convolutional Networks (GCNs) for representation learn-ing have been proposed recently and have achieved fruitful results in various do-mains. Among them, spectral-based GCNs are constructed via convolution theo-rem upon a theoretical foundation from the perspective of Graph Signal Processing(GSP). However, despite most of them implicitly act as low-pass filters that gen-erate smooth representations for each node, there is limited development on thefull usage of underlying information from low-frequency components. Here, wefirst introduce the octave convolution on graphs in spectral domain. Accordingly,we present Octave Graph Convolutional Network ( OctGCN ), a novel architecturethat learns representations for different frequency components regarding weightedfilters and graph wavelets bases. We empirically validate the importance of low-frequency components in graph signals on semi-supervised node classification anddemonstrate that our model achieves state-of-the-art performance in comparisonwith both spectral-based and spatial-based baselines.1 I NTRODUCTIONThe family of Graph Convolutional Networks (GCNs) (Zhang et al., 2018), which generalizes thetraditional Convolutional Neural Networks (CNNs) from Euclidean structure data to graphs, hasachieved a remarkable success in various application domains, including but not limited to socialnetworks (Chen et al., 2018), computer vision (Kampffmeyer et al., 2018), text classification (Yaoet al., 2019) and applied chemistry (Liao et al., 2019).Existing methods of GCNs design falls into two categories: spatial-based methods and spectral-based methods (Wu et al., 2019). On the surface, the spatial-based models directly perform infor-mation aggregation through graph topology. However, this aggregation can be viewed as a simplifiedconvolution operation on spectral domain with the theoretical foundation in Graph Signal Process-ing (GSP). GSP extends the concepts in Discrete Signal Processing (DSP) and focuses on analyzingand processing data points whose relations are modeled as graph (Shuman et al., 2013; Ortega et al.,2018). 
In standard signal processing problems, the underlying "real signal" is usually assumed tohave low frequencies (Rabiner & Gold, 1975). Recent works (Wu et al., 2019; Maehara, 2019) re-veal that the spectral-based GCNs can be viewed as an implicit low-pass-type filter based denoisingmechanism on the spectral domain. However, there is still a lack of the explicit learning architec-ture of GCNs to extract the beneficial information from low-frequency while making full use of thehigh-frequency under certain scenarios.Considering the signal processing problem in computer vision, a natural image can be decomposedinto a low spatial frequency component containing the smoothly changing structure, e.g., back-ground, and a high spatial frequency component describing the rapidly changing fine details, e.g.,outlines. To accommodate with this phenomenon, (Chen et al., 2019) proposed Octave Convolution(OctConv) to learn the octave feature representations, which factorizes convolutional feature mapsinto two groups at different spatial frequencies and process them with different convolutions at theircorresponding frequency. Similarly, the octave mechanism is observed in graph representationallearning more naturally. The eigenvectors associated with small eigenvalues carry smoothly varyingsignal, encouraging nodes that are neighbors to share similar values. In contrast, the eigenvectorsassociated with large eigenvalues carry sharply varying signal across edges (Donnat et al., 2018).Accordingly, extending octave convolution from images to graphs sheds light on the explicit learningof GCNs regarding the representation of different frequencies.1Under review as a conference paper at ICLR 2020Low-FrequencyComponentsHigh-Frequency componentsMaxPoolingLow-FrequencyrepresentationsHigh-Frequency representationsOctaveConvolutionInputGraphf(W$,α$)f(W(,α()Figure 1: The overview of octave convolutional learning on graphs in spectral domain.Different from the scale-space theory (Lindeberg, 2013) utilized in OctConv (Chen et al., 2019)to define the low- and high-frequency spaces, graph signal processing (GSP) provides us a way todirectly divide the low- and high-frequency components based on the ascending ordered eigenvaluesof Laplacian. Inspired from this, we propose to consider the octave feature in the spectral domainto construct a new graph convolutional model: Octave Graph Convolutional Network OctGCN . InOctGCN , with a particular design of filters for different spectrum, we allocate different weights onlow- and high-frequency. Spectral graph wavelets are chosen as feature transformation bases dueto their local and sparse property. Two parameters are further introduced to construct the filters forreducing the parameter complexity to the same as (Kipf & Welling, 2017), which is critical whenlabels of training data are limited. Meanwhile, we employ the attention mechanism to learn theimportance of low and high pass filters. Figure 1 provides the overview of the design of OctGCNin spectral domain. We validate the effectiveness of our model via experiments on semi-supervisednode classification tasks, where the expressive power of GCNs is crucial to capture the underlyingbeneficial information in graph signals. Our results confirm the importance of low-frequency ingraphs and bring interpretability to the innate character of GCNs. 
In addition, empirical results show that our proposed method consistently rivals the state-of-the-art methods from both spectral-based and spatial-based baselines on real-world datasets.

2 RELATED WORK
Spectral convolutional networks on graphs. Existing methods for defining a convolutional operation on graphs can be broadly divided into two categories: spectral-based and spatial-based methods (Zhang et al., 2018). We focus on spectral graph convolutions in this paper. Spectral CNN (Bruna et al., 2014) first attempts to generalize CNNs to graphs based on the spectrum of the graph Laplacian and defines the convolutional kernel in the spectral domain. Boscaini et al. (2015) further employ a windowed Fourier transformation to define a local spectral CNN approach. ChebyNet (Defferrard et al., 2016) introduces a fast localized convolutional filter on graphs via Chebyshev polynomial approximation. Vanilla GCN (Kipf & Welling, 2017) further extends spectral graph convolutions to networks of significantly larger scale through several simplifications. Khasanova & Frossard (2017) learn graph-based features on images that are inherently invariant to isometric transformations. CayleyNets (Levie et al., 2018) alternatively introduce Cayley polynomials, allowing efficient computation of spectral filters on graphs. The Lanczos algorithm is utilized in LanczosNet (Liao et al., 2019) to construct low-rank approximations of the graph Laplacian for convolution. SGC (Wu et al., 2019) further reduces the complexity of Vanilla GCN by successively removing the non-linearities and collapsing weights between consecutive layers. Despite their effective performance, all these convolution-theorem-based methods lack a strategy to explicitly treat low- and high-frequency components with different importance.
Spectral graph wavelets. Theoretically, the lifting scheme for constructing wavelets that can be adapted to irregular graphs is proposed in (Sweldens, 1998). Hammond et al. (2011) define wavelet transforms appropriate for graphs and describe a fast algorithm for their computation via Chebyshev polynomial approximation. On the application side, Tremblay & Borgnat (2014) utilize graph wavelets for multi-scale community mining, obtaining a local view of the graph from each node. Donnat et al. (2018) exploit the property that graph wavelets describe information diffusion and learn structural node embeddings accordingly. GWNN (Xu et al., 2019a) is the first attempt to construct graph neural networks with graph wavelets. These works emphasize the local and sparse properties of graph wavelets for graph signal processing, both theoretically and practically.
Octave feature representation. In computer vision, Chen et al. (2019) first define octave feature representations based on scale-space theory and reduce the spatial redundancy of vanilla CNN models. Durall et al. (2019) further leverage octave convolutions for stabilizing GAN training. To our knowledge, this is the first time that octave feature representations are considered in the irregular graph domain and combined with graph convolutional neural networks.

3 PROPOSED APPROACH
3.1 PRELIMINARY
We denote by $G = \{V, E\}$ an undirected graph, where $V$ is the set of $n$ nodes ($|V| = n$) and $E$ is the set of edges. The adjacency matrix is defined as $A$, with $A_{i,j} = A_{j,i}$ describing the edge connecting node $i$ and node $j$. The graph Laplacian matrix $L$ is defined as the difference $L = D - A$, where $D_{i,i} = \sum_j A_{i,j}$ is the diagonal degree matrix.
The normalized graph Laplacian matrix is $L = I_n - D^{-1/2} A D^{-1/2}$, where $I_n$ is the identity matrix. The graph Laplacian $L$ can be decomposed into its eigenvalue components, $L = U \Lambda U^\top$, where the eigenvalues in ascending order are $\{\lambda_i\}_{i=0}^{n-1}$ with $0 = \lambda_0 \le \lambda_1 \le \dots \le \lambda_{n-1}$, the diagonal eigenvalue matrix is $\Lambda = \mathrm{diag}(\lambda_0, \dots, \lambda_{n-1})$, and $U = (u_1, u_2, \dots, u_n)$ is the eigenvector matrix.
Since $L$ is a real symmetric matrix, it has real, non-negative eigenvalues $\{\lambda_i\}_{i=0}^{n-1}$, known as the frequencies of the graph. These eigenvalues have an associated complete set of orthonormal eigenvectors in $U$, identified as the Laplacian eigenvectors. In Graph Signal Processing (GSP), frequency components with small/large eigenvalues of the Laplacian are referred to as low/high frequencies. Given a signal $x$ and a graph Laplacian $L$, the Graph Fourier Transform (GFT) of $x$ with respect to $L$ is defined as the signal $\tilde{x} = U^\top x$, and the inverse (i)GFT of $x$ with respect to $L$ is $x = U \tilde{x}$ (Shuman et al., 2013).

3.2 SPECTRAL GRAPH CONVOLUTION
The spectral convolution on graphs is normally defined as the multiplication of a signal $x$ on every node with a diagonal filter $g_\theta = \mathrm{diag}(\theta)$ parameterized by $\theta$ in the Fourier domain:
$$g_\theta \star x = U g_\theta U^\top x, \qquad (1)$$
The filters are usually understood as a function of the eigenvalues. Inspired by (Maehara, 2019), we can decompose the spectral convolution process on graphs, from the perspective of GSP, into four steps: 1. compute the graph bases $U$; 2. apply the graph spectral transform to the signal $x$ with $U^\top$; 3. filter with $g_\theta$; 4. reconstruct the signal features in the spatial domain with $U$. In this sense, the design of the filter $g_\theta$ is essential to the performance of spectral convolution. Broadly, filter designs fall into two categories: the filter is either learned by the neural network (Bruna et al., 2014; Xu et al., 2019a) or directly fixed as a function of the eigenvalues via approximation (Kipf & Welling, 2017; Wu et al., 2019). In this paper, we focus on the first kind.
Spectral CNN (Bruna et al., 2014) generalizes the convolutional net by operating on the spectrum of weights, given by the ordered eigenvectors of the graph Laplacian. The structure of the $k$-th layer is constructed as:
$$X^{k+1}_{[:,j]} = h\Big(U \sum_{i=1}^{p} F^k_{i,j} U^\top X^k_{[:,i]}\Big), \quad j = 1, \dots, q, \qquad (2)$$
where $X^k \in \mathbb{R}^{n \times p}$ is the signal with $p$ input channels and $X^{k+1} \in \mathbb{R}^{n \times q}$ is the convolved signal matrix. $X^k_{[:,i]}$ and $X^{k+1}_{[:,j]}$ are the $i$-th and $j$-th columns of $X^k$ and $X^{k+1}$, respectively. The filter $g_\theta$ of the $k$-th layer is the diagonal matrix $F^k_{i,j}$, learned for each pair of input and output channels in the spectral domain. $h$ is a real-valued nonlinear activation function, e.g., $\mathrm{ReLU}(x) = \max(0, x)$. Thus, the parameter complexity of Spectral CNN is $O(n \times p \times q)$, which generally demands a huge amount of training data for parameter learning.

3.3 WHY SPECTRAL GRAPH WAVELETS?
The graph wavelet neural network (GWNN) (Xu et al., 2019a) extends spectral convolution from the Fourier transform to the wavelet transform. Let $g_s(\lambda) = e^{-s\lambda}$ be a heat kernel filter with scaling parameter $s$. In GSP (Hammond et al., 2011; Shuman et al., 2013), the spectral graph wavelet $\psi_{s,i}$ is defined as the signal resulting from the modulation in the spectral domain of a signal $x$ centered around the associated node $i$. The graph wavelet transform is then conducted by employing a set of wavelets $\psi_s = (\psi_{s,1}, \psi_{s,2}, \dots, \psi_{s,n})$ as bases. Formally, the spectral graph wavelets are given as:
$$\psi_s = U g_s U^\top, \qquad (3)$$
where $U$ contains the Laplacian eigenvectors of $L = D - A$ or of the normalized Laplacian $L = I_n - D^{-1/2} A D^{-1/2}$, and $g_s = \mathrm{diag}\big(g_s(\lambda_1), g_s(\lambda_2), \dots, g_s(\lambda_n)\big)$ is a scaling matrix built from the heat kernel.
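To make equations (1) and (3) concrete, here is a minimal numpy sketch; this is our own illustration rather than the paper's code, the function names are our own, and the dense eigendecomposition is only sensible for small graphs:

```python
import numpy as np

def normalized_laplacian(A):
    # L = I_n - D^{-1/2} A D^{-1/2} for a symmetric adjacency matrix A.
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def spectral_conv(A, x, theta):
    # Equation (1): g_theta * x = U diag(theta) U^T x.
    lam, U = np.linalg.eigh(normalized_laplacian(A))
    return U @ (theta * (U.T @ x))

def wavelet_basis(A, s):
    # Equation (3): psi_s = U diag(e^{-s lambda_i}) U^T. The inverse swaps
    # the heat kernel e^{-s lambda} for e^{s lambda}, as described next.
    lam, U = np.linalg.eigh(normalized_laplacian(A))
    psi = U @ np.diag(np.exp(-s * lam)) @ U.T
    psi_inv = U @ np.diag(np.exp(s * lam)) @ U.T
    return psi, psi_inv
```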
The inverse of the graph wavelets, $\psi_s^{-1}$, is obtained by simply replacing $g_s(\lambda)$ in $\psi_s$ with $g_s(-\lambda)$ corresponding to the heat kernel (Donnat et al., 2018). As before, smaller indices in the graph wavelets correspond to low-frequency components and vice versa.
Similar to the GFT, after replacing the Fourier bases with spectral graph wavelets, the graph wavelet transform of a signal $x$ on the graph is defined as $\hat{x} = \psi_s^{-1} x$, and the inverse graph wavelet transform is $x = \psi_s \hat{x}$. Replacing the graph Fourier transform in the spectral convolution (Equation 1) with the graph wavelet transform, the graph wavelet convolution is obtained as:
$$g_\theta \star x = \psi_s g_\theta \psi_s^{-1} x \qquad (4)$$
The benefits that spectral graph wavelet bases have over Fourier bases mainly fall into two aspects. 1. Given that real-world networks are sparse, the graph wavelet bases are usually much sparser than the Fourier bases, e.g., a density of $\psi_s$ of 2.8% compared with 99.1% for $U$ (Xu et al., 2019a). The sparseness of graph wavelets makes them more computationally efficient to use. 2. In spectral graph wavelets, the signal $\psi_s$ resulting from the heat kernel filter $g_s$ is typically localized on the graph and in the spectral domain (Shuman et al., 2013). By adjusting the scaling parameter $s$, one can easily constrain the range of the localized neighborhood; smaller values of $s$ generally correspond to smaller neighborhoods.
Employing the same strategy as Spectral CNN (Bruna et al., 2014), GWNN designs the same kind of diagonal filter $F^k_{i,j}$ to be learned for each input channel. The structure of the $k$-th layer of GWNN is:
$$X^{k+1}_{[:,j]} = h\Big(\psi_s \sum_{i=1}^{p} F^k_{i,j} \psi_s^{-1} X^k_{[:,i]}\Big), \quad j = 1, \dots, q, \qquad (5)$$
Note that both Spectral CNN and GWNN employ the same filters for learning the full range of frequency components. As mentioned before, the parameter complexity of Spectral CNN is large, since each pair of input and output channels requires learning an individual diagonal filter matrix $F^k_{i,j}$. GWNN further reduces the parameter complexity by dividing each layer into two components, feature transformation and graph convolution:
$$\text{feature transformation:} \quad X^{k'} = X^k W^k, \qquad (6)$$
$$\text{graph convolution:} \quad X^{k+1} = h\big(\psi_s F^k \psi_s^{-1} X^{k'}\big), \qquad (7)$$
where $W^k \in \mathbb{R}^{p \times q}$ is the feature transformation parameter matrix, similar to Vanilla GCN (Kipf & Welling, 2017). In this way, the feature transformation operation is detached from the graph convolution, and the parameter complexity is decreased from $O(n \times p \times q)$ to $O(n + p \times q)$.

3.4 OCTAVE CONVOLUTIONAL LAYER
In contrast to the scale-space-theory-based octave feature representation utilized in computer vision (Lindeberg, 2013), Graph Signal Processing provides a more principled way to split frequencies in the spectral domain. To better capture the different importance of low- and high-frequency components while keeping the benefits of graph wavelets, we can naturally construct each layer in an octave convolution manner by learning two different filters:
$$\text{feature transformation:} \quad X^{k'}_L = X^k W^k_L, \quad X^{k'}_H = X^k W^k_H \qquad (8)$$
$$\text{graph convolution:} \quad X^{k+1}_L = \psi_{s_L} F^k_L \psi_{s_L}^{-1} X^{k'}_L, \quad X^{k+1}_H = \psi_{s_H} F^k_H \psi_{s_H}^{-1} X^{k'}_H, \qquad (9)$$
where $F^k_L \in \mathbb{R}^{d \times d}$ and $F^k_H \in \mathbb{R}^{(n-d) \times (n-d)}$ are the diagonal filter matrices for graph convolution, learned with different weights for the low- and high-frequency components, respectively. Here $d$ is a hyper-parameter that selects the proportion $d/n$ of low-frequency components, and $\psi_{s_L}$ and $\psi_{s_H}$ are the corresponding low- and high-frequency graph wavelet bases.
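The two branches of equations (8)-(9) can be sketched directly in numpy; this is our own schematic (the shapes and the storage of the diagonal filters as vectors are assumptions), with the merge deferred to the pooling step described next:

```python
import numpy as np

def octave_branches(psi_L, psi_L_inv, psi_H, psi_H_inv, F_L, F_H, X, W_L, W_H):
    """Equations (8)-(9): separate feature transforms and wavelet-domain
    filters for the low- (d columns) and high- ((n-d) columns) components.
    psi_L: (n, d), psi_L_inv: (d, n); F_L, F_H hold the filter diagonals."""
    X_L = psi_L @ (F_L[:, None] * (psi_L_inv @ (X @ W_L)))   # (n, q) low branch
    X_H = psi_H @ (F_H[:, None] * (psi_H_inv @ (X @ W_H)))   # (n, q) high branch
    return X_L, X_H   # merged by Pooling and h in equation (10) below
```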
Further, with a pooling operation on the two outputs and a non-linear activation function, the structure of the $k$-th layer in our architecture can be defined as
$$X^{k+1} = h\Big(\mathrm{Pooling}\big(\psi_{s_L} F^k_L \psi_{s_L}^{-1} X^k W^k_L,\ \psi_{s_H} F^k_H \psi_{s_H}^{-1} X^k W^k_H\big)\Big) \qquad (10)$$
We refer to this proposed architecture as the Octave Graph Convolutional Network (OctGCN).
Since the number of parameters in the diagonal convolution filter kernels can be huge, especially for large graphs, graph-based semi-supervised learning might prohibit their estimation due to the limited amount of training data. To mitigate this issue, we further reduce the parameter complexity by constructing the graph convolution kernel $F^k$ from two parameters $\alpha_L$ and $\alpha_H$, and keeping the same weight matrix $W$ shared between the low- and high-frequency components:
$$X^{k+1} = h\Big(\mathrm{Pooling}\big(\psi_{s_L}\,\mathrm{diag}(\alpha_L, \dots, \alpha_L)\,\psi_{s_L}^{-1} X^k W^k,\ \psi_{s_H}\,\mathrm{diag}(\alpha_H, \dots, \alpha_H)\,\psi_{s_H}^{-1} X^k W^k\big)\Big) \qquad (11)$$
For learning the weights of the low- and high-frequency components $\alpha_L$ and $\alpha_H$, we adopt an attention strategy that constrains them to the range $(0, 1)$:
$$\alpha_\bullet = \mathrm{softmax}(\alpha)_\bullet = \frac{\exp(\alpha_\bullet)}{\sum_{\circ \in \{L, H\}} \exp(\alpha_\circ)}, \qquad \bullet = L, H$$
Hence, we introduce three more quantities: $\alpha_L$ and $\alpha_H$, which control the importance of the low- and high-frequency components, and the hyper-parameter $d$, which specifies the ratio of low frequencies we expect to represent the graph. In this way, we reduce the parameter complexity from $O(n + p \times q)$ in GWNN (Xu et al., 2019a) to $O(p \times q)$, which is the same as Vanilla GCN (Kipf & Welling, 2017).

3.5 FAST SPECTRAL GRAPH WAVELET APPROXIMATION VIA CHEBYSHEV POLYNOMIALS
Directly computing the transformation according to Equation 3 is intensive for large graphs, since diagonalizing the Laplacian $L$ commonly requires $O(n^3)$ operations. Fortunately, Hammond et al. (2011) provide a method to quickly approximate the spectral graph wavelets via Chebyshev polynomials. Let $s$ be the fixed scaling parameter in the heat kernel $g_s(\lambda) = e^{-s\lambda}$ and $M$ be the degree of the Chebyshev polynomial approximation of the scaled wavelet (a larger $M$ yields a more accurate approximation at a higher computational cost); the graph wavelet is then given by
$$\psi_s = \frac{1}{2} c_{0,s} + \sum_{i=1}^{M} c_{i,s}\, T_i(\tilde{L}), \qquad c_{i,s} = \frac{2}{\pi} \int_0^\pi \cos(i\theta)\, e^{-s(\cos\theta + 1)}\, d\theta = 2 e^{-s} J_i(s) \qquad (12)$$
where $\tilde{L} = \frac{2}{\lambda_{\max}} L - I_n$ and $J_i(s)$ is the Bessel function of the first kind. We refer to Hammond et al. (2011) for the proof. With this Chebyshev polynomial approximation, the computational cost of the spectral graph wavelets is reduced to $O(M|E| + Mn)$. Since real-world graphs are usually sparse, this difference can be very significant.
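As a sanity check on equation (12), the following numpy/scipy sketch of ours builds the approximation; to stay agnostic about the exact Bessel-function identity, the coefficients $c_{i,s}$ are evaluated by numerical quadrature of the printed integral (assumes $M \ge 1$ and a normalized Laplacian, for which $\lambda_{\max} \le 2$):

```python
import numpy as np
from scipy.integrate import quad

def chebyshev_wavelet(L, s, M, lam_max=2.0):
    """Approximate psi_s via equation (12), avoiding eigendecomposition."""
    n = L.shape[0]
    L_tilde = (2.0 / lam_max) * L - np.eye(n)
    # Chebyshev coefficients c_{i,s} by quadrature of the integral in (12).
    c = [(2.0 / np.pi) * quad(
             lambda t, i=i: np.cos(i * t) * np.exp(-s * (np.cos(t) + 1.0)),
             0.0, np.pi)[0]
         for i in range(M + 1)]
    T_prev, T_curr = np.eye(n), L_tilde                 # T_0 and T_1
    psi = 0.5 * c[0] * T_prev + c[1] * T_curr
    for i in range(2, M + 1):
        T_prev, T_curr = T_curr, 2.0 * L_tilde @ T_curr - T_prev  # recurrence
        psi += c[i] * T_curr
    return psi
```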
4 EXPERIMENTS
4.1 DATASETS
We evaluate our proposed OctGCN on the semi-supervised node classification task. The experimental setup closely follows (Yang et al., 2016; Kipf & Welling, 2017). A statistical overview of the datasets is given in Table 1. Three real-world datasets are chosen as benchmarks: Citeseer, Cora and Pubmed (Sen et al., 2008). In these citation networks, nodes are documents with corresponding bag-of-words features and edges are citation links. The label rate denotes the ratio of labeled nodes used during training. We keep the label rate consistent with the classic public split, i.e., 20 labeled nodes per class in each dataset for training. Meanwhile, the test set contains 1000 labeled samples for prediction accuracy evaluation, and the validation set includes 500 labeled samples for determining hyper-parameters.

Table 1: The overview of dataset statistics.
Dataset   Nodes   Edges   Classes  Features  Label rate
Citeseer  3,327   4,732   6        3,703     0.036
Cora      2,708   5,429   7        1,433     0.052
Pubmed    19,717  44,338  3        500       0.003

4.2 BASELINES
We first compare against traditional baselines, i.e., label propagation (LP) (Zhu et al., 2003), the iterative classification algorithm (ICA) (Lu & Getoor, 2003), manifold regularization (ManiReg) (Belkin et al., 2006), semi-supervised embedding (SemiEmb) (Weston et al., 2012), skip-gram based graph embeddings (DeepWalk) (Perozzi et al., 2014) and Planetoid (Yang et al., 2016).
We then compare against the most recent state-of-the-art baselines from both spectral and spatial graph neural networks, since they have been shown effective in semi-supervised settings. For spectral approaches based on the convolution theorem, we compare our OctGCN with Spectral CNN (Bruna et al., 2014), ChebyNet (Defferrard et al., 2016), Vanilla GCN (Kipf & Welling, 2017), GWNN (Xu et al., 2019a), LNet/AdaLNet (Liao et al., 2019) and SGC (Wu et al., 2019). For spatial-based methods, we select MoNet (Monti et al., 2017), GAT (Veličković et al., 2018), GIN (Xu et al., 2019b) and DGI (Velickovic et al., 2019) as comparisons.

Table 2: Experimental results (in percent) on semi-supervised node classification.
Model                               Citeseer   Cora       Pubmed
LP (Zhu et al., 2003)               45.3       68.0       63.0
ICA (Lu & Getoor, 2003)             69.1       75.1       73.9
ManiReg (Belkin et al., 2006)       60.1       59.5       70.7
SemiEmb (Weston et al., 2012)       59.6       59.0       71.1
DeepWalk (Perozzi et al., 2014)     43.2       67.2       65.3
Planetoid (Yang et al., 2016)       64.7       75.7       77.2
Spectral CNN (Bruna et al., 2014)   58.9       73.3       73.9
ChebyNet (Defferrard et al., 2016)  69.8       81.2       74.4
Vanilla GCN (Kipf & Welling, 2017)  70.3       81.5       79.0
GWNN (Xu et al., 2019a)             71.7       82.8       79.1
LNet (Liao et al., 2019)            66.2±1.9   79.5±1.8   78.3±0.3
AdaLNet (Liao et al., 2019)         68.7±1.0   80.4±1.1   78.1±0.4
SGC (Wu et al., 2019)               71.9±0.1   81.0±0.0   78.9±0.0
MoNet (Monti et al., 2017)          —          81.7±0.5   78.8±0.3
GAT (Veličković et al., 2018)       72.5±0.7   83.0±0.7   79.0±0.3
GIN (Xu et al., 2019b)              66.1±0.9   77.6±1.1   77.0±1.2
DGI (Velickovic et al., 2019)       71.8±0.7   82.3±0.6   76.8±0.6
OctGCN (this paper)                 72.1±0.2   83.5±0.2   80.5±0.3

4.3 EXPERIMENTAL SETUP
For all experiments, a 2-layer network of our model is constructed using TensorFlow (Abadi et al., 2015) with 64 hidden units. We train our model using the Adam optimizer (Kingma & Ba, 2014) with an initial learning rate $lr = 0.01$. We terminate training if the validation accuracy does not improve for 100 consecutive steps, and most runs finish in fewer than 200 steps, as expected.
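The early-stopping rule above can be written as a short, framework-agnostic loop; this is a generic sketch of ours, with `train_step` and `val_accuracy` as placeholder callbacks rather than the paper's actual code:

```python
def train_with_patience(train_step, val_accuracy, patience=100, max_steps=10_000):
    """Stop once validation accuracy has not improved for `patience` steps."""
    best, steps_since_best = float("-inf"), 0
    for _ in range(max_steps):
        train_step()                   # one Adam update on the training loss
        acc = val_accuracy()           # accuracy on the 500-sample validation set
        if acc > best:
            best, steps_since_best = acc, 0
        else:
            steps_since_best += 1
        if steps_since_best >= patience:
            break                      # early stopping triggered
    return best
```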
We initialize the weight matrices following (Glorot & Bengio, 2010), employ $5 \times 10^{-4}$ L2 regularization on the weights, and apply dropout to the input and hidden layers to prevent overfitting (Srivastava et al., 2014).
For the hyper-parameters used to construct the wavelets $\psi_s$, we adopt the scaling parameter $s$ and sparseness threshold $t$ (elements of $\psi_s$ are set to 0 when smaller than $t$) from (Xu et al., 2019a), i.e., $s = 0.7$, $t = 1 \times 10^{-5}$ for Citeseer, $s = 1.0$, $t = 1 \times 10^{-4}$ for Cora, and $s = 0.5$, $t = 1 \times 10^{-7}$ for Pubmed, since performance is reported to be insensitive to small changes in $s$ and $t$. The remaining hyper-parameter of OctGCN, the optimal proportion $d/n$ of low-frequency components for each dataset, is determined through grid search and studied in the next section. The weights of the low- and high-frequency components $\alpha_L$ and $\alpha_H$ are both initialized to 1 and learned automatically. In the experiments, Max-pooling is chosen to demonstrate the importance of the low-frequency components.

4.4 EXPERIMENTAL RESULTS
4.4.1 PERFORMANCE OF OctGCN ON NODE CLASSIFICATION
In Table 2, we demonstrate how our model performs on the public splits taken from (Yang et al., 2016). The results of the baselines are strictly consistent with the numbers from the literature. With the limited information given in semi-supervised learning, we achieve average test accuracies of 72.1%, 83.5%, and 80.5% on Citeseer, Cora and Pubmed, respectively. As OctGCN learns octave feature representations for graphs in the spectral domain, it can extract meaningful information from the underlying "true signal" carried by the low frequencies over the high frequencies. This is the main reason why OctGCN outperforms the baseline methods.

4.4.2 ANALYSIS ON INTERPRETABILITY

Figure 2: The performance of the learned OctGCN w.r.t. the proportion of low-frequency components (classification accuracy vs. fraction of low-frequency components, one panel each for Citeseer, Cora and Pubmed). The best fraction is marked with a red vertical line.

In Figure 2, we study how the proportion $d/n$ of low-frequency components affects performance. We fine-tune the proportion over the range $\{0\%, 5\%, \dots, 95\%\}$. The best proportions of low-frequency components are 15%, 5%, and 10% for Citeseer, Cora and Pubmed, respectively. The learned weights of the low- and high-frequency components $\alpha_L$ and $\alpha_H$ w.r.t. the best proportion for each dataset are reported in Table 3. It is clear that a small proportion of low-frequency components is essential to learning the octave feature representation.

Table 3: Learned weights $\alpha_L$ and $\alpha_H$ of OctGCN for the low and high frequencies w.r.t. the best fraction of low-frequency components $d/n$ (given after each dataset name).
Dataset        Citeseer (15%)    Cora (5%)         Pubmed (10%)
Filter weight  α_L     α_H       α_L     α_H       α_L     α_H
Learned value  0.838   0.162     0.722   0.278     0.860   0.140
These results are in line with the importance of low frequencies in GSP and bring interpretability to the nature of GCNs.

4.4.3 T-SNE VISUALIZATION OF LEARNED EMBEDDINGS

Table 4: The mean Silhouette Coefficient of the learned samples (larger is better).
Dataset      Citeseer                    Cora                        Pubmed
Model        Vanilla GCN  GWNN  OctGCN   Vanilla GCN  GWNN  OctGCN   Vanilla GCN  GWNN  OctGCN
Silhouette   0.038        0.050 0.083    0.119        0.153 0.220    0.110        0.130 0.171

Table 4 presents the mean Silhouette Coefficient (Rousseeuw, 1987) over all learned samples; the larger the silhouette score, the better the clustering. We choose two representative baseline methods, Vanilla GCN (Kipf & Welling, 2017) and GWNN (Xu et al., 2019a), for comparison. We find that OctGCN achieves the best quality of embeddings.

Figure 3: The t-SNE visualization of OctGCN compared with spectral-convolution-based baselines; panels (a)-(i) show Vanilla GCN, GWNN and OctGCN on Citeseer, Cora and Pubmed. Each color corresponds to a different class that the embeddings belong to.

Figure 3 depicts the t-SNE visualization (Maaten & Hinton, 2008) of the learned embeddings on all three citation datasets. The visualization reflects the local and sparse property of the spectral graph wavelets utilized in GWNN and OctGCN. Further, the intersections of different classes are more separated in the results of our OctGCN, since the octave feature embeddings learned by our model capture the important information in the low-frequency components and effectively alleviate the noise from the high frequencies.

5 CONCLUSION
In this paper, we propose OctGCN, a novel spectral-based graph convolutional neural network that learns representations of a graph with respect to different frequency components. With a distinct design of filters for the low and high frequencies, our model can effectively capture octave feature representations and enhance the interpretability of GCNs. To the best of our knowledge, this is the first attempt at octave convolution for graphs. An interesting direction for future work is to extend the definition of octave convolution from the spectral domain to the spatial domain, in order to pursue more efficient architectures for learning with graphs.<|im_end|> <|im_start|>assistant ### Review Title Official Blind Review #3 ### Review Text This paper proposes to use octave convolution to learn a representation of a graph. Typically, learning on a graph is done either in the spatial domain or in the spectral domain. A spectral-domain-based approach uses an eigenvalue decomposition of the graph Laplacian (a symmetric matrix), learning a filter that acts on the eigenvalues of the graph Laplacian while preserving its eigenvectors. This architecture is called the graph convolutional network in the spectral domain. This paper's main contribution is to adapt the octave convolutional network architecture to the usual graph convolutional network. While I believe that this is the first work applying the idea behind the octave convolutional network architecture (separating low- and high-frequency components in the learning stage) to the graph convolutional network architecture, I cannot see a good motivation for why this architecture is good for learning on a graph. A comprehensive study in the paper shows performance gains over existing methods, but it would be better if the gains were substantial or the authors presented a good motivation on why this architecture is good in some cases.
Overall, I think the paper is well-written, but I would suggest presenting a more meaningful justification of why and when the octave GCN is better than the GCN. ### Review Rating 3: Weak Reject ### Review Confidence <|im_end|> <|im_end|>
BJlo91BYPr
ICLR.cc/2020/Conference
2020
Irrationality can help reward inference
["Lawrence Chan", "Andrew Critch", "Anca Dragan"]
Specifying reward functions is difficult, which motivates the area of reward inference: learning rewards from human behavior. The starting assumption in the area is that human behavior is optimal given the desired reward function, but in reality people have many different forms of irrationality, from noise to myopia to risk aversion and beyond. This fact seems like it will be strictly harmful to reward inference: it is already hard to infer the reward from rational behavior, and noise and systematic biases make actions have less direct of a relationship to the reward. Our insight in this work is that, contrary to expectations, irrationality can actually help rather than hinder reward inference. For some types and amounts of irrationality, the expert now produces more varied policies compared to rational behavior, which help disambiguate among different reward parameters -- those that otherwise correspond to the same rational behavior. We put this to the test in a systematic analysis of the effect of irrationality on reward inference. We start by covering the space of irrationalities as deviations from the Bellman update, simulate expert behavior, and measure the accuracy of inference to contrast the different types and study the gains and losses. We provide a mutual information-based analysis of our findings, and wrap up by discussing the need to accurately model irrationality, as well as to what extent we might expect (or be able to train) real people to exhibit helpful irrationalities when teaching rewards to learners.
["preference inference", "inverse reinforcement learning", "reward inference", "irrationality"]
ABSTRACT
Specifying reward functions is difficult, which motivates the area of reward inference: learning rewards from human behavior. The starting assumption in the area is that human behavior is optimal given the desired reward function, but in reality people have many different forms of irrationality, from noise to myopia to risk aversion and beyond. This fact seems like it will be strictly harmful to reward inference: it is already hard to infer the reward from rational behavior, and noise and systematic biases make actions have a less direct relationship with the reward. Our insight in this work is that, contrary to expectations, irrationality can actually help rather than hinder reward inference. For some types and amounts of irrationality, the expert now produces more varied policies compared to rational behavior, which help disambiguate among different reward parameters, namely those that otherwise correspond to the same rational behavior. We put this to the test in a systematic analysis of the effect of irrationality on reward inference. We start by covering the space of irrationalities as deviations from the Bellman update, simulate expert behavior, and measure the accuracy of inference to contrast the different types and study the gains and losses. We provide a mutual information-based analysis of our findings, and wrap up by discussing the need to accurately model irrationality, as well as to what extent we might expect (or be able to train) real people to exhibit helpful irrationalities when teaching rewards to learners.

1 INTRODUCTION
The application of reinforcement learning (RL) in increasingly complex environments has been most successful for problems that are already represented by a specified reward function (Lillicrap et al., 2015; Mnih et al., 2015; 2016; Silver et al., 2016). Unfortunately, not only do real-world tasks usually lack an explicit exogenously-specified reward function, but attempting to specify one tends to lead to unexpected side-effects as the agent is faced with new situations (Lehman et al., 2018).
This has motivated the area of reward inference: the process of estimating a reward function from human inputs. The inputs are traditionally demonstrations, leading to inverse reinforcement learning (IRL) (Ng et al., 2000; Abbeel & Ng, 2004) or inverse optimal control (IOC) (Kalman, 1964; Jameson & Kreindler, 1973; Mombaur et al., 2010; Finn et al., 2016). Recent work has expanded the range of inputs significantly, to comparisons (Wirth et al., 2017; Sadigh et al., 2017; Christiano et al., 2017), natural language instructions (MacGlashan et al., 2015; Fu et al., 2019), physical corrections (Jain et al., 2015; Bajcsy et al., 2017), proxy rewards (Hadfield-Menell et al., 2017; Ratner et al., 2018), or scalar reward values (Griffith et al., 2013; Loftin et al., 2014).
The central assumption behind these methods is that human behavior is rational, i.e. optimal with respect to the desired reward (cumulative, in expectation). Unfortunately, decades of research in behavioral economics and cognitive science (Chipman, 2014) have unearthed a deluge of irrationalities, i.e. ways in which people deviate from optimal decision making: hyperbolic discounting, scope insensitivity, optimism bias, decision noise, certainty effects, loss aversion, status quo bias, etc.
Work on reward inference has predominantly used one model of irrationality: decision-making noise, where the probability of an action relates to the value that action has.
The most widely used model by far is a Boltzmann distribution stemming from the Luce-Shepard rule (Luce, 1959; Shepard, 1957; Lucas et al., 2009) and the principle of maximum (causal) entropy (Ziebart et al., 2008; 2010), which we will refer to as Boltzmann-rationality (Fisac et al., 2017). Recent work has started to incorporate systematic biases though, like risk-aversion (Singh et al., 2017), having a wrong belief about the dynamics (Reddy et al., 2018), and myopia and hyperbolic discounting (Evans & Goodman, 2015; Evans et al., 2016).
Learning from irrational experts feels like a daunting task: reward inference is already hard with rational behavior, but now a learner needs to make sense of behavior that is noisy or systematically biased. Our goal in this work is to characterize just how muddied the waters are: how (and how much) do different irrationalities affect reward inference?
Our insight is that, contrary to expectations, irrationality can actually help, rather than hinder, reward inference.
Our explanation is that how good reward inference is depends on the mutual information between the policies produced by the expert and the reward parameters to be inferred. While it is often possible for two reward parameters to produce the same rational behavior, irrationalities can sometimes produce different behaviors that disambiguate between those same two reward parameters. For instance, noise can help when it is related to the value function, as Boltzmann noise is, because it distinguishes the difference in values even when the optimal action stays the same. Optimism can be helpful because the expert takes fewer risk-avoiding actions and acts more directly on their goal.
Overall, we contribute 1) an analysis and comparison of the effects of different biases on reward inference testing our insight, 2) a way to systematically formalize and cover the space of irrationalities in order to conduct such an analysis, and 3) evidence for the importance of assuming the right type of irrationality during inference.
Our good news is that irrationalities can indeed be an ally for inference. Of course, this is not always true: the details of which irrationality type and how much of it also matter. We see these results as opening the door to a better understanding of reward inference, as well as to practical ways of making inference easier by asking for the right kind of expert demonstrations; after all, in some cases it might be easier for people to act optimistically or myopically than to act rationally. Our results reinforce that optimal teaching is different from optimal doing, but point out that some forms of teaching might actually be easier than doing.

2 METHOD
2.1 EXPLORING IRRATIONALITY THROUGH SIMULATION
Our goal is to explore the effect irrationalities have on reward inference if the learner knows about them; we return to the need for the learner to accurately model irrationalities in section 4.2. While ideally we would recruit human subjects with different irrationalities and measure how well we can learn rewards, this is prohibitive because we do not get to dictate someone's irrationality type: people exhibit a mix of them, some yet to be discovered. Further, measuring accuracy of inference is complicated by the fact that we do not have ground truth access to the desired reward: the learner can measure agreement with some test set, but the test set itself is produced subject to the same irrationalities that produced the training data.
As experimenters, we would remain deluded about the human's true intentions and preferences.
To address this issue, we simulate expert behavior subject to different irrationalities based on ground truth reward functions, run reward inference, and measure the performance against the ground truth, i.e. the accuracy of a Bayesian posterior on the reward function given the (simulated) expert's inputs.

2.2 TYPES AND DEGREES OF IRRATIONALITY
There are many possible irrationalities that people exhibit (Chipman, 2014), far more than what we could study in one paper. They come with varying degrees of mathematical formalization and replication across human studies. To provide good coverage of this space, we start from the Bellman update and systematically manipulate its terms and operators to produce a variety of different irrationalities that deviate from the optimal MDP policy in complementary ways. For instance, operating on the discount factor can model more myopic behavior, while operating on the transition function can model optimism or the illusion of control. Figure 1 summarizes our approach, which we detail below.

$$V_{i+1}(s) = \max_a \sum_{s' \in S} T(s'|s,a)\,\big(r(s,a,s') + \gamma V_i(s)\big)$$
Figure 1: We modify the components of the Bellman update to cover different types of irrationalities (the figure annotates the terms with Boltzmann, Optimism/Pessimism, Illusion of Control, Prospect, Myopic VI, Myopic Discount, Hyperbolic, and Extremal): changing the max into a softmax to capture noise, changing the transition function to capture optimism/pessimism or the illusion of control, changing the reward values to capture the nonlinear perception of gains and losses (prospect theory), changing the average reward over time into a maximum (extremal), and changing the discounting to capture more myopic decision-making.

2.2.1 RATIONAL EXPERT
The rational expert does value iteration using the Bellman update from figure 1. Our models change this update to produce different types of non-rational behavior.

2.2.2 MODIFYING THE MAX OPERATOR: BOLTZMANN
Boltzmann-rationality modifies the maximum over actions $\max_a$ with a Boltzmann operator with parameter $\beta$:
$$V_{i+1}(s) = \mathrm{Boltz}^\beta_a \sum_{s' \in S} T(s'|s,a)\,\big(r(s,a,s') + \gamma V_i(s)\big)$$
where $\mathrm{Boltz}^\beta(x) = \sum_i x_i e^{\beta x_i} / \sum_i e^{\beta x_i}$ (Ziebart et al., 2010; Asadi & Littman, 2017). This models that people will not be perfect, but rather noisily pick actions in a way that is related to the Q-value of those actions. The constant $\beta$ is called the rationality constant, because as $\beta \to \infty$, the human choices approach perfect rationality (optimality), whereas $\beta = 0$ produces uniformly random choices. This is the standard assumption for reward inference that does not assume perfect rationality, because it easily transforms the rationality assumption into a probability distribution over actions, enabling learners to make sense of imperfect demonstrations that otherwise do not match up with any reward parameters.

2.2.3 MODIFYING THE TRANSITION FUNCTION
Our next set of irrationalities manipulates the transition function away from reality.
Illusion of Control. Humans often overestimate their ability to control random events. To model this, we consider experts that use the Bellman update:
$$V_{i+1}(s) = \max_a \sum_{s' \in S} T^n(s'|s,a)\,\big(r(s,a,s') + \gamma V_i(s)\big)$$
where $T^n(s'|s,a) \propto \big(T(s'|s,a)\big)^n$. As $n \to \infty$, the demonstrator acts as if it exists in a deterministic environment. As $n \to 0$, the expert acts as if it had an equal chance of transitioning to every possible successor state. At $n = 1$, the expert is the rational expert.
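To illustrate how such modified backups look in practice, here is a small numpy sketch of ours (not the authors' released code) for the Boltzmann and illusion-of-control experts; note that we write the backup with the standard $V_i(s')$ on the right-hand side, whereas the printed equations use $V_i(s)$:

```python
import numpy as np

def boltz(Q, beta):
    """Boltz^beta over actions: sum_a Q_a e^{beta Q_a} / sum_a e^{beta Q_a}."""
    w = np.exp(beta * (Q - Q.max(axis=1, keepdims=True)))  # numerically stable
    return (Q * w).sum(axis=1) / w.sum(axis=1)

def boltzmann_vi(T, R, gamma, beta, iters=500):
    """Soft value iteration; T and R are (S, A, S) arrays."""
    V = np.zeros(T.shape[0])
    for _ in range(iters):
        Q = (T * (R + gamma * V[None, None, :])).sum(axis=2)  # (S, A)
        V = boltz(Q, beta)
    return V

def illusion_of_control(T, n):
    """T^n(s'|s,a) proportional to T(s'|s,a)^n, renormalized (Section 2.2.3)."""
    Tn = T ** n
    return Tn / Tn.sum(axis=2, keepdims=True)
```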
Optimism/Pessimism. Humans tend to systematically overestimate their chance of experiencing positive over negative events. We model this using experts that modify the probability of outcomes based on the value of those outcomes:
$$V_{i+1}(s) = \max_a \sum_{s' \in S} T^{1/\tau}(s'|s,a)\,\big(r(s,a,s') + \gamma V_i(s)\big)$$
where $T^{1/\tau}(s'|s,a) \propto T(s'|s,a)\, e^{(r(s,a,s') + \gamma V_i(s))/\tau}$. $1/\tau$ controls how pessimistic or optimistic the expert is. As $1/\tau \to +\infty$, the expert becomes increasingly certain that good transitions will happen. As $1/\tau \to -\infty$, the expert becomes increasingly certain that bad transitions will happen. As $1/\tau \to 0$, the expert approaches the rational expert.

2.2.4 MODIFYING THE REWARD: PROSPECT THEORY
Next, we consider experts that use the modified Bellman update:
$$V_{i+1}(s) = \max_a \sum_{s' \in S} T(s'|s,a)\,\big(f(r(s,a,s')) + \gamma V_i(s)\big)$$
where $f: \mathbb{R} \to \mathbb{R}$ is some scalar function. This is equivalent to solving the MDP with reward $f \circ r$. It allows us to model human behaviors such as loss aversion and scope insensitivity.
Prospect theory (Kahneman & Tversky, 2013) inspires us to consider a particular family of reward transforms:
$$f_c(r) = \begin{cases} \log(1 + |r|) & r > 0 \\ 0 & r = 0 \\ -c \log(1 + |r|) & r < 0 \end{cases}$$
$c$ controls how loss averse the expert is. As $c \to \infty$, the expert primarily focuses on avoiding negative rewards. As $c \to 0$, the expert focuses on maximizing positive rewards and ignores negative ones.

2.2.5 MODIFYING THE SUM BETWEEN REWARD AND FUTURE VALUE: EXTREMAL
Extremal. Humans seem to exhibit duration neglect, sometimes caring only about the maximum intensity of an experience (Do et al., 2008). We model this using experts that use the Bellman step:
$$V_{i+1}(s) = \max_a \sum_{s' \in S} T(s'|s,a)\,\max\big[r(s,a,s'),\ (1-\alpha)\, r(s,a,s') + \alpha V_i(s)\big]$$
These experts maximize the expected maximum reward along a trajectory, instead of the expected sum of rewards. As $\alpha \to 1$, the expert maximizes the expected maximum reward it achieves along its full trajectory. As $\alpha \to 0$, the expert becomes greedy, and only cares about the reward it achieves in the next timestep.

2.2.6 MODIFYING THE DISCOUNTING
Myopic Discount. In practice, humans are often myopic, only considering immediate rewards. One way to model this is to decrease $\gamma$ in the Bellman update. At $\gamma = 1$, this is the rational expert. As $\gamma \to 0$, the expert becomes greedy and acts only to maximize immediate reward.
Myopic VI. As another way to model human myopia, we consider an expert that performs only $h$ steps of Bellman updates. That is, this expert cares equally about rewards within horizon $h$, and discounts to 0 any reward after that. As $h \to \infty$, this expert becomes rational. If $h = 1$, this expert only cares about the immediate reward.
Hyperbolic Discounting. Humans also exhibit hyperbolic discounting, with a high discount rate for the immediate future and a low discount rate for the far future. Alexander & Brown (2010) formulate this as the following Bellman update:
$$V_{i+1}(s) = \max_a \sum_{s' \in S} T(s'|s,a)\,\big(r(s,a,s') + V_i(s)\big) \big/ \big(1 + k V_i(s)\big)$$
$k$ modulates how much the expert prefers rewards now versus in the future. As $k \to 0$, this expert becomes the rational expert.
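Since the prospect expert is, by the equivalence above, just the rational expert on a transformed reward, it is easy to simulate; a minimal sketch of the transform $f_c$ (ours, not the authors' code):

```python
import numpy as np

def prospect_transform(R, c):
    """f_c from Section 2.2.4: compress gains logarithmically and
    scale the compressed losses by the loss-aversion parameter c."""
    mag = np.log1p(np.abs(R))
    return np.where(R > 0, mag, np.where(R < 0, -c * mag, 0.0))
```

Running any standard value-iteration solver with `prospect_transform(R, c)` in place of `R` then reproduces this expert.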
With4Under review as a conference paper at ICLR 2020Figure 2: The log loss (lower = better) of the posterior as a function of the parameter we vary foreach irrationality type. These six irrationalities all have parameter settings that outperform rationalexperts. For the models that interpolate to rational expert, we denote the value that is closest torational using a dashed vertical line.probability 0:8, they will go in that direction. With probability 0:2, they will instead go in one of thetwo adjacent directions. Holes and rewards are terminal states, and return the expert back to theirstart state. They receive a penalty of 10for falling into a hole and i2[0;4]for entering into theith reward cell.Dependent Measures. To separate the inference difficulty caused by suboptimal inference fromthe difficulty caused by expert irrationality, we perform the exact Bayesian update on the trajectory(Ramachandran & Amir, 2007), which gives us the posterior on given:P(j) =P(j)P()R0P(j0)P(0)We use two metrics to measure the difficulty of inference The first is the expected log loss of thisposterior, or negative log-likelihood:Log Loss (j) =E;[logP(j)]:A low log loss implies that we are assigning a high likelihood to the true . As we are performingexact Bayesian inference with the true model P(j)and priorP(), the log loss is equal to theentropy of the posterior H(j).The second metric is the L2-distance between the mean posterior and the actual theta:L2(j) =E;jjE[j]jj2The closer the inferred posterior mean of is to the actual value , the lower the loss.For each irrationality type, we calculate the performance of reward inference on trajectories of afixed length T, with respect to the two metrics above. To sample a trajectory of length Tfroma expert, we fix and start state s. Then, we perform the expert’s (possibly modified) Bellmanupdates until convergence to recover the policy . Finally, we generate rollouts starting from statesuntilTstate, action pairs have been sampled from .5Under review as a conference paper at ICLR 2020Figure 3: A best case analysis for each irrationality type: the log loss/ L2distance from mean(lower=better) for experts, as a function of the length of trajectory observed. Each irrationalityuses the parameter value that is most informative. As discussed in section 3.2, different irrational-ity types have different slopes and converge to different values. In addition, the best performingirrationality type according to log loss is not the best performing type according to L2loss.3.2 A NALYSISImpact of Each Irrationality. We found that of the 8 irrationalities we studied, 6 had parametersettings that lead to lower log loss than the rational expert. We report how the parameter influencesthe log loss for each of these experts in figure 2.1ForT= 30 , Optimism with 1== 3:16performedthe best, followed by Boltzmann with = 100 and Hyperbolic with k= 0:1. Both forms of Myopiaalso outperformed the rational expert, with best performance occurring at = 0:9andh= 5.Finally, the Extremal expert also slightly outperformed the rational expert, with best performance at= 0:9. Notably, in every case, neither the most irrational expert nor the perfectly rational expertwas the most informative.Impact of Data for Different Irrationalities. Next, we investigate how the quality of inferencevaries as we increase the length of the observed trajectory T. We report our results for the bestperforming parameter for each irrationality type in figure 3. 
Figure 2: The log loss (lower = better) of the posterior as a function of the parameter we vary for each irrationality type. These six irrationalities all have parameter settings that outperform rational experts. For the models that interpolate to the rational expert, we denote the value that is closest to rational using a dashed vertical line.

3.2 ANALYSIS
Impact of Each Irrationality. We found that of the 8 irrationalities we studied, 6 had parameter settings that lead to lower log loss than the rational expert. We report how the parameter influences the log loss for each of these experts in figure 2 (the plots for the other two irrationalities are included in the appendix). For $T = 30$, Optimism with $1/\tau = 3.16$ performed the best, followed by Boltzmann with $\beta = 100$ and Hyperbolic with $k = 0.1$. Both forms of Myopia also outperformed the rational expert, with best performance occurring at $\gamma = 0.9$ and $h = 5$. Finally, the Extremal expert also slightly outperformed the rational expert, with best performance at $\alpha = 0.9$. Notably, in every case, neither the most irrational expert nor the perfectly rational expert was the most informative.

Figure 3: A best case analysis for each irrationality type: the log loss / L2 distance from the mean (lower = better) for experts, as a function of the length of trajectory observed. Each irrationality uses the parameter value that is most informative. As discussed in section 3.2, different irrationality types have different slopes and converge to different values. In addition, the best performing irrationality type according to log loss is not the best performing type according to L2 loss.

Impact of Data for Different Irrationalities. Next, we investigate how the quality of inference varies as we increase the length of the observed trajectory $T$. We report our results for the best performing parameter of each irrationality type in figure 3. Interestingly, while both metrics decrease monotonically regardless of irrationality type, the rate at which they decrease differs by irrationality type, and the best performing irrationality type according to log loss (Optimism) is not the best performing type according to L2 distance (Boltzmann).
What is behind these differences? To explain these results, we use the notion of mutual information $I(X;Y)$ between two variables, defined as:
$$I(X;Y) = \mathbb{E}_{X,Y}\left[\log \frac{P(X,Y)}{P(X)\,P(Y)}\right] = H(X) - H(X|Y)$$
The mutual information measures how much our uncertainty about $X$ decreases by observing $Y$. For reward inference, the term we care about is the mutual information between the expert's trajectory and the reward parameters:
$$I(\theta;\xi) = \mathbb{E}_{\theta,\xi}\left[\log \frac{P(\theta,\xi)}{P(\theta)\,P(\xi)}\right] = H(\theta) - H(\theta|\xi)$$
The mutual information $I(\theta;\xi)$ is equal to a constant minus the posterior log loss under the true model. An expert with higher mutual information will cause the learner to have a lower posterior log loss. By the information processing inequality, we have the bound $I(\theta;\xi) \le I(\theta;\pi)$.
To have higher mutual information, different $\theta$s should be mapped to different policies $\pi_\theta$. Indeed, we found that the experts that were able to outperform the rational expert were able to disambiguate between $\theta$s that the rational expert could not. To visualize this, we show examples of how the policies of several irrational experts differ when the rational expert's policies are identical, in figures 4 and 5.

Figure 4: (a) Optimism bias ($1/\tau = 3.16$) produces different actions for $\theta = (4,1)$ vs. $\theta = (1,4)$ in the states shown: the rational policy is to go away from the hole regardless of $\theta$, but an optimistic expert takes the chance and goes for the larger reward, up in the first case, down in the second. (b) Pessimism bias ($1/\tau = -3.16$) produces different actions for $\theta = (1,1)$ vs. $\theta = (4,4)$: when the reward is sufficiently large, the expert becomes convinced that no action it takes will lead to the reward, leading it to perform random actions.
Figure 5: (a) Boltzmann-rationality ($\beta = 100$) produces different policies for $\theta = (1,1)$ vs. $\theta = (4,4)$: when $\|\theta\|$ is larger, the policy becomes closer to that of the rational expert. (b) A Myopic expert ($h = 5$) produces different policies for $\theta = (4,1)$ vs. $\theta = (4,0)$: while the rational expert always detours around the hole and attempts to reach the larger reward, myopia causes the myopic expert to go for the smaller source of reward when it is non-zero.

We plot the correlation between $I(\theta;\xi)$ and $I(\theta;\pi)$ in figure 6. Experts that have more informative policies tend to have more informative trajectories, but the correlation is not perfect. Notably, the Optimism expert has the most informative trajectories of length 30, but has less informative policies than the Boltzmann expert.

Figure 6: The informativeness of policies correlates with the informativeness of trajectories of length 30, as discussed in section 3.2.

In the limit of infinite data from every state, we would have $I(\theta;\xi) \to I(\theta;\pi)$. However, as each trajectory begins from the same start state, and not every state is reachable under every policy, the bound is not achievable in general, even if we observe an arbitrarily large number of trajectories. This highlights the need for off-policy data in reward inference tasks.
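The policy-side quantity $I(\theta;\pi)$ is straightforward to compute in this tabular setting; the sketch below (ours) assumes a uniform prior over a finite grid of $\theta$s and a deterministic planner, in which case $H(\pi\mid\theta) = 0$ and the mutual information reduces to the entropy of the induced policy distribution:

```python
import numpy as np
from collections import Counter

def policy_mutual_information(thetas, policy_of):
    """I(theta; pi) = H(pi) for a uniform theta prior and a deterministic
    planner mapping theta -> policy (returned as a hashable action tuple)."""
    counts = Counter(tuple(policy_of(th)) for th in thetas)
    p = np.array(list(counts.values()), dtype=float) / len(thetas)
    return float(-(p * np.log(p)).sum())  # nats; more distinct policies => higher MI
```

This makes the disambiguation argument quantitative: an expert model is informative exactly insofar as it maps reward parameters that are rationally indistinguishable to distinct policies.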
4 DISCUSSION
4.1 SUMMARY
We show that, contrary to what we might expect, suboptimal experts can actually help an agent learn the reward function. Optimism bias, myopia (via heavier discounting or hyperbolic discounting), and noise via Boltzmann rationality were the most informative irrationalities in our environments, far surpassing the performance of the rational expert at their ideal settings. Our contribution overall was to identify a systematic set of irrationalities by looking at deviations in the terms of the Bellman update, and to show that being irrational is not automatically harmful to inference, by quantifying and comparing the inference performance for these different types.

4.2 LIMITATIONS AND FUTURE WORK
Estimating expert irrationality. One major limitation of our work is that our findings hold when the learner knows the type and parameter value of the irrationality. In practice, reward inference will require solving the difficult task of estimating the irrationality type and degree (Armstrong & Mindermann, 2018; Shah et al., 2019). We still need to quantify to what extent these results hold given uncertainty about the irrationality model. It does, however, seem crucial to reward inference that learners reason explicitly about irrationality: not only is the learner unable to take advantage of the irrationality to make better inferences if it does not model it, but reward inference in general suffers tremendously if the learner assumes the wrong type.
In figure 10 in the Appendix, we compare inference with the true model vs. inference assuming a Boltzmann model by default. The results are quite striking: not knowing the irrationality harms inference tremendously. Whether or not irrationalities help, this means that it is really important to model them.
Generalization to other environments. A second limitation of our work is that we only tested these models in a limited range of environments. Further work is needed to test the generalization of our findings across different MDPs of interest. Our analysis of mutual information lends credence to the Boltzmann rationality result generalizing well: these policies are much more varied with the reward parameters. In contrast, how useful the optimism bias is depends on the task: if we already know what to avoid, as was the case for our learner, the bias is useful; if, on the other hand, we knew the goal but not what to avoid, the bias could hinder inference. Overall, this paper merely points out that there is a lot of richness to the ways in which these biases affect inference, and provides a quantitative comparison for a starting domain; much more is needed to gain a deeper understanding of this phenomenon.
Applications to real humans. A third limitation is that we do not know where real humans lie. Do they have the helpful irrationality types? Do they fall in the range of parameters for these types that help inference? And what happens when types combine? While these questions are daunting, there is also a hidden opportunity here: what if we could influence humans to exhibit helpful types of irrationality? It might be much easier for them, for instance, to act myopically than to act rationally. In the end, reward inference is the confluence of two factors: how well the robot learns, and how well the teacher teaches. Our results point out that it might be easier than previously thought to be a good teacher, even easier than being a rational expert.
S1e54b8AFH
Official Blind Review #1
3: Weak Reject
This paper studies reward inference from demonstrations of irrational experts. More specifically, a set of semantically meaningful expert behaviors is considered, derived from modifying the Bellman update. The quality of reward inference from these different experts is measured by two different scores and analyzed with respect to properties of the demonstrator. The main finding is that irrationality can be helpful for inferring rewards. The problem addressed by the paper is very interesting and relevant to a reasonable part of the community, but I argue that the paper is not ready for publication in its current form. In particular, the experiments are too limited to thoroughly support the claims (this requires at least the consideration of more diverse environments; and to really make the paper impactful, some parts of the "future work" section should be conducted) and the write-up should be improved from Section 3 onwards to provide more clarity. A few more detailed points:
* I see the paper in its current form as a theoretical study on reward inference from irrational experts. To provide insights here, and as this only involves simulations, a rich set of different MDPs should be considered. In the current form it is unclear how general the results are (although I assume that certain findings hold more generally, there is no supporting evidence for that). Maybe even formal theoretical insights can be derived.
* Regarding the Bellman update, on the RHS it should be $V_i(s')$.
* Regarding the presentation of the irrational experts: is there a simpler way of presenting the irrational experts through modified MDPs that the expert tries to solve optimally? Are all updates actually convergent, in particular the pessimistic one?
* Please provide a formal specification of the reward model used in the experiments.
* Please describe how you compute the posteriors over $\theta$ in the main paper (or at least provide a forward reference to the appendix). Which prior on $\theta$ are you using (put in main paper)?
* What is the precise nature of $\xi$? I would assume it is a sequence of state-actions, but that is not consistent with the definition of the log-loss, which suggests it is only actions.
* Probably more interesting than the log-loss and L^2 loss is the actual performance of an optimal policy using the inferred reward parameters. It would be good to report these numbers. How do irrational experts compare on this metric?
* The discussion of related work should be extended. For instance, R. Shah et al.'s paper "On the Feasibility of Learning, Rather than Assuming, Human Biases for Reward Inference" should be discussed in more detail, and similarities and differences clarified.
Minor comments and suggestions for improving the paper:
* The definition of Boltz in 2.2.2 could be made clearer. Maybe define the function using actions to connect to the equation above.
* Correct "update on the the trajectory $\theta$".
* Please check the usage of $\theta$ and $\theta^*$ and make it consistent. I think it would also help to make the log-loss and L^2 distance not look like a function of $\theta$.
* Figure 4/5: Explain what we see. I guess the black square is the starting position?
* Regarding figure 5: Comparing different $\beta$ values seems more sensible if the norm of $\theta$ is normalized.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Irrationality can help reward inference ### Paper Abstract Specifying reward functions is difficult, which motivates the area of reward inference: learning rewards from human behavior. The starting assumption in the area is that human behavior is optimal given the desired reward function, but in reality people have many different forms of irrationality, from noise to myopia to risk aversion and beyond. This fact seems like it will be strictly harmful to reward inference: it is already hard to infer the reward from rational behavior, and noise and systematic biases make actions have less direct of a relationship to the reward. Our insight in this work is that, contrary to expectations, irrationality can actually help rather than hinder reward inference. For some types and amounts of irrationality, the expert now produces more varied policies compared to rational behavior, which help disambiguate among different reward parameters -- those that otherwise correspond to the same rational behavior. We put this to the test in a systematic analysis of the effect of irrationality on reward inference. We start by covering the space of irrationalities as deviations from the Bellman update, simulate expert behavior, and measure the accuracy of inference to contrast the different types and study the gains and losses. We provide a mutual information-based analysis of our findings, and wrap up by discussing the need to accurately model irrationality, as well as to what extent we might expect (or be able to train) real people to exhibit helpful irrationalities when teaching rewards to learners. ### Paper Keywords ["preference inference", "inverse reinforcement learning", "reward inference", "irrationality"] ### Paper Content ABSTRACTSpecifying reward functions is difficult, which motivates the area of reward in-ference: learning rewards from human behavior. The starting assumption in thearea is that human behavior is optimal given the desired reward function, but inreality people have many different forms of irrationality, from noise to myopia torisk aversion and beyond. This fact seems like it will be strictly harmful to rewardinference: it is already hard to infer the reward from rational behavior, and noiseand systematic biases make actions have less direct of a relationship with the re-ward. Our insight in this work is that, contrary to expectations, irrationality canactually help rather than hinder reward inference. For some types and amountsof irrationality, the expert now produces more varied policies compared to ratio-nal behavior, which help disambiguate among different reward parameters – thosethat otherwise correspond to the same rational behavior. We put this to the test ina systematic analysis of the effect of irrationality on reward inference. We startby covering the space of irrationalities as deviations from the Bellman update,simulate expert behavior, and measure the accuracy of inference to contrast thedifferent types and study the gains and losses. 
We provide a mutual information-based analysis of our findings, and wrap up by discussing the need to accurately model irrationality, as well as to what extent we might expect (or be able to train) real people to exhibit helpful irrationalities when teaching rewards to learners. ### Paper Keywords ["preference inference", "inverse reinforcement learning", "reward inference", "irrationality"] ### Paper Content

ABSTRACT
Specifying reward functions is difficult, which motivates the area of reward inference: learning rewards from human behavior. The starting assumption in the area is that human behavior is optimal given the desired reward function, but in reality people have many different forms of irrationality, from noise to myopia to risk aversion and beyond. This fact seems like it will be strictly harmful to reward inference: it is already hard to infer the reward from rational behavior, and noise and systematic biases make actions have a less direct relationship with the reward. Our insight in this work is that, contrary to expectations, irrationality can actually help rather than hinder reward inference. For some types and amounts of irrationality, the expert now produces more varied policies compared to rational behavior, which help disambiguate among different reward parameters, namely those that otherwise correspond to the same rational behavior. We put this to the test in a systematic analysis of the effect of irrationality on reward inference. We start by covering the space of irrationalities as deviations from the Bellman update, simulate expert behavior, and measure the accuracy of inference to contrast the different types and study the gains and losses. We provide a mutual information-based analysis of our findings, and wrap up by discussing the need to accurately model irrationality, as well as to what extent we might expect (or be able to train) real people to exhibit helpful irrationalities when teaching rewards to learners.

1 INTRODUCTION
The application of reinforcement learning (RL) in increasingly complex environments has been most successful for problems that are already represented by a specified reward function (Lillicrap et al., 2015; Mnih et al., 2015; 2016; Silver et al., 2016). Unfortunately, not only do real-world tasks usually lack an explicit exogenously-specified reward function, but attempting to specify one tends to lead to unexpected side-effects as the agent is faced with new situations (Lehman et al., 2018).
This has motivated the area of reward inference: the process of estimating a reward function from human inputs. The inputs are traditionally demonstrations, leading to inverse reinforcement learning (IRL) (Ng et al., 2000; Abbeel & Ng, 2004) or inverse optimal control (IOC) (Kalman, 1964; Jameson & Kreindler, 1973; Mombaur et al., 2010; Finn et al., 2016). Recent work has expanded the range of inputs significantly, to comparisons (Wirth et al., 2017; Sadigh et al., 2017; Christiano et al., 2017), natural language instructions (MacGlashan et al., 2015; Fu et al., 2019), physical corrections (Jain et al., 2015; Bajcsy et al., 2017), proxy rewards (Hadfield-Menell et al., 2017; Ratner et al., 2018), or scalar reward values (Griffith et al., 2013; Loftin et al., 2014).
The central assumption behind these methods is that human behavior is rational, i.e. optimal with respect to the desired reward (cumulative, in expectation). Unfortunately, decades of research in behavioral economics and cognitive science (Chipman, 2014) have unearthed a deluge of irrationalities, i.e. ways in which people deviate from optimal decision making: hyperbolic discounting, scope insensitivity, optimism bias, decision noise, certainty effects, loss aversion, status quo bias, etc.
Work on reward inference has predominantly used one model of irrationality: decision-making noise, where the probability of an action relates to the value that action has. The most widely used model by far is a Boltzmann distribution stemming from the Luce-Shepard rule (Luce, 1959; Shepard, 1957; Lucas et al., 2009) and the principle of maximum (causal) entropy (Ziebart et al., 2008; 2010), which we will refer to as Boltzmann-rationality (Fisac et al., 2017). Recent work has started to incorporate systematic biases though, like risk-aversion (Singh et al., 2017), having a wrong belief about the dynamics (Reddy et al., 2018), and myopia and hyperbolic discounting (Evans & Goodman, 2015; Evans et al., 2016).
Learning from irrational experts feels like a daunting task: reward inference is already hard with rational behavior, but now a learner needs to make sense of behavior that is noisy or systematically biased. Our goal in this work is to characterize just how muddied the waters are: how (and how much) do different irrationalities affect reward inference?
Our insight is that, contrary to expectations, irrationality can actually help, rather than hinder, reward inference.
Our explanation is that how good reward inference is depends on the mutual information between the policies produced by the expert and the reward parameters to be inferred.
While it is often possible for two reward parameters to produce the same rational behavior, irrationalities can sometimes produce different behaviors that disambiguate between those same two reward parameters. For instance, noise can help when it is related to the value function, as Boltzmann noise is, because it distinguishes the difference in values even when the optimal action stays the same. Optimism can be helpful because the expert takes fewer risk-avoiding actions and acts more directly on their goal.

Overall, we contribute 1) an analysis and comparison of the effects of different biases on reward inference testing our insight, 2) a way to systematically formalize and cover the space of irrationalities in order to conduct such an analysis, and 3) evidence for the importance of assuming the right type of irrationality during inference.

Our good news is that irrationalities can indeed be an ally for inference. Of course, this is not always true – the details of which irrationality type and how much of it also matter. We see these results as opening the door to a better understanding of reward inference, as well as to practical ways of making inference easier by asking for the right kind of expert demonstrations – after all, in some cases it might be easier for people to act optimistically or myopically than to act rationally. Our results reinforce that optimal teaching is different from optimal doing, but point out that some forms of teaching might actually be easier than doing.

2 METHOD
2.1 EXPLORING IRRATIONALITY THROUGH SIMULATION
Our goal is to explore the effect irrationalities have on reward inference if the learner knows about them – we explore the need for the learner to accurately model irrationalities in section 4.2. While ideally we would recruit human subjects with different irrationalities and measure how well we can learn rewards, this is prohibitive because we do not get to dictate someone's irrationality type: people exhibit a mix of them, some yet to be discovered. Further, measuring accuracy of inference is complicated by the fact that we do not have ground truth access to the desired reward: the learner can measure agreement with some test set, but the test set itself is produced subject to the same irrationalities that produced the training data. As experimenters, we would remain deluded about the human's true intentions and preferences.

To address this issue, we simulate expert behavior subject to different irrationalities based on ground truth reward functions, run reward inference, and measure the performance against the ground truth, i.e. the accuracy of a Bayesian posterior on the reward function given the (simulated) expert's inputs.

2.2 TYPES AND DEGREES OF IRRATIONALITY
There are many possible irrationalities that people exhibit (Chipman, 2014), far more than what we could study in one paper. They come with varying degrees of mathematical formalization and replication across human studies. To provide good coverage of this space, we start from the Bellman update, and systematically manipulate its terms and operators to produce a variety of different irrationalities that deviate from the optimal MDP policy in complementary ways. For instance, operating on the discount factor can model more myopic behavior, while operating on the transition function can model optimism or the illusion of control.
Figure 1 summarizes our approach, which we detail below.

[Figure 1: the Bellman update $V_{i+1}(s) = \max_a \sum_{s' \in S} T(s'|s,a)\,(r(s,a,s') + \gamma V_i(s))$, annotated with the component that each irrationality modifies (Boltzmann, Optimism/Pessimism, Illusion of Control, Prospect, Myopic VI, Myopic Discount, Hyperbolic, Extremal). Caption: We modify the components of the Bellman update to cover different types of irrationalities: changing the max into a softmax to capture noise, changing the transition function to capture optimism/pessimism or the illusion of control, changing the reward values to capture the nonlinear perception of gains and losses (prospect theory), changing the average reward over time into a maximum (extremal), and changing the discounting to capture more myopic decision-making.]

2.2.1 RATIONAL EXPERT
The rational expert does value iteration using the Bellman update from Figure 1. Our models change this update to produce different types of non-rational behavior.

2.2.2 MODIFYING THE MAX OPERATOR: BOLTZMANN
Boltzmann-rationality modifies the maximum over actions $\max_a$ with a Boltzmann operator with parameter $\beta$:
$$V_{i+1}(s) = \mathrm{Boltz}^{\beta}_a \sum_{s' \in S} T(s'|s,a)\,(r(s,a,s') + \gamma V_i(s))$$
where $\mathrm{Boltz}^{\beta}(\mathbf{x}) = \sum_i x_i e^{\beta x_i} / \sum_i e^{\beta x_i}$ (Ziebart et al., 2010; Asadi & Littman, 2017). This models that people will not be perfect, but rather noisily pick actions in a way that is related to the Q-value of those actions. The constant $\beta$ is called the rationality constant, because as $\beta \to \infty$, the human choices approach perfect rationality (optimality), whereas $\beta = 0$ produces uniformly random choices. This is the standard assumption for reward inference that does not assume perfect rationality, because it easily transforms the rationality assumption into a probability distribution over actions, enabling learners to make sense of imperfect demonstrations that otherwise do not match up with any reward parameters.

2.2.3 MODIFYING THE TRANSITION FUNCTION
Our next set of irrationalities manipulate the transition function away from reality.

Illusion of Control. Humans often overestimate their ability to control random events. To model this, we consider experts that use the Bellman update:
$$V_{i+1}(s) = \max_a \sum_{s' \in S} T^{n}(s'|s,a)\,(r(s,a,s') + \gamma V_i(s))$$
where $T^{n}(s'|s,a) \propto (T(s'|s,a))^{n}$. As $n \to \infty$, the demonstrator acts as if it exists in a deterministic environment. As $n \to 0$, the expert acts as if it had an equal chance of transitioning to every possible successor state. At $n = 1$, the expert is the rational expert.

Optimism/Pessimism. Humans tend to systematically overestimate their chance of experiencing positive over negative events. We model this using experts that modify the probability they get outcomes based on the value of those outcomes:
$$V_{i+1}(s) = \max_a \sum_{s' \in S} T^{1/\tau}(s'|s,a)\,(r(s,a,s') + \gamma V_i(s))$$
where $T^{1/\tau}(s'|s,a) \propto T(s'|s,a)\,e^{(r(s,a,s') + \gamma V_i(s))/\tau}$. $1/\tau$ controls how pessimistic or optimistic the expert is. As $1/\tau \to +\infty$, the expert becomes increasingly certain that good transitions will happen. As $1/\tau \to -\infty$, the expert becomes increasingly certain that bad transitions will happen. As $1/\tau \to 0$, the expert approaches the rational expert.
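To make these modified updates concrete, here is a minimal NumPy sketch of value iteration with the Boltzmann operator and the illusion-of-control reweighting on a tabular MDP. The function names and array layout are our own illustrative assumptions, not code from the paper, and the backup uses the conventional next-state value $V_i(s')$ rather than the paper's $V_i(s)$.

```python
import numpy as np

def boltz(x, beta):
    # Boltzmann operator: sum_i x_i e^{beta x_i} / sum_i e^{beta x_i}
    w = np.exp(beta * (x - x.max()))  # shift by the max for numerical stability
    return float((x * w).sum() / w.sum())

def irrational_value_iteration(T, R, gamma=0.95, beta=None, n=1.0, iters=500):
    """T, R: (S, A, S') transition probabilities and rewards.
    beta: Boltzmann rationality constant (None recovers the rational max).
    n: illusion-of-control exponent (n = 1 recovers the true dynamics)."""
    S, A, _ = T.shape
    Tn = T ** n
    Tn = Tn / Tn.sum(axis=2, keepdims=True)  # T^n(s'|s,a) proportional to T(s'|s,a)^n
    V = np.zeros(S)
    for _ in range(iters):
        Q = (Tn * (R + gamma * V[None, None, :])).sum(axis=2)  # (S, A)
        if beta is None:
            V = Q.max(axis=1)
        else:
            V = np.array([boltz(Q[s], beta) for s in range(S)])
    return V
```

A fixed iteration budget is used instead of a convergence test because the Boltzmann operator need not be a non-expansion in general (Asadi & Littman, 2017).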
2.2.4 MODIFYING THE REWARD: PROSPECT THEORY
Next, we consider experts that use the modified Bellman update:
$$V_{i+1}(s) = \max_a \sum_{s' \in S} T(s'|s,a)\,(f(r(s,a,s')) + \gamma V_i(s))$$
where $f: \mathbb{R} \to \mathbb{R}$ is some scalar function. This is equivalent to solving the MDP with reward $f \circ r$. This allows us to model human behavior such as loss aversion and scope insensitivity.

Prospect Theory (Kahneman & Tversky, 2013) inspires us to consider a particular family of reward transforms:
$$f_c(r) = \begin{cases} \log(1+|r|) & r > 0 \\ 0 & r = 0 \\ -c\,\log(1+|r|) & r < 0 \end{cases}$$
$c$ controls how loss averse the expert is. As $c \to \infty$, the expert primarily focuses on avoiding negative rewards. As $c \to 0$, the expert focuses on maximizing positive rewards and ignores negative rewards.

2.2.5 MODIFYING THE SUM BETWEEN REWARD AND FUTURE VALUE: EXTREMAL
Extremal. Humans seem to exhibit duration neglect, sometimes only caring about the maximum intensity of an experience (Do et al., 2008). We model this using experts that use the Bellman step:
$$V_{i+1}(s) = \max_a \sum_{s' \in S} T(s'|s,a)\,\max\left[\,r(s,a,s'),\ (1-\alpha)\,r(s,a,s') + \alpha V_i(s)\,\right]$$
These experts maximize the expected maximum reward along a trajectory, instead of the expected sum of rewards. As $\alpha \to 1$, the expert maximizes the expected maximum reward they achieve along their full trajectory. As $\alpha \to 0$, the expert becomes greedy, and only cares about the reward they achieve in the next timestep.

2.2.6 MODIFYING THE DISCOUNTING
Myopic Discount. In practice, humans are often myopic, only considering immediate rewards. One way to model this is to decrease $\gamma$ in the Bellman update. At $\gamma = 1$, this is the rational expert. As $\gamma \to 0$, the expert becomes greedy and only acts to maximize immediate reward.

Myopic VI. As another way to model human myopia, we consider an expert that performs only $h$ steps of Bellman updates. That is, this expert cares equally about rewards for horizon $h$, and discounts rewards to 0 after that. As $h \to \infty$, this expert becomes rational. If $h = 1$, this expert only cares about the immediate reward.

Hyperbolic Discounting. Humans also exhibit hyperbolic discounting, with a high discount rate for the immediate future and a low discount rate for the far future. Alexander & Brown (2010) formulate this as the following Bellman update:
$$V_{i+1}(s) = \max_a \sum_{s' \in S} T(s'|s,a)\,\frac{r(s,a,s') + V_i(s)}{1 + k V_i(s)}$$
$k$ modulates how much the expert prefers rewards now versus the future. As $k \to 0$, this expert becomes the rational expert.
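The reward-transform and extremal variants admit similarly small sketches; again the names are hypothetical assumptions and the backup uses the next-state value:

```python
import numpy as np

def prospect_transform(r, c):
    """f_c from Section 2.2.4: logarithmically compress gains and
    scale compressed losses by the loss-aversion constant c."""
    return np.sign(r) * np.where(r >= 0, 1.0, c) * np.log1p(np.abs(r))

def extremal_backup(T, R, V, alpha):
    """One extremal backup (Section 2.2.5) over (S, A, S') arrays:
    max_a E_{s'}[ max(r, (1 - alpha) * r + alpha * V) ]."""
    inner = np.maximum(R, (1.0 - alpha) * R + alpha * V[None, None, :])
    return (T * inner).sum(axis=2).max(axis=1)
```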
3 IMPACT OF IRRATIONALITIES ON REWARD INFERENCE
3.1 EXPERIMENTAL DESIGN
Simulation Environment. To reduce possible confounding from our choice of environment, we used a small 5x5 gridworld where the irrationalities nonetheless cause experts to exhibit different behavior. Our gridworld consists of three types of cells: ice, holes, and rewards. The expert can start in any ice cell. At each ice cell, the expert can move in one of the four cardinal directions. With probability 0.8, they will go in that direction. With probability 0.2, they will instead go in one of the two adjacent directions. Holes and rewards are terminal states, and return the expert back to their start state. They receive a penalty of 10 for falling into a hole and a reward $\theta_i \in [0, 4]$ for entering the $i$th reward cell.

[Figure 2: The log loss (lower = better) of the posterior as a function of the parameter we vary for each irrationality type. These six irrationalities all have parameter settings that outperform rational experts. For the models that interpolate to the rational expert, we denote the value that is closest to rational using a dashed vertical line.]

Dependent Measures. To separate the inference difficulty caused by suboptimal inference from the difficulty caused by expert irrationality, we perform the exact Bayesian update on the trajectory $\xi$ (Ramachandran & Amir, 2007), which gives us the posterior on $\theta$ given $\xi$:
$$P(\theta \mid \xi) = \frac{P(\xi \mid \theta)\,P(\theta)}{\int_{\theta'} P(\xi \mid \theta')\,P(\theta')}$$
We use two metrics to measure the difficulty of inference. The first is the expected log loss of this posterior, or negative log-likelihood:
$$\mathrm{LogLoss}(\theta \mid \xi) = \mathbb{E}_{\theta,\xi}\left[-\log P(\theta \mid \xi)\right].$$
A low log loss implies that we are assigning a high likelihood to the true $\theta$. As we are performing exact Bayesian inference with the true model $P(\xi \mid \theta)$ and prior $P(\theta)$, the log loss is equal to the entropy of the posterior $H(\theta \mid \xi)$.
The second metric is the $L^2$-distance between the posterior mean and the actual $\theta$:
$$L^2(\theta \mid \xi) = \mathbb{E}_{\theta,\xi}\left[\,\lVert \mathbb{E}[\theta \mid \xi] - \theta \rVert_2\,\right]$$
The closer the inferred posterior mean of $\theta$ is to the actual value $\theta$, the lower the loss.
For each irrationality type, we calculate the performance of reward inference on trajectories of a fixed length $T$, with respect to the two metrics above. To sample a trajectory of length $T$ from an expert, we fix $\theta$ and a start state $s$. Then, we perform the expert's (possibly modified) Bellman updates until convergence to recover the policy $\pi$. Finally, we generate rollouts starting from state $s$ until $T$ state-action pairs have been sampled from $\pi$.
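For a finite grid of candidate reward parameters, the exact posterior and both metrics reduce to a few lines. This sketch assumes the trajectory likelihood factorizes over per-state action probabilities under each candidate's (possibly irrational) policy; all names are ours, not the paper's.

```python
import numpy as np

def posterior_over_rewards(xi, policies, prior):
    """xi: list of (state, action) pairs; policies: (J, S, A) array with
    policies[j, s, a] = P(a | s) under candidate theta_j; prior: (J,)."""
    log_post = np.log(prior)
    for s, a in xi:
        log_post = log_post + np.log(policies[:, s, a])
    log_post -= log_post.max()        # stabilize before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

def log_loss(post, true_idx):
    return -np.log(post[true_idx])

def l2_loss(post, thetas, true_idx):
    """thetas: (J, d) grid of candidate reward parameters."""
    mean_theta = post @ thetas        # posterior mean of theta
    return np.linalg.norm(mean_theta - thetas[true_idx])
```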
[Figure 3: A best case analysis for each irrationality type: the log loss / $L^2$ distance from mean (lower = better) for experts, as a function of the length of trajectory observed. Each irrationality uses the parameter value that is most informative. As discussed in section 3.2, different irrationality types have different slopes and converge to different values. In addition, the best performing irrationality type according to log loss is not the best performing type according to $L^2$ loss.]

3.2 ANALYSIS
Impact of Each Irrationality. We found that of the 8 irrationalities we studied, 6 had parameter settings that lead to lower log loss than the rational expert. We report how the parameter influences the log loss for each of these experts in figure 2.¹ For $T = 30$, Optimism with $1/\tau = 3.16$ performed the best, followed by Boltzmann with $\beta = 100$ and Hyperbolic with $k = 0.1$. Both forms of Myopia also outperformed the rational expert, with best performance occurring at $\gamma = 0.9$ and $h = 5$. Finally, the Extremal expert also slightly outperformed the rational expert, with best performance at $\alpha = 0.9$. Notably, in every case, neither the most irrational expert nor the perfectly rational expert was the most informative.

¹The plots for the other two irrationalities are included in the appendix.

Impact of Data for Different Irrationalities. Next, we investigate how the quality of inference varies as we increase the length of the observed trajectory $T$. We report our results for the best performing parameter for each irrationality type in figure 3. Interestingly, while both metrics decrease monotonically regardless of irrationality type, the rate at which they decrease differs by the irrationality type, and the best performing irrationality type according to log loss (Optimism) is not the best performing type according to $L^2$ distance (Boltzmann).

What is behind these differences? To explain these results, we use the notion of mutual information $I(X; Y)$ between two variables, defined as:
$$I(X; Y) = \mathbb{E}_{X,Y}\left[\log \frac{P(X, Y)}{P(X)\,P(Y)}\right] = H(X) - H(X \mid Y)$$
The mutual information measures how much our uncertainty about $X$ decreases by observing $Y$. For reward inference, the term we care about is the mutual information between the expert's trajectory and the reward parameters
$$I(\theta; \xi) = \mathbb{E}_{\theta,\xi}\left[\log \frac{P(\theta, \xi)}{P(\theta)\,P(\xi)}\right] = H(\theta) - H(\theta \mid \xi)$$
The mutual information $I(\theta; \xi)$ is equal to a constant minus the posterior log loss under the true model. An expert with higher mutual information will cause the learner to have a lower posterior log loss.

[Figure 4: (a) Optimism bias ($1/\tau = 3.16$) produces different actions for $\theta = (4, 1)$ vs. $\theta = (1, 4)$ in the states shown: the rational policy is to go away from the hole regardless of $\theta$, but an optimistic expert takes the chance and goes for the larger reward – up in the first case, down in the second. (b) Pessimism bias ($1/\tau = -3.16$) produces different actions for $\theta = (1, 1)$ vs. $\theta = (4, 4)$: when the reward is sufficiently large, the expert becomes convinced that no action it takes will lead to the reward, leading it to perform random actions.]

[Figure 5: (a) Boltzmann-rationality ($\beta = 100$) produces different policies for $\theta = (1, 1)$ vs. $\theta = (4, 4)$: when $\lVert\theta\rVert$ is larger, the policy becomes closer to that of the rational expert. (b) A Myopic expert ($h = 5$) produces different policies for $\theta = (4, 1)$ vs. $\theta = (4, 0)$: while the rational expert always detours around the hole and attempts to reach the larger reward, myopia causes the myopic expert to go for the smaller source of reward when it is non-zero.]

By the information processing inequality, we have the bound $I(\xi; \theta) \leq I(\pi; \theta)$. To have higher mutual information, different $\theta$s should be mapped to different policies $\pi_\theta$. Indeed, we found that the experts that were able to outperform the rational expert were able to disambiguate between $\theta$s that the rational expert could not. To visualize this, we show examples of how the policies of several irrational experts differ when the rational expert's policies are identical in figures 4 and 5.

We plot the correlation between $I(\pi; \theta)$ and $I(\xi; \theta)$ in figure 6. Experts that have more informative policies tend to have more informative trajectories, but the correlation is not perfect. Notably, the Optimism expert has the most informative trajectories of length 30, but has less informative policies than the Boltzmann expert.

[Figure 6: The informativeness of policies correlates with the informativeness of trajectories of length 30, as discussed in section 3.2.]

In the limit of infinite data from every state, we would have $I(\xi; \theta) \to I(\pi; \theta)$. However, as each trajectory begins from the same start state, and not every state is reachable with every policy, the bound is not achievable in general, even if we observe an arbitrarily large number of trajectories. This highlights the need for off-policy data in reward inference tasks.
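The quantity $I(\theta; \xi)$ above can be estimated by Monte Carlo as the prior entropy minus the average posterior entropy. In this sketch, `rollout` and `posterior` are hypothetical callables standing in for the simulated expert and the exact Bayesian update above.

```python
import numpy as np

def entropy(p, eps=1e-12):
    p = np.clip(p, eps, 1.0)
    return float(-(p * np.log(p)).sum())

def mutual_information(prior, rollout, posterior, n_samples=1000, seed=0):
    """Estimate I(theta; xi) = H(theta) - E_xi[H(theta | xi)].
    rollout(j) samples a trajectory from the expert for parameter index j;
    posterior(xi) returns the exact posterior over the candidate grid."""
    rng = np.random.default_rng(seed)
    avg_cond_entropy = 0.0
    for _ in range(n_samples):
        j = rng.choice(len(prior), p=prior)   # theta ~ P(theta)
        avg_cond_entropy += entropy(posterior(rollout(j)))
    return entropy(prior) - avg_cond_entropy / n_samples
```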
4 DISCUSSION
4.1 SUMMARY
We show that, contrary to what we might expect, suboptimal experts can actually help an agent learn the reward function. Optimism bias, myopia (via heavier discounting or hyperbolic discounting), and noise via Boltzmann rationality were the most informative irrationalities in our environments, far surpassing the performance of the rational expert for their ideal settings. Our contribution overall was to identify a systematic set of irrationalities by looking at deviations in the terms of the Bellman update, and show that being irrational is not automatically harmful to inference by quantifying and comparing the inference performance for these different types.

4.2 LIMITATIONS AND FUTURE WORK
Estimating expert irrationality. One major limitation of our work is that our findings hold for when the learner knows the type and parameter value of the irrationality. In practice, reward inference will require solving the difficult task of estimating the irrationality type and degree (Armstrong & Mindermann, 2018; Shah et al., 2019). We still need to quantify to what extent these results still hold given uncertainty about the irrationality model. It does, however, seem crucial to reward inference that learners do reason explicitly about irrationality – not only is the learner unable to take advantage of the irrationality to make better inference if it does not model it, but actually reward inference in general suffers tremendously if the learner assumes the wrong type.

In figure 10 in the Appendix, we compare inference with the true model vs. with assuming a Boltzmann model as default. The results are quite striking: not knowing the irrationality harms inference tremendously. Whether or not irrationalities help, this means that it is really important to model them.

Generalization to other environments. A second limitation of our work is that we only tested these models in a limited range of environments. Further work is needed to test generalization of our findings across different MDPs of interest. Our analysis of mutual information lends credence to the Boltzmann rationality result generalizing well: these policies are much more varied with the reward parameters. In contrast, how useful the optimism bias is depends on the task: if we know about what to avoid already, as was the case for our learner, the bias is useful; if, on the other hand, we would know the goal but do not know what to avoid, the bias can hinder inference. Overall, this paper merely points out that there is a lot of richness to the ways in which these biases affect inference, and provides a quantitative comparison for a starting domain – much more is needed to gain a deeper understanding of this phenomenon.

Applications to real humans. A third limitation is that we do not know where real humans lie. Do they have the helpful irrationality types? Do they fall in the range of parameters for these types that help inference? And what happens when types combine? While these questions are daunting, there is also a hidden opportunity here: what if we could influence humans to exhibit helpful types of irrationality? It might be much easier for them, for instance, to act myopically than to act rationally. In the end, reward inference is the confluence of two factors: how well the robot learns, and how well the teacher teaches. Our results point out that it might be easier than previously thought to be a good teacher – even easier than being a rational expert.<|im_end|> <|im_start|>assistant ### Review Title Official Blind Review #1 ### Review Text This paper studies reward inference from demonstrations of irrational experts. More specifically, a set of semantically meaningful experts' behaviors is considered which is derived from modifying the Bellman update.
The quality of reward inference from these different experts is measured by two different scores and analyzed with regard to properties of the demonstrator. The main finding is that irrationality can be helpful for inferring rewards.

The problem addressed by the paper is very interesting and relevant to a reasonable part of the community, but I argue that the paper is not ready for publication in its current form. In particular, the experiments are too limited to thoroughly support the claims (this requires at least the consideration of more different environments; and to really make the paper impactful some parts of the "future work" section should be conducted) and the write-up should be improved from Section 3 onwards to provide more clarity.

A few more detailed points:
* I see the paper in its current form as a theoretical study on reward inference from irrational experts. To provide insights here, and as this only involves simulations, a rich set of different MDPs should be considered. In the current form it is unclear how general the results are (although I assume that certain findings hold more generally but there is no supporting evidence for that). Maybe even formal theoretical insights can be derived.
* Regarding the Bellman update, on the RHS it should be $V_i(s')$.
* Regarding the presentation of the irrational experts: is there a simpler way of presenting the irrational experts through modified MDPs that the expert tries to solve optimally? Are all updates actually convergent, in particular the pessimistic one?
* Please provide a formal specification of the reward model used in experiments.
* Please describe how you do the computation of the posteriors $\theta$ in the main paper (or at least provide a forward reference to the appendix). Which prior on $\theta$ are you using (put in main paper)?
* What is the precise nature of $\xi$? I would assume it is a sequence of state-actions but that is not consistent with the definition of the log-loss which suggests it is only actions.
* Probably more interesting than the log-loss and $L^2$ loss is the actual performance of an optimal policy using the inferred reward parameters. It would be good to report these numbers. How do irrational experts compare looking at this metric?
* The discussion of related work should be extended. For instance, R. Shah et al.'s paper "On the Feasibility of Learning, Rather than Assuming, Human Biases for Reward Inference" should be discussed in more detail and similarities and differences clarified.

Minor comments and suggestions for improving the paper:
* The definition of Boltz in 2.2.2 can be made more clear. Maybe define the function using actions to connect to the above equation.
* Correct "update on the the trajectory $\theta$".
* Please check the usage of $\theta$ and $\theta^*$ and make it consistent. I think it would also help to make the log-loss and $L^2$ distance not look like a function of $\theta$.
* Figure 4/5: Explain what we see. I guess the black square is the starting position?
* Regarding figure 5: Comparing different $\beta$ values seems more sensible if the norm of $\theta$ is normalized.
### Review Rating 3: Weak Reject ### Review Confidence <|im_end|> <|im_end|>
Sy6iJDqlx
ICLR.cc/2017/conference
2017
Attend, Adapt and Transfer: Attentive Deep Architecture for Adaptive Transfer from multiple sources in the same domain
["Janarthanan Rajendran", "Aravind Lakshminarayanan", "Mitesh M. Khapra", "Prasanna P", "Balaraman Ravindran"]
Transferring knowledge from prior source tasks in solving a new target task can be useful in several learning applications. The application of transfer poses two serious challenges which have not been adequately addressed. First, the agent should be able to avoid negative transfer, which happens when the transfer hampers or slows down the learning instead of helping it. Second, the agent should be able to selectively transfer, which is the ability to select and transfer from different and multiple source tasks for different parts of the state space of the target task. We propose A2T (Attend, Adapt and Transfer), an attentive deep architecture which adapts and transfers from these source tasks. Our model is generic enough to effect transfer of either policies or value functions. Empirical evaluations on different learning algorithms show that A2T is an effective architecture for transfer by being able to avoid negative transfer while transferring selectively from multiple source tasks in the same domain.
["Deep learning", "Reinforcement Learning", "Transfer Learning"]
ABSTRACT
Transferring knowledge from prior source tasks in solving a new target task can be useful in several learning applications. The application of transfer poses two serious challenges which have not been adequately addressed. First, the agent should be able to avoid negative transfer, which happens when the transfer hampers or slows down the learning instead of helping it. Second, the agent should be able to selectively transfer, which is the ability to select and transfer from different and multiple source tasks for different parts of the state space of the target task. We propose A2T (Attend, Adapt and Transfer), an attentive deep architecture which adapts and transfers from these source tasks. Our model is generic enough to effect transfer of either policies or value functions. Empirical evaluations on different learning algorithms show that A2T is an effective architecture for transfer by being able to avoid negative transfer while transferring selectively from multiple source tasks in the same domain.

1 INTRODUCTION
One of the goals of Artificial Intelligence (AI) is to build autonomous agents that can learn and adapt to new environments. Reinforcement Learning (RL) is a key technique for achieving such adaptability. The goal of RL algorithms is to learn an optimal policy for choosing actions that maximize some notion of long term performance. Transferring knowledge gained from tasks solved earlier to solve a new target task can help, either in terms of speeding up the learning process or in terms of achieving a better solution, among other performance measures. When applied to RL, transfer could be accomplished in many ways (see Taylor & Stone (2009; 2011) for a very good survey of the field). One could use the value function from the source task as an initial estimate in the target task to cut down exploration [Sorg & Singh (2009)]. Alternatively one could use policies from the source task(s) in the target task. This can take one of two forms - (i) the derived policies can be used as initial exploratory trajectories [Atkeson & Schaal (1997); Niekum et al. (2013)] in the target task and (ii) the derived policy could be used to define macro-actions which may then be used by the agent in solving the target task [Mannor et al. (2004); Brunskill & Li (2014)].

(*Authors contributed equally)

While transfer in RL has been much explored, there are two crucial issues that have not been adequately addressed in the literature. The first is negative transfer, which occurs when the transfer results in a performance that is worse when compared to learning from scratch in the target task. This severely limits the applicability of many transfer techniques only to cases for which some measure of relatedness between source and target tasks can be guaranteed beforehand. This brings us to the second problem with transfer, which is the issue of identifying an appropriate source task from which to transfer. In some scenarios, different source tasks might be relevant and useful for different parts of the state space of the target task. As a real world analogy, consider multiple players (experts) who are good at different aspects of a game (say, tennis). For example, Player 1 is good at playing backhand shots while Player 2 is good at playing forehand shots. Consider the case of a new player (agent) who wants to learn tennis by selectively learning from these two experts.
We handle such a situation in our architecture by allowing the agent to learn how to pick and use solutions from multiple and different source tasks while solving a target task, selectively applicable for different parts of the state space. We call this selective transfer. Our agent can transfer knowledge from Player 1 when required to play backhand shots and Player 2 for playing forehand shots. Further, let us consider the situation that both Player 1 and Player 2 are bad at playing drop shots. Apart from the source tasks, we maintain a base network that learns from scratch on the target task. The agent can pick and use the solution of the base network when solving the target task at the parts of the state space where transferring from the source tasks is negative. Such a situation could arise when the source task solutions are irrelevant for solving the target task over a specific portion of the state space, or when transferring from the source tasks is negative over a specific portion of the state space (for example, transferring the bad drop shot abilities of Players 1 and 2). This situation also entails the first problem of avoiding negative transfer. Our framework allows an agent to avoid transferring from both Players 1 and 2 while learning to play drop shots, and rather acquire the drop shot skill by learning to use the base network. The architecture is trained such that the base network uses not just the experience obtained through the usage of its solutions in the target task, but the overall experience acquired using the combined knowledge of the source tasks and itself. This enables the base network solutions to get closer to the behavior of the overall architecture (which uses the source task solutions as well). This makes it easier for the base network to assist the architecture to fine tune the useful source task solutions to suit the target task perfectly over time.

The key contribution in the architecture is a deep attention network that decides which solutions to attend to, for a given input state. The network learns solutions as a function of the current state, thereby aiding the agent in adopting different solutions for different parts of the state space in the target task. To this end, we propose A2T: Attend, Adapt and Transfer, an Attentive Deep Architecture for Adaptive Transfer, that avoids negative transfer while performing selective transfer from multiple source tasks in the same domain. In addition to the tennis example, A2T is a fairly generic framework that can be used to selectively transfer different skills available from different experts as appropriate to the situation. For instance, a household robot can appropriately use skills from different experts for different household chores. This would require the skill to transfer manipulation skills across objects, tasks and robotic actuators. With a well developed attention mechanism, the most appropriate and helpful combination of object-skill-controller can be identified for aiding the learning on a related new task. Further, A2T is generic enough to effect transfer of either action policies or action-value functions, as the case may be. We also adapt different algorithms in reinforcement learning as appropriate for the different settings and empirically demonstrate that A2T is effective for transfer learning in each setting.

2 RELATED WORK
As mentioned earlier, transfer learning approaches could deal with transferring policies or value functions.
For example, Banerjee & Stone (2007) describe a method for transferring value functions by constructing a Game tree. Similarly, Sorg & Singh (2009) use the value function from a source task as the initial estimate of the value function in the target task.

Another method to achieve transfer is to reuse policies derived in the source task(s) in the target task. Probabilistic Policy Reuse as discussed in Fernández & Veloso (2006) maintains a library of policies and selects a policy based on a similarity metric, or a random policy, or a max-policy from the knowledge obtained. This is different from the proposed approach in that the proposed approach can transfer policies at the granularity of individual states, which is not possible in policy-reuse, rendering it unable to learn a customized policy at that granularity. Atkeson & Schaal (1997); Niekum et al. (2013) evaluated the idea of having the transferred policy from the source tasks as explorative policies instead of having a random exploration policy. This provides better exploration behavior provided the tasks are similar. Talvitie & Singh (2007) try to find the promising policy from a set of candidate policies that are generated using different action mappings to a single solved task. In contrast, we make use of one or more source tasks to selectively transfer policies at the granularity of states. Apart from policy transfer and value transfer as discussed above, Ferguson & Mahadevan (2006) discuss representation transfer using Proto Value Functions.

The ideas of negative and selective transfer have been discussed earlier in the literature. For example, Lazaric & Restelli (2011) address the issue of negative transfer in transferring samples for a related task in a multi-task setting. Konidaris et al. (2012) discuss the idea of exploiting shared common features across related tasks. They learn a shaping function that can be used in later tasks.

The two recent works that are very relevant to the proposed architecture are discussed in Parisotto et al. (2015) and Rusu et al. (2016). Parisotto et al. (2015) explore transfer learning in RL across Atari games by trying to learn a multi-task network over the source tasks available and directly fine-tune the learned multi-task network on the target task. However, fine-tuning as a transfer paradigm cannot address the issue of negative transfer, which they do observe in many of their experiments. Rusu et al. (2016) try to address the negative transfer issue by proposing a sequential learning mechanism where the filters of the network being learned for an ongoing task are dependent through lateral connections on the lower level filters of the networks learned already for the previous tasks. The idea is to ensure that dependencies that characterize similarity across tasks could be learned through these lateral connections. Even though they do observe better transfer results than direct fine-tuning, they are still not able to avoid negative transfer in some of their experiments.

3 PROPOSED ARCHITECTURE
Let there be $N$ source tasks and let $K_1, K_2, \ldots, K_N$ be the solutions of these source tasks $1, \ldots, N$ respectively. Let $K_T$ be the solution that we learn in the target task $T$. Source tasks refer to tasks that we have already learnt to perform and target task refers to the task that we are interested in learning now. These solutions could be, for example, policies or state-action values. Here the source tasks should be in the same domain as the target task, having the same state and action spaces.
We propose a setting where $K_T$ is learned as a function of $K_1, \ldots, K_N, K_B$, where $K_B$ is the solution of a base network which starts learning from scratch while acting on the target task. In this work, we use a convex combination of the solutions to obtain $K_T$.
$$K_T(s) = w_{N+1,s}\,K_B(s) + \sum_{i=1}^{N} w_{i,s}\,K_i(s) \quad (1)$$
$$\sum_{i=1}^{N+1} w_{i,s} = 1, \quad w_{i,s} \in [0, 1] \quad (2)$$
$w_{i,s}$ is the weight given to the $i$th solution at state $s$.

The agent uses $K_T$ to act in the target task. Figure 1a shows the proposed architecture. While the source task solutions $K_1, \ldots, K_N$ remain fixed, the base network solutions are learnt and hence $K_B$ can change over time. There is a central network which learns the weights ($w_{i,s}$, $i \in 1, 2, \ldots, N+1$), given the input state $s$. We refer to this network as the attention network. The $[0, 1]$ weights determine the attention each solution gets, allowing the agent to selectively accept or reject the different solutions, depending on the input state. We adopt a soft-attention mechanism whereby more than one weight can be non-zero [Bahdanau et al. (2014)] as opposed to a hard-attention mechanism [Mnih et al. (2014)] where we are forced to have only one non-zero weight.
$$w_{i,s} = \frac{\exp(e_{i,s})}{\sum_{j=1}^{N+1} \exp(e_{j,s})}, \quad i \in \{1, 2, \ldots, N+1\} \quad (3)$$

[Figure 1: (a) A2T architecture. The dotted arrows represent the path of back propagation. (b) Actor-Critic using A2T.]

$$(e_{1,s}, e_{2,s}, \ldots, e_{N+1,s}) = f(s; \theta_a) \quad (4)$$
Here, $f(s; \theta_a)$ is a deep neural network (attention network), which could consist of convolution layers and fully connected layers depending on the representation of the input. It is parametrised by $\theta_a$ and takes as input a state $s$ and outputs a vector of length $N+1$, which gives the attention scores for the $N+1$ solutions at state $s$. Eq.(3) normalises this score to get the weights that follow Eq.(2).

If the $i$th source task solution is useful at state $s$, then $w_{i,s}$ is set to a high value by the attention network. Working at the granularity of states allows the attention network to attend to different source tasks, for different parts of the state space of the target task, thus giving it the ability to perform selective transfer. For parts of the state space in the target task where the source task solutions cause negative transfer or where the source task solutions are not relevant, the attention network learns to give high weight to the base network solution (which can be learnt and improved), thus avoiding negative transfer.

Depending on the feedback obtained from the environment upon following $K_T$, the attention network's parameters $\theta_a$ are updated to improve performance.

As mentioned earlier, the source task solutions, $K_1, \ldots, K_N$, remain fixed. Updating these source tasks' parameters would cause a significant amount of unlearning in the source task solutions and result in a weaker transfer, which we observed empirically. This also enables the use of source task solutions, as long as we have the outputs alone, irrespective of how and where they come from.

Even though the agent follows $K_T$, we update the parameters of the base network that produces $K_B$, as if the action taken by the agent was based only on $K_B$.
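As a sketch of Eqs. (1)-(4), the combination can be written as a small PyTorch module. The layer sizes and names below are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class A2T(nn.Module):
    """Soft-attention combination of N frozen source solutions and a
    learnable base solution, as in Eqs. (1)-(4)."""
    def __init__(self, state_dim, n_actions, source_nets, hidden=128):
        super().__init__()
        self.source_nets = source_nets  # frozen K_1, ..., K_N
        self.base = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_actions))
        self.attention = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, len(source_nets) + 1))

    def forward(self, s):
        with torch.no_grad():                         # source solutions stay fixed
            ks = [net(s) for net in self.source_nets]
        ks.append(self.base(s))                       # K_B occupies the (N+1)th slot
        w = torch.softmax(self.attention(s), dim=-1)  # Eq. (3): weights w_{i,s}
        stacked = torch.stack(ks, dim=-1)             # (batch, actions, N + 1)
        return (stacked * w.unsqueeze(1)).sum(dim=-1) # Eq. (1): K_T(s)
```

Because the source outputs are computed under `no_grad`, gradients from the target-task objective reach only the attention and base parameters, matching the rule that $K_1, \ldots, K_N$ remain fixed.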
Due to this special way of updating $K_B$, apart from the experience got through the unique and individual contribution of $K_B$ to $K_T$ in parts of the state space where the source task solutions are not relevant, $K_B$ also uses the valuable experience got by using $K_T$, which uses the solutions of the source tasks as well.

This also means that, if there is a source task whose solution $K_j$ is useful for the target task in some parts of its state space, then $K_B$ tries to replicate $K_j$ in those parts of the state space. In practice, the source task solutions, though useful, might need to be modified to suit the target task perfectly. The base network takes care of these modifications required to make the useful source task solutions perfect for the target task. The special way of training the base network assists the architecture in achieving this faster. Note that the agent could follow/use $K_j$ through $K_T$ even when $K_B$ does not attain its replication in the corresponding parts of the state space. This allows for a good performance of the agent in earlier stages of training itself, when a useful source task is available and identified.

Since the attention is soft, our model has the flexibility to combine multiple solutions. The use of deep neural networks allows the model to work even for large, complex RL problems. The deep attention network allows the agent to learn complex selection functions, without worrying about representation issues a priori. To summarise, for a given state, A2T learns to attend to specific solutions and adapts this attention over different states, hence attaining useful transfer. A2T is general and can be used for transfer of solutions such as policy and value.

3.1 POLICY TRANSFER
The solutions that we transfer here are the source task policies, taking advantage of which, we learn a policy for the target task. Thus, we have $K_1, \ldots, K_N, K_B, K_T \equiv \pi_1, \ldots, \pi_N, \pi_B, \pi_T$. Here $\pi$ represents a stochastic policy, a probability distribution over all the actions. The agent acts in the target task by sampling actions from the probability distribution $\pi_T$. The target task policy $\pi_T$ is got as described in Eq.(1) and Eq.(2). The attention network that produces the weights for the different solutions is trained by the feedback got after taking action following $\pi_T$. The base network that produces $\pi_B$ is trained as if the sampled action came from $\pi_B$ (though it originally came from $\pi_T$), the implications of which were discussed in the previous section. When the attention network's weight for the policy $\pi_B$ is high, the mixture policy $\pi_T$ is dominated by $\pi_B$, and the base network learning is nearly on-policy. In the other cases, $\pi_B$ undergoes off-policy learning. But if we look closely, even in the latter case, since $\pi_B$ moves towards $\pi_T$, it tries to be nearly on-policy all the time. Empirically, we observe that $\pi_B$ converges. This architecture for policy transfer can be used alongside any algorithm that has an explicit representation of the policy. Here we describe two instantiations of A2T for policy transfer, one for direct policy search using the REINFORCE algorithm and another in the Actor-Critic setup.

3.1.1 POLICY TRANSFER IN REINFORCE ALGORITHMS USING A2T:
REINFORCE algorithms [Williams (1992)] can be used for direct policy search by making weight adjustments in a direction that lies along the gradient of the expected reinforcement. The full architecture is the same as the one shown in Fig.1a with $K \equiv \pi$. We do direct policy search, and the parameters are updated using REINFORCE.
Let the attention network be parametrized by $\theta_a$ and the base network which outputs $\pi_B$ be parametrized by $\theta_b$. The updates are given by:
$$\theta_a \leftarrow \theta_a + \alpha_a (r - b)\,\frac{\partial \sum_{t=1}^{M} \log \pi_T(s_t, a_t)}{\partial \theta_a} \quad (5)$$
$$\theta_b \leftarrow \theta_b + \alpha_b (r - b)\,\frac{\partial \sum_{t=1}^{M} \log \pi_B(s_t, a_t)}{\partial \theta_b} \quad (6)$$
where $\alpha_a, \alpha_b$ are non-negative factors, $r$ is the return obtained in the episode, $b$ is some baseline and $M$ is the length of the episode. $a_t$ is the action sampled by the agent at state $s_t$ following $\pi_T$. Note that while $\pi_T(s_t, a_t)$ is used in the update of the attention network, $\pi_B(s_t, a_t)$ is used in the update of the base network.

3.1.2 POLICY TRANSFER IN ACTOR-CRITIC USING A2T:
Actor-Critic methods [Konda & Tsitsiklis (2000)] are Temporal Difference (TD) methods that have two separate components, viz., an actor and a critic. The actor proposes a policy whereas the critic estimates the value function to critique the actor's policy. The updates to the actor happen through the TD-error, which is the one step estimation error that helps in reinforcing an agent's behaviour.

We use A2T for the actor part of the Actor-Critic. The architecture is shown in Fig.1b. The actor, A2T, is aware of all the previously learnt tasks and tries to use those solution policies for its benefit. The critic evaluates the action selection from $\pi_T$ on the basis of the performance on the target task. With the same notations as REINFORCE for $s_t, a_t, \theta_a, \theta_b, \alpha_a, \alpha_b, \pi_B, \pi_T$; let action $a_t$ dictated by $\pi_T$ lead the agent to next state $s_{t+1}$ with a reward $r_{t+1}$, let $V(s_t)$ represent the value of state $s_t$, and let $\gamma$ be the discount factor. Then, the update equations for the actor are as below:
$$\delta_t = r_{t+1} + \gamma V(s_{t+1}) - V(s_t) \quad (7)$$
$$\theta_a \leftarrow \theta_a + \alpha_a \delta_t\,\frac{\partial \log \pi_T(s_t, a_t)}{\partial \theta_a} \quad (8)$$
$$\theta_b \leftarrow \theta_b + \alpha_b \delta_t\,\frac{\partial \log \pi_B(s_t, a_t)}{\partial \theta_b} \quad (9)$$
Here, $\delta_t$ is the TD error. The state-value function $V$ of the critic is learnt using TD learning.
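A sketch of one episodic update implementing Eqs. (5)-(6): the same sampled actions update the attention parameters through $\log \pi_T$ and the base parameters through $\log \pi_B$. The `pi_T`/`pi_B` accessors and the two-optimizer split are our own illustration; `optim_a` is assumed to hold only $\theta_a$ and `optim_b` only $\theta_b$.

```python
import torch

def a2t_reinforce_step(model, optim_a, optim_b, states, actions, ret, baseline):
    """states: (M, d) tensor; actions: (M, 1) long tensor sampled from pi_T;
    ret: scalar episode return r; baseline: scalar baseline b."""
    adv = ret - baseline
    logp_T = torch.log(model.pi_T(states).gather(1, actions)).sum()
    logp_B = torch.log(model.pi_B(states).gather(1, actions)).sum()
    optim_a.zero_grad()
    (-adv * logp_T).backward(retain_graph=True)  # Eq. (5): applied to theta_a only
    optim_a.step()
    optim_b.zero_grad()                          # discard gradients leaked via pi_T
    (-adv * logp_B).backward()                   # Eq. (6): applied to theta_b only
    optim_b.step()
```

Zeroing `optim_b`'s gradients between the two backward passes is what restricts the first update to $\theta_a$ and the second to $\theta_b$, mirroring the partial derivatives in Eqs. (5)-(6).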
3.2 VALUE TRANSFER
In this case, the solutions being transferred are the source tasks' action-value functions, which we will call $Q$ functions. Thus, $K_1, \ldots, K_N, K_B, K_T \equiv Q_1, \ldots, Q_N, Q_B, Q_T$. Let $A$ represent the discrete action space for the tasks and $Q_i(s) = \{Q_i(s, a_j)\ \forall a_j \in A\}$. The agent acts by using $Q_T$ in the target task, which is got as described in Eq.(1) and Eq.(2). The attention network and the base network of A2T are updated as described in the architecture.

3.2.1 VALUE TRANSFER IN Q-LEARNING USING A2T:
The state-action value $Q$ function is used to guide the agent to select the optimal action $a$ at a state $s$, where $Q(s, a)$ is a measure of the long-term return obtained by taking action $a$ at state $s$. One way to learn optimal policies for an agent is to estimate the optimal $Q(s, a)$ for the task. Q-learning [Watkins & Dayan (1992)] is an off-policy Temporal Difference (TD) learning algorithm that does so. The Q-values are updated iteratively through the Bellman optimality equation [Puterman (1994)] with the rewards obtained from the task as below:
$$Q(s, a) \leftarrow \mathbb{E}\left[r(s, a, s') + \gamma \max_{a'} Q(s', a')\right]$$
In high dimensional state spaces, it is infeasible to update the Q-value for all possible state-action pairs. One way to address this issue is by approximating $Q(s, a)$ through a parametrized function approximator $Q(s, a; \theta)$, thereby generalizing over states and actions by operating on higher level features [Sutton & Barto (1998)]. The DQN [Mnih et al. (2015)] approximates the Q-value function with a deep neural network to be able to predict $Q(s, a)$ over all actions $a$, for all states $s$.

The loss function used for learning a Deep Q Network is as below:
$$L(\theta) = \mathbb{E}_{s,a,r,s'}\left[\left(y^{DQN} - Q(s, a; \theta)\right)^2\right], \quad \text{with } y^{DQN} = r + \gamma \max_{a'} Q(s', a'; \theta^-)$$
Here, $L$ represents the expected TD error corresponding to current parameter estimate $\theta$. $\theta^-$ represents the parameters of a separate target network, while $\theta$ represents the parameters of the online network. The usage of a target network is to improve the stability of the learning updates. The gradient descent step is shown below:
$$\nabla_\theta L(\theta) = \mathbb{E}_{s,a,r,s'}\left[\left(y^{DQN} - Q(s, a; \theta)\right)\nabla_\theta Q(s, a; \theta)\right]$$
To avoid correlated updates from learning on the same transitions that the current network simulates, an experience replay [Lin (1993)] $D$ (of fixed maximum capacity) is used, where the experiences are pooled in a FIFO fashion.

We use DQN to learn our experts $Q_i$, $i \in 1, 2, \ldots, N$, on the source tasks. Q-learning is used to ensure $Q_T(s)$ is driven to a good estimate of the $Q$ function for the target task. Taking advantage of the off-policy nature of Q-learning, both $Q_B$ and $Q_T$ can be learned from the experiences gathered by an $\epsilon$-greedy behavioral policy based on $Q_T$. Let the attention network that outputs $w$ be parametrised by $\theta_a$ and the base network outputting $Q_B$ be parametrised by $\theta_b$. Let $\theta_a^-$ and $\theta_b^-$ represent the parameters of the respective target networks. Note that the usage of target here is to signify the parameters ($\theta_a^-, \theta_b^-$) used to calculate the target value in the Q-learning update and is different from its usage in the context of the target task. The update equations are:
$$y_{Q_T} = r + \gamma \max_{a'} Q_T(s', a'; \theta_a^-, \theta_b^-) \quad (10)$$
$$L_{Q_T}(\theta_a, \theta_b) = \mathbb{E}_{s,a,r,s'}\left[\left(y_{Q_T} - Q_T(s, a; \theta_a, \theta_b)\right)^2\right] \quad (11)$$
$$L_{Q_B}(\theta_b) = \mathbb{E}_{s,a,r,s'}\left[\left(y_{Q_T} - Q_B(s, a; \theta_b)\right)^2\right] \quad (12)$$
$$\nabla_{\theta_a} L_{Q_T} = \mathbb{E}\left[\left(y_{Q_T} - Q_T(s, a)\right)\nabla_{\theta_a} Q_T(s, a)\right] \quad (13)$$
$$\nabla_{\theta_b} L_{Q_B} = \mathbb{E}\left[\left(y_{Q_T} - Q_B(s, a)\right)\nabla_{\theta_b} Q_B(s, a)\right] \quad (14)$$
$\theta_a$ and $\theta_b$ are updated with the above gradients using RMSProp. Note that the Q-learning updates for both the attention network (Eq.(11)) and the base network (Eq.(12)) use the target value generated by $Q_T$. We use target networks for both $Q_B$ and $Q_T$ to stabilize the updates and reduce the non-stationarity as in DQN training. The parameters of the target networks are periodically updated to those of the online networks.

[Figure 2: Different worlds for policy transfer experiments – (a) Chain World, (b) Puddle World 1, (c) Puddle World 2.]
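A sketch of the corresponding TD losses, Eqs. (10)-(12): one shared target $y_{Q_T}$ computed from the target networks drives both $Q_T$ and $Q_B$. The `q_T`/`q_B` accessors and the terminal mask are illustrative additions of ours.

```python
import torch
import torch.nn.functional as F

def a2t_q_losses(model, target_model, batch, gamma=0.99):
    """batch: tensors (s, a, r, s2, done); model and target_model expose
    q_T(s) -> (B, |A|) for the attention-combined value and q_B(s) for
    the base network's value."""
    s, a, r, s2, done = batch
    with torch.no_grad():  # Eq. (10): target from the frozen target networks
        y = r + gamma * (1.0 - done) * target_model.q_T(s2).max(dim=1).values
    q_T = model.q_T(s).gather(1, a.unsqueeze(1)).squeeze(1)
    q_B = model.q_B(s).gather(1, a.unsqueeze(1)).squeeze(1)
    return F.mse_loss(q_T, y), F.mse_loss(q_B, y)  # Eqs. (11) and (12)
```

Minimizing the first loss with an optimizer over $\theta_a$ and the second with an optimizer over $\theta_b$ (e.g. RMSProp, as in the paper) reproduces the gradients in Eqs. (13)-(14).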
4 EXPERIMENTS AND DISCUSSION
We evaluate the performance of our architecture A2T on policy transfer using two simulated worlds, viz., chain world and puddle world as described below. The main goal of these experiments is to test the consistency of results with the algorithm motivation. Chain world: Figure 2a shows the chain world where the goal of the agent is to go from one point in the chain (starting state) to another point (goal state) in the least number of steps. At each state the agent can choose to either move one position to the left or to the right. After reaching the goal state the agent gets a reward that is inversely proportional to the number of steps taken to reach the goal.

Puddle worlds: Figures 2b and 2c show the discrete version of the standard puddle world that is widely used in the Reinforcement Learning literature. In this world, the goal of the agent is to go from a specified start position to the goal position, maximising its return. At each state the agent can choose one of these four actions: move one position to the north, south, east or west. With 0.9 probability the agent moves in the chosen direction and with 0.1 probability it moves in a random direction irrespective of its choice of action. On reaching the goal state, the agent gets a reward of +10. On reaching other parts of the grid the agent gets different penalties as mentioned in the legend of the figures. We evaluate the performance of our architecture on value transfer using the Arcade Learning Environment (ALE) platform [Bellemare et al. (2012)]. Atari 2600: ALE provides a simulator for Atari 2600 games. This is one of the most commonly used benchmark tasks for deep reinforcement learning algorithms [Mnih et al. (2015), Mnih et al. (2016), Parisotto et al. (2015), Rusu et al. (2016)]. We perform our adaptive transfer learning experiments on the Atari 2600 game Pong.

4.1 ABILITY TO DO SELECTIVE TRANSFER
In this section, we consider the case when multiple partially favorable source tasks are available such that each of them can assist the learning process for different parts of the state space of the target task. The objective here is to first show the effectiveness of the attention network in learning to focus only on the source task relevant to the state the agent encounters while trying to complete the target task, and then to evaluate the full architecture with an additional randomly initialised base network.

[Figure 3: Results of the selective policy transfer experiments – (a) the weights given by the attention network: selective transfer in REINFORCE; (b) selective transfer in Actor-Critic.]

This is illustrated for the Policy Transfer setting using the chain world shown in (Fig. 2a). Consider that the target task $L_T$ is to start in $A$ or $B$ with uniform probability and reach $C$ in the least number of steps. Now, consider that two learned source tasks, viz., $L_1$ and $L_2$, are available. $L_1$ is the source task where the agent has learned to reach the left end ($A$) starting from the right end ($B$). In contrast, $L_2$ is the source task where the agent has learned to reach the right end ($B$) starting from the left end ($A$). Intuitively, it is clear that the target task should benefit from the policies learnt for tasks $L_1$ and $L_2$. We learn to solve the task $L_T$ using REINFORCE given the policies learned for $L_1$ and $L_2$. Figure 3a (i) shows the weights given by the attention network to the two source task policies for different parts of the state space at the end of learning. We observe that the attention network has learned to ignore $L_1$, and $L_2$ for the left, and right half of the state space of the target task, respectively. Next, we add the base network and evaluate the full architecture on this task. Figure 3a (ii) shows the weights given by the attention network to the different source policies for different parts of the state space at the end of learning. We observe that the attention network has learned to ignore $L_1$, and $L_2$ for the left, and right half of the state space of the target task, respectively. As the base network replicates $\pi_T$ over time, it has a high weight throughout the state space of the target task.

We also evaluate our architecture in a relatively more complex puddle world shown in Figure 2c. In this case, $L_1$ is the task of moving from $S_1$ to $G_1$, and $L_2$ is the task of moving from $S_2$ to $G_1$. In the target task $L_T$, the agent has to learn to move to $G_1$ starting from either $S_1$ or $S_2$ chosen with uniform probability. We learn the task $L_T$ using the Actor-Critic method, where the following are available: (i) learned policy for $L_1$, (ii) learned policy for $L_2$, and (iii) a randomly initialized policy network (the base network). Figure 3b shows the performance results.
We observe that Actor-Critic using A2T is able to use the policies learned for $L_1$ and $L_2$ and performs better than a network learning from scratch without any knowledge of source tasks.

We do a similar evaluation of the attention network, followed by our full architecture, for value transfer as well. We create partially useful source tasks through a modification of the Atari 2600 game Pong. We take inspiration from a real world scenario in the sport Tennis, where one could imagine two different right-handed (or left) players with the first being an expert player on the forehand but weak on the backhand, while the second is an expert player on the backhand but weak on the forehand. For someone who is learning to play tennis with the same style (right/left) as the experts, it is easy to follow the forehand expert player whenever he receives a ball on the forehand and follow the backhand expert whenever he receives a ball on the backhand.

We try to simulate this scenario in Pong. The trick is to blur the part of the screen where we want to force the agent to be weak at returning the ball. The blurring we use is to just black out all pixels in the specific region required. To make sure the blurring doesn't contrast with the background, we modify Pong to be played with a black background (pixel value 0) instead of the existing gray (pixel value 87). We construct two partially helpful source task experts $L_1$ and $L_2$. $L_1$ is constructed by training a DQN on Pong with the upper quadrant (the agent's side) blurred, while $L_2$ is constructed by training a DQN with the lower quadrant (the agent's side) blurred. This essentially results in the ball being invisible when it is in the upper quadrant for $L_1$ and the lower quadrant for $L_2$. We therefore expect $L_1$ to be useful in guiding the agent to return balls on the lower quadrant, and $L_2$ for the upper quadrant. The goal of the attention network is to learn suitable filters and parameters so that it will focus on the correct source task for a specific situation in the game. The source task experts $L_1$ and $L_2$ scored an average of 9.2 and 8 respectively on Pong game play with black background. With an attention network to suitably weigh the value functions of $L_1$ and $L_2$, an average performance of 17.2 was recorded just after a single epoch (250,000 frames) of training. (The score in Pong is in the range of $[-21, 21]$.) This clearly shows that the attention mechanism has learned to take advantage of the experts adaptively. Fig. 4 shows a visualisation of the attention weights for the same.

[Figure 4: Visualisation of the attention weights in the Selective Transfer with Attention Network experiment: Green and Blue bars signify the attention probabilities for Expert-1 ($L_1$) and Expert-2 ($L_2$) respectively. We see that in the first two snapshots, the ball is in the lower quadrant and, as expected, the attention is high on Expert-1, while in the third and fourth snapshots, as the ball bounces back into the upper quadrant, the attention increases on Expert-2.]
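The quadrant occlusion described above amounts to a one-line frame preprocessor. The exact crop boundaries below are our own guess at what "the agent's side" means geometrically, not the paper's code.

```python
import numpy as np

def blackout_quadrant(frame, upper=True):
    """Occlude the upper or lower quadrant on the agent's half of a
    grayscale Pong frame (H, W), assuming a black background (pixel 0)."""
    h, w = frame.shape
    out = frame.copy()
    rows = slice(0, h // 2) if upper else slice(h // 2, h)
    out[rows, w // 2:] = 0  # agent's half of the screen, chosen quadrant
    return out
```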
[Figure 5: Selective Value Transfer.]

We then evaluate our full architecture (A2T) in this setting, i.e. with the addition of a DQN learning from scratch (base network) to the above setting. The architecture can take advantage of the knowledge of the source task experts selectively early on during the training while using the expertise of the base network wherever required, to perform well on the target task. Figure 5 summarizes the results, where it is clear that learning with both the partially useful experts is better than learning with only one of them, which in turn is better than learning from scratch without any additional knowledge.

4.2 ABILITY TO AVOID NEGATIVE TRANSFER AND ABILITY TO TRANSFER FROM A FAVORABLE TASK
We first consider the case when only one learned source task is available such that its solution $K_1$ (policy or value) can hamper the learning process of the new target task. We refer to such a source task as an unfavorable source task. In such a scenario, the attention network shown in Figure 1a should learn to assign a very low weight (ignore) to $K_1$. We also consider a modification of this setting by adding another source task whose solution $K_2$ is favorable to the target task. In such a scenario, the attention network should learn to assign high weight (attend) to $K_2$ while ignoring $K_1$.

We now define an experiment using the puddle world from Figure 2b for policy transfer. The target task in our experiment is to maximize the return in reaching the goal state $G_1$ starting from any one of the states $S_1, S_2, S_3, S_4$. We artificially construct an unfavorable source task by first learning to solve the above task and then negating the weights of the topmost layer of the actor network. We then add a favorable task to the above setting. We artificially construct a favorable source task simply by learning to solve the target task and using the learned actor network. Figure 6 shows the results.

[Figure 7: Avoiding negative transfer and transferring value from a favorable task (higher the better) – (a) avoiding negative transfer (Pong) and transferring from a favorable task; (b) avoiding negative transfer (Freeway) and transferring from a favorable task. Specific training and architecture details are mentioned in the APPENDIX. The plots are averaged over two runs with different random seeds.]

The target task for the value transfer experiment is to reach expert level performance on Pong. We construct two kinds of unfavorable source tasks for this experiment. Inverse-Pong: a DQN on Pong trained with negated reward functions, that is with $R'(s, a) = -R(s, a)$ where $R(s, a)$ is the reward provided by the ALE emulator for choosing action $a$ at state $s$. Freeway: an expert DQN on another Atari 2600 game, Freeway, which has the same range of optimal value functions and same action space as Pong. We empirically verified that the Freeway expert DQN leads to negative transfer when directly initialized and fine-tuned on Pong, which makes this a good proxy for a negative source task expert even though the target task Pong has a different state space.

[Figure 6: Avoiding negative transfer and transferring policy from a favorable task (lower the better).]

We artificially construct a favorable source task by learning a DQN to achieve expertise on the target task (Pong) and use the learned network. Figure 7a compares the performance of the various scenarios when the unfavorable source task is Inverse-Pong, while Figure 7b offers a similar comparison with the negative expert being Freeway.

From all the above results, we can clearly see that A2T does not get hampered by the unfavorable source task, by learning to ignore the same, and performs competitively with just a randomly initialized learning on the target task without any expert available. Secondly, in the presence of an additional source task that is favorable, A2T learns to transfer useful knowledge from the same while ignoring the unfavorable task, thereby reaching expertise on the target task much faster than the other scenarios.
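The Inverse-Pong negative expert above only requires flipping the emulator reward, e.g. with a small wrapper (a Gym-style API is assumed here purely for illustration):

```python
class NegatedRewardEnv:
    """Implements R'(s, a) = -R(s, a) for training the Inverse-Pong expert."""
    def __init__(self, env):
        self.env = env

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return obs, -reward, done, info
```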
4.3 Visualization: Evolution of Attention Weights with One Positive and One Negative Expert

We present the evolution of attention weights for the experiment described in Section 4.2, where we focus on the efficacy of the A2T framework in providing an agent the ability to avoid negative transfer and to transfer from a favorable source task (a perfect expert). Figure 8 depicts the evolution of the attention weights (normalised to the range [0, 1]) during the training of the A2T framework. The corresponding experiment is the case where the target task is to solve Pong, while there are two source task experts: a well-trained Pong-playing DQN (serving as the positive expert) and the Inverse-Pong DQN trained with a negated reward function (serving as the negative expert). Additionally, there is also the base network, which learns from scratch using the experience gathered by the attentively combined behavioral policy of the expert networks, the base network and itself.

Figure 8: Evolution of attention weights with one positive and one negative expert.

We train the framework for 30 epochs, and the plot illustrates the attention weights every second epoch. We clearly see from Figure 8 that no undesirable co-adaptation happens during training, and the attention on the negative expert is uniformly low throughout. Initially, the framework needs to collect some experience to figure out that the positive expert is optimal (or close to optimal). Until then, the attention is mostly on the base network, which is learning from scratch. The attention then shifts to the positive expert, which in turn provides more rewarding episodes and transition tuples to learn from. Finally, the attention drifts slowly back to the base network, after which the attention is roughly random in choosing between executing the positive expert and the base network. This is because the base network has by then acquired expertise comparable to the positive expert, which happens to be optimal for the target task. This visualization clearly shows that A2T is a powerful framework for ignoring a negative expert throughout, and for using a positive expert appropriately to learn quickly from the gathered experience and acquire sufficient expertise on the target task.

4.4 When a Perfect Expert Is Not Available Among the Source Tasks

Figure 9: Partial positive expert experiment.

In our experiments in the previous subsection, dealing with the prevention of negative transfer and the use of a favorable source task, we considered the positive expert to be a perfect (close to optimal) expert on the very task we treat as the target task. This raises the question of relying on the presence of a perfect expert as a positive expert. If we have such a situation, the obvious solution is to execute each of the experts on the target task and vote for them with probabilities proportional to the average performance of each (a sketch of this baseline is given below).

The A2T framework is, however, generic and not intended to just do source task selection. We illustrate this with an additional baseline experiment, where the positive source task is an imperfect expert on the target task.
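The following is a hypothetical sketch of that voting baseline, assuming a gym-style environment interface; it is not code from the paper. It also makes the limitation concrete: the scheme only ever executes existing experts, so its performance is upper bounded by the best available expert.

```python
import numpy as np

def average_return(policy, env, episodes=10):
    """Estimate a policy's average episodic return on the target task."""
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done, _ = env.step(policy(obs))
            total += reward
        returns.append(total)
    return float(np.mean(returns))

def vote_for_expert(experts, env):
    """Pick one expert to execute, with probability proportional to its
    (shifted, non-negative) average return."""
    scores = np.array([average_return(p, env) for p in experts])
    probs = scores - scores.min() + 1e-8
    probs = probs / probs.sum()
    return experts[np.random.choice(len(experts), p=probs)]
```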
In such a case, a weighted-average vote among the available source task networks, based on their individual average rewards, is upper bounded by the performance of the best available positive expert, which here is an imperfect expert on the target task. Rather, the base network has to acquire new skills not present in the source task networks. We choose a partially trained network on Pong that scores an average of 8 (max: 21). The graph in Figure 9 clearly shows that the A2T framework with a partial Pong expert and a negative expert performs better than (i) learning from scratch and (ii) A2T with only one negative expert, and performs worse than A2T with one perfect positive expert and one negative expert. This is expected, because a partial expert cannot provide as much expert knowledge as a perfect expert, but it still provides some useful knowledge that speeds up solving the target task. An important conclusion from this experiment is that the A2T framework is capable of discovering new skills not available among any of the experts when such skills are required for optimally solving the target task. To maintain consistency, we perform the same number of runs for averaging scores; we experimented with both learning rates and picked the better-performing one (0.00025).

5 Conclusion and Future Work

In this paper we present a very general deep neural network architecture, A2T, for transfer learning that avoids negative transfer while enabling selective transfer from multiple source tasks in the same domain. We show simple ways of using A2T for policy transfer and value transfer. We empirically evaluate its performance with different algorithms, using simulated worlds and games, and show that it indeed achieves its stated goals. Apart from transferring task solutions, A2T can also be used for transferring other useful knowledge, such as a model of the world.

While in this work we focused on transfer between tasks that share the same state and action spaces and are in the same domain, the use of deep networks opens up the possibility of going beyond this setting. For example, a deep neural network can be used to learn common representations [Parisotto et al. (2015)] for multiple tasks, thereby enabling transfer between related tasks that could possibly have different state-action spaces. A hierarchical attention over the lower-level filters across source task networks, while learning the filters for the target task network, is another natural extension for transfer across tasks with different state-action spaces. The setup from Progressive Neural Networks [Rusu et al. (2016)] could be borrowed for the filter transfer, while the A2T setup can be retained for the policy/value transfer. Exploring this setting for continuous control tasks, so as to transfer from modular controllers while also avoiding negative transfer, is another potential direction for future research.

The nature of the tasks considered in our experiments is naturally connected to Hierarchical Reinforcement Learning and Continual Learning. For instance, the blurring experiments inspired by tennis, with experts for specific skills like the forehand and the backhand, could be considered as learning from sub-goals (program modules) like forehand and backhand to solve a more complex and broader task like tennis by invoking the relevant sub-goals (program modules).
This structure could be very useful for building a household robot for general-purpose navigation and manipulation, whereby specific skills such as manipulating different objects, navigating between different source-destination points, etc., could be invoked when necessary. The attention network in the A2T framework is essentially a soft meta-controller and hence presents itself as a powerful differentiable tool for Continual and Meta Learning. Meta-controllers have typically been designed with a discrete decision structure over high-level subgoals. This paper presents an alternative, differentiable meta-controller with a soft-attention scheme. We believe this aspect can be exploited for differentiable meta-learning architectures for hierarchical reinforcement learning. Overall, we believe that A2T is a novel way to approach different problems like Transfer Learning, Meta-Learning and Hierarchical Reinforcement Learning, and further refinements on top of this design are a good direction to explore.

Acknowledgements

Thanks to the anonymous reviewers of ICLR 2017 who provided thoughtful remarks and helped us revise the paper. We would also like to thank Sherjil Ozair, John Schulman, Yoshua Bengio, Sarath Chandar, Caglar Gulcehre and Charu Chauhan for useful feedback about the work.
Sy-SiOZNe
Review
7: Good paper, accept
The paper tackles important problems in multi-task reinforcement learning: avoiding negative transfer and allowing finer selective transfer. The method is based on a soft attention mechanism, is very general, and is demonstrated to be applicable in both policy gradient and value iteration methods. The introduction of the base network allows learning a new policy if the prior policies aren't directly applicable. State-dependent sub-policy selection allows finer control and can be thought of as assigning the state space to different sub-policies/experts. The tasks are relatively simplistic but sufficient to demonstrate the benefits. One limitation is that the method is simple and the results/claims are mostly empirical. It would be interesting to see extensions to the option-based framework, a stochastic hard attention mechanism, sub-policy pruning, and progressive networks. In figure 6, the red curve seems to perform worse than the rest in terms of final performance. Perhaps alternative information to put with the figures is the attention mask activation statistics during learning, so that we may observe that it learns to turn off adversarial sub-policies and rely mostly on the newly learned base policy. This is also generally good to check, to see if any weird co-adaptation is happening.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
rJD6Xgnoz
MIDL.amsterdam/2018/Conference
2018
Recurrent Inference Machines for Accelerated MRI Reconstruction
["Kai L\u00f8nning", "Patrick Putzky", "Matthan W. A. Caan", "Max Welling"]
Accelerated MRI reconstruction is important for making MRI faster and thus applicable in a broader range of problem domains. Computational tools allow for high-resolution imaging without the need to perform time-consuming measurements. Most recently, deep learning approaches have been applied to this problem. However, none of these methods have been shown to transfer well across different measurement settings. We propose to use Recurrent Inference Machines as a framework for accelerated MRI, which allows us to leverage the power of deep learning without explicit domain knowledge. We show in experiments that the model can generalize well across different setups, while at the same time it outperforms another deep learning method and a compressed sensing approach.
["RNNs", "MRI", "Inverse Problem", "Image Reconstruction", "Iterative Methods"]
Recurrent Inference Machines for Accelerated MRI Reconstruction

Kai Lønning (Informatics Institute, University of Amsterdam) kai.lonning@gmail.com
Patrick Putzky (AMLAB, University of Amsterdam) patrick.putzky@gmail.com
Matthan Caan (Academic Medical Center, Spinoza Centre for Neuroimaging) m.w.a.caan@amc.nl
Max Welling (AMLAB, University of Amsterdam; Canadian Institute for Advanced Research (CIFAR)) m.welling@uva.nl

1st Conference on Medical Imaging with Deep Learning (MIDL 2018), Amsterdam, The Netherlands.

Abstract

Accelerated MRI reconstruction is important for making MRI faster and thus applicable in a broader range of problem domains. Computational tools allow for high-resolution imaging without the need to perform time-consuming measurements. Most recently, deep learning approaches have been applied to this problem. However, none of these methods have been shown to transfer well across different measurement settings. We propose to use Recurrent Inference Machines as a framework for accelerated MRI, which allows us to leverage the power of deep learning without explicit domain knowledge. We show in experiments that the model can generalize well across different setups, while at the same time it outperforms another deep learning method and a compressed sensing approach.

1 Introduction

Magnetic Resonance Imaging (MRI) measures data in the space of net precession frequencies, known in MRI as k-space. Once enough samples in k-space are acquired to meet the Nyquist criterion, the MR-image of tissue density can be computed through the inverse Fourier transform. However, there are physical constraints on the slew rate of the scanner's gradient system, putting a lower bound on the time it takes to fully sample k-space and produce an image. As such, aspirations to reduce MR scan times amount to acquiring k-space samples below the Nyquist criterion and reconstructing the MR-image through a dealiasing algorithm. This amounts to solving what is known as an inverse problem. In this context, the forward model is a known process that describes the transformation taking the true image signal to the measured samples. What is not known is the forward model's inverse transformation, taking the measured signal back to the true image, as this information is lost when k-space is sparsely sampled.

The set of possible MR-images is huge, even when restricted to a particular anatomical region, contrast mechanism or resolution. Also consider that each element spawns a large set of image corruptions, one for each permitted set of k-space sub-samples. As such, learning a direct mapping from each corruption back to the original signal, for all possible original signals, is a highly complex problem, requiring an unfeasible number of parameters and a lot of data to train them. Therefore, part of the solution must help constrain the solution space. Well-established methods do this by careful design of features that exploit some known property inherent to MRI. In this work, we apply Recurrent Inference Machines (RIM), first proposed in [7] as general inverse problem solvers, to accelerated MRI reconstruction.
They constrain the solution space by learning an iterative process, where step-wise reassessments of the maximum a posteriori estimate result in incremental updates that infer the inverse transform of the forward model.

Apart from breaking a complex problem into multiple sub-problems, we conjecture that this iterative "meta-learning" approach prevents the model from overfitting on the image statistics of the dataset, by shifting focus toward learning the inversion procedure itself. This should result in a larger degree of invariance with respect to changes in the specific imaging settings. In this paper, we show that RIMs can accurately and efficiently reconstruct sparsely sampled MR-images at varying acceleration factors, and that their solution is robust against perturbations in sub-sampling points, image resolution levels, and to some extent the underlying anatomy being imaged, making them suitable candidates for distribution across different MR-acquisition set-ups.

2 Background and related work

We start by introducing the forward model of the inverse problem of accelerated MRI reconstruction. Let x ∈ C^n be the true image signal and y ∈ C^m, m < n, be the set of sparsely sampled frequency signals measured by the scanner in k-space. The sampled measurements can then be described in terms of the true image,

    y = PFx + n.    (1)

Here, the MR-image is projected onto its frequency domain through the Fourier transform F, followed by a sub-sampling mask P, which discards some fraction of values in k-space, thereby facilitating the acceleration in scan times. Further, samples are assumed to be subjected to additive, normally distributed noise n ~ N(0, σ²I) + iN(0, σ²I) stemming from various measurement errors accumulated by the scanner. While MR-scanners have multiple receiver coils, which each produce partial images with different spatial sensitivity, we assume for simplicity in this work that there is only one receiver coil with homogeneous spatial sensitivity.

The goal in accelerated MRI reconstruction is to find an inverse transform of the forward model in (1), thereby mapping incomplete measurements y to a high resolution image x. Usually, this is dealt with by solving the regularized optimization problem

    argmin_x { d(y, PFx) + λ R(x) },    (2)

where d evaluates the data consistency between the reconstruction and measurements, and R is a regularizer preventing overfitting, with regularization factor λ. Whereas the data-consistency term in (2) follows explicitly from the forward model in (1), the regularizer R, representing the prior distribution over MR-images, is more a matter of model design.
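To make the formulation concrete, the following is a minimal NumPy sketch of the forward model in (1). It is an illustration only, not the authors' code; in particular, storing the mask P as a boolean array over the full k-space grid (with unsampled entries zero-filled) and the orthonormal FFT convention are our assumptions.

```python
import numpy as np

def mri_forward(x, mask, sigma=0.0, rng=None):
    """Simulate y = P F x + n from eq. (1) for a 2D complex image x."""
    rng = np.random.default_rng() if rng is None else rng
    kspace = np.fft.fft2(x, norm="ortho")                # F x
    noise = sigma * (rng.standard_normal(x.shape)        # complex Gaussian n
                     + 1j * rng.standard_normal(x.shape))
    return np.where(mask, kspace + noise, 0.0)           # P discards unsampled points
```

With this convention the measurement y lives zero-filled on the full grid, which keeps the inverse FFT in the later snippets simple.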
A solution to (2) is typically found through an iterative process. One such example is the well-established method known as Compressed Sensing (CS) [6], which uses a transformation known to compress a particular sub-set of MR-images, combined with the ℓ1-norm, which yields sparse solutions when used as a regularizer. Relative to deep learning, this shallow form of feature extraction leads to a highly efficient algorithm; however, depending on the network architecture, learned features tend to win in terms of accuracy. As such, the use of deep neural networks is increasing, also in medical imaging [5].

In recently proposed deep learning-based solutions that model gradient descent on (2), the prior is learned by specifically designated convolutional layers [14][4]. Another paper reinforces data consistency in separate layers, interspersed between longer sequences of convolutional layers [10]. Other deep learning proposals move away from (2) altogether. For example, in [8], the Generative Adversarial Network's (GAN) overall success is repurposed for the task of removing aliasing artifacts in a non-iterative fashion, whereas in [3], they deploy the U-net architecture known from image segmentation tasks [9], in an attempt to learn a direct mapping from y to x.

As in [14] and [4], the RIM also learns an iterative scheme based on (2). However, instead of using separate parameters for each iteration, the RIM has a recurrent architecture, sharing weights across iterations, while utilizing internal states to distinguish them. Following Recurrent Neural Network (RNN) conventions, we shall hereby refer to these iterations as time-steps. We do not use the same input, but this approach bears resemblance to the model described in [1], where RNNs are trained to learn gradient descent schemes by using the gradient of the objective function as the network's input for each time-step.

[Figure 1: (a) Measurement process. The goal in MRI is to retrieve a high resolution image (top left). Measurements are done in k-space (bottom left), which is related to image space through a Fourier transform. In order to accelerate the measurement process, the k-space is sub-sampled (bottom right). Imaging the sub-sampled k-space measurement will lead to an aliased image (top right). The goal of this work is to find a function that maps from an incomplete k-space (bottom right) to a high resolution image (top left). (b) The RIM update function as used in this work. All images show magnitudes scaled individually. Magnitudes of intermediate internal states s1 and s2 were averaged over features. Bold lines depict connections within a single time-step, whereas dotted lines represent recurrent connections that pass information to the next time-step.]

3 Recurrent Inference Machines

In the context of accelerated MRI reconstruction, the gradient of the log-likelihood, corresponding to the data-consistency term d in (2), is given by ∇_{y|x} := (2/σ²) F⁻¹(PFx − y). As for the problem of evaluating the gradient of the log-prior distribution, corresponding to R in (2), this is solved by passing the current state x_t as an input to the network, such that any function that would implicitly approximate the log-prior gradient can be evaluated at x_t. The RIM's iterative scheme can then be described by the following update equations:

    s_0 = 0,  x_0 = F⁻¹y;
    s_{t+1} = g_φ(∇_{y|x_t}, x_t, s_t);
    x_{t+1} = x_t + h_φ(∇_{y|x_t}, x_t, s_{t+1}),    (3)

for 0 ≤ t < T. h_φ denotes the RIM's neural network, parametrized by φ, such that each pass through h_φ produces the next incremental update Δx_t in the RIM's iterative scheme. g_φ is simply the part of the network responsible for producing the next internal state s_{t+1}, which the RIM needs in order to keep track of time-steps and modify its behaviour based on the progression of the inference procedure.
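The scheme in (3) is straightforward to express in code. Below is a minimal NumPy sketch of the inference loop, not the authors' implementation: step_fn is a hypothetical placeholder standing in for the learned pair (g_φ, h_φ), σ = 1 is an arbitrary choice, and the small loss helper anticipates the per-time-step MSE defined later in eq. (4).

```python
import numpy as np

def data_grad(x, y, mask, sigma=1.0):
    # Gradient of the data-consistency term: (2 / sigma^2) F^-1 (P F x - y).
    resid = np.where(mask, np.fft.fft2(x, norm="ortho"), 0.0) - y
    return (2.0 / sigma**2) * np.fft.ifft2(resid, norm="ortho")

def rim_reconstruct(y, mask, step_fn, T=8):
    """Run the update equations in (3) for T time-steps."""
    x = np.fft.ifft2(y, norm="ortho")  # x_0 = F^-1 y, the zero-filled image
    s = None                           # s_0 = 0; the state layout is left to step_fn
    estimates = []
    for _ in range(T):
        dx, s = step_fn(data_grad(x, y, mask), x, s)  # g_phi and h_phi in one call
        x = x + dx                                    # x_{t+1} = x_t + dx_t
        estimates.append(x)
    return estimates  # every x_t is kept, since training penalizes all of them

def rim_loss(estimates, x_true):
    # Average per-pixel MSE over all T estimates, matching eq. (4) below.
    return sum(np.mean(np.abs(xt - x_true) ** 2) for xt in estimates) / len(estimates)
```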
In this work, the update function h_φ was implemented using a sequence of alternating convolutional layers and gated recurrent unit (GRU) cells, where the first two convolutional layers are followed by ReLU activation functions before the feature maps are passed to the GRUs. The GRU cells work as described in [2], and are assigned the task of maintaining the internal state, meaning that in practice there are two states s = (s1, s2) represented by s in (3). Figure 1b illustrates the way in which these layers were assembled. The current estimate x_t is simply concatenated with the log-likelihood gradient ∇_{y|x_t} along the channel dimension to produce the input of the first convolutional layer, resulting in 4 input channels due to the complex components also being given separate channels. This first layer is implemented with a kernel size of 5×5, whereas the next two convolutions have kernel sizes 3×3. All convolutions are padded to retain the same image size throughout the network. We will take F to mean the number of features in the GRU cells' states and the number of feature maps produced by the convolutional layers. This hyper-parameter is kept the same throughout all internal layers, before the final convolutional layer outputs the complex-valued image update Δx_t. Note that the GRU cells' weights are shared across image pixels, but differ across the feature maps produced by the convolutional layers, allowing the network to process images of any given size.
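As an illustration of this architecture, here is a sketch in PyTorch. The exact wiring of Fig. 1b is not fully specified in the text, so treat this as one plausible reading rather than the authors' model: the pixel-shared GRUs are written as 1×1-convolution gates, and all class and variable names are ours.

```python
import torch
import torch.nn as nn

class PixelGRU(nn.Module):
    """GRU whose weights are shared across pixels: all gates are 1x1 convolutions."""
    def __init__(self, ch):
        super().__init__()
        self.zr = nn.Conv2d(2 * ch, 2 * ch, kernel_size=1)   # update and reset gates
        self.cand = nn.Conv2d(2 * ch, ch, kernel_size=1)     # candidate state
    def forward(self, x, h):
        z, r = torch.sigmoid(self.zr(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        n = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * n + z * h

class RIMUpdate(nn.Module):
    """One pass of h_phi: conv(5x5)-ReLU-GRU, conv(3x3)-ReLU-GRU, conv(3x3) out."""
    def __init__(self, F=64):
        super().__init__()
        self.conv1 = nn.Conv2d(4, F, 5, padding=2)   # [x_t, grad], re/im as channels
        self.conv2 = nn.Conv2d(F, F, 3, padding=1)
        self.gru1, self.gru2 = PixelGRU(F), PixelGRU(F)
        self.out = nn.Conv2d(F, 2, 3, padding=1)     # complex-valued update dx_t
    def forward(self, x_t, grad, s1, s2):
        a = torch.relu(self.conv1(torch.cat([x_t, grad], dim=1)))
        s1 = self.gru1(a, s1)
        b = torch.relu(self.conv2(s1))
        s2 = self.gru2(b, s2)
        return self.out(s2), (s1, s2)
```

A caller would initialize s1 and s2 to zero tensors of shape (batch, F, H, W), matching s_0 = 0 in (3).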
We use the mean square error (MSE) as a loss function, where the estimate x_t is evaluated against the true image x for each time-step. The total loss to minimize is then given by the average per-pixel MSE,

    L(x_T) = (1/(nT)) Σ_{t=1}^{T} ||x_t − x||²_2,    (4)

where, as before, n is the total number of image pixels, and T is the total number of time-steps during training.

4 Experiments

4.1 Dataset

On a 7.0 T Philips Achieva scanner (Achieva, Philips Healthcare, Cleveland, USA) equipped with a 32-channel Nova head coil, 3D T2-weighted multi-echo FLASH data was acquired with an isotropic resolution of 0.7 mm³ and Field-of-View (FOV) 224×224×126 mm³, matrix size 320×180, 320 slices with transversal slice encoding direction, 6 echoes with echo times (TEs) ranging from 3 ms to 21 ms, repetition time (TR) 23.4 ms, flip angle 12°, and second order image-based B0 shimming. The data were fully sampled with an elliptical shutter, such that the total scanning time was 22.5 min.

On a 3.0 T Philips Ingenia scanner (Philips Healthcare, Best, The Netherlands) equipped with a 32-channel head coil, T1-weighted three-dimensional (3D) magnetization prepared rapid gradient echo (MPRAGE) data were acquired with an isotropic resolution of 1.0 mm³ and FOV 256×240 mm², matrix size 256×240, 225 slices with sagittal slice encoding direction, TFE factor 150, shot interval 2500 ms, inversion delay 900 ms, flip angle 9°, and first order shimming. The data were fully sampled with an elliptical shutter, such that the total scanning time was 10.8 min.

On both scanners, raw data was exported and stored for offline reconstruction experiments. Coil sensitivities were estimated from the data using auto-calibration, after which data of different coil elements was combined [11]. The data was then normalized with respect to the maximum magnitude value in the image domain.

12 healthy subjects were included, from whom written informed consent (under an institutionally approved protocol) was obtained beforehand.

4.2 Acceleration method

Previous papers on deep learning methods for accelerated MRI reconstruction train separate models for each acceleration factor they evaluate on [4, 8, 14, 3, 10]. Whether this is known to be necessary through empirical testing or due to a working assumption is unclear. Either way, our hypothesis is that the RIM is capable of dealing with a range of acceleration factors. All models in this paper were trained on acceleration factors that were randomly sampled from the uniform distribution U(1.5, 5.2). Sub-sampling patterns were then generated using a Gaussian distribution, thereby favoring low frequencies that contain more information about the general shape of the image content, while also creating incoherent noise due to randomness. We thereby adhere to the requirement of incoherent noise, as established in compressed sensing [6]. Furthermore, the signals emitted near the origin in k-space were always fully sampled within an ellipse with half-axes set to 2% of the image axes.

Model selection was done by testing on acceleration factors 2x, 3x, 4x and 5x, using the best average of three constant sub-sampling patterns within each acceleration category. The selected models were then evaluated on the hold-out datasets for final comparisons, using 10 sub-sampling patterns within each acceleration factor. Examples of acceleration patterns used for evaluation can be seen in Fig. 2.

[Figure 2: Examples of sub-sampling masks used for testing. Acceleration factors are, from left to right: 2x, 3x, 4x and 5x.]
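A hedged sketch of such a mask generator is given below. The paper only states the distribution family and the 2% central ellipse, so the Gaussian width and the way the density is normalised to hit the target acceleration are our assumptions.

```python
import numpy as np

def sample_mask(shape, acceleration, center_frac=0.02, width=0.15, seed=None):
    """Variable-density random mask: Gaussian density over k-space plus a
    fully sampled central ellipse (half-axes at center_frac of each axis)."""
    rng = np.random.default_rng(seed)
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]                      # k-space coords in [-0.5, 0.5)
    fx = np.fft.fftfreq(w)[None, :]
    density = np.exp(-(fy**2 + fx**2) / (2 * width**2))  # favor low frequencies
    density *= (h * w / acceleration) / density.sum()    # expected rate ~ 1/acceleration
    mask = rng.random(shape) < np.clip(density, 0.0, 1.0)
    mask |= (fy / center_frac)**2 + (fx / center_frac)**2 <= 1.0
    return mask
```

During training one would then draw a fresh factor per example, e.g. acceleration = rng.uniform(1.5, 5.2), matching the scheme described above.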
4.3 Research focus

Concerns about the data-driven, black-box nature of deep learning algorithms are sometimes raised as a reason to prefer more traditional approaches. Even if they may come with performance costs, non-learned models always extract the intended features and thus do as prescribed regardless of the data they receive. Furthermore, MR-images are acquired at a large variety of settings, i.e., different contrast mechanisms and levels of signal-to-noise and resolution. A reconstruction model should be able to generalize well across different settings typically used in the lab or clinic, preferably even without having encountered the particular setting during training.

With this in mind, we wish to illustrate the RIM's ability to generalize to images acquired under different conditions than those contained in the training set. We believe that, due to its internal states and by including the forward model during inference, the RIM is capable of separating features related to image statistics from the features that have to do with the inversion procedure itself, and therefore generalizes well to new types of data.

In this work, we train one RIM on the 3 T 1.0 mm dataset using slices along the transverse plane (RIM 3T), and another on the 7 T 0.7 mm dataset in the coronal plane (RIM 7T), after which we cross-evaluate their performance. Unless otherwise mentioned, the models were trained on randomly cropped patches of size 200×200, which were randomly rotated, flipped and mirrored for data augmentation purposes. The training sets consisted of 10 subjects, whereas 1 subject was used for model selection and a final subject for final evaluation. We will use the Structural Similarity index (SSIM) [13] as a metric for quantifying the reconstruction quality.

For comparing against other reconstruction methods, we will use CS and the U-net architecture [3]. CS is chosen because it is currently being used for online reconstruction in many scanners today, whereas the U-net is chosen for its ease of implementation and training, and because it is already a well-known architecture for per-pixel tasks in deep learning.

For CS, we use the BART toolbox described in [12]. We use the ℓ1-norm of the wavelet transform as a regularizer, with regularization factor 0.05 and 40 iterations. For the U-net, we used nearly the same architecture as described in [3]. However, we achieved better results using max-pooling instead of average pooling when downscaling the image. We also trained the network using the ADAM optimizer, and no post-processing step was used. Finally, a big difference with our implementation is that we compare on the same acceleration method as described in 4.2, whereas in [3] they use a one-dimensional periodic sub-sampling scheme which remains constant throughout training. We found that, unlike the RIM, the U-net could only learn to reconstruct the magnitude of an image, and would fail when attempting to learn the real and imaginary components in order to obtain a phase reconstruction as well.

We begin the experiments by investigating the effect of varying two of the RIM's hyper-parameters, namely the number of time-steps T and the number of features F.
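Since SSIM drives both model selection and the comparisons that follow, here is a minimal evaluation helper. The paper does not spell out how SSIM is computed, so taking it on magnitude images via scikit-image and the data_range choice are our assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_magnitude(x_hat, x_true):
    """SSIM between the magnitudes of a complex reconstruction and its target."""
    a, b = np.abs(x_hat), np.abs(x_true)
    return structural_similarity(a, b, data_range=float(b.max() - b.min()))
```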
5 Results and discussion

5.1 Hyper-parameters

The two plots in Fig. 3 show the SSIM scores on the model validation set of the final reconstructions for RIMs trained on different values of T and F. These models were trained on smaller patches of size 30×30.

[Figure 3: SSIM values for the final reconstruction of RIM models trained on different hyper-parameters, per acceleration factor (2x, 3x, 4x, 5x). Left: the effect of varying the number of time-steps (4 to 12, at F = 64). Right: the effect of varying the number of features (32, 64, 128, at T = 8).]

We first look at the effect of T, where we wish to find the optimal number of updates needed for the RIM to learn its iterative scheme until convergence. Setting the number of features to F = 64, we trained RIMs over a total of 4, 6, 8, 10 and 12 time-steps. The fewer the number of time-steps, the fewer training iterations are necessary for the model to learn. We also found that the fewer time-steps a model is trained on, the greater the reconstruction improvement is between time-steps. In other words, the 4th estimate of a model trained on 4 time-steps will be better than the 4th estimate produced by a model trained on 6 time-steps. However, the 6th estimate of a model trained on 6 time-steps will be even better. Moreover, learning becomes unstable when training on a low number of time-steps. Especially when training on 4 time-steps we observed diverging loss curves as the model approached a local minimum, requiring the use of gradient clipping and several restarts for successful training. Training on 6 time-steps was also problematic, whereas at 8 time-steps this problem had subsided.

The higher the acceleration factor, the more there is to be gained by increasing the number of time-steps. However, the improvement from training on 8 to 10 time-steps is marginal, whereas going from 10 to 12 time-steps even decreases the SSIM score for certain acceleration levels. We suggest that this decrease is best interpreted as noise due to differing mini-batch sizes and human error involved in determining when a model had converged sufficiently to stop training. At this point, the benefit from spending more time on training larger models becomes negligible, and there is also a question of inference speed, hence we proceed with models trained on 8 time-steps going forward.

Next, we ask what there is to be gained from increasing the number of RIM features. Keeping T = 8, we train three models setting the number of features to F = 32, 64, 128. Unlike when increasing the number of time-steps, we do not observe a marked drop in improvement when doubling the number of features. This would imply that we should increase the number of features further. However, we consider the time it takes to reconstruct an image to be too long at 128 features, and so we make a compromise and select 64 features going forward.

5.2 Generalization and overall comparisons against Compressed Sensing and the U-net

Fig. 4 illustrates the SSIM scores of the different models evaluated on all acceleration levels. The two plots show the difference when the models are evaluated on the two datasets described in 4.1. As seen, the RIMs are capable of producing nearly the same results on both types of data, regardless of the type of data they were trained on, even beating CS when evaluated on the unseen dataset. The U-nets show a greater tendency to overfit on the type of data used for training, as seen by the drop in performance on the unseen dataset. Also note that the whisker plots, corresponding to the standard deviation in SSIM scores, are narrower for the two RIM models than for CS and the U-net, indicating that the RIM is more robust against variations in sub-sampling patterns or input images.

Qualitative results can be seen for samples from the 3T and 7T test sets in Fig. 6 and Fig. 5, respectively. Extended figures with acceleration factors 2, 3, and 4 can be found in Appendix A.

[Figure 4: SSIM of final reconstructions of RIM models that were trained on datasets from the 3 T scanner at 1 mm resolution, and from the 7 T scanner at 0.7 mm resolution, evaluated on both the 7T and 3T data per acceleration factor, alongside CS and the two U-nets. Both models generalize well to the resolution they were not trained on, even out-performing compressed sensing.]

[Figure 5: Reconstructions of a sample image from the 7T test set at acceleration factor 5x, with per-method SSIM overlaid on each panel (zero-filled input, RIM 3T, RIM 7T, U-net 3T, U-net 7T, CS). The two RIMs and U-nets were trained on 3T and 7T datasets, respectively.]

As seen in the figures, RIMs produce less noisy reconstructions with more detail preserved when compared to CS. The reconstruction quality is also highly consistent, whereas images produced by CS vary with respect to the image and sub-sampling pattern used. The same holds when comparing RIMs with U-nets, but to a larger extent. Our results indicate that U-nets tend to overfit on data type and the specific pattern used for sub-sampling, and are likely to have suffered under the randomized acceleration method used in this work during training. What the RIM can extract from its current reconstruction state, log-likelihood gradient, and hidden states, the U-net must make up for in terms of model parameters, which, at 1,328,833 versus 94,336 parameters, will more likely lead to overfitting.

As for reconstruction times, the results shown for the RIM in this paper take 236 ms, although this time can be reduced by lowering the number of features or time-steps without sacrificing too much in terms of reconstruction quality, as indicated by Fig. 3. For instance, with F = 32, the reconstruction time becomes 126 ms.
At 40 ms, the U-net is 6 times faster than the RIM, which may make it preferable for low acceleration factors at high resolution levels. These aforementioned times are measured on a single GPU. Unfortunately, we only have a CPU time for CS, which is 1203 ms; however, we expect that CS might be faster than the RIM when applied on the GPU.

[Figure 6: Reconstructions of a sample image from the 3T test set at acceleration factor 5x, with per-method SSIM overlaid on each panel (zero-filled input, RIM 3T, RIM 7T, U-net 3T, U-net 7T, CS). The two RIMs and U-nets were trained on 3T and 7T datasets, respectively.]

6 Conclusion

We have shown that RIMs are capable of accelerated MRI reconstruction, producing reconstructions of higher quality than the U-net and Compressed Sensing. Reconstructions were less sensitive to varying input images and perturbations in the sub-sampling pattern used for data acquisition. Further, we have shown that RIMs generalize well to datasets with unseen levels of resolution and signal-to-noise ratios, and even to data recorded from a different angle than what was trained on. This emphasises the RIM's ability to perform well across the various measurement scenarios that are present in clinical practice. Future work will show how well this method can generalize over other measurement scenarios such as different types of anatomical data and different numbers of recording coils.

Acknowledgments

We kindly thank dr. F.M. Vos and dr. G.A.M. Arkesteijn for their support in data acquisition. Kai Lønning and Max Welling are supported by the Canadian Institute for Advanced Research (CIFAR). Patrick Putzky is supported by the Netherlands Organisation for Scientific Research (NWO) and the Netherlands Institute for Radio Astronomy (ASTRON) through the big bang, big data grant.

References

[1] Marcin Andrychowicz, Misha Denil, Sergio Gomez Colmenarejo, Matthew W. Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. CoRR, abs/1606.04474, 2016.
[2] Kyunghyun Cho, Bart van Merrienboer, Çaglar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, 2014.
[3] Chang Min Hyun, Hwa Pyung Kim, Sung Min Lee, Sungchul Lee, and Jin Keun Seo. Deep learning for undersampled MRI reconstruction. CoRR, 2017.
[4] Kerstin Hammernik, Teresa Klatzer, Erich Kobler, Michael P. Recht, Daniel K. Sodickson, Thomas Pock, and Florian Knoll. Learning a variational network for reconstruction of accelerated MRI data. Magnetic Resonance in Medicine, 79(6):3055–3071, 2018.
[5] Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen A.W.M. van der Laak, Bram van Ginneken, and Clara I. Sánchez. A survey on deep learning in medical image analysis. Medical Image Analysis, 42:60–88, 2017.
[6] Michael Lustig, David L. Donoho, Juan M. Santos, and John M. Pauly. Compressed sensing MRI. IEEE Signal Processing Magazine, 2007.
[7] Patrick Putzky and Max Welling. Recurrent inference machines for solving inverse problems. CoRR, abs/1706.04008, 2017.
[8] Tran Minh Quan, Thanh Nguyen-Duc, and Won Ki Jeong. Compressed sensing MRI reconstruction with cyclic loss in generative adversarial networks. CoRR, 2017.
[9] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. CoRR, abs/1505.04597, 2015.
[10] J. Schlemper, J. Caballero, J. V. Hajnal, A. N. Price, and D. Rueckert. A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Transactions on Medical Imaging, 37(2):491–503, Feb 2018.
[11] Martin Uecker, Peng Lai, Mark J. Murphy, Patrick Virtue, Michael Elad, John M. Pauly, Shreyas S. Vasanawala, and Michael Lustig. ESPIRiT — an eigenvalue approach to autocalibrating parallel MRI: where SENSE meets GRAPPA. Magnetic Resonance in Medicine, 71(3):990–1001, 2014.
[12] Martin Uecker, Jon Tamir, Frank Ong, Christian Holme, and Michael Lustig. BART: version 0.4.01, June 2017.
[13] Zhou Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, April 2004.
[14] Yan Yang, Jian Sun, Huibin Li, and Zongben Xu. ADMM-Net: a deep learning approach for compressive sensing MRI. CoRR, abs/1705.06869, 2017.

A Reconstructions on acceleration factors 2x, 3x, and 4x

[Figure 7: Final reconstructions of an image from the 7T test set, at acceleration factors 2x, 3x, 4x and 5x, with per-method SSIM overlaid on each panel. The two RIMs and U-nets were trained on 3T and 7T datasets.]

[Figure 8: Final reconstructions of an image from the 3T test set, at acceleration factors 2x, 3x, 4x and 5x, with per-method SSIM overlaid on each panel. The two RIMs and U-nets were trained on 3T and 7T datasets.]
Hkt131ZRf
Interesting work but authors need to highlight the novel aspects
2: Marginally below acceptance threshold
This paper presents a Recurrent Inference Machines (RIM) framework to accelerate the reconstruction of magnetic resonance imaging (MRI). The RIM model is a combination of alternating convolutional layers and gated recurrent units (GRU). A ReLU activation function is used after the convolutional layers. This study is conducted on datasets from two scanners (7.0 T & 3.0 T), where the model was trained on randomly cropped patches of size 200 x 200 after performing data augmentation. The reconstruction quality is estimated using the structural similarity index. The paper also shows the reconstruction of MR images at different acceleration factors (2x, 3x, 4x, 5x). Overall, the RIM outperformed the U-net model and a compressed sensing approach. The paper is well written and the problem is well explained. Below are some major and minor comments, which I would like the authors to address:

1- The proposed method is based on previously published work (the RIM model) and, as such, this reviewer found it difficult to find any significant novel contribution(s) in this paper as compared to the RIM model [7]. I would encourage the authors to highlight the novel aspects of their work and any modifications or task-specific information incorporated into the previously published RIM model.

2- In the comparative analysis, it is not clear which compressed sensing algorithm was used for comparison, and there is no reference available for the reader. It would be helpful to add the appropriate citation and briefly explain the reason for selecting that particular compressed sensing algorithm.

3- The reconstruction time for the U-net architecture is 6x faster than the RIM model. It would be interesting to briefly explain the reason (the U-net being fully convolutional could be one of the reasons) or provide an estimate of the number of trainable parameters involved in both models.

4- The motivation behind selecting the RIM model to solve this problem can be improved further.

5- In Fig. 1b, it would be more informative to highlight which part is h(theta) and which is g(theta), and the overall flow of the diagram can be improved.

6- Is there any particular reason for using a 30x30 patch size for hyper-parameter selection instead of 200x200?
2: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Recurrent Inference Machines for Accelerated MRI Reconstruction ### Paper Abstract Accelerated MRI reconstruction is important for making MRI faster and thus applicable in a broader range of problem domains. Computational tools allow for high-resolution imaging without the need to perform time-consuming measurements. Most recently, deep learning approaches have been applied to this problem. However, none of these methods have been shown to transfer well across different measurement settings. We propose to use Recurrent Inference Machines as a framework for accelerated MRI, which allows us to leverage the power of deep learning without explicit domain knowledge. We show in experiments that the model can generalize well across different setups, while at the same time it outperforms another deep learning method and a compressed sensing approach. ### Paper Keywords ["RNNs", "MRI", "Inverse Problem", "Image Reconstruction", "Iterative Methods"] ### Paper Content Recurrent Inference Machinesfor Accelerated MRI ReconstructionKai LønningInformatics InstituteUniversity of Amsterdamkai.lonning@gmail.comPatrick PutzkyAMLAB,University of Amsterdampatrick.putzky@gmail.comMatthan CaanAcademic Medical Center,Spinoza Centre for Neuroimagingm.w.a.caan@amc.nlMax WellingAMLAB, University of Amsterdam,Canadian Institute for Advanced Research (CIFAR)m.welling@uva.nlAbstractAccelerated MRI reconstruction is important for making MRI faster and thusapplicable in a broader range of problem domains. Computational tools allowfor high-resolution imaging without the need to perform time-consuming mea-surements. Most recently, deep learning approaches have been applied to thisproblem. However, none of these methods have been shown to transfer well acrossdifferent measurement settings. We propose to use Recurrent Inference Machinesas a framework for accelerated MRI, which allows us to leverage the power ofdeep learning without explicit domain knowledge. We show in experiments thatthe model can generalize well across different setups, while at the same time itoutperforms another deep learning method and a compressed sensing approach.1 IntroductionMagnetic Resonance Imaging (MRI) measures data in the space of net precession frequencies, knownin MRI as k-space. Once enough samples in k-space are acquired to meet the Nyquist-criterion, theMR-image of tissue density can be computed through the inverse Fourier transform. However, thereare physical constraints to the slew rate of the scanner’s gradient system, putting a lower bound onthe time it takes to fully sample k-space and produce an image. As such, aspirations to reducing MRscan times amount to acquiring k-space samples below the Nyquist-criterion and reconstructing theMR-image through a dealiasing algorithm. This amounts to solving what is known as an inverseproblem. In this context, the forward model is a known process that describes the transformationtaking the true image signal to the measured samples. What is not known, is the forward model’sinverse transformation, taking the measured signal back to the true image, as this information is lostwhen k-space is sparsely sampled.The set of possible MR-images is huge, even when restricted to a particular anatomical region,contrast mechanism or resolution. 
Also consider that each element spawns a large set of imagecorruptions, one for each permitted set of k-space sub-samples. As such, learning a direct mappingfrom each corruption back to the original signal, for all possible original signals, is a highly complex1st Conference on Medical Imaging with Deep Learning (MIDL 2018), Amsterdam, The Netherlands.problem, requiring an unfeasible number of parameters and a lot of data to train them. As such,part of the solution must help constrain the solution space. Well-established methods do this bycareful design of features that exploit some known property inherent to MRI. In this work, we applyRecurrent Inference Machines (RIM) for accelerated MRI reconstruction, which were first proposedin [7] as general inverse problem solvers. They constrain the solution space by learning an iterativeprocess, where step-wise reassessments of the maximum a posteriori estimate result in incrementalupdates that infer the inverse transform of the forward model.Apart from breaking a complex problem into multiple sub-problems, we conjecture that this iterative"meta-learning"-approach prevents the model from overfitting on the image statistics of the dataset,by shifting focus toward learning the inversion procedure itself. This should result in a larger degreeof invariance with respect to changes in the specific imaging settings. In this paper, we show thatRIMs can accurately and efficiently reconstruct sparsely sampled MR-images at varying accelerationfactors, and that their solution is robust against perturbations in sub-sampling points, image resolutionlevels, and to some extent the underlying anatomy being imaged, making them suitable candidatesfor distribution across different MR-acquisition set-ups.2 Background and related workWe start by introducing the forward model of the inverse problem of accelerated MRI reconstruction.Letx2Cnbe the true image signal and y2Cm,mn, be the set of sparsely sampled frequencysignals measured by the scanner in k-space. The sampled measurements can then be described interms of the true image,y=PFx+n: (1)Here, the MR-image is projected onto its frequency domain through the Fourier transform F,followed by a sub-sampling mask P, which discards some fraction of values in k-space, therebyfacilitating the acceleration in scan times. Further, samples are assumed to be subjected to additive,normally distributed noise nsN0;I2+iN0;I2stemming from various measurementerrors accumulated by the scanner. While MR-scanners have multiple receiver coils, which eachproduce partial images with different spatial sensitivity, we assume for simplicity in this work thatthere is only one receiver coil with homogeneous spatial sensitivity.The goal in accelerated MRI reconstruction is to find an inverse transform of the forward model in(1), thereby mapping incomplete measurements yto a high resolution image x. Usually, this is dealtwith by solving the regularized optimization problemargminxfd(y;PFx) +R(x)g; (2)wheredevaluates the data consistency between the reconstruction and measurements, and Ris aregularizer preventing overfitting, with regularization factor . Whereas the data-consistency termin(2)follows explicitly from the forward model in (1), the regularizer R, representing the priordistribution over MR-images, is more a matter of model design.A solution to (2)is typically found through an iterative process. 
One such example is the well-established method known as Compressed Sensing (CS) [ 6], which uses a transformation knownto compress a particular sub-set of MR-images, combined with the `1-norm, which yields sparsesolutions when used as a regularizer. Relative to deep learning, this shallow form of feature extractionleads to a highly efficient algorithm, however, depending on the network architecture, learned featurestend to win in terms of accuracy. As such, the use of deep neural networks is increasing, also inmedical imaging [5].In recently proposed deep learning-based solutions that model gradient descent on (2), the prioris learned by specifically designated convolutional layers [ 14][4]. Another paper reinforces dataconsistency in separate layers, interspersed between longer sequences of convolutional layers [ 10].Other deep learning proposals move away from (2)altogether. For example, in [ 8], the GenerativeAdversarial Network’s (GAN) overall success is repurposed for the task of removing aliasing artifactsin a non-iterative fashion, whereas in [ 3], they deploy the U-net architecture known from imagesegmentation tasks [9], in an attempt to learn a direct mapping from ytox.As in [ 14] and [ 4], the RIM also learns an iterative scheme based on (2). However, instead of usingseparate parameters for each iteration, the RIM has a recurrent architecture, sharing weights across2Image Space K-SpaceTarget Measurement(a)Measurement Process (b)RIM architectureFigure 1: (a) The goal in MRI is to retrieve a high resolution image (top left). Meaurements aredone in k-space (bottom left) which is related to image space through a Fourier transform. In orderto accelerate the measurement process, the k-space is sub-sampled (bottom right). Imaging thesub-sampled k-space meaurement will lead to an aliased image (top right). The goal of this work is tofind a function that maps from an incomplete k-space (bottom right) to a high resolution image (top left).(b)The RIM update function has used in this work. All images show magnitudes scaled individually.Magnitudes of intermediate internal states s1and s2were averaged over features. Bold lines depictconnections within a single timestep, whereas dotted lines represent recurrent connections that passinformation to the next time step.iterations, while utilizing internal states to distinguish them. Following Recurrent Neural Network(RNN) conventions, we shall hereby refer to these iterations as time-steps. We do not use the sameinput, but this approach bears resemblance to the model described in [ 1], where RNNs are trained tolearn gradient descent schemes by using the gradient of the objective function as the network’s inputfor each time-step.3 Recurrent Inference MachinesIn the context of accelerated MRI reconstruction, the gradient of the log-likelihood, corresponding tothe data-consistency term din(2), is given byryjx:=2F1(PFxy). As for the problemof evaluating the gradient of the log-prior distribution, corresponding to Rin(2), this is solved bypassing the current state xtas an input to the network, such that any function that would implicitlyapproximate the log-prior gradient can be evaluated at xt. The RIM’s iterative scheme can then bedescribed by the following update equations:s0=0; x0=F1y;st+1=gryjxt;xt;st;xt+1=xt+hryjxt;xt;st+1;(3)for0t < T .hdenotes the RIM’s neural network, parametrized by , such that each passthroughhproduces the next incremental update xtin the RIM’s iterative scheme. 
gis simplythe part of the network responsible for producing the next internal state st+1, which the RIM needs inorder to keep track of time-steps and modify its behaviour based on the progression of the inferenceprocedure.In this work, the update function hwas implemented using a sequence of alternating convolutionallayers and gated recurrent unit (GRU) cells, where the first two convolutional layers are followed byReLU activation functions before the feature maps are passed to the GRUs. The GRU cells work asdescribed in [ 2], and are assigned the task of maintaining the internal state, meaning that in practicethere are two states s=s1;s2represented by sin(3). Figure 1b illustrates the way in whichthese layers were assembled. The current estimate xtis simply concatenated with the log-likelihoodgradientryjxtalong the channel dimension to produce the input of the first convolutional layer,resulting in 4 input channels due to the complex components also being given separate channels. This3first layer is implemented with a kernel size of 55, whereas the next two convolutions have kernelsizes 33. All convolutions are padded to retain the same image size through-out the network. Wewill takeFto mean the number of features in the GRU cells’ states and the number of feature mapsproduced by the convolutional layers. This hyper-parameter is kept the same through-out all internallayers, before the final convolutional layer outputs the complex-valued image update xt. Note thatthe GRU cells’ weights are shared across image pixels, but differ across the feature maps producedby the convolutional layers, allowing the network to process images of any given size.We use the mean square error (MSE) as a loss function, where the estimate xtis evaluated against thetrue image xfor each time-step. The total loss to minimize is then given by the average per-pixelMSE,L(xT) =1nTTXt=1kxtxk22; (4)where, as before, nis the total number of image pixels, and Tis the total number of time steps duringtraining.4 Experiments4.1 DatasetOn a 7:0 TPhilips Achieva scanner (Achieva, Philips Healthcare, Cleveland, USA) equipped with a32-channel Nova head coil, 3D T2-weighted multi-echo FLASH data was acquired with an isotropicresolution of 0:7 mm3and Field-of-View (FOV) 224224126 mm3, matrix size 320180,320slices with transversal slice encoding direction, 6 echoes with echo times (TEs) ranging from 3 msto21 ms , repetition time (TR) 23:4 ms , flip angle 12, and second order image-based B0 shimming.The data were fully sampled with an elliptical shutter, such that the total scanning time was 22:5 min .On a 3:0 TPhilips Ingenia scanner (Philips Healthcare, Best, The Netherlands) equipped with a 32-channel head coil, T1-weighted three-dimensional (3D) magnetization prepared rapid gradient echo(MPRAGE) data were acquired with an isotropic resolution of 1:0 mm3and FOV 256240 mm2,matrix size 256240,225slices with sagittal slice encoding direction, TFE factor 150, shot interval2500 ms , inversion delay 900 ms , flip angle 9, and first order shimming. The data were fully sampledwith an elliptical shutter, such that the total scanning time was 10:8 min .On both scanners, raw data was exported and stored for offline reconstruction experiments. Coilsensitivities were estimated from the data using auto-calibration, after which data of different coilelements was combined [ 11]. 
The data was then normalized with respect to the maximum magnitudevalue in the image domain.12 healthy subjects were included, from whom written informed consent (under an institutionallyapproved protocol) was obtained beforehand.4.2 Acceleration methodPrevious papers on deep learning methods for accelerated MRI reconstruction train separate modelsfor each acceleration factor they evaluate on [ 4,8,14,3,10]. Whether this is known to be necessarythrough empirical testing or due to a working assumption is unclear. Either way, our hypothesis isthat the RIM is capable of dealing with a range of acceleration factors. All models in this paper weretrained on acceleration factors that were randomly sampled from the uniform distribution U(1:5;5:2).Sub-sampling patterns were then generated using a Gaussian distribution, thereby favoring lowfrequencies that contain more information about the general shape of the image content, while alsocreating incoherent noise due to randomness. We thereby adhere to the requirement of incoherentnoise, as established in compressed sensing [ 6]. Furthermore, the signals emitted near the origin ink-space were always fully sampled within an ellipse with half-axes set to 2% of the image axes.Model selection was done by testing on acceleration factors 2x, 3x, 4x and 5x, using the best averageof three constant sub-sampling patterns within each acceleration category. The selected models werethen evaluated on the hold-out datasets for final comparisons, using 10 sub-sampling patterns withineach acceleration factor. Examples of acceleration patterns used for evaluation can be seen in Fig. 2.4Figure 2: Examples of sub-sampling masks used for testing. Acceleration factors are, from left toright: 2x, 3x, 4x and 5x.4.3 Research focusConcerns referring to the data-driven black box-nature of deep learning algorithms are sometimesraised as a point of preference toward more traditional approaches. Even if they may come withperformance costs, non-learned models always extract the intended features and thus do as prescribedregardless of the data they receive. Furthermore, MR-images are acquired at a large variety of settings,ie. different contrast mechanisms and levels of signal-to-noise and resolution. A reconstruction modelshould be able to generalize well across different settings typically used in the lab or clinic, preferablyeven without having encountered the particular setting during training.With this in mind, we wish to illustrate the RIM’s ability to generalize to images acquired underdifferent conditions than those contained in the training set. We believe that, due to its internal statesand by including the forward model during inference, the RIM is capable of separating featuresrelated to image statistics from the features that have to do with the inversion procedure itself, andtherefore generalizes well to new types of data.In this work, we train one RIM on the 3 T 1:0 mm -dataset using slices along the transverse plane(RIM 3T), and another on the 7 T 0:7 mm -dataset in the coronal plane (RIM 7T), after which wecross-evaluate their performance. Unless otherwise mentioned, the models were trained on randomlycropped patches of size 200200, which were randomly rotated, flipped and mirrored for dataaugmentation purposes. The training sets consisted of 10 subjects, whereas 1 subject was used formodel selection and a final subject for final evaluation. 
We will use the Structural Similarity index(SSIM) [13] as a metric for quantifying the reconstruction quality.For comparing against other reconstruction methods, we will use CS and the U-net architecture [ 3].CS is chosen because it is currently being used for online reconstruction in many scanners today,whereas the U-net is chosen for its ease of implementation and training, and because it is already awell-known architecture for per-pixel tasks in deep learning.For CS, we use the BART toolbox described in [ 12]. We use the `1-norm of the wavelet transformas a regularizer, with regularization factor 0.05 and 40 iterations. For the U-net, we used nearly thesame architecture as described in [ 3]. However, we achieved better results using max-pooling insteadof average pooling when downscaling the image. We also trained the network using the ADAMoptimizer, and no post-processing step was used. Finally, a big difference with our implementationis that we compare on the same acceleration method as described in 4.2, whereas in [ 3] they use aone-dimensional periodic sub-sampling scheme which remains constant through-out training. Wefound that, unlike the RIM, the U-net could only learn to reconstruct the magnitude of an image, andwould fail when attempting to learn the real and imaginary components in order to obtain a phasereconstruction as well.We begin the experiments by investigating the effect of varying a couple of the RIM’s hyper-parameters, namely the number of time-steps Tand the number of features F.5 Results and discussion5.1 Hyper-parametersThe two plots in Fig. 3 show the SSIM scores on the model validation set of the final reconstructionsfor RIMs trained on different values of TandF. These models were trained on smaller patches ofsize3030.54 6 8 10 12Number of time-steps0.950.960.970.980.99SSIMNumber of features: F = 6432 64 128Number of featuresNumber of time-steps: T = 8Acceleration2x3x4x5xFigure 3: The plots show SSIM-values for the final reconstruction of RIM models trained on differenthyper-parameters. Left figure: The effect of varying the number of features. Right figure: The effect ofvarying the number of time-steps.We first look at the effect of T, where we wish to find the optimal number of updates needed for theRIM to learn its iterative scheme until convergence. Setting the number of features to F= 64 , wetrained RIMs over a total of 4, 6, 8, 10 and 12 time-steps. The fewer the number of time-steps, thefewer the number of training iterations are necessary for the model to learn. We also found that, thefewer the time-steps a model is trained on, the greater the reconstruction improvement is betweentime-steps. In other words, the 4th estimate of a model trained on 4 time-steps will be better thanthe 4th estimate produced by a model trained on 6 time-steps. However, the 6th estimate of a modeltrained on the 6 time-steps will be even better. Moreover, learning becomes unstable when trainingon a low number of time-steps. Especially when training on 4 time-steps we observed divergingloss-curves as it approached a local minimum, requiring the use of gradient clipping and severalrestarts for successful training. Training on 6 time-steps was also problematic, whereas at 8 time-stepsthis problem had subsided.The higher the acceleration factor, the more there is to be gained by increasing the number of time-steps. 
However, the improvement from training on 8 to 10 time-steps is marginal, whereas going from10 to 12 time-steps even decreases the SSIM score for certain acceleration levels. We suggest thatthis decrease is best interpreted as noise due to differing mini-batch sizes and human error involvedin determining when a model had converged sufficiently to stop training. At this point, the benefitfrom spending more time on training larger models becomes negligible, and there is also a questionof inference speed, hence we proceed with models trained on 8 time-steps going forward.Next, we ask what there is to be gained from increasing the number of RIM features. Keeping T= 8,we train three models setting the number of features to F= 32;64;128. Unlike when increasing thenumber of time-steps, we do not observe a marked drop in improvement when doubling the numberof features. This would imply that we should increase the number of features further. However, weconsider the time it takes to reconstruct an image to be too long at 128 features, and so we make acompromise and select 64 features going forward.5.2 Generalization and overall comparisons against Compressed Sensing and the U-netFig. 4 illustrates the SSIM scores of the different models evaluated on all acceleration levels. Thetwo plots show the difference when the models are evaluated on the two datasets described in 4.1. Asseen, the RIMs are capable of producing nearly the same results on both types of data, regardless ofthe type of data they were trained on, even beating CS when evaluated on the unseen dataset. TheU-nets show a greater tendency to overfit on the type of data used for training, as seen by the drop inperformance on the unseen dataset. Also note that the whisker plots, corresponding to the standarddeviation in SSIM scores, are narrower for the two RIM models than for CS and the U-net, indicatingthat the RIM is more robust against variations in sub-sampling patterns or input images.Qualitative results can be seen for samples from the 3T and 7T test sets in Fig. 6 and Fig. 5,respectively. Extended figures with acceleration factors 2, 3, and 4 can be found in A.62x 3x 4x 5xAcceleration0.850.900.951.00SSIMEvaluation Data = 7T2x 3x 4x 5xAccelerationEvaluation Data = 3TModelCSRIM 3TRIM 7TU-net 3TU-net 7TFigure 4: SSIM of final reconstructions of RIM models that were trained on datasets from the 3 T-scanner at 1 mm resolution, and from the 7 T-scanner at 0:7 mm resolution. Both models generalizewell to the resolution they were not trained on, even out-performing compressed sensing.2.0x acc. | SSIM: 0.899 RIM 3T | SSIM: 0.988RIM 7T | SSIM: 0.989 U-net 7T | SSIM: 0.975U-net 3T | SSIM: 0.938CS | SSIM: 0.9893.0x acc. | SSIM: 0.747 RIM 3T | SSIM: 0.974RIM 7T | SSIM: 0.977 U-net 7T | SSIM: 0.952U-net 3T | SSIM: 0.905CS | SSIM: 0.9774.0x acc. | SSIM: 0.701 RIM 3T | SSIM: 0.962RIM 7T | SSIM: 0.966 U-net 7T | SSIM: 0.94U-net 3T | SSIM: 0.882CS | SSIM: 0.963Target 7T 0.7mm5.0x acc. | SSIM: 0.658 RIM 3T | SSIM: 0.947RIM 7T | SSIM: 0.957 U-net 7T | SSIM: 0.927U-net 3T | SSIM: 0.857CS | SSIM: 0.941Figure 5: Reconstructions of a sample image from the 7T test set at acceleration factor 5x. The twoRIMs and U-nets were trained on 3T and 7T datasets, respectively.As seen in the figures, RIMs produce less noisy reconstructions with more detail preserved whencompared to CS. The reconstruction quality is also highly consistent, whereas images produced byCS vary with respect to the image and sub-sampling pattern used. 
The same holds when comparingRIMs with U-nets, but to a larger extent. Our results indicate that U-nets tend to overfit on data typeand the specific pattern used for sub-sampling, and are likely to have suffered under the randomizedacceleration method used in this work during training. What the RIM can extract from its currentreconstruction state, log-likelihood gradient, and hidden states, the U-net must make up for in terms ofmodel parameters, which, at 1;328;833versus 94;336parameters, will more likely lead to overfitting.As for reconstruction times, the results shown for the RIM in this paper take 236 ms , althoughthis time can be reduced by lowering the number of features or time-steps without sacrificing toomuch in terms of reconstruction quality, as indicated by Fig. 3. For instance, with F= 32 , thereconstruction time becomes 126 ms . At40 ms , the U-net is 6 times faster than the RIM, which maymake it preferable for low acceleration factors at high resolution levels. These aforementioned timesare measured on a single GPU. Unfortunately, we only have CPU time for CS, which is 1203 ms ,however, we expect that CS might be faster than the RIM when applied on the GPU.7Target 3T 1.0mm5.0x acc. | SSIM: 0.622 RIM Brain 3T | SSIM: 0.981RIM Brain 7T | SSIM: 0.976 U-net Brain 7T | SSIM: 0.912U-net Brain 3T | SSIM: 0.944CS | SSIM: 0.9544.0x acc. | SSIM: 0.646 RIM Brain 3T | SSIM: 0.986RIM Brain 7T | SSIM: 0.981 U-net Brain 7T | SSIM: 0.918U-net Brain 3T | SSIM: 0.958CS | SSIM: 0.973.0x acc. | SSIM: 0.705 RIM Brain 3T | SSIM: 0.991RIM Brain 7T | SSIM: 0.988 U-net Brain 7T | SSIM: 0.939U-net Brain 3T | SSIM: 0.97CS | SSIM: 0.9832.0x acc. | SSIM: 0.862 RIM Brain 3T | SSIM: 0.996RIM Brain 7T | SSIM: 0.993 U-net Brain 7T | SSIM: 0.967U-net Brain 3T | SSIM: 0.986CS | SSIM: 0.991Figure 6: Reconstructions of a sample image from the 3T test set at acceleration factor 5x. The twoRIMs and U-nets were trained on 3T and 7T datasets, respectively.6 ConclusionWe have shown that RIMs are capable of accelerated MRI reconstruction, thereby producing re-constructions of higher quality than the U-net and Compressed Sensing. Reconstructions were lesssensitive to varying input images and perturbations in the sub-sampling pattern used for data acquisi-tion. Further, we have shown that RIMs generalize well to datasets with unseen levels of resolutionand signal-to-noise ratios, and even to data recorded from a different angle than what was trained on.This emphasises the RIMs ability to perform well across the various measurement scenarios that apresent in clinical practice. Furture work will show how well this method can generalize over othermeasurement scenarios such as different types of anatomical data and different numbers of recordingcoils.AcknowledgmentsWe kindly thank dr. F.M. V os and dr. G.A.M. Arkesteijn for their support in data acquisition.Kai Lønning and Max Welling are supported by the Canadian Institute for Advanced Research(CIFAR).Patrick Putzky is supported by the Netherlands Organisation for Scienetific Research (NWO) and theNetherlands Institute for Radio Astronomy (ASTRON) through the big bang, big data grant.References[1]Marcin Andrychowicz, Misha Denil, Sergio Gomez Colmenarejo, Matthew W. Hoffman, DavidPfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradientdescent. CoRR , abs/1606.04474, 2016.8[2] Kyunghyun Cho, Bart van Merrienboer, Çaglar Gülçehre, Dzmitry Bahdanau, Fethi Bougares,Holger Schwenk, and Yoshua Bengio. 
[3] Chang Min Hyun, Hwa Pyung Kim, Sung Min Lee, Sungchul Lee, and Jin Keun Seo. Deep learning for undersampled MRI reconstruction. CoRR, 2017.
[4] Kerstin Hammernik, Teresa Klatzer, Erich Kobler, Michael P. Recht, Daniel K. Sodickson, Thomas Pock, and Florian Knoll. Learning a variational network for reconstruction of accelerated MRI data. Magnetic Resonance in Medicine, 79(6):3055-3071, 2018.
[5] Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen A.W.M. van der Laak, Bram van Ginneken, and Clara I. Sánchez. A survey on deep learning in medical image analysis. Medical Image Analysis, 42:60-88, 2017.
[6] Michael Lustig, David L. Donoho, Juan M. Santos, and John M. Pauly. Compressed sensing MRI. In IEEE Signal Processing Magazine, 2007.
[7] Patrick Putzky and Max Welling. Recurrent inference machines for solving inverse problems. CoRR, abs/1706.04008, 2017.
[8] Tran Minh Quan, Thanh Nguyen-Duc, and Won Ki Jeong. Compressed sensing MRI reconstruction with cyclic loss in generative adversarial networks. CoRR, 2017.
[9] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. CoRR, abs/1505.04597, 2015.
[10] J. Schlemper, J. Caballero, J. V. Hajnal, A. N. Price, and D. Rueckert. A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Transactions on Medical Imaging, 37(2):491-503, Feb 2018.
[11] Martin Uecker, Peng Lai, Mark J. Murphy, Patrick Virtue, Michael Elad, John M. Pauly, Shreyas S. Vasanawala, and Michael Lustig. ESPIRiT - an eigenvalue approach to autocalibrating parallel MRI: where SENSE meets GRAPPA. Magnetic Resonance in Medicine, 71(3):990-1001, 2014.
[12] Martin Uecker, Jon Tamir, Frank Ong, Christian Holme, and Michael Lustig. BART: version 0.4.01, June 2017.
[13] Zhou Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600-612, April 2004.
[14] Yan Yang, Jian Sun, Huibin Li, and Zongben Xu. ADMM-Net: A deep learning approach for compressive sensing MRI. CoRR, abs/1705.06869, 2017.

A Reconstructions on acceleration factors 2x, 3x, and 4x

[Figure 7 image panels omitted; per-method SSIM on a 7T test image. At 2x: input 0.899, RIM 3T 0.988, RIM 7T 0.989, U-net 3T 0.938, U-net 7T 0.975, CS 0.989. At 3x: input 0.747, RIM 3T 0.974, RIM 7T 0.977, U-net 3T 0.905, U-net 7T 0.952, CS 0.977. At 4x: input 0.701, RIM 3T 0.962, RIM 7T 0.966, U-net 3T 0.882, U-net 7T 0.94, CS 0.963. At 5x: input 0.658, RIM 3T 0.947, RIM 7T 0.957, U-net 3T 0.857, U-net 7T 0.927, CS 0.941.]
Figure 7: Final reconstructions of an image from the 7T test set, at acceleration factors 2x, 3x, 4x and 5x. The two RIMs and U-nets were trained on 3T and 7T datasets.
[Figure 8 image panels omitted; per-method SSIM on a 3T test image. At 2x: input 0.862, RIM 3T 0.996, RIM 7T 0.993, U-net 3T 0.986, U-net 7T 0.967, CS 0.991. At 3x: input 0.705, RIM 3T 0.991, RIM 7T 0.988, U-net 3T 0.97, U-net 7T 0.939, CS 0.983. At 4x: input 0.646, RIM 3T 0.986, RIM 7T 0.981, U-net 3T 0.958, U-net 7T 0.918, CS 0.97. At 5x: input 0.622, RIM 3T 0.981, RIM 7T 0.976, U-net 3T 0.944, U-net 7T 0.912, CS 0.954.]
Figure 8: Final reconstructions of an image from the 3T test set, at acceleration factors 2x, 3x, 4x and 5x. The two RIMs and U-nets were trained on 3T and 7T datasets.

### Review Title
Interesting work but authors need to highlight the novel aspects

### Review Text
This paper presents a recurrent inference machines (RIM) framework to accelerate the reconstruction of magnetic resonance imaging (MRI). The RIM model is a combination of alternating convolutional layers and gated recurrent units (GRU). A ReLU activation function is used after the convolution layers. The study is conducted on datasets from two scanners (7.0 T & 3.0 T), where the model was trained on randomly cropped patches of size 200 x 200 after performing data augmentation. The reconstruction quality is estimated using the structural similarity index. The paper also shows the reconstruction of MR images at different acceleration factors (2x, 3x, 4x, 5x). Overall the RIM outperformed the U-net model and a compressed sensing approach. The paper is well written and the problem is well explained. Below are some major and minor comments, which I would like the authors to comment on:

1- The proposed method is based on a previously published work (RIM model) and as such, this reviewer found it difficult to find any significant novel contribution(s) in this paper as compared to the RIM model [7]. I would encourage the authors to highlight the novel aspects of their work, and to state whether there are any modifications or task-specific information incorporated within the previously published RIM model.

2- In the comparative analysis, it is not clear which compressed sensing algorithm was used for comparison, and there is no reference available for the reader. It would be helpful to add the appropriate citation and briefly explain the reason for selecting that particular compressed sensing algorithm.

3- The U-net architecture is 6x faster at reconstruction than the RIM model. It would be interesting to briefly explain the reason (the U-net being fully convolutional could be one of the reasons) or provide an estimate of the number of trainable parameters involved in both models.

4- The motivation behind selecting the RIM model to solve this problem can be improved further.

5- In Fig 1b, it would be more informative to highlight which part is h(theta) and which is g(theta); the overall flow of the diagram can also be improved.

6- Is there any particular reason for using a 30x30 patch size for hyper-parameter selection instead of 200x200?

### Review Rating
2: Marginally below acceptance threshold

### Review Confidence
2: The reviewer is fairly confident that the evaluation is correct
HkLXCE9lx
ICLR.cc/2017/conference
2017
RL^2: Fast Reinforcement Learning via Slow Reinforcement Learning
["Yan Duan", "John Schulman", "Xi Chen", "Peter L. Bartlett", "Ilya Sutskever", "Pieter Abbeel"]
Deep reinforcement learning (deep RL) has been successful in learning sophisticated behaviors automatically; however, the learning process requires a huge number of trials. In contrast, animals can learn new tasks in just a few trials, benefiting from their prior knowledge about the world. This paper seeks to bridge this gap. Rather than designing a “fast” reinforcement learning algorithm, we propose to represent it as a recurrent neural network (RNN) and learn it from data. In our proposed method, RL^2, the algorithm is encoded in the weights of the RNN, which are learned slowly through a general-purpose (“slow”) RL algorithm. The RNN receives all information a typical RL algorithm would receive, including observations, actions, rewards, and termination flags; and it retains its state across episodes in a given Markov Decision Process (MDP). The activations of the RNN store the state of the “fast” RL algorithm on the current (previously unseen) MDP. We evaluate RL^2 experimentally on both small-scale and large-scale problems. On the small-scale side, we train it to solve randomly generated multi-arm bandit problems and finite MDPs. After RL^2 is trained, its performance on new MDPs is close to human-designed algorithms with optimality guarantees. On the large-scale side, we test RL^2 on a vision-based navigation task and show that it scales up to high-dimensional problems.
["Reinforcement Learning", "Deep learning"]
ABSTRACT

Deep reinforcement learning (deep RL) has been successful in learning sophisticated behaviors automatically; however, the learning process requires a huge number of trials. In contrast, animals can learn new tasks in just a few trials, benefiting from their prior knowledge about the world. This paper seeks to bridge this gap. Rather than designing a "fast" reinforcement learning algorithm, we propose to represent it as a recurrent neural network (RNN) and learn it from data. In our proposed method, RL^2, the algorithm is encoded in the weights of the RNN, which are learned slowly through a general-purpose ("slow") RL algorithm. The RNN receives all information a typical RL algorithm would receive, including observations, actions, rewards, and termination flags; and it retains its state across episodes in a given Markov Decision Process (MDP). The activations of the RNN store the state of the "fast" RL algorithm on the current (previously unseen) MDP. We evaluate RL^2 experimentally on both small-scale and large-scale problems. On the small-scale side, we train it to solve randomly generated multi-armed bandit problems and finite MDPs. After RL^2 is trained, its performance on new MDPs is close to human-designed algorithms with optimality guarantees. On the large-scale side, we test RL^2 on a vision-based navigation task and show that it scales up to high-dimensional problems.

1 INTRODUCTION

In recent years, deep reinforcement learning has achieved many impressive results, including playing Atari games from raw pixels (Guo et al., 2014; Mnih et al., 2015; Schulman et al., 2015), and acquiring advanced manipulation and locomotion skills (Levine et al., 2016; Lillicrap et al., 2015; Watter et al., 2015; Heess et al., 2015b; Schulman et al., 2015; 2016). However, many of the successes come at the expense of high sample complexity. For example, the state-of-the-art Atari results require tens of thousands of episodes of experience (Mnih et al., 2015) per game. To master a game, one would need to spend nearly 40 days playing it with no rest. In contrast, humans and animals are capable of learning a new task in a very small number of trials. Continuing the previous example, the human player in Mnih et al. (2015) only needed 2 hours of experience before mastering a game.

We argue that the reason for this sharp contrast is largely due to the lack of a good prior, which results in these deep RL agents needing to rebuild their knowledge about the world from scratch. Although Bayesian reinforcement learning provides a solid framework for incorporating prior knowledge into the learning process (Strens, 2000; Ghavamzadeh et al., 2015; Kolter & Ng, 2009), exact computation of the Bayesian update is intractable in all but the simplest cases. Thus, practical reinforcement learning algorithms often incorporate a mixture of Bayesian and domain-specific ideas to bring down sample complexity and computational burden. Notable examples include guided policy search with unknown dynamics (Levine & Abbeel, 2014) and PILCO (Deisenroth & Rasmussen, 2011). These methods can learn a task using a few minutes to a few hours of real experience, compared to days or even weeks required by previous methods (Schulman et al., 2015; 2016; Lillicrap et al., 2015).
However, these methods tend to make assumptions about the environment (e.g., instrumentation for access to the state at learning time), or become computationally intractable in high-dimensional settings (Wahlström et al., 2015).

Rather than hand-designing domain-specific reinforcement learning algorithms, we take a different approach in this paper: we view the learning process of the agent itself as an objective, which can be optimized using standard reinforcement learning algorithms. The objective is averaged across all possible MDPs according to a specific distribution, which reflects the prior that we would like to distill into the agent. We structure the agent as a recurrent neural network, which receives past rewards, actions, and termination flags as inputs in addition to the normally received observations. Furthermore, its internal state is preserved across episodes, so that it has the capacity to perform learning in its own hidden activations. The learned agent thus also acts as the learning algorithm, and can adapt to the task at hand when deployed.

We evaluate this approach on two sets of classical problems, multi-armed bandits and tabular MDPs. These problems have been extensively studied, and there exist algorithms that achieve asymptotically optimal performance. We demonstrate that our method, named RL^2, can achieve performance comparable with these theoretically justified algorithms. Next, we evaluate RL^2 on a vision-based navigation task implemented using the ViZDoom environment (Kempka et al., 2016), showing that RL^2 can also scale to high-dimensional problems.

2 METHOD

2.1 PRELIMINARIES

We define a discrete-time finite-horizon discounted Markov decision process (MDP) by a tuple $M = (\mathcal{S}, \mathcal{A}, P, r, \rho_0, \gamma, T)$, in which $\mathcal{S}$ is a state set, $\mathcal{A}$ an action set, $P : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}_+$ a transition probability distribution, $r : \mathcal{S} \times \mathcal{A} \to [-R_{\max}, R_{\max}]$ a bounded reward function, $\rho_0 : \mathcal{S} \to \mathbb{R}_+$ an initial state distribution, $\gamma \in [0, 1]$ a discount factor, and $T$ the horizon. In policy search methods, we typically optimize a stochastic policy $\pi_\theta : \mathcal{S} \times \mathcal{A} \to \mathbb{R}_+$ parametrized by $\theta$. The objective is to maximize its expected discounted return, $\eta(\pi_\theta) = \mathbb{E}_\tau[\sum_{t=0}^{T} \gamma^t r(s_t, a_t)]$, where $\tau = (s_0, a_0, \ldots)$ denotes the whole trajectory, $s_0 \sim \rho_0(s_0)$, $a_t \sim \pi_\theta(a_t \mid s_t)$, and $s_{t+1} \sim P(s_{t+1} \mid s_t, a_t)$.

2.2 FORMULATION

We now describe our formulation, which casts learning an RL algorithm as a reinforcement learning problem, and hence the name RL^2.

We assume knowledge of a set of MDPs, denoted by $\mathcal{M}$, and a distribution over them, $\rho_M : \mathcal{M} \to \mathbb{R}_+$. We only need to sample from this distribution. We use $n$ to denote the total number of episodes allowed to be spent with a specific MDP. We define a trial to be such a series of episodes of interaction with a fixed MDP.

[Figure 1 diagram omitted: two trials, each consisting of two episodes; within a trial the agent's hidden states $h_t$ are carried across episode boundaries, while a fresh MDP is drawn for each trial.]
Figure 1: Procedure of agent-environment interaction

This process of interaction between an agent and the environment is illustrated in Figure 1. Here, each trial happens to consist of two episodes, hence $n = 2$. For each trial, a separate MDP is drawn from $\rho_M$, and for each episode, a fresh $s_0$ is drawn from the initial state distribution specific to the corresponding MDP. Upon receiving an action $a_t$ produced by the agent, the environment computes reward $r_t$, steps forward, and computes the next state $s_{t+1}$. If the episode has terminated, it sets termination flag $d_t$ to 1, which otherwise defaults to 0. Together, the next state $s_{t+1}$, action $a_t$, reward $r_t$, and termination flag $d_t$ are concatenated to form the input to the policy (to make sure that the inputs have a consistent dimension, we use placeholder values for the initial input to the policy), which, conditioned on the hidden state $h_{t+1}$, generates the next hidden state $h_{t+2}$ and action $a_{t+1}$. At the end of an episode, the hidden state of the policy is preserved to the next episode, but not preserved between trials.
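To make the trial bookkeeping concrete, the following is a minimal runnable sketch of a single trial. It is our illustration rather than the authors' code; the `TwoArmedBandit` environment and `RandomAgent` stand-in are hypothetical placeholders mimicking the interfaces described above.

```python
import numpy as np

rng = np.random.default_rng(0)

class TwoArmedBandit:
    """Toy inner MDP: a stateless two-armed Bernoulli bandit, episodes of length 1."""
    def __init__(self):
        self.p = rng.uniform(size=2)           # arm means drawn once per trial
    def reset(self):
        return 0.0                             # dummy observation
    def step(self, a):
        r = float(rng.uniform() < self.p[a])
        return 0.0, r, True                    # (next state, reward, done)

class RandomAgent:
    """Stand-in for the RNN policy; a real RL^2 agent would update `hidden`."""
    def initial_state(self):
        return None
    def placeholder_input(self):
        return 0, 0.0, 0.0                     # placeholder (a, r, d) for the first input
    def step(self, inputs, hidden):
        return int(rng.integers(2)), hidden

def run_trial(agent, make_env, n_episodes):
    env = make_env()                           # one fixed MDP for the whole trial
    hidden = agent.initial_state()             # reset only at trial boundaries
    a, r, d = agent.placeholder_input()
    total = 0.0
    for _ in range(n_episodes):
        s, done = env.reset(), False           # fresh s0 every episode
        while not done:
            a, hidden = agent.step((s, a, r, d), hidden)
            s, r, done = env.step(a)
            d = 1.0 if done else 0.0
            total += r
    return total

print(run_trial(RandomAgent(), TwoArmedBandit, n_episodes=10))
```

The two details that distinguish RL^2 from ordinary episodic RL are both visible here: `hidden` is reset only between trials, and the previous (a, r, d) tuple survives episode boundaries as part of the next input.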
The objective under this formulation is to maximize the expected total discounted reward accumulated during a single trial rather than a single episode. Maximizing this objective is equivalent to minimizing the cumulative pseudo-regret (Bubeck & Cesa-Bianchi, 2012). Since the underlying MDP changes across trials, as long as different strategies are required for different MDPs, the agent must act differently according to its belief over which MDP it is currently in. The agent is thus forced to integrate all the information it has received, including past actions, rewards, and termination flags, and adapt its strategy continually. Hence, we have set up an end-to-end optimization process, where the agent is encouraged to learn a "fast" reinforcement learning algorithm.

For clarity of exposition, we have defined the "inner" problem (of which the agent sees $n$ each trial) to be an MDP rather than a POMDP. However, the method can also be applied in the partially observed setting without any conceptual changes: the agent is then faced with a sequence of POMDPs, and it receives an observation $o_t$ instead of state $s_t$ at time $t$. The visual navigation experiment in Section 3.3 is actually an instance of this POMDP setting.

2.3 POLICY REPRESENTATION

We represent the policy as a general recurrent neural network. Each timestep, it receives the tuple $(s, a, r, d)$ as input, which is embedded using a function $\phi(s, a, r, d)$ and provided as input to an RNN. To alleviate the difficulty of training RNNs due to vanishing and exploding gradients (Bengio et al., 1994), we use Gated Recurrent Units (GRUs) (Cho et al., 2014), which have been demonstrated to have good empirical performance (Chung et al., 2014; Józefowicz et al., 2015). The output of the GRU is fed to a fully connected layer followed by a softmax function, which forms the distribution over actions.

We have also experimented with alternative architectures which explicitly reset part of the hidden state each episode of the sampled MDP, but we did not find any improvement over the simple architecture described above.
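A minimal PyTorch sketch of this architecture follows. It is our illustration, not the authors' implementation: the layer sizes, the ReLU on the embedding, and the one-hot action encoding are assumptions not pinned down by the text.

```python
import torch
import torch.nn as nn

class RL2Policy(nn.Module):
    """Embed (s, a, r, d), feed a GRU, then a linear layer + softmax over actions."""
    def __init__(self, obs_dim, n_actions, hidden_dim=64):
        super().__init__()
        # input = observation, one-hot previous action, previous reward, done flag
        in_dim = obs_dim + n_actions + 2
        self.embed = nn.Linear(in_dim, hidden_dim)   # plays the role of phi(s, a, r, d)
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, n_actions)

    def forward(self, s, a_onehot, r, d, h):
        x = torch.cat([s, a_onehot, r, d], dim=-1)
        h = self.gru(torch.relu(self.embed(x)), h)   # hidden state carried across steps
        return torch.softmax(self.head(h), dim=-1), h

# Usage with made-up dimensions: batch of 2, 4-dim observations, 3 actions.
policy = RL2Policy(obs_dim=4, n_actions=3)
B = 2
s, a = torch.zeros(B, 4), torch.zeros(B, 3)
r, d = torch.zeros(B, 1), torch.zeros(B, 1)
h = torch.zeros(B, 64)
probs, h = policy(s, a, r, d, h)
print(probs.shape)   # torch.Size([2, 3])
```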
2.4 POLICY OPTIMIZATION

After formulating the task as a reinforcement learning problem, we can readily use standard off-the-shelf RL algorithms to optimize the policy. We use a first-order implementation of Trust Region Policy Optimization (TRPO) (Schulman et al., 2015), because of its excellent empirical performance, and because it does not require excessive hyperparameter tuning. For more details, we refer the reader to the original paper. To reduce variance in the stochastic gradient estimation, we use a baseline which is also represented as an RNN using GRUs as building blocks. We optionally apply Generalized Advantage Estimation (GAE) (Schulman et al., 2016) to further reduce the variance.

3 EVALUATION

We designed experiments to answer the following questions:

- Can RL^2 learn algorithms that achieve good performance on MDP classes with special structure, relative to existing algorithms tailored to this structure that have been proposed in the literature?
- Can RL^2 scale to high-dimensional tasks?

For the first question, we evaluate RL^2 on two sets of tasks, multi-armed bandits (MAB) and tabular MDPs. These problems have been studied extensively in the reinforcement learning literature, and this body of work includes algorithms with guarantees of asymptotic optimality. We demonstrate that our approach achieves comparable performance to these theoretically justified algorithms. For the second question, we evaluate RL^2 on a vision-based navigation task. Our experiments show that the learned policy makes effective use of the learned visual information and also of short-term information acquired from previous episodes.

3.1 MULTI-ARMED BANDITS

Multi-armed bandit problems are a subset of MDPs where the agent's environment is stateless. Specifically, there are $k$ arms (actions), and at every time step the agent pulls one of the arms, say $i$, and receives a reward drawn from an unknown distribution; our experiments take each arm to be a Bernoulli distribution with parameter $p_i$. The goal is to maximize the total reward obtained over a fixed number of time steps. The key challenge is balancing exploration and exploitation: "exploring" each arm enough times to estimate its distribution ($p_i$), but eventually switching over to "exploitation" of the best arm. Despite the simplicity of multi-armed bandit problems, their study has led to a rich theory and a collection of algorithms with optimality guarantees.

Using RL^2, we can train an RNN policy to solve bandit problems by training it on a given distribution $\rho_M$. If the learning is successful, the resulting policy should be able to perform competitively with the theoretically optimal algorithms. We randomly generated bandit problems by sampling each parameter $p_i$ from the uniform distribution on $[0, 1]$. After training the RNN policy with RL^2, we compared it against the following strategies:

- Random: this is a baseline strategy, where the agent pulls a random arm each time.
- Gittins index (Gittins, 1979): this method gives the Bayes optimal solution in the discounted infinite-horizon case, by computing an index separately for each arm, and taking the arm with the largest index. While this work shows it is sufficient to independently compute an index for each arm (hence avoiding a combinatorial explosion with the number of arms), it does not show how to tractably compute these individual indices exactly. We follow the practical approximations described in Gittins et al. (2011), Chakravorty & Mahajan (2013), and Whittle (1982), and choose the best-performing approximation for each setup.
- UCB1 (Auer, 2002): this method estimates an upper confidence bound, and pulls the arm with the largest value of $\mathrm{ucb}_i(t) = \hat{\mu}_i(t-1) + c\sqrt{2 \log t / T_i(t-1)}$, where $\hat{\mu}_i(t-1)$ is the estimated mean parameter for the $i$th arm, $T_i(t-1)$ is the number of times the $i$th arm has been pulled, and $c$ is a tunable hyperparameter (Audibert & Munos, 2011).
We initialize the statistics with exactly one success and one failure, which corresponds to a Beta(1, 1) prior.
- Thompson sampling (TS) (Thompson, 1933): this is a simple method which, at each time step, samples a list of arm means from the posterior distribution, and chooses the best arm according to this sample. It has been demonstrated to compare favorably to UCB1 empirically (Chapelle & Li, 2011). We also experiment with an optimistic variant (OTS) (May et al., 2012), which samples $N$ times from the posterior, and takes the one with the highest probability. (A sketch of UCB1 and TS is given after this list.)
- ε-Greedy: in this strategy, the agent chooses the arm with the best empirical mean with probability 1 − ε, and chooses a random arm with probability ε. We use the same initialization as UCB1.
- Greedy: this is a special case of ε-Greedy with ε = 0.
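The snippet below is a compact, hedged sketch of two of the strategies above, UCB1 and Thompson sampling, on Bernoulli arms. The Beta(1, 1) initialization matches the statistics initialization described in the text; the simulation loop and variable names are illustrative only.

```python
import numpy as np

def ucb1_pull(successes, failures, t, c=1.0):
    mu_hat = successes / (successes + failures)
    counts = successes + failures            # pull counts, including the prior pseudo-counts
    return int(np.argmax(mu_hat + c * np.sqrt(2.0 * np.log(t) / counts)))

def thompson_pull(successes, failures, rng):
    return int(np.argmax(rng.beta(successes, failures)))  # one posterior sample per arm

k, horizon, rng = 5, 500, np.random.default_rng(0)
p = rng.uniform(size=k)                      # arm means drawn from U[0, 1]
succ, fail = np.ones(k), np.ones(k)          # Beta(1, 1) prior
for t in range(1, horizon + 1):
    i = thompson_pull(succ, fail, rng)       # or: ucb1_pull(succ, fail, t)
    if rng.uniform() < p[i]:
        succ[i] += 1
    else:
        fail[i] += 1
print("total reward:", int(succ.sum() - k))
```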
We found that the learned policy, when executed in test domains,achieved the same level of performance as the Gittins index approach, suggesting that there is roomfor improvement by using better RL algorithms.3.2 T ABULAR MDP SThe bandit problem provides a natural and simple setting to investigate whether the policy learnsto trade off between exploration and exploitation. However, the problem itself involves no sequen-tial decision making, and does not fully characterize the challenges in solving MDPs. Hence, weperform further experiments using randomly generated tabular MDPs, where there is a finite num-ber of possible states and actions—small enough that the transition probability distribution can beexplicitly given as a table. We compare our approach with the following methods:Random: the agent chooses an action uniformly at random for each time step;PSRL (Strens, 2000; Osband et al., 2013): this is a direct generalization of Thompson sam-pling to MDPs, where at the beginning of each episode, we sample an MDP from the pos-terior distribution, and take actions according to the optimal policy for the entire episode.Similarly, we include an optimistic variant (OPSRL), which has also been explored in Os-band & Van Roy (2016).BEB (Kolter & Ng, 2009): this is a model-based optimistic algorithm that adds an explo-ration bonus to (thus far) infrequently visited states and actions.5Under review as a conference paper at ICLR 2017UCRL2 (Jaksch et al., 2010): this algorithm computes, at each iteration, the optimal pol-icy against an optimistic MDP under the current belief, using an extended value iterationprocedure.-Greedy: this algorithm takes actions optimal against the MAP estimate according to thecurrent posterior, which is updated once per episode.Greedy: a special case of -Greedy with = 0.Table 2: Random MDP ResultsSetup Random PSRL OPSRL UCRL2 BEB -Greedy Greedy RL2n= 10 100 :1 138:1 144:1 146:6 150:2 132:8 134:8 156:2n= 25 250 :2 408:8 425:2 424:1 427:8 377:3 368:8 445:7n= 50 499 :7 904:4 930:7 918:9 917:8 823:3 769:3 936:1n= 75 749 :9 1417 :11449:21427:6 1422:6 1293:9 1172:9 1428:8n= 100 999 :4 1939 :51973:91942:1 1935:1 1778:2 1578:5 1913:7The distribution over MDPs is constructed with jSj= 10 ,jAj= 5. The rewards follow a Gaus-sian distribution with unit variance, and the mean parameters are sampled independently fromNormal(1;1). The transitions are sampled from a flat Dirichlet distribution. This constructionmatches the commonly used prior in Bayesian RL methods. We set the horizon for each episode tobeT= 10 , and an episode always starts on the first state.0 1000 5000Iteration01Normalized total rewardn = 10n = 25n = 50n = 75n = 100OPSRLFigure 3: RL2learning curves for tabular MDPs. Performance is normalized such that OPSRLscores 1, and random policy scores 0.The results are summarized in Table 2, and the learning curves are shown in Figure 3. We followthe same evaluation procedure as in the bandit case. We experiment with n2f10;25;50;75;100g.For fewer episodes, our approach surprisingly outperforms existing methods by a large margin. Theadvantage is reversed as nincreases, suggesting that the reinforcement learning problem in the outerloop becomes more challenging to solve. We think that the advantage for small ncomes from theneed for more aggressive exploitation: since there are 140degrees of freedom to estimate in orderto characterize the MDP, and by the 10th episode, we will not have enough samples to form agood estimate of the entire dynamics. 
By directly optimizing the RNN in this setting, our approachshould be able to cope with this shortage of samples, and decides to exploit sooner compared to thereference algorithms.3.3 V ISUAL NAVIGATIONThe previous two tasks both only involve very low-dimensional state spaces. To evaluate the fea-sibility of scaling up RL2, we further experiment with a challenging vision-based task, where the6Under review as a conference paper at ICLR 2017agent is asked to navigate a randomly generated maze to find a randomly placed target2. The agentreceives a +1reward when it reaches the target, 0:001when it hits the wall, and 0:04per timestep to encourage it to reach targets faster. It can interact with the maze for multiple episodes, dur-ing which the maze structure and target position are held fixed. The optimal strategy is to explorethe maze efficiently during the first episode, and after locating the target, act optimally against thecurrent maze and target based on the collected information. An illustration of the task is given inFigure 4.(a) Sample observation (b) Layout of the 55maze in (a) (c) Layout of a 99mazeFigure 4: Visual navigation. The target block is shown in red, and occupies an entire grid in themaze layout.Visual navigation alone is a challenging task for reinforcement learning. The agent only receivesvery sparse rewards during training, and does not have the primitives for efficient exploration at thebeginning of training. It also needs to make efficient use of memory to decide how it should explorethe space, without forgetting about where it has already explored. Previously, Oh et al. (2016) havestudied similar vision-based navigation tasks in Minecraft. However, they use higher-level actionsfor efficient navigation. Similar high-level actions in our task would each require around 5low-levelactions combined in the right way. In contrast, our RL2agent needs to learn these higher-levelactions from scratch.We use a simple training setup, where we use small mazes of size 55, with 2episodes of interac-tion, each with horizon up to 250. Here the size of the maze is measured by the number of grid cellsalong each wall in a discrete representation of the maze. During each trial, we sample 1out of 1000randomly generated configurations of map layout and target positions. During testing, we evaluateon1000 separately generated configurations. In addition, we also study its extrapolation behavioralong two axes, by (1) testing on large mazes of size 99(see Figure 4c) and (2) running the agentfor up to 5episodes in both small and large mazes. For the large maze, we also increase the horizonper episode by 4x due to the increased size of the maze.Table 3: Results for visual navigation. These metrics are computed using the best run among allruns shown in Figure 5. In 3c, we measure the proportion of mazes where the trajectory length inthe second episode does not exceed the trajectory length in the first episode.(a) Average length of successful trajectoriesEpisode Small Large1 52:41:3 180:16:02 39:10:9 151:85:93 42:61:0 169:36:34 43:51:1 162:36:45 43:91:1 169:36:5(b) %SuccessEpisode Small Large1 99:3% 97:1%2 99:6% 96:7%3 99:7% 95:8%4 99:4% 95:6%5 99:6% 96:1%(c) %ImprovedSmall Large91:7% 71:4%2Videos for the task are available at https://goo.gl/rDDBpb .7Under review as a conference paper at ICLR 20170 500 1000 1500 2000 2500 3000 3500Iteration1614121086420Total rewardFigure 5: RL2learning curves for visual navigation. 
Each curve shows a different random initial-ization of the RNN weights (by using a different random seed). Performance varies greatly acrossdifferent initializations.The results are summarized in Table 3, and the learning curves are shown in Figure 5. We observethat there is a significant reduction in trajectory lengths between the first two episodes in both thesmaller and larger mazes, suggesting that the agent has learned how to use information from pastepisodes. It also achieves reasonable extrapolation behavior in further episodes by maintaining itsperformance, although there is a small drop in the rate of success in the larger mazes. We alsoobserve that on larger mazes, the ratio of improved trajectories is lower, likely because the agent hasnot learned how to act optimally in the larger mazes.Still, even on the small mazes, the agent does not learn to perfectly reuse prior information. Anillustration of the agent’s behavior is shown in Figure 6. The intended behavior, which occurs mostfrequently, as shown in 6a and 6b, is that the agent should remember the target’s location, and utilizeit to act optimally in the second episode. However, occasionally the agent forgets about where thetarget was, and continues to explore in the second episode, as shown in 6c and 6d. We believe thatbetter reinforcement learning techniques used as the outer-loop algorithm will improve these resultsin the future.(a) Good behavior, 1stepisode(b) Good behavior, 2ndepisode(c) Bad behavior, 1stepisode(d) Bad behavior, 2ndepisodeFigure 6: Visualization of the agent’s behavior. In each scenario, the agent starts at the center of theblue block, and the goal is to reach anywhere in the red block.4 R ELATED WORKThe concept of using prior experience to speed up reinforcement learning algorithms has been ex-plored in the past in various forms. Earlier studies have investigated automatic tuning of hyper-parameters, such as learning rate and temperature (Ishii et al., 2002; Schweighofer & Doya, 2003),as a form of meta-learning. Wilson et al. (2007) use hierarchical Bayesian methods to maintain aposterior over possible models of dynamics, and apply optimistic Thompson sampling according tothe posterior. Many works in hierarchical reinforcement learning propose to extract reusable skillsfrom previous tasks to speed up exploration in new tasks (Singh, 1992; Perkins et al., 1999). We8Under review as a conference paper at ICLR 2017refer the reader to Taylor & Stone (2009) for a more thorough survey on the multi-task and transferlearning aspects.The formulation of searching for a best-performing algorithm, whose performance is averaged overa given distribution over MDPs, have been investigated in the past in more limited forms (Maeset al., 2011; Castronovo et al., 2012). There, they propose to learn an algorithm to solve multi-armed bandits using program search, where the search space consists of simple formulas composedfrom hand-specified primitives, which needs to be tuned for each specific distribution over MDPs.In comparison, our approach allows for entirely end-to-end training without requiring such domainknowledge.More recently, Fu et al. (2015) propose a model-based approach on top of iLQG with unknowndynamics (Levine & Abbeel, 2014), which uses samples collected from previous tasks to builda neural network prior for the dynamics, and can perform one-shot learning on new, but relatedtasks thanks to reduced sample complexity. 
There has been a growing interest in using deep neuralnetworks for multi-task learning and transfer learning (Parisotto et al., 2015; Rusu et al., 2015;2016a; Devin et al., 2016; Rusu et al., 2016b).In the broader context of machine learning, there has been a lot of interest in one-shot learningfor object classification (Vilalta & Drissi, 2002; Fei-Fei et al., 2006; Larochelle et al., 2008; Lakeet al., 2011; Koch, 2015). Our work draws inspiration from a particular line of work (Younger et al.,2001; Santoro et al., 2016; Vinyals et al., 2016), which formulates meta-learning as an optimizationproblem, and can thus be optimized end-to-end via gradient descent. While these work applies tothe supervised learning setting, our work applies in the more general reinforcement learning setting.Although the reinforcement learning setting is more challenging, the resulting behavior is far richer:our agent must not only learn to exploit existing information, but also learn to explore, a problemthat is usually not a factor in supervised learning. Another line of work (Hochreiter et al., 2001;Younger et al., 2001; Andrychowicz et al., 2016; Li & Malik, 2016) studies meta-learning over theoptimization process. There, the meta-learner makes explicit updates to a parametrized model. Incomparison, we do not use a directly parametrized policy; instead, the recurrent neural networkagent acts as the meta-learner and the resulting policy simultaneously.Our formulation essentially constructs a partially observable MDP (POMDP) which is solved in theouter loop, where the underlying MDP is unobserved by the agent. This reduction of an unknownMDP to a POMDP can be traced back to dual control theory (Feldbaum, 1960), where “dual” refersto the fact that one is controlling both the state and the state estimate. Feldbaum pointed out thatthe solution can in principle be computed with dynamic programming, but doing so is usually im-practical. POMDPs with such structure have also been studied under the name “mixed observabilityMDPs” (Ong et al., 2010). However, the method proposed there suffers from the usual challengesof solving POMDPs in high dimensions.Apart from the various multiple-episode tasks we investigate in this work, previous literature ontraining RNN policies have used similar tasks that require memory to test if long-term dependencycan be learned. Recent examples include the Labyrinth experiment in the A3C paper (Mnih et al.,2016), and the water maze experiment in the Recurrent DDPG paper (Heess et al., 2015a). Althoughthese tasks can be reformulated under the RL2framework, the key difference is that they focus onthe memory aspect instead of the fast RL aspect.5 D ISCUSSIONThis paper suggests a different approach for designing better reinforcement learning algorithms:instead of acting as the designers ourselves, learn the algorithm end-to-end using standard rein-forcement learning techniques. That is, the “fast” RL algorithm is a computation whose state isstored in the RNN activations, and the RNN’s weights are learned by a general-purpose “slow” re-inforcement learning algorithm. Our method, RL2, has demonstrated competence comparable withtheoretically optimal algorithms in small-scale settings. 
We have further shown its potential to scaleto high-dimensional tasks.In the experiments, we have identified opportunities to improve upon RL2: the outer-loop reinforce-ment learning algorithm was shown to be an immediate bottleneck, and we believe that for settingswith extremely long horizons, better architecture may also be required for the policy. Although we9Under review as a conference paper at ICLR 2017have used generic methods and architectures for the outer-loop algorithm and the policy, doing thisalso ignores the underlying episodic structure. We expect algorithms and policy architectures thatexploit the problem structure to significantly boost the performance.ACKNOWLEDGMENTSWe would like to thank our colleagues at Berkeley and OpenAI for insightful discussions. Thisresearch was funded in part by ONR through a PECASE award. Yan Duan was also supported by aBerkeley AI Research lab Fellowship and a Huawei Fellowship. Xi Chen was also supported by aBerkeley AI Research lab Fellowship. We gratefully acknowledge the support of the NSF throughgrant IIS-1619362 and of the ARC through a Laureate Fellowship (FL110100281) and through theARC Centre of Excellence for Mathematical and Statistical Frontiers.
SJwrBiWVl
Review
3: Clear rejection
The paper proposes to use RL methods on sequences of episodes instead of single episodes. The underlying idea is the problem of 'learning to learn', and the experimental protocol proposed here allows one to understand how a neural network-based RL model can keep memory of past episodes in order to improve its ability to solve a particular problem. Experiments are made on bandit problems, but also on maze problems, and show the interesting properties of such an approach, particularly on the maze problem, where the agent seems to learn to first explore the maze, and then to exploit its knowledge to quickly find the goal. The paper is based on a very simple and natural idea, which is actually a good point. I really like the idea, and also the experiment on the maze, which is very interesting. Experiments on the bandit problems are less interesting, since meta-learning models have already been proposed for the bandit problem with interesting results, and the proposed model does not really bring additional information. My main concern is that the paper never clearly formally defines the problem that it attempts to solve. So, between the intuitive idea and the experimental results, the reader does not understand what exactly the learning problem is, what its impact is, and/or to which concrete applications it belongs. From my point of view, the article clearly lacks maturity and does not yet bring a strong contribution to the field.

Good:
* Interesting experimental setting
* Simple and natural idea
* Nice maze experiments and model behaviour

Bad:
* No real problem defined, only an intuition is given. Is it really useful? For which problems? What is the performance criterion one wants to optimize? ...
* Bandit experiments do not really bring relevant information
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title RL^2: Fast Reinforcement Learning via Slow Reinforcement Learning ### Paper Abstract Deep reinforcement learning (deep RL) has been successful in learning sophisticated behaviors automatically; however, the learning process requires a huge number of trials. In contrast, animals can learn new tasks in just a few trials, benefiting from their prior knowledge about the world. This paper seeks to bridge this gap. Rather than designing a “fast” reinforcement learning algorithm, we propose to represent it as a recurrent neural network (RNN) and learn it from data. In our proposed method, RL^2, the algorithm is encoded in the weights of the RNN, which are learned slowly through a general-purpose (“slow”) RL algorithm. The RNN receives all information a typical RL algorithm would receive, including observations, actions, rewards, and termination flags; and it retains its state across episodes in a given Markov Decision Process (MDP). The activations of the RNN store the state of the “fast” RL algorithm on the current (previously unseen) MDP. We evaluate RL^2 experimentally on both small-scale and large-scale problems. On the small-scale side, we train it to solve randomly generated multi-arm bandit problems and finite MDPs. After RL^2 is trained, its performance on new MDPs is close to human-designed algorithms with optimality guarantees. On the large-scale side, we test RL^2 on a vision-based navigation task and show that it scales up to high-dimensional problems. ### Paper Keywords ["Reinforcement Learning", "Deep learning"] ### Paper Content ABSTRACTDeep reinforcement learning (deep RL) has been successful in learning sophis-ticated behaviors automatically; however, the learning process requires a hugenumber of trials. In contrast, animals can learn new tasks in just a few trials, bene-fiting from their prior knowledge about the world. This paper seeks to bridge thisgap. Rather than designing a “fast” reinforcement learning algorithm, we proposeto represent it as a recurrent neural network (RNN) and learn it from data. Inour proposed method, RL2, the algorithm is encoded in the weights of the RNN,which are learned slowly through a general-purpose (“slow”) RL algorithm. TheRNN receives all information a typical RL algorithm would receive, including ob-servations, actions, rewards, and termination flags; and it retains its state acrossepisodes in a given Markov Decision Process (MDP). The activations of the RNNstore the state of the “fast” RL algorithm on the current (previously unseen) MDP.We evaluate RL2experimentally on both small-scale and large-scale problems.On the small-scale side, we train it to solve randomly generated multi-armed ban-dit problems and finite MDPs. After RL2is trained, its performance on new MDPsis close to human-designed algorithms with optimality guarantees. On the large-scale side, we test RL2on a vision-based navigation task and show that it scalesup to high-dimensional problems.1 I NTRODUCTIONIn recent years, deep reinforcement learning has achieved many impressive results, including play-ing Atari games from raw pixels (Guo et al., 2014; Mnih et al., 2015; Schulman et al., 2015), andacquiring advanced manipulation and locomotion skills (Levine et al., 2016; Lillicrap et al., 2015;Watter et al., 2015; Heess et al., 2015b; Schulman et al., 2015; 2016). 
However, many of the suc-cesses come at the expense of high sample complexity. For example, the state-of-the-art Atari resultsrequire tens of thousands of episodes of experience (Mnih et al., 2015) per game. To master a game,one would need to spend nearly 40 days playing it with no rest. In contrast, humans and animals arecapable of learning a new task in a very small number of trials. Continuing the previous example,the human player in Mnih et al. (2015) only needed 2 hours of experience before mastering a game.We argue that the reason for this sharp contrast is largely due to the lack of a good prior, whichresults in these deep RL agents needing to rebuild their knowledge about the world from scratch.Although Bayesian reinforcement learning provides a solid framework for incorporating priorknowledge into the learning process (Strens, 2000; Ghavamzadeh et al., 2015; Kolter & Ng, 2009),exact computation of the Bayesian update is intractable in all but the simplest cases. Thus, practi-cal reinforcement learning algorithms often incorporate a mixture of Bayesian and domain-specificideas to bring down sample complexity and computational burden. Notable examples include guidedpolicy search with unknown dynamics (Levine & Abbeel, 2014) and PILCO (Deisenroth & Ras-mussen, 2011). These methods can learn a task using a few minutes to a few hours of real experience,compared to days or even weeks required by previous methods (Schulman et al., 2015; 2016; Lilli-crap et al., 2015). However, these methods tend to make assumptions about the environment (e.g.,instrumentation for access to the state at learning time), or become computationally intractable inhigh-dimensional settings (Wahlstr ̈om et al., 2015).1Under review as a conference paper at ICLR 2017Rather than hand-designing domain-specific reinforcement learning algorithms, we take a differentapproach in this paper: we view the learning process of the agent itself as an objective, which canbe optimized using standard reinforcement learning algorithms. The objective is averaged acrossall possible MDPs according to a specific distribution, which reflects the prior that we would liketo distill into the agent. We structure the agent as a recurrent neural network, which receives pastrewards, actions, and termination flags as inputs in addition to the normally received observations.Furthermore, its internal state is preserved across episodes, so that it has the capacity to performlearning in its own hidden activations. The learned agent thus also acts as the learning algorithm,and can adapt to the task at hand when deployed.We evaluate this approach on two sets of classical problems, multi-armed bandits and tabular MDPs.These problems have been extensively studied, and there exist algorithms that achieve asymptoti-cally optimal performance. We demonstrate that our method, named RL2, can achieve performancecomparable with these theoretically justified algorithms. Next, we evaluate RL2on a vision-basednavigation task implemented using the ViZDoom environment (Kempka et al., 2016), showing thatRL2can also scale to high-dimensional problems.2 M ETHOD2.1 P RELIMINARIESWe define a discrete-time finite-horizon discounted Markov decision process (MDP) by a tuple M=(S;A;P;r; 0;;T ), in whichSis a state set,Aan action set,P:SAS! R+a transitionprobability distribution, r:SA! [Rmax;Rmax]a bounded reward function, 0:S!R+aninitial state distribution, 2[0;1]a discount factor, and Tthe horizon. In policy search methods,we typically optimize a stochastic policy :SA! 
R+parametrized by . The objective isto maximize its expected discounted return, () =E[PTt=0tr(st;at)], where= (s0;a0;:::)denotes the whole trajectory, s00(s0),at(atjst), andst+1P(st+1jst;at).2.2 F ORMULATIONWe now describe our formulation, which casts learning an RL algorithm as a reinforcement learningproblem, and hence the name RL2.We assume knowledge of a set of MDPs, denoted by M, and a distribution over them: M:M!R+. We only need to sample from this distribution. We use nto denote the total number of episodesallowed to spend with a specific MDP. We define a trial to be such a series of episodes of interactionwith a fixed MDP.Episode 1Episode 2s0s1s2h0h1a0r0,d0h2h3s3a1r1,d1a2r2,d2s0s1s2h4h5a0r0,d0h6a1r1,d1AgentMDP 1Episode 1s0s1...h0h1a0r0,d0...a1AgentMDP 2.........Trial 1Trial 2Figure 1: Procedure of agent-environment interactionThis process of interaction between an agent and the environment is illustrated in Figure 1. Here,each trial happens to consist of two episodes, hence n= 2. For each trial, a separate MDP isdrawn from M, and for each episode, a fresh s0is drawn from the initial state distribution specificto the corresponding MDP. Upon receiving an action atproduced by the agent, the environmentcomputes reward rt, steps forward, and computes the next state st+1. If the episode has terminated,it sets termination flag dtto1, which otherwise defaults to 0. Together, the next state st+1, action2Under review as a conference paper at ICLR 2017at, rewardrt, and termination flag dt, are concatenated to form the input to the policy1, which,conditioned on the hidden state ht+1, generates the next hidden state ht+2and actionat+1. At theend of an episode, the hidden state of the policy is preserved to the next episode, but not preservedbetween trials.The objective under this formulation is to maximize the expected total discounted reward accumu-lated during a single trial rather than a single episode. Maximizing this objective is equivalent tominimizing the cumulative pseudo-regret (Bubeck & Cesa-Bianchi, 2012). Since the underlyingMDP changes across trials, as long as different strategies are required for different MDPs, the agentmust act differently according to its belief over which MDP it is currently in. Hence, the agent isforced to integrate all the information it has received, including past actions, rewards, and termi-nation flags, and adapt its strategy continually. Hence, we have set up an end-to-end optimizationprocess, where the agent is encouraged to learn a “fast” reinforcement learning algorithm.For clarity of exposition, we have defined the “inner” problem (of which the agent sees neach trials)to be an MDP rather than a POMDP. However, the method can also be applied in the partially-observed setting without any conceptual changes. In the partially observed setting, the agent isfaced with a sequence of POMDPs, and it receives an observation otinstead of state stat timet.The visual navigation experiment in Section 3.3, is actually an instance of the this POMDP setting.2.3 P OLICY REPRESENTATIONWe represent the policy as a general recurrent neural network. Each timestep, it receives the tuple(s;a;r;d )as input, which is embedded using a function (s;a;r;d )and provided as input to anRNN. To alleviate the difficulty of training RNNs due to vanishing and exploding gradients (Bengioet al., 1994), we use Gated Recurrent Units (GRUs) (Cho et al., 2014) which have been demonstratedto have good empirical performance (Chung et al., 2014; J ́ozefowicz et al., 2015). 
The output of theGRU is fed to a fully connected layer followed by a softmax function, which forms the distributionover actions.We have also experimented with alternative architectures which explicitly reset part of the hiddenstate each episode of the sampled MDP, but we did not find any improvement over the simple archi-tecture described above.2.4 P OLICY OPTIMIZATIONAfter formulating the task as a reinforcement learning problem, we can readily use standard off-the-shelf RL algorithms to optimize the policy. We use a first-order implementation of Trust RegionPolicy Optimization (TRPO) (Schulman et al., 2015), because of its excellent empirical perfor-mance, and because it does not require excessive hyperparameter tuning. For more details, we referthe reader to the original paper. To reduce variance in the stochastic gradient estimation, we use abaseline which is also represented as an RNN using GRUs as building blocks. We optionally applyGeneralized Advantage Estimation (GAE) (Schulman et al., 2016) to further reduce the variance.3 E VALUATIONWe designed experiments to answer the following questions:Can RL2learn algorithms that achieve good performance on MDP classes with specialstructure, relative to existing algorithms tailored to this structure that have been proposedin the literature?Can RL2scale to high-dimensional tasks?For the first question, we evaluate RL2on two sets of tasks, multi-armed bandits (MAB) and tabularMDPs. These problems have been studied extensively in the reinforcement learning literature, andthis body of work includes algorithms with guarantees of asymptotic optimality. We demonstratethat our approach achieves comparable performance to these theoretically justified algorithms.1To make sure that the inputs have a consistent dimension, we use placeholder values for the initial input tothe policy.3Under review as a conference paper at ICLR 2017For the second question, we evaluate RL2on a vision-based navigation task. Our experiments showthat the learned policy makes effective use of the learned visual information and also short-terminformation acquired from previous episodes.3.1 M ULTI -ARMED BANDITSMulti-armed bandit problems are a subset of MDPs where the agent’s environment is stateless.Specifically, there are karms (actions), and at every time step, the agent pulls one of the arms, sayi, and receives a reward drawn from an unknown distribution: our experiments take each arm tobe a Bernoulli distribution with parameter pi. The goal is to maximize the total reward obtainedover a fixed number of time steps. The key challenge is balancing exploration and exploitation—“exploring” each arm enough times to estimate its distribution ( pi), but eventually switching over to“exploitation” of the best arm. Despite the simplicity of multi-arm bandit problems, their study hasled to a rich theory and a collection of algorithms with optimality guarantees.Using RL2, we can train an RNN policy to solve bandit problems by training it on a given distributionM. If the learning is successful, the resulting policy should be able to perform competitively withthe theoretically optimal algorithms. We randomly generated bandit problems by sampling eachparameterpifrom the uniform distribution on [0;1]. 
After training the RNN policy with RL2, wecompared it against the following strategies:Random: this is a baseline strategy, where the agent pulls a random arm each time.Gittins index (Gittins, 1979): this method gives the Bayes optimal solution in the dis-counted infinite-horizon case, by computing an index separately for each arm, and takingthe arm with the largest index. While this work shows it is sufficient to independently com-pute an index for each arm (hence avoiding combinatorial explosion with the number ofarms), it doesn’t show how to tractably compute these individual indices exactly. We fol-low the practical approximations described in Gittins et al. (2011), Chakravorty & Mahajan(2013), and Whittle (1982), and choose the best-performing approximation for each setup.UCB1 (Auer, 2002): this method estimates an upper-confidence bound, and pulls the armwith the largest value of ucbi(t) = ^i(t1)+cq2 logtTi(t1), where ^i(t1)is the estimatedmean parameter for the ith arm,Ti(t1)is the number of times the ith arm has been pulled,andcis a tunable hyperparameter (Audibert & Munos, 2011). We initialize the statisticswith exactly one success and one failure, which corresponds to a Beta(1;1)prior.Thompson sampling (TS) (Thompson, 1933): this is a simple method which, at each timestep, samples a list of arm means from the posterior distribution, and choose the best armaccording to this sample. It has been demonstrated to compare favorably to UCB1 empir-ically (Chapelle & Li, 2011). We also experiment with an optimistic variant (OTS) (Mayet al., 2012), which samples Ntimes from the posterior, and takes the one with the highestprobability.-Greedy: in this strategy, the agent chooses the arm with the best empirical mean withprobability 1, and chooses a random arm with probability . We use the same initial-ization as UCB1.Greedy: this is a special case of -Greedy with = 0.The Bayesian methods, Gittins index and Thompson sampling, take advantage of the distributionM; and we provide these methods with the true distribution. For each method with hyperparame-ters, we maximize the score with a separate grid search for each of the experimental settings. Thehyperparameters used for TRPO are shown in the appendix.The results are summarized in Table 1. Learning curves for various settings are shown in Figure 2.We observe that our approach achieves performance that is almost as good as the the reference meth-ods, which were (human) designed specifically to perform well on multi-armed bandit problems. Itis worth noting that the published algorithms are mostly designed to minimize asymptotic regret(rather than finite horizon regret), hence there tends to be a little bit of room to outperform them inthe finite horizon settings.4Under review as a conference paper at ICLR 2017Table 1: MAB Results. Each grid cell records the total reward averaged over 1000 different instancesof the bandit problem. We consider k2f5;10;50gbandits and n2f10;100;500gepisodes ofinteraction. 
We highlight the best-performing algorithms in each setup according to the computed mean, and we also highlight the other algorithms in that row whose performance is not significantly different from the best one (determined by a one-sided t-test with p = 0.05).

Setup          | Random | Gittins | TS    | OTS   | UCB1  | ε-Greedy | Greedy | RL2
n= 10,  k= 5   | 5.0    | 6.6     | 5.7   | 6.5   | 6.7   | 6.6      | 6.6    | 6.7
n= 10,  k= 10  | 5.0    | 6.6     | 5.5   | 6.2   | 6.7   | 6.6      | 6.6    | 6.7
n= 10,  k= 50  | 5.1    | 6.5     | 5.2   | 5.5   | 6.6   | 6.5      | 6.5    | 6.8
n= 100, k= 5   | 49.9   | 78.3    | 74.7  | 77.9  | 78.0  | 75.4     | 74.8   | 78.7
n= 100, k= 10  | 49.9   | 82.8    | 76.7  | 81.4  | 82.4  | 77.4     | 77.1   | 83.5
n= 100, k= 50  | 49.8   | 85.2    | 64.5  | 67.7  | 84.3  | 78.3     | 78.0   | 84.9
n= 500, k= 5   | 249.8  | 405.8   | 402.0 | 406.7 | 405.8 | 388.2    | 380.6  | 401.6
n= 500, k= 10  | 249.0  | 437.8   | 429.5 | 438.9 | 437.1 | 408.0    | 395.0  | 432.5
n= 500, k= 50  | 249.6  | 463.7   | 427.2 | 437.6 | 457.6 | 413.6    | 402.8  | 438.9

Figure 2: RL2 learning curves for multi-armed bandits, with panels (a) n = 10, (b) n = 100, and (c) n = 500, each showing curves for k = 5, 10, 50. Performance is normalized such that Gittins index scores 1, and random policy scores 0.

We observe that there is a noticeable gap between Gittins index and RL2 in the most challenging scenario, with 50 arms and 500 episodes. This raises the question whether better architectures or better (slow) RL algorithms should be explored. To determine the bottleneck, we trained the same policy architecture using supervised learning, using the trajectories generated by the Gittins index approach as training data. We found that the learned policy, when executed in test domains, achieved the same level of performance as the Gittins index approach, suggesting that there is room for improvement by using better RL algorithms.

3.2 TABULAR MDPS
The bandit problem provides a natural and simple setting to investigate whether the policy learns to trade off between exploration and exploitation. However, the problem itself involves no sequential decision making, and does not fully characterize the challenges in solving MDPs. Hence, we perform further experiments using randomly generated tabular MDPs, where there is a finite number of possible states and actions—small enough that the transition probability distribution can be explicitly given as a table.
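For concreteness, the MDP prior described after the method list below (|S| = 10, |A| = 5, flat Dirichlet transitions, reward means from Normal(1, 1), horizon T = 10) can be sampled as in the following sketch; the code and its names are our illustration, not the authors':

```python
import numpy as np

def sample_tabular_mdp(n_states=10, n_actions=5, rng=None):
    """Draw one MDP from the prior described below: flat Dirichlet transitions
    and Gaussian rewards with unit variance and means from Normal(1, 1)."""
    rng = rng if rng is not None else np.random.default_rng()
    # P[s, a] is a probability distribution over next states
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
    reward_mean = rng.normal(1.0, 1.0, size=(n_states, n_actions))
    return P, reward_mean

def step(P, reward_mean, s, a, rng):
    """One transition: sample s' from P[s, a] and r from Normal(mean, 1)."""
    s_next = int(rng.choice(P.shape[0], p=P[s, a]))
    r = rng.normal(reward_mean[s, a], 1.0)
    return s_next, r

rng = np.random.default_rng(0)
P, R = sample_tabular_mdp(rng=rng)
s, ep_reward = 0, 0.0          # episodes always start on the first state
for _ in range(10):            # horizon T = 10, here under a random policy
    s, r = step(P, R, s, int(rng.integers(5)), rng)
    ep_reward += r
```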
We compare our approach with the following methods:
- Random: the agent chooses an action uniformly at random for each time step;
- PSRL (Strens, 2000; Osband et al., 2013): this is a direct generalization of Thompson sampling to MDPs, where at the beginning of each episode, we sample an MDP from the posterior distribution, and take actions according to the optimal policy for the entire episode. Similarly, we include an optimistic variant (OPSRL), which has also been explored in Osband & Van Roy (2016).
- BEB (Kolter & Ng, 2009): this is a model-based optimistic algorithm that adds an exploration bonus to (thus far) infrequently visited states and actions.
- UCRL2 (Jaksch et al., 2010): this algorithm computes, at each iteration, the optimal policy against an optimistic MDP under the current belief, using an extended value iteration procedure.
- ε-Greedy: this algorithm takes actions optimal against the MAP estimate according to the current posterior, which is updated once per episode.
- Greedy: a special case of ε-Greedy with ε = 0.

Table 2: Random MDP Results

Setup  | Random | PSRL   | OPSRL  | UCRL2  | BEB    | ε-Greedy | Greedy | RL2
n= 10  | 100.1  | 138.1  | 144.1  | 146.6  | 150.2  | 132.8    | 134.8  | 156.2
n= 25  | 250.2  | 408.8  | 425.2  | 424.1  | 427.8  | 377.3    | 368.8  | 445.7
n= 50  | 499.7  | 904.4  | 930.7  | 918.9  | 917.8  | 823.3    | 769.3  | 936.1
n= 75  | 749.9  | 1417.1 | 1449.2 | 1427.6 | 1422.6 | 1293.9   | 1172.9 | 1428.8
n= 100 | 999.4  | 1939.5 | 1973.9 | 1942.1 | 1935.1 | 1778.2   | 1578.5 | 1913.7

The distribution over MDPs is constructed with |S| = 10, |A| = 5. The rewards follow a Gaussian distribution with unit variance, and the mean parameters are sampled independently from Normal(1, 1). The transitions are sampled from a flat Dirichlet distribution. This construction matches the commonly used prior in Bayesian RL methods. We set the horizon for each episode to be T = 10, and an episode always starts on the first state.

Figure 3: RL2 learning curves for tabular MDPs, with one curve per n ∈ {10, 25, 50, 75, 100}. Performance is normalized such that OPSRL scores 1, and random policy scores 0.

The results are summarized in Table 2, and the learning curves are shown in Figure 3. We follow the same evaluation procedure as in the bandit case. We experiment with n ∈ {10, 25, 50, 75, 100}. For fewer episodes, our approach surprisingly outperforms existing methods by a large margin. The advantage is reversed as n increases, suggesting that the reinforcement learning problem in the outer loop becomes more challenging to solve. We think that the advantage for small n comes from the need for more aggressive exploitation: since there are 140 degrees of freedom to estimate in order to characterize the MDP, by the 10th episode we will not have enough samples to form a good estimate of the entire dynamics. By directly optimizing the RNN in this setting, our approach should be able to cope with this shortage of samples, and decides to exploit sooner compared to the reference algorithms.

3.3 VISUAL NAVIGATION
The previous two tasks both only involve very low-dimensional state spaces. To evaluate the feasibility of scaling up RL2, we further experiment with a challenging vision-based task, where the agent is asked to navigate a randomly generated maze to find a randomly placed target.² The agent receives a +1 reward when it reaches the target, −0.001 when it hits the wall, and −0.04 per time step to encourage it to reach targets faster.
It can interact with the maze for multiple episodes, during which the maze structure and target position are held fixed. The optimal strategy is to explore the maze efficiently during the first episode, and after locating the target, act optimally against the current maze and target based on the collected information. An illustration of the task is given in Figure 4.

Figure 4: Visual navigation, with panels (a) sample observation, (b) layout of the 5×5 maze in (a), and (c) layout of a 9×9 maze. The target block is shown in red, and occupies an entire grid in the maze layout.

Visual navigation alone is a challenging task for reinforcement learning. The agent only receives very sparse rewards during training, and does not have the primitives for efficient exploration at the beginning of training. It also needs to make efficient use of memory to decide how it should explore the space, without forgetting about where it has already explored. Previously, Oh et al. (2016) have studied similar vision-based navigation tasks in Minecraft. However, they use higher-level actions for efficient navigation. Similar high-level actions in our task would each require around 5 low-level actions combined in the right way. In contrast, our RL2 agent needs to learn these higher-level actions from scratch.
We use a simple training setup, where we use small mazes of size 5×5, with 2 episodes of interaction, each with horizon up to 250. Here the size of the maze is measured by the number of grid cells along each wall in a discrete representation of the maze. During each trial, we sample 1 out of 1000 randomly generated configurations of map layout and target positions. During testing, we evaluate on 1000 separately generated configurations. In addition, we also study its extrapolation behavior along two axes, by (1) testing on large mazes of size 9×9 (see Figure 4c) and (2) running the agent for up to 5 episodes in both small and large mazes. For the large maze, we also increase the horizon per episode by 4x due to the increased size of the maze.

Table 3: Results for visual navigation. These metrics are computed using the best run among all runs shown in Figure 5. In 3c, we measure the proportion of mazes where the trajectory length in the second episode does not exceed the trajectory length in the first episode.

(a) Average length of successful trajectories
Episode | Small      | Large
1       | 52.4 ± 1.3 | 180.1 ± 6.0
2       | 39.1 ± 0.9 | 151.8 ± 5.9
3       | 42.6 ± 1.0 | 169.3 ± 6.3
4       | 43.5 ± 1.1 | 162.3 ± 6.4
5       | 43.9 ± 1.1 | 169.3 ± 6.5

(b) %Success
Episode | Small | Large
1       | 99.3% | 97.1%
2       | 99.6% | 96.7%
3       | 99.7% | 95.8%
4       | 99.4% | 95.6%
5       | 99.6% | 96.1%

(c) %Improved
Small | Large
91.7% | 71.4%

²Videos for the task are available at https://goo.gl/rDDBpb.

Figure 5: RL2 learning curves for visual navigation. Each curve shows a different random initialization of the RNN weights (by using a different random seed). Performance varies greatly across different initializations.

The results are summarized in Table 3, and the learning curves are shown in Figure 5. We observe that there is a significant reduction in trajectory lengths between the first two episodes in both the smaller and larger mazes, suggesting that the agent has learned how to use information from past episodes. It also achieves reasonable extrapolation behavior in further episodes by maintaining its performance, although there is a small drop in the rate of success in the larger mazes.
We also observe that on larger mazes, the ratio of improved trajectories is lower, likely because the agent has not learned how to act optimally in the larger mazes.
Still, even on the small mazes, the agent does not learn to perfectly reuse prior information. An illustration of the agent's behavior is shown in Figure 6. The intended behavior, which occurs most frequently, as shown in 6a and 6b, is that the agent should remember the target's location, and utilize it to act optimally in the second episode. However, occasionally the agent forgets about where the target was, and continues to explore in the second episode, as shown in 6c and 6d. We believe that better reinforcement learning techniques used as the outer-loop algorithm will improve these results in the future.

Figure 6: Visualization of the agent's behavior, with panels (a) good behavior, 1st episode, (b) good behavior, 2nd episode, (c) bad behavior, 1st episode, and (d) bad behavior, 2nd episode. In each scenario, the agent starts at the center of the blue block, and the goal is to reach anywhere in the red block.

4 RELATED WORK
The concept of using prior experience to speed up reinforcement learning algorithms has been explored in the past in various forms. Earlier studies have investigated automatic tuning of hyperparameters, such as learning rate and temperature (Ishii et al., 2002; Schweighofer & Doya, 2003), as a form of meta-learning. Wilson et al. (2007) use hierarchical Bayesian methods to maintain a posterior over possible models of dynamics, and apply optimistic Thompson sampling according to the posterior. Many works in hierarchical reinforcement learning propose to extract reusable skills from previous tasks to speed up exploration in new tasks (Singh, 1992; Perkins et al., 1999). We refer the reader to Taylor & Stone (2009) for a more thorough survey on the multi-task and transfer learning aspects.
The formulation of searching for a best-performing algorithm, whose performance is averaged over a given distribution over MDPs, has been investigated in the past in more limited forms (Maes et al., 2011; Castronovo et al., 2012). There, they propose to learn an algorithm to solve multi-armed bandits using program search, where the search space consists of simple formulas composed from hand-specified primitives, which need to be tuned for each specific distribution over MDPs. In comparison, our approach allows for entirely end-to-end training without requiring such domain knowledge.
More recently, Fu et al. (2015) propose a model-based approach on top of iLQG with unknown dynamics (Levine & Abbeel, 2014), which uses samples collected from previous tasks to build a neural network prior for the dynamics, and can perform one-shot learning on new, but related tasks thanks to reduced sample complexity. There has been a growing interest in using deep neural networks for multi-task learning and transfer learning (Parisotto et al., 2015; Rusu et al., 2015; 2016a; Devin et al., 2016; Rusu et al., 2016b).
In the broader context of machine learning, there has been a lot of interest in one-shot learning for object classification (Vilalta & Drissi, 2002; Fei-Fei et al., 2006; Larochelle et al., 2008; Lake et al., 2011; Koch, 2015). Our work draws inspiration from a particular line of work (Younger et al., 2001; Santoro et al., 2016; Vinyals et al., 2016), which formulates meta-learning as an optimization problem, and can thus be optimized end-to-end via gradient descent.
While these works apply to the supervised learning setting, our work applies in the more general reinforcement learning setting. Although the reinforcement learning setting is more challenging, the resulting behavior is far richer: our agent must not only learn to exploit existing information, but also learn to explore, a problem that is usually not a factor in supervised learning. Another line of work (Hochreiter et al., 2001; Younger et al., 2001; Andrychowicz et al., 2016; Li & Malik, 2016) studies meta-learning over the optimization process. There, the meta-learner makes explicit updates to a parametrized model. In comparison, we do not use a directly parametrized policy; instead, the recurrent neural network agent acts as the meta-learner and the resulting policy simultaneously.
Our formulation essentially constructs a partially observable MDP (POMDP) which is solved in the outer loop, where the underlying MDP is unobserved by the agent. This reduction of an unknown MDP to a POMDP can be traced back to dual control theory (Feldbaum, 1960), where "dual" refers to the fact that one is controlling both the state and the state estimate. Feldbaum pointed out that the solution can in principle be computed with dynamic programming, but doing so is usually impractical. POMDPs with such structure have also been studied under the name "mixed observability MDPs" (Ong et al., 2010). However, the method proposed there suffers from the usual challenges of solving POMDPs in high dimensions.
Apart from the various multiple-episode tasks we investigate in this work, previous literature on training RNN policies has used similar tasks that require memory to test if long-term dependency can be learned. Recent examples include the Labyrinth experiment in the A3C paper (Mnih et al., 2016), and the water maze experiment in the Recurrent DDPG paper (Heess et al., 2015a). Although these tasks can be reformulated under the RL2 framework, the key difference is that they focus on the memory aspect instead of the fast RL aspect.

5 DISCUSSION
This paper suggests a different approach for designing better reinforcement learning algorithms: instead of acting as the designers ourselves, learn the algorithm end-to-end using standard reinforcement learning techniques. That is, the "fast" RL algorithm is a computation whose state is stored in the RNN activations, and the RNN's weights are learned by a general-purpose "slow" reinforcement learning algorithm. Our method, RL2, has demonstrated competence comparable with theoretically optimal algorithms in small-scale settings. We have further shown its potential to scale to high-dimensional tasks.
In the experiments, we have identified opportunities to improve upon RL2: the outer-loop reinforcement learning algorithm was shown to be an immediate bottleneck, and we believe that for settings with extremely long horizons, better architecture may also be required for the policy. Although we have used generic methods and architectures for the outer-loop algorithm and the policy, doing this also ignores the underlying episodic structure. We expect algorithms and policy architectures that exploit the problem structure to significantly boost the performance.

ACKNOWLEDGMENTS
We would like to thank our colleagues at Berkeley and OpenAI for insightful discussions. This research was funded in part by ONR through a PECASE award. Yan Duan was also supported by a Berkeley AI Research lab Fellowship and a Huawei Fellowship.
Xi Chen was also supported by a Berkeley AI Research lab Fellowship. We gratefully acknowledge the support of the NSF through grant IIS-1619362 and of the ARC through a Laureate Fellowship (FL110100281) and through the ARC Centre of Excellence for Mathematical and Statistical Frontiers.<|im_end|> <|im_start|>assistant ### Review Title Review ### Review Text The paper proposes to use RL methods on sequences of episodes instead of single episodes. The underlying idea is the problem of 'learning to learn', and the experimental protocol proposed here allows one to understand how a neural network-based RL model can keep memory of past episodes in order to improve its ability to solve a particular problem. Experiments are made on bandit problems, but also on maze problems, and show the interesting properties of such an approach, particularly on the maze problem where the agent seems to learn to first explore the maze, and then to exploit its knowledge to quickly find the goal. The paper is based on a very simple and natural idea, which is actually a good point. I really like the idea, and also the experiment on the maze, which is very interesting. Experiments on bandit problems are less interesting since meta-learning models have already been proposed in the bandit setting with interesting results, and the proposed model does not really bring additional information. My main concern is that the paper never clearly formally defines the problem that it attempts to solve. So, between the intuitive idea and the experimental results, the reader does not understand what exactly the learning problem is, what its impact is, and/or to which concrete application it belongs. From my point of view, the article clearly lacks maturity and does not yet bring a strong contribution to the field. Good: * Interesting experimental setting * Simple and natural idea * Nice maze experiments and model behaviour Bad: * No real problem defined, only an intuition is given. Is it really useful? For which problems? What is the performance criterion one wants to optimize? ... * Bandit experiments do not really bring relevant information ### Review Rating 3: Clear rejection ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
-5W5OBfFlwX
ICLR.cc/2021/Conference
2021
Regret Bounds and Reinforcement Learning Exploration of EXP-based Algorithms
["Mengfan Xu", "Diego Klabjan"]
EXP-based algorithms are often used for exploration in multi-armed bandit. We revisit the EXP3.P algorithm and establish both the lower and upper bounds of regret in the Gaussian multi-armed bandit setting, as well as a more general distribution option. The analyses do not require bounded rewards compared to classical regret assumptions. We also extend EXP4 from multi-armed bandit to reinforcement learning to incentivize exploration by multiple agents. The resulting algorithm has been tested on hard-to-explore games and it shows an improvement on exploration compared to state-of-the-art.
["exploration", "algorithms", "bounds", "reinforcement", "bandit", "algorithm", "lower", "upper bounds", "regret", "gaussian"]
ABSTRACT
EXP-based algorithms are often used for exploration in multi-armed bandit. We revisit the EXP3.P algorithm and establish both the lower and upper bounds of regret in the Gaussian multi-armed bandit setting, as well as a more general distribution option. The analyses do not require bounded rewards compared to classical regret assumptions. We also extend EXP4 from multi-armed bandit to reinforcement learning to incentivize exploration by multiple agents. The resulting algorithm has been tested on hard-to-explore games and it shows an improvement on exploration compared to state-of-the-art.

1 INTRODUCTION
Multi-armed bandit (MAB) is to maximize the cumulative reward of a player throughout a bandit game by choosing different arms at each time step. It is also equivalent to minimizing the regret, defined as the difference between the best rewards that can be achieved and the actual reward gained by the player. Formally, given time horizon $T$, in time step $t \le T$ the player chooses one arm $a_t$ among $K$ arms, receives $r_{t,a_t}$ among rewards $r_t = (r_{t1}, r_{t2}, \ldots, r_{tK})$, and maximizes the total reward $\sum_{t=1}^T r_{t,a_t}$ or minimizes the regret. Computationally efficient and with abundant theoretical analyses are the EXP-type MAB algorithms. In EXP3.P, each arm has a trust coefficient (weight). The player samples each arm with probability being the sum of its normalized weight and a bias term, receives the reward of the sampled arm, and exponentially updates the weights based on the corresponding reward estimates. It achieves regret of the order $O(\sqrt{T})$ in a high probability sense. In EXP4, there is any number of experts. Each has a sample rule over actions and a weight. The player samples according to the weighted average of the experts' sample rules and updates the weights respectively.
Contextual bandit is a variant of MAB obtained by adding a context or state space $S$. At time step $t$, the player has context $s_t \in S$ with $s_{1:T} = (s_1, s_2, \ldots, s_T)$ being independent. Rewards $r_t$ follow $F(\mu(s_t))$ where $F$ is any distribution and $\mu(s_t)$ is the mean vector that depends on state $s_t$. Reinforcement Learning (RL) generalizes contextual bandit, where state and reward transitions follow a Markov Decision Process (MDP) represented by transition kernel $P(s_{t+1}, r_t \mid a_t, s_t)$. A key challenge in RL is the trade-off between exploration and exploitation. Exploration is to encourage the player to try new arms in MAB or new actions in RL to understand the game better. It helps to plan for the future, but with the sacrifice of potentially lowering the current reward. Exploitation aims to exploit currently known states and arms to maximize the current reward, but it potentially prevents the player from gaining more information to increase local reward. To maximize the cumulative reward, the player needs to know the game by exploration, while guaranteeing current reward by exploitation.
How to incentivize exploration in RL has been a main focus in RL. Since RL is built on MAB, it is natural to extend MAB techniques to RL, and UCB is such a success. UCB (Auer et al. (2002a)) motivates count-based exploration (Strehl and Littman, 2008) in RL and the subsequent Pseudo-Count exploration (Bellemare et al., 2016). New deep RL exploration algorithms have been recently proposed. Using deep neural networks to keep track of the Q-values by means of Q-networks in RL is called DQN (Mnih et al. (2013)). This combination of deep learning and RL has shown great success. ε-greedy in Mnih et al. (2015) is a simple exploration technique using DQN. Besides ε-greedy, intrinsic model exploration computes intrinsic rewards by focusing on experiences.
Intrinsic rewards directly measure and incentivize exploration if added to extrinsic (actual) rewards of RL, e.g. DORA (Fox et al., 2018) and (Stadie et al., 2015). Random Network Distillation (RND) (Burda et al., 2018) is a more recent suggestion relying on a fixed target network. A drawback of RND is its local focus without global exploration.
In order to address weak points of these various exploration algorithms in the RL context, the notion of experts is natural and thus EXP-type MAB algorithms are appropriate. The allowance of arbitrary experts provides exploration for harder contextual bandits and hence provides exploration possibilities for RL. We develop an EXP4 exploration algorithm for RL that relies on several general experts. This is the first RL algorithm using several exploration experts enabling global exploration. Focusing on DQN, in the computational study we focus on two agents consisting of RND and ε-greedy DQN.
We implement the RL EXP4 algorithm on the hard-to-explore RL game Montezuma's Revenge and compare it with the benchmark algorithm RND (Burda et al. (2018)). The numerical results show that the algorithm gains more exploration than RND and it gains the ability of global exploration by not getting stuck in the local maxima of RND. Its total reward also increases with training. Overall, our algorithm improves exploration and exploitation on the benchmark game and demonstrates a learning process in RL.
Reward in RL in many cases is unbounded, which relates to unbounded MAB rewards. There are three major versions of MAB: adversarial, stochastic, and the herein introduced Gaussian. For adversarial MAB, the rewards of the $K$ arms $r_t$ can be chosen arbitrarily by adversaries at step $t$. For stochastic MAB, the rewards at different steps are assumed to be i.i.d. and the rewards across arms are independent. It is assumed that $0 \le r_{ti} \le 1$ for any arm $i$ and step $t$. For Gaussian MAB, rewards $r_t$ follow a multivariate normal $N(\mu, \Sigma)$ with $\mu$ being the mean vector and $\Sigma$ the covariance matrix of the $K$ arms. Here the rewards are neither bounded, nor independent among the arms. For this reason the introduced Gaussian MAB reflects the RL setting and is the subject of our MAB analyses of EXP3.P. EXP-type algorithms (Auer et al. (2002b)) are optimal in the two classical MABs. Auer et al. (2002b) show lower and upper bounds on regret of the order $O(\sqrt{T})$ for adversarial MAB and of the order $O(\log(T))$ for stochastic MAB. All of the proofs of these regret bounds by EXP-type algorithms are based on the bounded reward assumption, which does not hold for Gaussian MAB. Therefore, the regret bounds for Gaussian MAB with unbounded rewards studied herein are significantly different from prior works.
We show both lower and upper bounds on regret of Gaussian MAB under certain assumptions. Some analyses even hold for more generally distributed MAB. Upper bounds borrow some ideas from the analysis of the EXP3.P algorithm in Auer et al. (2002b) for bounded MAB and carry them to our unbounded MAB, while lower bounds are by our brand new construction of instances. Precisely, we derive lower bounds of order $\Omega(T)$ for certain fixed $T$ and upper bounds of order $O(\sqrt{T})$ for $T$ being large enough. The question of bounds for any value of $T$ remains open.
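As a concrete illustration of the Gaussian MAB model just described, the following sketch (ours, not the authors') samples correlated Gaussian rewards and computes the pseudo regret used later in Section 3 for a uniformly random policy; the particular μ and Σ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
K, T = 3, 1000
mu = np.array([0.0, 0.5, 1.0])             # arm means
Sigma = np.array([[1.0, 0.3, 0.0],         # arms may be correlated: rewards are
                  [0.3, 1.0, 0.2],         # neither bounded nor independent
                  [0.0, 0.2, 1.0]])

rewards = rng.multivariate_normal(mu, Sigma, size=T)  # r_t ~ N(mu, Sigma), one row per step
arms = rng.integers(K, size=T)                        # a uniformly random policy
y = rewards[np.arange(T), arms]                       # y_t = r_{t, a_t}
pseudo_regret = T * mu.max() - mu[arms].sum()         # R'_T = T max_k mu_k - sum_t E[y_t]
print(pseudo_regret)
```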
The main contributions of this work are as follows. On the analytical side we introduce Gaussian MAB with the unique aspect and challenge of unbounded rewards. We provide the very first regret lower bound in such a case by constructing a novel family of Gaussian bandits, and we are able to analyze the EXP3.P algorithm for Gaussian MAB. Unbounded reward poses a non-trivial challenge in the analyses. We also provide the very first extension of EXP4 to RL exploration. We show its superior performance on two hard-to-explore RL games.
A literature review is provided in Section 2. Then in Section 3 we exhibit upper bounds for unbounded MAB of the EXP3.P algorithm and lower bounds, respectively. Section 4 discusses the EXP4 algorithm for RL exploration. Finally, in Section 5, we present numerical results related to the proposed algorithm.

2 LITERATURE REVIEW
The importance of exploration in RL is well understood. Count-based exploration in RL relies on UCB. Strehl and Littman (2008) develop the Bellman value iteration $V(s) = \max_a \hat{R}(s,a) + \gamma E[V(s')] + \beta N(s,a)^{-1/2}$, where $N(s,a)$ is the number of visits to $(s,a)$ for state $s$ and action $a$. The value $N(s,a)^{-1/2}$ is positively correlated with the curiosity of $(s,a)$ and encourages exploration. This method is limited to tableau model-based MDP for small state spaces, while Bellemare et al. (2016) introduce Pseudo-Count exploration for non-tableau MDP with density models.
In conjunction with DQN, ε-greedy in Mnih et al. (2015) is a simple exploration technique using DQN. Besides ε-greedy, intrinsic model exploration computes intrinsic rewards by the accuracy of a model trained on experiences. Intrinsic rewards directly measure and incentivize exploration if added to extrinsic (actual) rewards of RL, e.g. DORA in Fox et al. (2018) and Stadie et al. (2015). Intrinsic rewards in Stadie et al. (2015) are defined as $e(s,a) = \|\phi(s') - M(\phi(s), a)\|_2^2$ where $M$ is a parametric model, $s'$ is the next state and $\phi$ is input extraction. The intrinsic reward $e(s,a)$ relies on the stochastic transition from $s$ to $s'$ and brings noise to exploration. Random Network Distillation (RND) in Burda et al. (2018) addresses this by defining $e(s,a) = \|\hat{f}(s') - f(s')\|_2^2$ where $\hat{f}$ is a parametric model and $f$ is a randomly initialized but fixed model. Here $e(s,a)$, independent of the transition, only depends on state $s'$ and drives RND to outperform other algorithms on Montezuma's Revenge. None of these algorithms use several experts, which is a significant departure from our work.
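The RND bonus $e(s,a) = \|\hat{f}(s') - f(s')\|_2^2$ is easy to sketch; below is our PyTorch illustration, where the network sizes, optimizer, and learning rate are assumptions rather than the values used in Burda et al. (2018):

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

obs_dim, emb_dim = 8, 16
f_target = mlp(obs_dim, emb_dim)          # f: randomly initialized, fixed, never trained
for p in f_target.parameters():
    p.requires_grad_(False)
f_hat = mlp(obs_dim, emb_dim)             # f_hat: trained to imitate f on visited states
opt = torch.optim.Adam(f_hat.parameters(), lr=1e-4)

def intrinsic_reward(s_next):
    """e(s, a) = ||f_hat(s') - f(s')||_2^2: large on states f_hat has not fit yet."""
    with torch.no_grad():
        return ((f_hat(s_next) - f_target(s_next)) ** 2).sum(dim=-1)

def train_step(batch_s_next):
    """Fit f_hat to the fixed target on a batch of visited next states."""
    loss = ((f_hat(batch_s_next) - f_target(batch_s_next)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

bonus = intrinsic_reward(torch.randn(4, obs_dim))  # one bonus per next state in a batch
```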
In terms of MAB regret analyses focusing on EXP-type algorithms, Auer et al. (2002b) first introduce EXP3.P for bounded adversarial MAB and EXP4 for contextual bandits. Under the EXP3.P algorithm, an upper bound on regret of the order $O(\sqrt{T})$ is achieved, which has no gap with the lower bound and hence it establishes that EXP3.P is optimal. However, these regret bounds are not applicable to Gaussian MAB since rewards can be infinite. Meanwhile for unbounded MAB, Srinivas et al. (2010) demonstrate a regret bound of order $O(\sqrt{T\gamma_T})$ for noisy Gaussian process bandits where a reward observation contains noise. The information gain $\gamma_T$ is not well-defined in a noiseless Gaussian setting. For noiseless Gaussian bandits, Grünewälder et al. (2010) show both the optimal lower and upper bounds on regret, but the regret definition is not consistent with the one used in Auer et al. (2002b). We establish a lower bound of the order $\Omega(T)$ for certain $T$ and an upper bound of the order $O(\sqrt{T})$ asymptotically on the regret of unbounded noiseless Gaussian MAB following standard definitions of regret.

3 REGRET BOUNDS FOR GAUSSIAN MAB
For Gaussian MAB with time horizon $T$, at step $0 < t \le T$ rewards $r_t$ follow a multivariate normal $N(\mu, \Sigma)$ where $\mu = (\mu_1, \mu_2, \ldots, \mu_K)$ is the mean vector and $\Sigma = (a_{ij})_{i,j \in \{1,\ldots,K\}}$ is the covariance matrix of the $K$ arms. The player receives reward $y_t = r_{t,a_t}$ by pulling arm $a_t$. We use $R'_T = T\max_k \mu_k - \sum_t E[y_t]$ to denote pseudo regret, called simply regret. (Note that the alternative definition of regret $R_T = \max_i \sum_{t=1}^T r_{ti} - \sum_{t=1}^T y_t$ depends on realizations of rewards.)

3.1 LOWER BOUNDS ON REGRET
In this section we derive a lower bound for Gaussian and general MAB under an assumption. General MAB replaces Gaussian with a general distribution. The main technique is to construct instances or sub-classes that have certain regret, no matter what strategies are deployed. We need the following assumption or setting.
Assumption 1 There are two types of arms with general $K$, with one type being superior ($S$ is the set of superior arms) and the other being inferior ($I$ is the set of inferior arms). Let $1-q, q$ be the proportions of the superior and inferior arms, respectively, which is known to the adversary, and clearly $0 \le q \le 1$. The arms in $S$ are indistinguishable and so are those in $I$. The first pull of the player has two steps. In the first step the player selects an inferior or superior set of arms based on $P(S) = 1-q$ and $P(I) = q$, and once a set is selected, the corresponding reward of an arm from the selected set is received.
An interesting special case of Assumption 1 is the case of two arms and $q = 1/2$. In this case, the player has no prior knowledge and in the first pull chooses an arm uniformly at random.
The lower bound is defined as $R_L(T) = \inf \sup R'_T$, where, first, $\inf$ is taken among all the strategies and then $\sup$ is among all Gaussian MAB. All proofs are in the Appendix.
The following is the main result with respect to lower bounds and it is based on inferior arms being distributed as $N(0,1)$ and superior as $N(\Delta,1)$ with $\Delta > 0$.
Theorem 1. In Gaussian MAB under Assumption 1, for any $q \le 1/3$ we have $R_L(T) \ge (q-\gamma)\Delta T$ where $\gamma$ has to satisfy $G(q,\Delta) < \gamma < q$, with $\Delta$ and $T$ determined by
$G(q,\Delta) < \gamma < q, \quad T \le \frac{\gamma - G(q,\Delta)}{(1-q)\int |e^{-x^2/2} - e^{-(x-\Delta)^2/2}|\,dx} + 2$
and
$G(q,\Delta) = \max\left\{\int \min\left(q e^{-x^2/2}, (1-q) e^{-(x-\Delta)^2/2}\right) dx,\; \int \min\left((1-q) e^{-x^2/2}, q e^{-(x-\Delta)^2/2}\right) dx\right\}.$
To prove Theorem 1, we construct a special subset of Gaussian MAB with equal variances and zero covariances. On these instances we find a unique way to explicitly represent any policy. This builds a connection between abstract policies and this concrete mathematical representation. Then we show that pseudo regret $R'_T$ must be greater than certain values no matter what policies are deployed, which indicates a regret lower bound on this subset of instances.
The feasibility of the aforementioned conditions is established in the following theorem.
Theorem 2. In Gaussian MAB under Assumption 1, for any $q \le 1/3$, there exist $\Delta$ and $\gamma$, $\gamma < q$, such that $R_L(T) \ge (q-\gamma)\Delta T$.
The following result with two arms and equal probability in the first pull deals with general probabilities. Even in the case of Gaussian MAB it is not a special case of Theorem 2 since it is stronger.
Theorem 3.
For general MAB under Assumption 1 with $K = 2$, $q = 1/2$, we have that $R_L(T) \ge \Delta T/4$ holds for any distributions $f_0$ for the arms in $I$ and $f_1$ for the arms in $S$ with $\int |f_1 - f_0| > 0$ (possibly with unbounded support), for any $\Delta > 0$ and $T$ satisfying $T \le \frac{1}{2\int |f_0 - f_1|} + 1$.
The theorem establishes that for any fixed $\Delta > 0$ there is a finite set of horizons $T$ and instances of Gaussian MAB so that no algorithm can achieve regret smaller than linear in $T$. Table 1 provides the values of the relationship between $\Delta$ and the largest $T$ in the Gaussian case where the inferior arms are distributed based on the standard normal and the superior arms have mean $\Delta > 0$ and variance 1. For example, there is no way to attain regret lower than $T \cdot 10^{-4}/4$ for any $1 \le T \le 2501$. The cap on $T$ decreases very quickly.

Table 1: Upper bounds for $T$ as a function of $\Delta$
$\Delta$          | $10^{-5}$ | $10^{-4}$ | $10^{-3}$ | $10^{-2}$ | $10^{-1}$
Upper bound for T | 25001     | 2501      | 251       | 26        | 3.5

The established lower bound result $R_L(T) \ge \Omega(T)$ is larger than known results of classical MAB. This is not surprising since the rewards in classical MAB are assumed to be bounded, while rewards in our setting follow an unbounded Gaussian distribution, which apparently increases regret.
Besides the known result $\Omega(\sqrt{T})$ of adversarial MAB and $\Omega(\log T)$ of stochastic MAB, for noisy Gaussian Process bandits, Srinivas et al. (2010) show $R_L(T) \ge O(\sqrt{T\gamma_T})$. Our lower bound for Gaussian MAB is different from this lower bound. The information gain term $\gamma_T$ in noisy Gaussian bandits is not well-defined in Gaussian MAB and thus the two bounds are not comparable.

3.2 UPPER BOUNDS ON REGRET
In this section, we establish upper bounds for the regret of Gaussian MAB by means of the EXP3.P algorithm (see Algorithm 1) from Auer et al. (2002b). We stress that rewards can be infinite, without the bounded assumption present in stochastic and adversarial MAB. We only consider non-degenerate Gaussian MAB where the variance of each arm is strictly positive, i.e. $\min_i a_{ii} > 0$.

Algorithm 1: EXP3.P
Initialization: Weights $w_i(1) = \exp\left(\frac{\alpha\gamma}{3}\sqrt{T/K}\right)$, $i \in \{1, 2, \ldots, K\}$, for $\alpha > 0$ and $\gamma \in (0,1)$;
for $t = 1, 2, \ldots, T$ do
  for $i = 1, 2, \ldots, K$ do
    $p_i(t) = (1-\gamma)\frac{w_i(t)}{\sum_{j=1}^K w_j(t)} + \frac{\gamma}{K}$
  end
  Choose $i_t$ randomly according to the distribution $p_1(t), \ldots, p_K(t)$;
  Receive reward $r_{i_t}(t)$;
  for $j = 1, \ldots, K$ do
    $\hat{x}_j(t) = \frac{r_j(t)}{p_j(t)} 1_{j=i_t}$,  $w_j(t+1) = w_j(t)\exp\left(\frac{\gamma}{3K}\left(\hat{x}_j(t) + \frac{\alpha}{p_j(t)\sqrt{KT}}\right)\right)$
  end
end

Formally, we provide analyses for upper bounds on $R_T$ with high probability, on $E[R_T]$ and on $R'_T$. In Auer et al. (2002b) EXP3.P is studied to yield a bound on regret $R_T$ with high probability in the bounded MAB setting. As part of our contributions, we show that the EXP3.P regret is of the order $O(\sqrt{T})$ in the unbounded Gaussian MAB in the case of $R_T$ with high probability, $E[R_T]$ and $R'_T$. The results are summarized as follows. The density of $N(\mu, \Sigma)$ is denoted by $f$.
Theorem 4. For Gaussian MAB, any time horizon $T$, and any $0 < \delta, \varepsilon < 1$, EXP3.P has regret
$R_T \le 4\Delta(\varepsilon)\left(\sqrt{KT\log(KT)} + 4\sqrt{\tfrac{5}{3}KT\log K} + 8\log(KT)\right)$
with probability $(1-\delta)(1-\varepsilon)^T$, where $\Delta(\varepsilon)$ is determined by $\int_{-\Delta}^{\Delta}\cdots\int_{-\Delta}^{\Delta} f(x_1, \ldots, x_K)\,dx_1 \cdots dx_K = 1-\varepsilon$.
In the proof of Theorem 4, we first perform truncation of the rewards of Gaussian MAB by dividing the rewards into a bounded part and an unbounded tail throughout the game. For the bounded part, we directly borrow the regret upper bound of EXP3.P in Auer et al. (2002b) and conclude with the regret upper bound of order $O(\Delta(\varepsilon)\sqrt{T})$. Since a Gaussian distribution is a light-tailed distribution, we can control the probability of the tail shrinking, which leads to the overall result.
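Algorithm 1 translates almost line for line into code. The sketch below is ours: the Gaussian reward model, the parameter defaults, and the per-step weight rescaling (harmless, since $p_i(t)$ depends only on weight ratios) are our choices, not part of the paper:

```python
import numpy as np

def exp3p(draw_rewards, K, T, alpha=0.1, gamma=0.1, rng=None):
    """EXP3.P following Algorithm 1; draw_rewards(t) returns the reward vector r_t,
    of which only the pulled entry is used."""
    rng = rng if rng is not None else np.random.default_rng()
    w = np.full(K, np.exp(alpha * gamma / 3.0 * np.sqrt(T / K)))  # w_i(1)
    total = 0.0
    for t in range(1, T + 1):
        p = (1.0 - gamma) * w / w.sum() + gamma / K
        i_t = int(rng.choice(K, p=p))
        r = draw_rewards(t)
        total += r[i_t]
        x_hat = np.zeros(K)
        x_hat[i_t] = r[i_t] / p[i_t]          # importance-weighted reward estimate
        w = w * np.exp(gamma / (3.0 * K) * (x_hat + alpha / (p * np.sqrt(K * T))))
        w = w / w.max()                       # rescale for numerical stability
    return total

rng = np.random.default_rng(0)
mu = np.array([0.0, 0.5, 1.0])                # an uncorrelated Gaussian MAB instance
print(exp3p(lambda t: rng.normal(mu, 1.0), K=3, T=5000, rng=rng))
```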
The dependence of the bound on $\varepsilon$ can be removed by considering large enough $T$, as stated next.
Theorem 5. For Gaussian MAB, and any $a > 2$, $0 < \delta < 1$, EXP3.P has regret
$R_T \le \log(1/\delta)\, O(\sqrt{T})$
with probability $(1-\delta)(1-\frac{1}{T^a})^T$. The constant behind $O$ depends on $K$, $a$, $\mu$ and $\Sigma$.
The above theorems deal with $R_T$, but the aforementioned lower bounds are with respect to pseudo regret. To complete the analysis of Gaussian MAB, it is desirable to have an upper bound on pseudo regret, which is established next. It is easy to verify by Jensen's inequality that $R'_T \le E[R_T]$ and thus it suffices to obtain an upper bound on $E[R_T]$.
For adversarial and stochastic MAB, the upper bound for $E[R_T]$ is of the same order as $R_T$, which follows by a simple argument. For Gaussian MAB, establishing an upper bound on $E[R_T]$ or $R'_T$ based on $R_T$ requires more work. We show an upper bound on $E[R_T]$ by using select inequalities, limit theories, and Rademacher complexity. To this end, the main result reads as follows.
Theorem 6. The regret of EXP3.P in Gaussian MAB satisfies $R'_T \le E[R_T] \le O(\sqrt{T})$.
All these three theorems also hold for sub-Gaussian MAB, which is defined by replacing Gaussian with sub-Gaussian. This generalization is straightforward and it is directly shown in the proof for Gaussian MAB in the Appendix. Optimal upper bounds for adversarial MAB and noisy Gaussian Process bandits are of the same order as our upper bound. Auer et al. (2002b) derive an upper bound of the same order $O(\sqrt{T})$ as the lower bound for adversarial MAB. For noisy Gaussian Process bandits, there is also no gap between its upper and lower bounds.
Our upper bound of the order $O(\sqrt{T})$ is of the same order as the one for bounded MAB. In our case the upper bound result $O(\sqrt{T})$ holds for large enough $T$, which is hidden behind $O$, while the linear lower bound is valid only for small values of $T$. This illustrates the rationality of the lower bound of $O(T)$ and the upper bound of order $O(\sqrt{T})$.

4 EXP4 ALGORITHM FOR RL
EXP4 has shown great success in contextual bandits. Therefore, in this section, we extend EXP4 to RL and develop EXP4-RL, illustrated in Algorithm 2.
The player has experts that are represented by deep Q-networks trained by RL algorithms (there is a one to one correspondence between the experts and Q-networks). Each expert also has a trust coefficient. Trust coefficients are also updated exponentially based on the reward estimates as in EXP4. At each step of one episode, the player samples an expert (Q-network) with probability that is proportional to the weighted average of the experts' trust coefficients. Then ε-greedy DQN is applied on the chosen Q-network. Here, different from EXP4, the player needs to store all the interaction tuples in an experience buffer, since RL is an MDP. After one episode, the player trains all Q-networks with the experience buffer and uses the trained networks as experts for the next episode.

Algorithm 2: EXP4-RL
Initialization: Trust coefficients $w_k = 1$ for any $k \in \{1, \ldots, E\}$, $E$ = number of experts (Q-networks), $K$ = number of actions, $\delta, \varepsilon, \beta > 0$ and temperature $z > 0$, $n_r = 1$ (an upper bound on reward);
while True do
  Initialize episode by setting $s_0$;
  for $i = 1, 2, \ldots, T$ (length of episode) do
    Observe state $s_i$;
    Let the probability of the $Q_k$-network be $\pi_k = (1-\delta)\frac{w_k}{\sum_{k'=1}^E w_{k'}} + \frac{\delta}{E}$;
    Sample network $k$ according to $(\pi_k)_k$;
    For the $Q_k$-network, use ε-greedy to sample an action:
      $a^* = \arg\max_a Q_k(s_i, a)$,  $\pi_j = (1-\varepsilon)1_{j=a^*} + \frac{\varepsilon}{K-1}1_{j\ne a^*}$, $j \in \{1, 2, \ldots, K\}$;
    Sample action $a_i$ based on $\pi$;
    Interact with the environment to receive reward $r_i$ and next state $s_{i+1}$;
    $n_r = \max\{r_i, n_r\}$;
    Update the trust coefficient $w_k$ of each $Q_k$-network as follows:
      $P_k =$ ε-greedy$(Q_k)$,  $\hat{x}_{kj} = 1 - \frac{1_{j=a_i}}{P_{kj}+\beta}(n_r - r_i)$, $j \in \{1, 2, \ldots, K\}$,  $y_k = E[\hat{x}_{kj}]$,  $w_k = w_k e^{y_k/z}$;
    Store $(s_i, a_i, r_i, s_{i+1})$ in experience replay buffer $B$;
  end
  Update each expert's $Q_k$-network from buffer $B$;
end

The basic idea is the same as in EXP4, using experts that give advice vectors, here produced by deep Q-networks. It is a combination of deep neural networks with EXP4 updates. From a different perspective, we can also view it as an ensemble, as in classification (Xia et al. (2011)), by treating Q-networks as ensemble members in RL instead of classification algorithms. While the Q-networks do not necessarily have to be the experts, i.e., other experts can be used, these are natural in a DQN framework.
In our implementation and experiments we use two experts, thus $E = 2$ with two Q-networks. The first one is based on RND (Burda et al. (2018)) while the second one is a simple DQN. To this end, in the algorithm, before storing to the buffer, we also record $c^i_r = \|\hat{f}(s_i) - f(s_i)\|^2$, the RND intrinsic reward as in Burda et al. (2018). This value is then added to the 4-tuple pushed to $B$. When updating $Q_1$ corresponding to RND at the end of an iteration in the algorithm, by using $r_j + c^j_r$ we modify the $Q_1$-network, and by using $c^j_r$ an update to $\hat{f}$ is executed. Network $Q_2$ pertaining to ε-greedy is updated directly by using $r_j$.
Intuitively, Algorithm 2 circumvents this drawback with the total exploration guided by two experts with EXP4-updated trust coefficients. When the RND expert drives high exploration, its trust coefficient leads to a high total exploration. When it has low exploration, the second expert DQN should have a high one and it incentivizes the total exploration accordingly. Trust coefficients are updated by reward estimates iteratively as in EXP4, so they keep track of the long-term performance of the experts and then guide the total exploration globally. These dynamics of EXP4 combined with intrinsic rewards guarantee global exploration. The experimental results exhibited in the next section verify this intuition regarding exploration behind Algorithm 2.
We point out that potentially more general RL algorithms based on Q-factors can be used, e.g., bootstrapped DQN (Osband et al. (2016)), random prioritized DQN (Osband et al. (2018)) or adaptive ε-greedy VDBE (Tokic (2010)) are a possibility. Furthermore, experts in EXP4 can even be policy networks trained by PPO (Schulman et al. (2017)) instead of DQN for exploration. These possibilities demonstrate the flexibility of the EXP4-RL algorithm.
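A much-simplified skeleton of the expert-selection loop in Algorithm 2 is sketched below; this is our reading of the pseudocode, with the Q-networks, the replay buffer, and the per-expert reward-estimate computation replaced by labeled placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
E, K = 2, 4                    # two experts (e.g. an RND-DQN and a plain DQN), K actions
w = np.ones(E)                 # trust coefficients w_k
delta, eps, z = 0.1, 0.05, 10.0

def expert_probs(w, delta):
    """pi_k = (1 - delta) * w_k / sum_j w_j + delta / E."""
    return (1.0 - delta) * w / w.sum() + delta / len(w)

def eps_greedy_probs(q, eps):
    """pi_j = (1 - eps) for the argmax action, eps / (K - 1) otherwise."""
    probs = np.full(len(q), eps / (len(q) - 1))
    probs[int(np.argmax(q))] = 1.0 - eps
    return probs

for i in range(100):                               # one toy episode
    k = int(rng.choice(E, p=expert_probs(w, delta)))
    q_k = rng.normal(size=K)                       # placeholder for Q_k(s_i, .) of expert k
    a = int(rng.choice(K, p=eps_greedy_probs(q_k, eps)))
    r = rng.normal()                               # placeholder environment reward r_i
    y = np.full(E, r)                              # placeholder per-expert reward estimates y_k
    w = w * np.exp(y / z)                          # exponential trust update, temperature z
    w = w / w.max()                                # rescaling keeps the weights bounded
```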
5 COMPUTATIONAL STUDY
As a numerical demonstration of the superior performance and exploration incentive of Algorithm 2, we show the improvements over baselines on two hard-to-explore RL games, Mountain Car and Montezuma's Revenge. More precisely, we present the significant improvement of the real reward on Mountain Car achieved by Algorithm 2 in Section 5.1. Then we implement Algorithm 2 on Montezuma's Revenge and show the growing and remarkable improvement of exploration in Section 5.2.
The intrinsic reward $c^i_r = \|\hat{f}(s_i) - f(s_i)\|^2$ given by the intrinsic model $\hat{f}$ represents the exploration of RND in Burda et al. (2018), as introduced in Sections 2 and 4. We use the same criterion for evaluating the exploration performance of our algorithm and RND herein. RND incentivizes local exploration with the single-step intrinsic reward, but with the absence of global exploration.

5.1 MOUNTAIN CAR
In this part, we summarize the experimental results of Algorithm 2 on Mountain Car, a classical control RL game. This game has very sparse positive rewards, which brings the necessity and hardness of exploration. A blog post (Rivlin (2019)) shows that RND based on DQN improves the performance of traditional DQN, since RND has an intrinsic reward to incentivize exploration. We use RND on DQN from Rivlin (2019) as the baseline and show the real reward improvement of Algorithm 2, which supports the intuition and superiority of the algorithm.
The comparison between Algorithm 2 and RND is presented in Figure 1. Here the x-axis is the epoch number and the y-axis is the cumulative reward of that epoch. Figure 1a shows the raw data comparison between EXP4-RL and RND. We observe that though at first RND has several spikes exceeding those of EXP4-RL, EXP4-RL has much higher rewards than RND after 300 epochs. Overall, the relative difference of areas under the curve (AUC) is 4.9% for EXP4-RL over RND, which indicates the significant improvement of our algorithm. This improvement is better illustrated in Figure 1b with the smoothed reward values. Here there is a notable difference between EXP4-RL and RND. Note that the maximum reward hit by EXP4-RL is −86 and the one by RND is −118, which additionally demonstrates our improvement on RND.

Figure 1: The performance of Algorithm 2 and RND measured by the epoch-wise reward on Mountain Car, with panel (a) showing the original data and panel (b) the smoothed reward values.

We conclude that Algorithm 2 performs better than the RND baseline and that the improvement increases at the later training stage. Exploration brought by Algorithm 2 gains real reward on this hard-to-explore Mountain Car, compared to the RND counterpart (without the DQN expert). The power of our algorithm can be enhanced by adopting more complex experts, not limited to only DQN.

5.2 MONTEZUMA'S REVENGE AND PURE EXPLORATION SETTING
In this section, we show the experimental details of Algorithm 2 on Montezuma's Revenge, another notoriously hard-to-explore RL game. The benchmark on Montezuma's Revenge is RND based on DQN, which achieves a reward of zero in our environment (the PPO algorithm reported in Burda et al. (2018) has reward 8,000 with many more computing resources; we ran the PPO-based RND with 10 parallel environments and 800 epochs to observe that the reward is also 0), which indicates that DQN has room for improvement regarding exploration.
To this end, we first implement the DQN-version RND (called simply RND hereafter) on Montezuma's Revenge as our benchmark by replacing the PPO with DQN. Then we implement Algorithm 2 with two experts as aforementioned. Our computing environment allows at most 10 parallel environments. In subsequent figures the x-axis always corresponds to the number of epochs.
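The relative AUC comparison used throughout this section can be computed as in this small sketch (ours, not the authors'):

```python
import numpy as np

def relative_auc_gain(ours, baseline):
    """Relative difference of the (discrete) areas under two per-epoch reward
    curves; about 0.049 would correspond to the 4.9% quoted above. Using a
    plain sum as the area and an absolute-value denominator for
    negative-reward curves is our assumption."""
    a, b = np.sum(ours), np.sum(baseline)
    return (a - b) / abs(b)
```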
The RND update probability is the proportion of experience that is used for training the intrinsic model $\hat{f}$ (Burda et al., 2018).
A comparison between Algorithm 2 (EXP4-RL) and RND without parallel environments (the update probability is 100% since it is a single environment) is shown in Figure 2, with the emphasis on exploration by means of the intrinsic reward. We use 3 different numbers of burn-in periods (58, 68, 167 burn-in epochs) to remove the initial training steps, which is common in Gibbs sampling. Overall, EXP4-RL outperforms RND with many significant spikes in the intrinsic rewards. The larger the number of burn-in periods is, the more significant is the dominance of EXP4-RL over RND. EXP4-RL has much higher exploration than RND at some epochs and stays close to RND at other epochs. At some epochs, EXP4-RL even has 6 times higher exploration. The relative differences in the areas under the curves are 6.9%, 17.0%, and 146.0%, respectively, which quantifies the much better performance of EXP4-RL.

Figure 2: The performance of Algorithm 2 and RND measured by intrinsic reward without parallel environments, with three different burn-in periods: (a) small, (b) medium, (c) large.

Figure 3: The performance of Algorithm 2 and RND with 10 parallel environments and with RND update probability 0.25 and 0.125, measured by loss and intrinsic reward: (a) Q-network losses with 0.25 update, (b) intrinsic reward after smoothing with 0.25 update, (c) intrinsic reward after smoothing with 0.125 update.

We next compare EXP4-RL and RND with 10 parallel environments and different RND update probabilities in Figure 3. The experiences are generated by the 10 parallel environments.
Figure 3a shows that both experts in EXP4-RL are learning, with decreasing losses of their Q-networks. The drop is steeper for the RND expert, but it starts with a higher loss. With RND update probability 0.25 in Figure 3b, we observe that EXP4-RL and RND are very close when RND exhibits high exploration. When RND is at its local minima, EXP4-RL outperforms it. Usually these local minima are driven by sticking to local maxima and then training the model intensively at local maxima, typical of the RND local exploration behavior. EXP4-RL improves on RND as training progresses, e.g. the improvement after 550 epochs is higher than the one between epochs 250 and 550. In terms of AUC, this is expressed by 1.6% and 3.5%, respectively. Overall, EXP4-RL improves RND local minima of exploration, keeps high exploration of RND and induces a smoother global exploration.
With the update probability of 0.125 in Figure 3c, EXP4-RL almost always outperforms RND with a notable difference. The improvement also increases with epochs and is dramatically larger at RND's local minima. These local minima appear more frequently in the training of RND, so our improvement is more significant as well as crucial. The relative AUC improvement is 49.4%. The excellent performance in Figure 3c additionally shows that EXP4-RL improves RND with global exploration by improving local minima of RND and by not staying at local maxima.
Overall, with either 0.25 or 0.125, EXP4-RL incentivizes global exploration on top of RND by not getting stuck in local exploration maxima, and it outperforms RND exploration aggressively. With 0.125 the improvement with respect to RND is more significant and steady. This experimental evidence verifies our intuition behind EXP4-RL and provides excellent support for it.
With experts being more advanced RL exploration algorithms, e.g. DORA, EXP4-RL can bring additional possibilities.
mr6VACkYmZ9
Contributions appear marginal, and some doubts about the regret lower bound.
4: Ok but not good enough - rejection
The authors consider analyzing the EXP3.P algorithm for the case of unbounded reward functions, in the sense that the rewards are governed by a Gaussian distribution. The authors first demonstrate a regret lower bound result on the Gaussian MABs when the time horizon is bounded from above. Then, the authors proceed to the analysis of the EXP3.P algorithm on the Gaussian MABs, and establish a regret bound similar to that of Auer et al. 2002. Finally, the authors apply the EXP3.P, where an expert corresponds to a Q-learning network, in the EXP4-RL algorithm, and evaluate it on multiple RL instances. Major comments: The major technical contribution seems to be the regret bound for EXP3.P for the case of Gaussian MAB, as stated in Theorem 4. Based on the authors' notation in the first paragraph of page 3, the Gaussian reward distribution is stationary across time. This contribution appears marginal, since the theorem appears to be a straightforward consequence of the EXP3.P regret by Auer et al. 2002, obtained by conditioning on all realized rewards lying in [-\Delta, \Delta]. The technical part on how to identify the best expert is already dealt with by the analysis in Auer et al. 2002. The other contributions are Theorems 1-3, which are regret lower bounds for Gaussian MABs. I am not sure how to interpret these regret lower bounds, since they require the horizon length to be bounded from above. More precisely, the authors show that for any algorithm, there exists a Gaussian MAB instance such that $\text{Reg}(T) \geq c T$ when $T\leq C$, where $c, C$ are instance-dependent constants. While this bound is a mathematically sound statement, it does not imply anything about the difficulty of the underlying problem when T is large. For example, for the regret upper bound of an MAB algorithm, one almost always establishes a guarantee of the form $\text{Reg}(T) \leq \text{Bound}(T)$ for all $T \geq C'$, where $C'$ is an instance-dependent constant. I am not too sure what message the authors are trying to convey here, since we know that the state-of-the-art regret lower bound is $\Omega(\sqrt{KT})$ for sufficiently large T. Finally, if I understand the underlying motivation of the authors correctly, the ultimate problem that the authors are trying to address seems to be a stochastic best arm identification problem with (sub-)Gaussian rewards, where an arm here corresponds to a Q-network. I am not sure why the authors resort to EXP-type algorithms for a stochastic problem. Minor comments: I believe that the inequality $R_L(T) \geq O(\sqrt{T \gamma_T})$ on page 4 should be $\leq$. In Algorithm 1, in the initialization, w_i should be replaced by w_i(1). In Algorithm 2, it requires computing $y_k = E[\hat{x}_{kj}]$, and the authors should elaborate on how the expectation is computed. In general, there are quite a few typos, and some parts of the writing are a bit ambiguous in the way they are phrased. I advise the authors to proofread and also polish the writing.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Regret Bounds and Reinforcement Learning Exploration of EXP-based Algorithms ### Paper Abstract EXP-based algorithms are often used for exploration in multi-armed bandit. We revisit the EXP3.P algorithm and establish both the lower and upper bounds of regret in the Gaussian multi-armed bandit setting, as well as a more general distribution option. The analyses do not require bounded rewards compared to classical regret assumptions. We also extend EXP4 from multi-armed bandit to reinforcement learning to incentivize exploration by multiple agents. The resulting algorithm has been tested on hard-to-explore games and it shows an improvement on exploration compared to state-of-the-art. ### Paper Keywords ["exploration", "algorithms", "bounds", "reinforcement", "bandit", "algorithm", "lower", "upper bounds", "regret", "gaussian"] ### Paper Content ABSTRACTEXP-based algorithms are often used for exploration in multi-armed bandit. Werevisit the EXP3.P algorithm and establish both the lower and upper bounds ofregret in the Gaussian multi-armed bandit setting, as well as a more generaldistribution option. The analyses do not require bounded rewards compared toclassical regret assumptions. We also extend EXP4 from multi-armed bandit toreinforcement learning to incentivize exploration by multiple agents. The resultingalgorithm has been tested on hard-to-explore games and it shows an improvementon exploration compared to state-of-the-art.1 I NTRODUCTIONMulti-armed bandit (MAB) is to maximize cumulative reward of a player throughout a bandit gameby choosing different arms at each time step. It is also equivalent to minimizing the regret definedas the difference between the best rewards that can be achieved and the actual reward gained by theplayer. Formally, given time horizon T, in time step tTthe player choose one arm atamongKarms, receives rtatamong rewards rt= (rt1;rt2;:::;rtK), and maximizes the total rewardPTt=1rtator minimizes the regret. Computationally efficient and with abundant theoretical analyses are theEXP-type MAB algorithms. In EXP3.P, each arm has a trust coefficient (weight). The player sampleseach arm with probability being the sum of its normalized weights and a bias term, receives reward ofthe sampled arm and exponentially updates the weights based on the corresponding reward estimates.It achieves the regret of the order O(pT)in a high probability sense. In EXP4, there are any numberof experts. Each has a sample rule over actions and a weight. The player samples according to theweighted average of experts’ sample rules and updates the weights respectively.Contextual bandit is a variant of MAB by adding context or state space S. At time step t, the playerhas context st2Swiths1:T= (s1;s2;:::;sT)being independent. Rewards rtfollowF((st))whereFis any distribution and (st)is the mean vector that depends on state st. ReinforcementLearning (RL) generalizes contextual bandit, where state and reward transitions follow a MarkovDecision Process (MDP) represented by transition kernel P(st+1;rtjat;st). A key challenge in RLis the trade-off between exploration and exploitation. Exploration is to encourage the player to trynew arms in MAB or new actions in RL to understand the game better. It helps to plan for the future,but with the sacrifice of potentially lowering the current reward. 
Exploitation aims to exploit currentlyknown states and arms to maximize the current reward, but it potentially prevents the player to gainmore information to increase local reward. To maximize the cumulative reward, the player needs toknow the game by exploration, while guaranteeing current reward by exploitation.How to incentivize exploration in RL has been a main focus in RL. Since RL is built on MAB, itis natural to extend MAB techniques to RL and UCB is such a success. UCB (Auer et al. (2002a))motivates count-based exploration (Strehl and Littman, 2008) in RL and the subsequent Pseudo-Count exploration (Bellemare et al., 2016). New deep RL exploration algorithms have been recentlyproposed. Using deep neural networks to keep track of the Q-values by means of Q-networks inRL is called DQN (Mnih et al. (2013)). This combination of deep learning and RL has showngreat success. -greedy in Mnih et al. (2015) is a simple exploration technique using DQN. Besides-greedy, intrinsic model exploration computes intrinsic rewards by focusing on experiences. Intrinsicrewards directly measure and incentivize exploration if added to extrinsic (actual) rewards of RL, e.g.DORA (Fox et al., 2018) and (Stadie et al., 2015). Random Network Distillation (RND) (Burda et al.,2018) is a more recent suggestion relying on a fixed target network. A drawback of RND is its localfocus without global exploration.1Under review as a conference paper at ICLR 2021In order to address weak points of these various exploration algorithms in the RL context, thenotion of experts is natural and thus EXP-type MAB algorithms are appropriate. The allowance ofarbitrary experts provides exploration for harder contextual bandits and hence providing explorationpossibilities for RL. We develop an EXP4 exploration algorithm for RL that relies on several generalexperts. This is the first RL algorithm using several exploration experts enabling global exploration.Focusing on DQN, in the computational study we focus on two agents consisting of RND and-greedy DQN.We implement the RL EXP4 algorithm on the hard-to-explore RL game Montezuma’s Revenge andcompare it with the benchmark algorithm RND (Burda et al. (2018)). The numerical results showthat the algorithm gains more exploration than RND and it gains the ability of global exploration bynot getting stuck in local maximums of RND. Its total reward also increases with training. Overall,our algorithm improves exploration and exploitation on the benchmark game and demonstrates alearning process in RL.Reward in RL in many cases is unbounded which relates to unbounded MAB rewards. There are threemajor versions of MAB: Adversarial, Stochastic, and herein introduced Gaussian. For adversarialMAB, rewards of the Karmsrtcan be chosen arbitrarily by adversaries at step t. For stochastic MAB,the rewards at different steps are assumed to be i.i.d. and the rewards across arms are independent.It is assumed that 0rti1for any arm iand stept. For Gaussian MAB, rewards rtfollowmulti-variate normal N(;)withbeing the mean vector and the covariance matrix of the Karms. Here the rewards are neither bounded, nor independent among the arms. For this reason theintroduced Gaussian MAB reflects the RL setting and is the subject of our MAB analyses of EXP3.P.EXP-type algorithms (Auer et al. (2002b)) are optimal in the two classical MABs. Auer et al. (2002b)show lower and upper bounds on regret of the order O(pT)for adversarial MAB and of the orderO(log(T))for stochastic MAB. 
All of the proofs of these regret bounds for EXP-type algorithms are based on the bounded reward assumption, which does not hold for Gaussian MAB. Therefore, the regret bounds for Gaussian MAB with unbounded rewards studied herein are significantly different from prior works.

We show both lower and upper bounds on the regret of Gaussian MAB under certain assumptions. Some analyses even hold for more generally distributed MAB. The upper bounds carry some ideas from the analysis of the EXP3.P algorithm in Auer et al. (2002b) over from bounded MAB to our unbounded MAB, while the lower bounds rely on a brand new construction of instances. Precisely, we derive lower bounds of order $\Omega(T)$ for certain fixed $T$ and upper bounds of order $O(\sqrt{T})$ for $T$ large enough. The question of bounds for any value of $T$ remains open.

The main contributions of this work are as follows. On the analytical side we introduce Gaussian MAB with the unique aspect and challenge of unbounded rewards. We provide the very first regret lower bound in such a case by constructing a novel family of Gaussian bandits, and we are able to analyze the EXP3.P algorithm for Gaussian MAB. Unbounded reward poses a non-trivial challenge in the analyses. We also provide the very first extension of EXP4 to RL exploration. We show its superior performance on two hard-to-explore RL games.

A literature review is provided in Section 2. Then in Section 3 we exhibit upper and lower bounds for unbounded MAB under the EXP3.P algorithm. Section 4 discusses the EXP4 algorithm for RL exploration. Finally, in Section 5, we present numerical results related to the proposed algorithm.

2 LITERATURE REVIEW

The importance of exploration in RL is well understood. Count-based exploration in RL relies on UCB. Strehl and Littman (2008) develop the Bellman value iteration $V(s) = \max_a \hat{R}(s,a) + \gamma\, \mathbb{E}[V(s')] + \beta\, N(s,a)^{-1/2}$, where $N(s,a)$ is the number of visits to $(s,a)$ for state $s$ and action $a$. The value $N(s,a)^{-1/2}$ is positively correlated with the curiosity about $(s,a)$ and encourages exploration. This method is limited to tableau model-based MDPs with small state spaces, while Bellemare et al. (2016) introduce Pseudo-Count exploration for non-tableau MDPs with density models.

In conjunction with DQN, $\epsilon$-greedy in Mnih et al. (2015) is a simple exploration technique using DQN. Besides $\epsilon$-greedy, intrinsic model exploration computes intrinsic rewards from the accuracy of a model trained on experiences. Intrinsic rewards directly measure and incentivize exploration if added to the extrinsic (actual) rewards of RL, e.g. DORA in Fox et al. (2018) and Stadie et al. (2015). Intrinsic rewards in Stadie et al. (2015) are defined as $e(s,a) = \|\phi(s') - M(\phi(s), a)\|_2^2$, where $M$ is a parametric model, $s'$ is the next state and $\phi$ is the input extraction. The intrinsic reward $e(s,a)$ relies on the stochastic transition from $s$ to $s'$ and brings noise to exploration. Random Network Distillation (RND) in Burda et al. (2018) addresses this by defining $e(s,a) = \|\hat{f}(s') - f(s')\|_2^2$, where $\hat{f}$ is a parametric model and $f$ is a randomly initialized but fixed model. Here $e(s,a)$, independent of the transition, only depends on state $s'$ and drives RND to outperform other algorithms on Montezuma's Revenge. None of these algorithms use several experts, which is a significant departure from our work.

In terms of MAB regret analyses focusing on EXP-type algorithms, Auer et al. (2002b) first introduce EXP3.P for bounded adversarial MAB and EXP4 for contextual bandits.
Under the EXP3.P algorithm, an upper bound on regret of the order $O(\sqrt{T})$ is achieved, which has no gap with the lower bound and hence establishes that EXP3.P is optimal. However, these regret bounds are not applicable to Gaussian MAB since rewards can be infinite. Meanwhile, for unbounded MAB, Srinivas et al. (2010) demonstrate a regret bound of order $O(\sqrt{T\gamma_T})$ for noisy Gaussian process bandits, where a reward observation contains noise. The information gain $\gamma_T$ is not well-defined in a noiseless Gaussian setting. For noiseless Gaussian bandits, Grünewälder et al. (2010) show both the optimal lower and upper bounds on regret, but the regret definition is not consistent with the one used in Auer et al. (2002b). We establish a lower bound of the order $\Omega(T)$ for certain $T$ and an asymptotic upper bound of the order $O(\sqrt{T})$ on the regret of unbounded noiseless Gaussian MAB, following the standard definitions of regret.

3 REGRET BOUNDS FOR GAUSSIAN MAB

For Gaussian MAB with time horizon $T$, at step $0 < t \le T$ rewards $r_t$ follow the multivariate normal $N(\mu, \Sigma)$, where $\mu = (\mu_1, \mu_2, \ldots, \mu_K)$ is the mean vector and $\Sigma = (a_{ij})_{i,j \in \{1,\ldots,K\}}$ is the covariance matrix of the $K$ arms. The player receives reward $y_t = r_t^{a_t}$ by pulling arm $a_t$. We use $R'_T = T \max_k \mu_k - \sum_t \mathbb{E}[y_t]$ to denote the pseudo regret, called simply regret. (Note that the alternative definition of regret $R_T = \max_i \sum_{t=1}^{T} r_t^i - \sum_{t=1}^{T} y_t$ depends on the realizations of the rewards.)

3.1 LOWER BOUNDS ON REGRET

In this section we derive a lower bound for Gaussian and general MAB under an assumption. General MAB replaces the Gaussian with a general distribution. The main technique is to construct instances, or sub-classes, that incur a certain regret no matter what strategy is deployed. We need the following assumption or setting.

Assumption 1. There are two types of arms, with general $K$, one type being superior ($S$ is the set of superior arms) and the other being inferior ($I$ is the set of inferior arms). Let $1-q$ and $q$ be the proportions of the superior and inferior arms, respectively, which are known to the adversary, and clearly $0 \le q \le 1$. The arms in $S$ are indistinguishable and so are those in $I$. The first pull of the player has two steps. In the first step the player selects the inferior or superior set of arms based on $P(S) = 1-q$ and $P(I) = q$, and once a set is selected, the corresponding reward of an arm from the selected set is received.

An interesting special case of Assumption 1 is the case of two arms and $q = 1/2$. In this case, the player has no prior knowledge and in the first pull chooses an arm uniformly at random.

The lower bound is defined as $R_L(T) = \inf \sup R'_T$, where the $\inf$ is taken over all strategies and the $\sup$ over all Gaussian MAB. All proofs are in the Appendix.

The following is the main result with respect to lower bounds. It is based on the inferior arms being distributed as $N(0,1)$ and the superior arms as $N(\delta, 1)$ with $\delta > 0$.

Theorem 1. In Gaussian MAB under Assumption 1, for any $q \le 1/3$ we have $R_L(T) \ge (q-\epsilon)\,\delta\, T$, where $\delta$ has to satisfy $G(q,\delta) < q$, with $\epsilon$ and $T$ determined by
$$G(q,\delta) < \epsilon < q, \qquad T \le \frac{\epsilon - G(q,\delta)}{(1-q)\int \left| e^{-\frac{x^2}{2}} - e^{-\frac{(x-\delta)^2}{2}} \right| dx} + 2$$
and
$$G(q,\delta) = \max\left\{ \int q\, e^{-\frac{x^2}{2}} - (1-q)\, e^{-\frac{(x-\delta)^2}{2}}\, dx,\ \int (1-q)\, e^{-\frac{x^2}{2}} - q\, e^{-\frac{(x-\delta)^2}{2}}\, dx \right\}.$$
Then we show that the pseudo regret $R'_T$ must be greater than certain values no matter what policies are deployed, which yields a regret lower bound on this subset of instances.

The feasibility of the aforementioned conditions is established in the following theorem.

Theorem 2. In Gaussian MAB under Assumption 1, for any $q \le 1/3$, there exist $\delta$ and $\epsilon$ with $\epsilon < q$ such that $R_L(T) \ge (q-\epsilon)\,\delta\, T$.

The following result, with two arms and equal probability in the first pull, deals with general distributions. Even in the case of Gaussian MAB it is not a special case of Theorem 2, since it is stronger.

Theorem 3. For general MAB under Assumption 1 with $K = 2$, $q = 1/2$, we have that $R_L(T) \ge \frac{\delta T}{4}$ holds for any distributions $f_0$ for the arms in $I$ and $f_1$ for the arms in $S$ with $\int |f_1 - f_0| > 0$ (possibly with unbounded support), for any $\delta > 0$ and $T$ satisfying $T \le \frac{1}{2\int |f_0 - f_1|} + 1$.

The theorem establishes that for any fixed $\delta > 0$ there is a finite set of horizons $T$ and instances of Gaussian MAB such that no algorithm can achieve regret smaller than linear in $T$. Table 1 provides the relationship between $\delta$ and the largest such $T$ in the Gaussian case, where the inferior arms are distributed according to the standard normal and the superior arms have mean $\delta > 0$ and variance 1. For example, there is no way to attain regret lower than $T \cdot 10^{-4}/4$ for any $1 \le T \le 2501$. The function decreases very quickly.

Table 1: Upper bound for $T$ as a function of $\delta$.

$\delta$ | $10^{-5}$ | $10^{-4}$ | $10^{-3}$ | $10^{-2}$ | $10^{-1}$
Upper bound for $T$ | 25001 | 2501 | 251 | 26 | 3.5

The established lower bound $R_L(T) = \Omega(T)$ is larger than the known results for classical MAB. This is not surprising, since the rewards in classical MAB are assumed to be bounded, while rewards in our setting follow an unbounded Gaussian distribution, which apparently increases regret.

Besides the known results of $\Omega(\sqrt{T})$ for adversarial MAB and $\Omega(\log T)$ for stochastic MAB, for noisy Gaussian Process bandits, Srinivas et al. (2010) show $R_L(T) \ge \Omega(\sqrt{T\gamma_T})$. Our lower bound for Gaussian MAB is different from this lower bound. The information gain term $\gamma_T$ of noisy Gaussian bandits is not well-defined in Gaussian MAB, and thus the two bounds are not comparable.

3.2 UPPER BOUNDS ON REGRET

In this section, we establish upper bounds for the regret of Gaussian MAB by means of the EXP3.P algorithm (see Algorithm 1) from Auer et al. (2002b). We stress that rewards can be infinite, without the boundedness assumption present in stochastic and adversarial MAB. We only consider non-degenerate Gaussian MAB where the variance of each arm is strictly positive, i.e. $\min_i a_{ii} > 0$.

Algorithm 1: EXP3.P
Initialization: weights $w_i(1) = \exp\left(\frac{\alpha\gamma}{3}\sqrt{\frac{T}{K}}\right)$, $i \in \{1, 2, \ldots, K\}$, for $\alpha > 0$ and $\gamma \in (0, 1)$;
for $t = 1, 2, \ldots, T$ do
    for $i = 1, 2, \ldots, K$ do
        $p_i(t) = (1-\gamma)\frac{w_i(t)}{\sum_{j=1}^{K} w_j(t)} + \frac{\gamma}{K}$
    end
    Choose $i_t$ randomly according to the distribution $p_1(t), \ldots, p_K(t)$;
    Receive reward $r_{i_t}(t)$;
    for $j = 1, \ldots, K$ do
        $\hat{x}_j(t) = \frac{r_j(t)}{p_j(t)}\mathbb{1}_{j=i_t}$,  $w_j(t+1) = w_j(t)\exp\left(\frac{\gamma}{3K}\left(\hat{x}_j(t) + \frac{\alpha}{p_j(t)\sqrt{KT}}\right)\right)$
    end
end
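For readers who prefer code, below is a minimal NumPy sketch of Algorithm 1 as reconstructed above (the parameter names alpha and gamma and the toy two-armed Gaussian bandit are our choices; the weight renormalization is our addition, for numerical stability only, and leaves the sampling probabilities unchanged):

```python
import numpy as np

def exp3p(rewards, alpha=1.0, gamma=0.1, seed=0):
    """EXP3.P on a T x K matrix of realized rewards; only the chosen entry
    r_{i_t}(t) is revealed to the player. Returns the total collected reward."""
    rng = np.random.default_rng(seed)
    T, K = rewards.shape
    w = np.full(K, np.exp(alpha * gamma / 3.0 * np.sqrt(T / K)))
    total = 0.0
    for t in range(T):
        p = (1.0 - gamma) * w / w.sum() + gamma / K
        i = rng.choice(K, p=p)
        total += rewards[t, i]
        x_hat = np.zeros(K)
        x_hat[i] = rewards[t, i] / p[i]              # importance-weighted estimate
        w *= np.exp(gamma / (3.0 * K) * (x_hat + alpha / (p * np.sqrt(K * T))))
        w /= w.max()                                 # renormalize to avoid overflow
    return total

rng = np.random.default_rng(1)
R = rng.multivariate_normal([0.0, 0.5], [[1.0, 0.3], [0.3, 1.0]], size=2000)
print(exp3p(R))   # approaches 0.5 * T as the better arm dominates
```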
Formally, we provide analyses of upper bounds on $R_T$ with high probability, on $\mathbb{E}[R_T]$, and on $R'_T$. In Auer et al. (2002b), EXP3.P is studied to yield a bound on the regret $R_T$ with high probability in the bounded MAB setting. As part of our contributions, we show that the EXP3.P regret is of the order $O(\sqrt{T})$ in the unbounded Gaussian MAB for $R_T$ with high probability, for $\mathbb{E}[R_T]$, and for $R'_T$. The results are summarized as follows. The density of $N(\mu,\Sigma)$ is denoted by $f_{\mu,\Sigma}$.

Theorem 4. For Gaussian MAB, any time horizon $T$, and any $0 < \delta, \lambda < 1$, EXP3.P has regret
$$R_T \le 4\Delta(\lambda)\left(\sqrt{KT\log(KT)} + 4\sqrt{\tfrac{5}{3}KT\log K} + 8\log(KT)\right)$$
with probability $(1-\delta)(1-\lambda)^T$, where $\Delta(\lambda)$ is determined by $\int_{-\Delta(\lambda)}^{\Delta(\lambda)}\cdots\int_{-\Delta(\lambda)}^{\Delta(\lambda)} f_{\mu,\Sigma}(x_1,\ldots,x_K)\, dx_1\cdots dx_K = 1-\lambda$.

In the proof of Theorem 4, we first truncate the rewards of Gaussian MAB by splitting the rewards into a bounded part and an unbounded tail throughout the game. For the bounded part, we directly borrow the regret upper bound of EXP3.P in Auer et al. (2002b) and conclude with a regret upper bound of order $O(\Delta(\lambda)\sqrt{T})$. Since the Gaussian distribution is light-tailed, we can control the shrinking probability of the tail, which leads to the overall result.

The dependence of the bound on $\lambda$ can be removed by considering large enough $T$, as stated next.

Theorem 5. For Gaussian MAB and any $a > 2$, $0 < \delta < 1$, EXP3.P has regret
$$R_T \le \log(1/\delta)\, O(\sqrt{T})$$
with probability $(1-\delta)\left(1-\tfrac{1}{T^a}\right)^T$. The constants behind $O$ depend on $K$, $a$, $\mu$ and $\Sigma$.

The above theorems deal with $R_T$, but the aforementioned lower bounds are with respect to the pseudo regret. To complete the analysis of Gaussian MAB, it is desirable to have an upper bound on the pseudo regret, which is established next. It is easy to verify by Jensen's inequality that $R'_T \le \mathbb{E}[R_T]$, and thus it suffices to obtain an upper bound on $\mathbb{E}[R_T]$.

For adversarial and stochastic MAB, the upper bound for $\mathbb{E}[R_T]$ is of the same order as for $R_T$, which follows by a simple argument. For Gaussian MAB, establishing an upper bound on $\mathbb{E}[R_T]$ or $R'_T$ based on $R_T$ requires more work. We show an upper bound on $\mathbb{E}[R_T]$ by using select inequalities, limit theories, and Rademacher complexity. To this end, the main result reads as follows.

Theorem 6. The regret of EXP3.P in Gaussian MAB satisfies $R'_T \le \mathbb{E}[R_T] \le O(\sqrt{T})$.

All three theorems also hold for sub-Gaussian MAB, which is defined by replacing Gaussian with sub-Gaussian. This generalization is straightforward and is shown directly in the proof for Gaussian MAB in the Appendix. Optimal upper bounds for adversarial MAB and noisy Gaussian Process bandits are of the same order as our upper bound. Auer et al. (2002b) derive an upper bound of the same order $O(\sqrt{T})$ as the lower bound for adversarial MAB. For noisy Gaussian Process bandits, there is also no gap between the upper and lower bounds.

Our upper bound of the order $O(\sqrt{T})$ is of the same order as the one for bounded MAB. In our case the upper bound result $O(\sqrt{T})$ holds for large enough $T$, which is hidden behind $O$, while the linear lower bound is valid only for small values of $T$. This illustrates the consistency of the lower bound of $\Omega(T)$ and the upper bound of order $O(\sqrt{T})$.

4 EXP4 ALGORITHM FOR RL

EXP4 has shown great success in contextual bandits. Therefore, in this section, we extend EXP4 to RL and develop EXP4-RL, illustrated in Algorithm 2.

The player has experts that are represented by deep Q-networks trained by RL algorithms (there is a one-to-one correspondence between the experts and the Q-networks). Each expert also has a trust coefficient. Trust coefficients are updated exponentially based on the reward estimates, as in EXP4. At each step of an episode, the player samples an expert (Q-network) with probability proportional to the weighted average of the experts' trust coefficients. Then $\epsilon$-greedy DQN is applied on the chosen Q-network. Here, different from EXP4, the player needs to store all the interaction tuples
After one episode, the player trains all Q-networks with theexperience buffer and uses the trained networks as experts for the next episode.Algorithm 2: EXP4-RLInitialization: Trust coefficients wk= 1for anyk2f1;:::;Eg,E=number of experts(Q-networks),K=number of actions, ;;> 0and temperature z; > 0,nr=1 (anupper bound on reward);while TruedoInitialize episode by setting s0;fori= 1;2;:::;T (length of episode) doObserve state si;Let probability of Qk-network be k= (1)wkPEk=1wk+E;Sample network kaccording tofkgk;ForQk-network, use -greedy to sample an actiona=argmaxaQk(si;a); j= (1)1j=a+K11j6=aj2f1;2;:::;KgSample action aibased on;Interact with the environment to receive reward riand next state si+1;nr= maxfri;nrg;Update the trust coefficient wkof eachQk-network as follows:Pk=-greedy (Qk);^xkj= 11j=aPkj+ (nrri);j21;2;:::;K;y k=E[^xkj];wk=wkeykzStore (si;ai;ri;si+1) in experience replay buffer B;endUpdate each expert’s Qk-network from buffer B;endThe basic idea is the same as EXP4 by using the experts that give advice vectors with deep Q-networks.It is a combination of deep neural networks with EXP4 updates. From a different perspective, we canalso view it as an ensemble in classification (Xia et al. (2011)), by treating Q-networks as ensemblesin RL, instead of classification algorithms. While Q-networks do not necessarily have to be experts,i.e., other experts can be used, these are natural in a DQN framework.In our implementation and experiments we use two experts, thus E= 2with twoQ-networks. Thefirst one is based on RND (Burda et al. (2018)) while the second one is a simple DQN. To this end, inthe algorithm before storing to the buffer, we also record cir=jj^f(si)f(si)jj2, the RND intrinsicreward as in Burda et al. (2018). This value is then added to the 4-tuple pushed to B. When updatingQ1corresponding to RND at the end of an iteration in the algorithm, by using rj+cjrwe modifytheQ1-network and by using cjran update to ^fis executed. Network Q2pertaining to -greedy isupdated directly by using rj.Intuitively, Algorithm 2 circumvents this drawback with the total exploration guided by two expertswith EXP4 updated trust coefficients. When the RND expert drives high exploration, its trustcoefficient leads to a high total exploration. When it has low exploration, the second expert DQNshould have a high one and it incentivizes the total exploration accordingly. Trust coefficients areupdated by reward estimates iteratively as in EXP4, so they keep track of the long-term performanceof experts and then guide the total exploration globally. These dynamics of EXP4 combined withintrinsic rewards guarantees global exploration. The experimental results exhibited in the next sectionverify this intuition regarding exploration behind Algorithm 2.We point out that potentially more general RL algorithms based on Q-factors can be used, e.g.,boostrapped DQN (Osband et al. (2016)), random prioritized DQN (Osband et al. (2018)) or adaptive-greedy VDBE (Tokic (2010)) are a possibility. Furthermore, experts in EXP4 can even be policynetworks trained by PPO (Schulman et al. (2017)) instead of DQN for exploration. These possibilitiesdemonstrate the flexibility of the EXP4-RL algorithm.6Under review as a conference paper at ICLR 20215 C OMPUTATIONAL STUDYAs a numerical demonstration of the superior performance and exploration incentive of Algorithm 2,we show the improvements on baselines on two hard-to-explore RL games, Mountain Car andMontezuma’s Revenge. 
5 COMPUTATIONAL STUDY

As a numerical demonstration of the superior performance and exploration incentive of Algorithm 2, we show improvements over the baselines on two hard-to-explore RL games, Mountain Car and Montezuma's Revenge. More precisely, we show in Section 5.1 that Algorithm 2 significantly improves the real reward on Mountain Car. We then implement Algorithm 2 on Montezuma's Revenge and show the growing and remarkable improvement of exploration in Section 5.2.

The intrinsic reward $c_r^i = \|\hat{f}(s_i) - f(s_i)\|_2^2$ given by the intrinsic model $\hat{f}$ represents the exploration of RND in Burda et al. (2018), as introduced in Sections 2 and 4. We use the same criterion for evaluating the exploration performance of our algorithm and RND herein. RND incentivizes local exploration with the single-step intrinsic reward, but lacks global exploration.

5.1 MOUNTAIN CAR

In this part, we summarize the experimental results of Algorithm 2 on Mountain Car, a classical control RL game. This game has very sparse positive rewards, which makes exploration both necessary and hard. The blog post Rivlin (2019) shows that RND based on DQN improves the performance of traditional DQN, since RND has an intrinsic reward to incentivize exploration. We use RND on DQN from Rivlin (2019) as the baseline and show the real-reward improvement of Algorithm 2, which supports the intuition behind and superiority of the algorithm.

The comparison between Algorithm 2 and RND is presented in Figure 1. Here the x-axis is the epoch number and the y-axis is the cumulative reward of that epoch. Figure 1a shows the raw data comparison between EXP4-RL and RND. We observe that though at first RND has several spikes exceeding those of EXP4-RL, EXP4-RL has much higher rewards than RND after 300 epochs. Overall, the relative difference of the areas under the curve (AUC) is 4.9% for EXP4-RL over RND, which indicates the significant improvement of our algorithm. The improvement is better illustrated in Figure 1b with the smoothed reward values, where there is a notable difference between EXP4-RL and RND. Note that the maximum reward hit by EXP4-RL is -86 and the one by RND is -118, which additionally demonstrates our improvement over RND.

Figure 1: The performance of Algorithm 2 and RND measured by the epoch-wise reward on Mountain Car: (a) the original data; (b) the smoothed reward values.

We conclude that Algorithm 2 performs better than the RND baseline and that the improvement increases at the later training stage. The exploration brought by Algorithm 2 gains real reward on this hard-to-explore Mountain Car, compared to the RND counterpart (without the DQN expert). The power of our algorithm can be further enhanced by adopting more complex experts, not limited to DQN.

5.2 MONTEZUMA'S REVENGE AND PURE EXPLORATION SETTING

In this section, we show the experimental details of Algorithm 2 on Montezuma's Revenge, another notoriously hard-to-explore RL game. The benchmark on Montezuma's Revenge is RND based on DQN, which achieves a reward of zero in our environment (the PPO algorithm reported in Burda et al. (2018) has reward 8,000 with many more computing resources; we ran the PPO-based RND with 10 parallel environments and 800 epochs and observed that the reward is also 0), which indicates that DQN has room for improvement regarding exploration.

To this end, we first implement the DQN-version RND (called simply RND hereafter) on Montezuma's Revenge as our benchmark by replacing the PPO with DQN. Then we implement Algorithm 2 with two experts as aforementioned. Our computing environment allows at most 10 parallel environments. In subsequent figures the x-axis always corresponds to the number of epochs.
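For reference, here is a minimal PyTorch sketch of the RND intrinsic reward $c_r^i = \|\hat{f}(s_i) - f(s_i)\|_2^2$ used by the RND expert (our illustration; the network widths and the observation dimension are assumptions, not values from the paper):

```python
import torch
import torch.nn as nn

class RND(nn.Module):
    """A frozen random target f and a trained predictor f_hat; the prediction
    error on a state is the intrinsic (exploration) reward."""
    def __init__(self, obs_dim, feat_dim=64):
        super().__init__()
        make = lambda: nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, feat_dim))
        self.target, self.predictor = make(), make()
        for p in self.target.parameters():
            p.requires_grad_(False)                 # f stays fixed after init

    def intrinsic_reward(self, s):
        return (self.predictor(s) - self.target(s)).pow(2).sum(dim=-1)

rnd = RND(obs_dim=4)
batch = torch.randn(32, 4)                          # a batch of visited states
c_r = rnd.intrinsic_reward(batch)                   # added to r for the RND expert
c_r.mean().backward()                               # training f_hat on a fraction of
                                                    # experience = "update probability"
```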
The RND update probability is the proportion of experience used for training the intrinsic model $\hat{f}$ (Burda et al., 2018).

A comparison between Algorithm 2 (EXP4-RL) and RND without parallel environments (the update probability is 100% since there is a single environment) is shown in Figure 2, with emphasis on exploration measured by the intrinsic reward. We use 3 different burn-in periods (58, 68, and 167 burn-in epochs) to remove the initial training steps, which is common in Gibbs sampling. Overall, EXP4-RL outperforms RND with many significant spikes in the intrinsic rewards. The larger the number of burn-in epochs, the more significant the dominance of EXP4-RL over RND. EXP4-RL has much higher exploration than RND at some epochs and stays close to RND at others. At some epochs, EXP4-RL even has 6 times higher exploration. The relative differences in the areas under the curves are 6.9%, 17.0%, and 146.0%, respectively, which quantifies the much better performance of EXP4-RL.

Figure 2: The performance of Algorithm 2 and RND measured by intrinsic reward without parallel environments, with three different burn-in periods: (a) small, (b) medium, (c) large.

Figure 3: The performance of Algorithm 2 and RND with 10 parallel environments and with RND update probability 0.25 and 0.125, measured by loss and intrinsic reward: (a) Q-network losses with 0.25 update; (b) intrinsic reward after smoothing with 0.25 update; (c) intrinsic reward after smoothing with 0.125 update.

We next compare EXP4-RL and RND with 10 parallel environments and different RND update probabilities in Figure 3. The experiences are generated by the 10 parallel environments.

Figure 3a shows that both experts in EXP4-RL are learning, with decreasing losses of their Q-networks. The drop is steeper for the RND expert, but it starts with a higher loss. With RND update probability 0.25 in Figure 3b, we observe that EXP4-RL and RND are very close when RND exhibits high exploration. When RND is at its local minima, EXP4-RL outperforms it. Usually these local minima are driven by sticking to local maxima and then training the model intensively at those local maxima, typical of the local exploration behavior of RND. EXP4-RL improves on RND as training progresses, e.g. the improvement after 550 epochs is higher than the one between epochs 250 and 550. In terms of AUC, this is expressed by 1.6% and 3.5%, respectively. Overall, EXP4-RL improves RND's local minima of exploration, keeps the high exploration of RND, and induces a smoother global exploration.

With the update probability of 0.125 in Figure 3c, EXP4-RL almost always outperforms RND with a notable difference. The improvement also increases with epochs and is dramatically larger at RND's local minima. These local minima appear more frequently in the training of RND, so our improvement is more significant as well as crucial. The relative AUC improvement is 49.4%. The excellent performance in Figure 3c additionally shows that EXP4-RL improves RND with global exploration by improving the local minima of RND and by not staying at local maxima.

Overall, with either 0.25 or 0.125, EXP4-RL incentivizes global exploration on top of RND by not getting stuck in local exploration maxima, and outperforms RND exploration aggressively. With 0.125 the improvement over RND is more significant and steady. This experimental evidence verifies our intuition behind EXP4-RL and provides excellent support for it. With experts being more advanced RL exploration algorithms, e.g.
DORA, EXP4-RL can bring additional possibilities.<|im_end|> <|im_start|>assistant ### Review Title Contributions appear marginal, and some doubts about the regret lower bound. ### Review Text The authors consider analyzing the EXP3.P algorithm for the case of unbounded reward functions, in the sense that the rewards are governed by a Gaussian distribution. The authors first demonstrate a regret lower bound result on the Gaussian MABs when the time horizon is bounded from above. Then, the authors proceed to the analysis of the EXP3.P algorithm on the Gaussian MABs, and establish a regret bound similar to that of Auer et al. 2002. Finally, the authors apply the EXP3.P, where an expert corresponds to a Q-learning network, in the EXP4-RL algorithm, and evaluate it on multiple RL instances. Major comments: The major technical contribution seems to be the regret bound for EXP3.P for the case of Gaussian MAB, as stated in Theorem 4. Based on the authors' notation in the first paragraph of page 3, the Gaussian reward distribution is stationary across time. This contribution appears marginal, since the Theorem appears to be a straightforward consequence of the EXP3.P regret by Auer et al. 2002, by conditioning on all realized rewards lying in [-\Delta, \Delta]. The technical part on how to identify the best expert is already dealt with by the analysis in Auer et al. 2002. Other contributions are Theorems 1-3, which are regret lower bounds for Gaussian MABs. I am not sure how to interpret these regret lower bounds, since they require the horizon length to be bounded from above. More precisely, the authors show that for any algorithm, there exists a Gaussian MAB instance such that $\text{Reg}(T) \geq c T$ when $T\leq C$, where $c, C$ are instance-dependent constants. While this bound is a mathematically sound statement, it does not imply anything about the difficulty of the underlying problem when T is large. For example, for the regret upper bound of an MAB algorithm, one almost always establishes a guarantee of the form $\text{Reg}(T) \leq \text{Bound}(T)$ for all $T \geq C'$, where $C'$ is an instance-dependent constant. I am not too sure what message the authors are trying to convey here, since we know that the state-of-the-art regret lower bound is $\Omega(\sqrt{KT})$ for sufficiently large T. Finally, if I understand the underlying motivation of the authors correctly, the ultimate problem that the authors are trying to address seems to be a stochastic best arm identification problem with (sub-)Gaussian rewards, where an arm here corresponds to a Q-network. I am not sure why the authors resort to EXP-type algorithms for a stochastic problem. Minor Comments: I believe that the inequality RL(T) ≥ O(√T · γ) on page 4 should be \leq. In Algorithm 1, in the initialization, w_i should be replaced by w_i(1). In Algorithm 2, it is required to compute y_k = E[ˆx_{kj}], and the authors should elaborate on how the expectation is computed. In general, there are quite a few typos, and some parts of the writing are a bit ambiguous in the way they are phrased. I advise the authors to proofread and also polish the writing. ### Review Rating 4: Ok but not good enough - rejection ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
bM4Iqfg8M2k
ICLR.cc/2021/Conference
2021
Graph Information Bottleneck for Subgraph Recognition
["Junchi Yu", "Tingyang Xu", "Yu Rong", "Yatao Bian", "Junzhou Huang", "Ran He"]
Given the input graph and its label/property, several key problems of graph learning, such as finding interpretable subgraphs, graph denoising and graph compression, can be attributed to the fundamental problem of recognizing a subgraph of the original one. This subgraph shall be as informative as possible, yet contains less redundant and noisy structure. This problem setting is closely related to the well-known information bottleneck (IB) principle, which, however, has less been studied for the irregular graph data and graph neural networks (GNNs). In this paper, we propose a framework of Graph Information Bottleneck (GIB) for the subgraph recognition problem in deep graph learning. Under this framework, one can recognize the maximally informative yet compressive subgraph, named IB-subgraph. However, the GIB objective is notoriously hard to optimize, mostly due to the intractability of the mutual information of irregular graph data and the unstable optimization process. In order to tackle these challenges, we propose: i) a GIB objective based-on a mutual information estimator for the irregular graph data; ii) a bi-level optimization scheme to maximize the GIB objective; iii) a connectivity loss to stabilize the optimization process. We evaluate the properties of the IB-subgraph in three application scenarios: improvement of graph classification, graph interpretation and graph denoising. Extensive experiments demonstrate that the information-theoretic IB-subgraph enjoys superior graph properties.
["irregular graph data", "gib objective", "graph information bottleneck", "graph denoising", "subgraph", "informative", "framework", "subgraph recognition", "input graph"]
ABSTRACT

Given the input graph and its label/property, several key problems of graph learning, such as finding interpretable subgraphs, graph denoising and graph compression, can be attributed to the fundamental problem of recognizing a subgraph of the original one. This subgraph shall be as informative as possible, yet contain less redundant and noisy structure. This problem setting is closely related to the well-known information bottleneck (IB) principle, which, however, has less been studied for irregular graph data and graph neural networks (GNNs). In this paper, we propose a framework of Graph Information Bottleneck (GIB) for the subgraph recognition problem in deep graph learning. Under this framework, one can recognize the maximally informative yet compressive subgraph, named IB-subgraph. However, the GIB objective is notoriously hard to optimize, mostly due to the intractability of the mutual information of irregular graph data and the unstable optimization process. In order to tackle these challenges, we propose: i) a GIB objective based on a mutual information estimator for irregular graph data; ii) a bi-level optimization scheme to maximize the GIB objective; iii) a connectivity loss to stabilize the optimization process. We evaluate the properties of the IB-subgraph in three application scenarios: improvement of graph classification, graph interpretation and graph denoising. Extensive experiments demonstrate that the information-theoretic IB-subgraph enjoys superior graph properties.

[* This work was done when Junchi Yu was a research intern at Tencent AI Lab. † Corresponding Author.]

1 INTRODUCTION

Classifying the underlying labels or properties of graphs is a fundamental problem in deep graph learning with applications across many fields, such as biochemistry and social network analysis. However, real-world graphs are likely to contain redundant and even noisy information (Franceschi et al., 2019; Yu et al., 2019), which has a strongly negative impact on graph classification. This triggers an interesting problem of recognizing an informative yet compressed subgraph of the original graph. For example, in drug discovery, when viewing molecules as graphs with atoms as nodes and chemical bonds as edges, biochemists are interested in identifying the subgraphs that mostly represent certain properties of the molecules, namely the functional groups (Jin et al., 2020b; Gilmer et al., 2017). In graph representation learning, the predictive subgraph highlights the vital substructure for graph classification, and provides an alternative way of yielding graph representations besides mean/sum aggregation (Kipf & Welling, 2017; Velickovic et al., 2017; Xu et al., 2019) and pooling aggregation (Ying et al., 2018; Lee et al., 2019; Bianchi et al., 2020). In graph attack and defense, it is vital to purify a perturbed graph and mine the robust structures for classification (Jin et al., 2020a).

Recently, the mechanism of self-attentive aggregation (Li et al., 2019) can discover a vital substructure at node level given a well-selected threshold. However, this method only identifies isolated important nodes and ignores the topological information at subgraph level. Consequently, it
Consequently, itThis work was done when Junchi Yu was a research intern at Tencent AI LAB.yCorresponding Author1Published as a conference paper at ICLR 2021leads to a novel challenge as subgraph recognition: How can we recognize a compressed subgraphwith minimum information loss in terms of predicting the graph labels/properties?Recalling the above challenge, there is a similar problem setting in information theory called infor-mation bottleneck (IB) principle (Tishby et al., 1999), which aims to juice out a compressed datafrom the original data that keeps most predictive information of labels or properties. Enhanced withdeep learning, IB can learn informative representation from regular data in the fields of computervision (Peng et al., 2019; Alemi et al., 2017; Luo et al., 2019), reinforcement learning (Goyal et al.,2019; Igl et al., 2019) and natural language precessing (Wang et al., 2020). However, current IBmethods, like VIB (Alemi et al., 2017), is still incapable for irregular graph data. It is still challeng-ing for IB to compress irregular graph data, like a subgraph from an original graph, with a minimuminformation loss.Hence, we advance the IB principle for irregular graph data to resolve the proposed subgraph recog-nition problem, which leads to a novel principle, Graph Information Bottleneck (GIB). Differentfrom prior researches in IB that aims to learn an optimal representation of the input data in the hid-den space, GIB directly reveals the vital substructure in the subgraph level. We first i) leverage themutual information estimator from Deep Variational Information Bottleneck (VIB) (Alemi et al.,2017) for irregular graph data as the GIB objective. However, VIB is intractable to compute themutual information without knowing the distribution forms, especially on graph data. To tacklethis issue, ii) we adopt a bi-level optimization scheme to maximize the GIB objective. Meanwhile,the continuous relaxation that we adopt to approach the discrete selection of subgraph will lead tounstable optimization process. To further stabilize the training process and encourage a compactsubgraph, iii) we propose a novel connectivity loss to assist GIB to effectively discover the maxi-mally informative but compressed subgraph, which is defined as IB-subgraph . By optimizing theabove GIB objective and connectivity loss, one can recognize the IB-subgraph without any explicitsubgraph annotation. On the other hand, iv) GIB is model-agnostic and can be easily plugged intovarious Graph Neural Networks (GNNs).We evaluate the properties of the IB-subgraph in three application scenarios: improvement of graphclassification, graph interpretation, and graph denoising. Extensive experiments on both syntheticand real world datasets demonstrate that the information-theoretic IB-subgraph enjoys superiorgraph properties compared to the subgraphs found by SOTA baselines.2 R ELATED WORKGraph Classification. In recent literature, there is a surge of interest in adopting graph neural net-works (GNN) in graph classification. The core idea is to aggregate all the node information for graphrepresentation. A typical implementation is the mean/sum aggregation (Kipf & Welling, 2017; Xuet al., 2019), which is to average or sum up the node embeddings. An alternative way is to lever-age the hierarchical structure of graphs, which leads to the pooling aggregation (Ying et al., 2018;Zhang et al., 2018; Lee et al., 2019; Bianchi et al., 2020). 
When tackling with the redundant andnoisy graphs, these approaches will likely to result in sub-optimal graph representation. Recently,InfoGraph (Sun et al., 2019) maximize the mutual information between graph representations andmulti-level local representations to obtain more informative global representations.Information Bottleneck. Information bottleneck (IB), originally proposed for signal processing,attempts to find a short code of the input signal but preserve maximum information of the code(Tishby et al., 1999). (Alemi et al., 2017) firstly bridges the gap between IB and the deep learning,and proposed variational information bottleneck (VIB). Nowadays, IB and VIB have been wildlyemployed in computer vision (Peng et al., 2019; Luo et al., 2019), reinforcement learning (Goyalet al., 2019; Igl et al., 2019), natural language processing (Wang et al., 2020) and speech and acous-tics (Qian et al., 2020) due to the capability of learning compact and meaningful representations.However, IB is less researched on irregular graphs due to the intractability of mutual information.Subgraph Discovery. Traditional subgraph discovery includes dense subgraph discovery and fre-quent subgraph mining. Dense subgraph discovery aims to find the subgraph with the highest density(e.g. the number of edges over the number of nodes (Fang et al., 2019; Gionis & Tsourakakis, 2015)).Frequent subgraph mining is to look for the most common substructure among graphs (Yan & Yan,2002; Ketkar et al., 2005; Zaki, 2005). At node-level, researchers discover the vital substructure2Published as a conference paper at ICLR 2021via the attention mechanism (Velickovic et al., 2017; Lee et al., 2019; Knyazev et al., 2019). Yinget al. (2019) further identifies the important computational graph for node classification. Alsentzeret al. (2020) discovers subgraph representations with specific topology given subgraph-level anno-tation. Recently, it is popular to select a neighborhood subgraph of a central node to do messagepassing in node representation learning. DropEdge (Rong et al., 2020) relieves the over-smoothingphenomenon in deep GCNs by randomly dropping a portion of edges in graph data. Similar toDropEdge, DropNode (Chen et al., 2018; Hamilton et al., 2017; Huang et al., 2018) principle is alsowidely adopted in node representation learning. FastGCN (Chen et al., 2018) and ASGCN (Huanget al., 2018) accelerate GCN training via node sampling. GraphSAGE (Hamilton et al., 2017) lever-ages neighborhood sampling for inductive node representation learning. NeuralSparse (Zheng et al.,2020) select Top-K (K is a hyper-parameter) task-relevant 1-hop neighbors of a central node forrobust node classification. Similarly, researchers discover the vital substructure at node level via theattention mechanism (Velickovic et al., 2017; Lee et al., 2019; Knyazev et al., 2019).3 N OTATIONS AND PRELIMINARIESLetf(G1;Y1);:::; (GN;YN)gbe a set ofNgraphs with their real value properties or categories,whereGnrefers to the n-th graph and Ynrefers to the corresponding properties or labels. We denotebyGn= (V;E;A;X)then-th graph of size Mnwith node set V=fViji= 1;:::;Mng, edgesetE=f(Vi;Vj)ji > j ;Vi;Vjis connectedg, adjacent matrix A2f0;1gMnMn, and featurematrixX2RMndofVwithddimensions, respectively. Denote the neighborhood of ViasN(Vi) =fVjj(Vi;Vj)2Eg. We useGsubas a specific subgraph and Gsubas the complementarystructure ofGsubinG. Letf:G!R=[0;1;;n]be the mapping from graphs to the real valueproperty or category, Y,Gis the domain of the input graphs. 
$I(X;Y)$ refers to the Shannon mutual information of two random variables.

3.1 GRAPH CONVOLUTIONAL NETWORK

The graph convolutional network (GCN) is widely adopted for graph classification. Given a graph $G = (\mathbb{V}, \mathbb{E})$ with node features $X$ and adjacency matrix $A$, a GCN outputs the node embeddings $X'$ via the following process:
$$X' = \mathrm{GCN}(A, X; W) = \mathrm{ReLU}(D^{-\frac{1}{2}} \hat{A} D^{-\frac{1}{2}} X W), \quad (1)$$
where $D$ refers to the diagonal matrix of node degrees and $W$ to the model parameters. One can simply sum up the node embeddings to get a fixed-length graph embedding (Xu et al., 2019). Recently, researchers attempt to exploit the hierarchical structure of graphs, which leads to various graph pooling methods (Li et al., 2019; Gao & Ji, 2019; Lee et al., 2019; Diehl, 2019; Zhang et al., 2018; Ranjan et al., 2020; Ying et al., 2018). Li et al. (2019) enhance graph pooling with a self-attention mechanism to leverage the importance of different nodes to the results. Finally, the graph embedding is obtained by multiplying the node embeddings with the normalized attention scores:
$$E = \mathrm{Att}(X') = \mathrm{softmax}(\Phi_2 \tanh(\Phi_1 X'^{T}))\, X', \quad (2)$$
where $\Phi_1$ and $\Phi_2$ refer to the model parameters of the self-attention.

3.2 OPTIMIZING THE INFORMATION BOTTLENECK OBJECTIVE

Given the input signal $X$ and the label $Y$, the IB objective is maximized to find the internal code $Z$: $\max_Z I(Z;Y) - \beta I(X;Z)$, where $\beta$ refers to a hyper-parameter trading off informativeness and compression. Optimizing this objective leads to a compact but informative $Z$. Alemi et al. (2017) optimize a tractable lower bound of the IB objective:
$$\mathcal{L}_{VIB} = \frac{1}{N}\sum_{i=1}^{N} \left[\int p(z|x_i)\log q(y_i|z)\, dz - \beta\, \mathrm{KL}(p(z|x_i)\, \|\, r(z))\right], \quad (3)$$
where $q(y_i|z)$ is the variational approximation to $p(y_i|z)$ and $r(z)$ is the prior distribution of $Z$. However, it is hard to estimate the mutual information in high-dimensional spaces when the distributional forms are inaccessible, especially for irregular graph data.

Figure 1: Illustration of the proposed graph information bottleneck (GIB) framework. We employ a bi-level optimization scheme to optimize the GIB objective and thus yield the IB-subgraph. In the inner optimization phase, we estimate $I(G; G_{sub})$ by optimizing the statistics network of the DONSKER-VARADHAN representation (Donsker & Varadhan, 1983). Given a good estimate of $I(G; G_{sub})$, in the outer optimization phase, we maximize the GIB objective by optimizing the mutual information, the classification loss $\mathcal{L}_{cls}$ and the connectivity loss $\mathcal{L}_{con}$.

4 OPTIMIZING THE GRAPH INFORMATION BOTTLENECK OBJECTIVE FOR SUBGRAPH RECOGNITION

In this section, we elaborate the proposed method in detail. We first formally define the graph information bottleneck and the IB-subgraph. Then, we introduce a novel framework for GIB to effectively find the IB-subgraph. We further propose a bi-level optimization scheme and a graph mutual information estimator for GIB optimization. Moreover, we apply a continuous relaxation to the generation of the subgraph, and propose a novel loss to stabilize the training process.

4.1 GRAPH INFORMATION BOTTLENECK

We generalize the information bottleneck principle to learn an informative representation of irregular graphs, which leads to the graph information bottleneck (GIB) principle.

Definition 4.1 (Graph Information Bottleneck). Given a graph $G$ and its label $Y$, GIB seeks the most informative yet compressed representation $Z$ by optimizing the following objective:
$$\max_Z I(Y;Z) \quad \text{s.t.} \quad I(G;Z) \le I_c, \quad (4)$$
where $I_c$ is the information constraint between $G$ and $Z$. By introducing a Lagrange multiplier $\beta$ to Eq. 4, we reach its unconstrained form:
$$\max_Z I(Y;Z) - \beta I(G;Z). \quad (5)$$
Eq. 5 gives a general formulation of GIB. Here, in subgraph recognition, we focus on a subgraph which is compressed with minimum information loss in terms of graph properties.

Definition 4.2 (IB-subgraph). For a graph $G$, its maximally informative yet compressed subgraph, namely the IB-subgraph, can be obtained by optimizing the following objective:
$$\max_{G_{sub} \in \mathbb{G}_{sub}} I(Y; G_{sub}) - \beta I(G; G_{sub}), \quad (6)$$
where $\mathbb{G}_{sub}$ indicates the set of all subgraphs of $G$.

The IB-subgraph enjoys various pleasant properties and can be applied to multiple graph learning tasks such as improvement of graph classification, graph interpretation, and graph denoising. However, the GIB objective in Eq. 6 is notoriously hard to optimize due to the intractability of mutual information and the discrete nature of irregular graph data. We now introduce approaches to optimize this objective and derive the IB-subgraph.

4.2 BI-LEVEL OPTIMIZATION FOR THE GIB OBJECTIVE

The GIB objective in Eq. 6 consists of two parts. We first examine the term $I(Y; G_{sub})$, which measures the relevance between $G_{sub}$ and $Y$. We expand $I(Y; G_{sub})$ as:
$$I(Y; G_{sub}) = \int p(y, G_{sub}) \log p(y|G_{sub})\, dy\, dG_{sub} + H(Y). \quad (7)$$
$H(Y)$ is the entropy of $Y$ and thus can be ignored. In practice, we approximate $p(y, G_{sub})$ with the empirical distribution $p(y, G_{sub}) \approx \frac{1}{N}\sum_{i=1}^{N} \delta_y(y_i)\, \delta_{G_{sub}}(G_{sub_i})$, where $\delta(\cdot)$ is the Dirac function used to sample the training data, and $G_{sub_i}$ and $y_i$ are the output subgraph and the graph label corresponding to the $i$-th training example. By substituting the true posterior $p(y|G_{sub})$ with a variational approximation $q_{\theta_1}(y|G_{sub})$, we obtain a tractable lower bound of the first term in Eq. 6:
$$I(Y; G_{sub}) \ge \int p(y, G_{sub}) \log q_{\theta_1}(y|G_{sub})\, dy\, dG_{sub} \approx \frac{1}{N}\sum_{i=1}^{N} \log q_{\theta_1}(y_i|G_{sub_i}) =: -\mathcal{L}_{cls}(q_{\theta_1}(y|G_{sub}), y_{gt}), \quad (8)$$
where $y_{gt}$ is the ground-truth label of the graph. Eq. 8 indicates that maximizing $I(Y; G_{sub})$ is achieved by minimizing the classification loss $\mathcal{L}_{cls}$ between $Y$ and $G_{sub}$. Intuitively, minimizing $\mathcal{L}_{cls}$ encourages the subgraph to be predictive of the graph label. In practice, we choose the cross-entropy loss for categorical $Y$ and the mean squared loss for continuous $Y$, respectively. For more details on deriving Eq. 7 and Eq. 8, please refer to Appendix A.1.

Then, we consider the minimization of $I(G; G_{sub})$, the second term of Eq. 6. Recall that Alemi et al. (2017) introduce a tractable prior distribution $r(Z)$ in Eq. 3, which results in a variational upper bound. However, this setting is troublesome here, as it is hard to find a reasonable prior for $p(G_{sub})$, which is a distribution over graph substructures rather than latent representations. Thus we take another route. Directly applying the DONSKER-VARADHAN representation (Donsker & Varadhan, 1983) of the KL-divergence, we have:
$$I(G; G_{sub}) = \sup_{f_{\theta_2}: \mathcal{G}\times\mathcal{G}\to\mathbb{R}} \mathbb{E}_{G, G_{sub} \sim p(G, G_{sub})} f_{\theta_2}(G, G_{sub}) - \log \mathbb{E}_{G \sim p(G),\, G_{sub} \sim p(G_{sub})} e^{f_{\theta_2}(G, G_{sub})}, \quad (9)$$
where $f_{\theta_2}$ is the statistics network that maps a pair of graphs to a real number. In order to approximate $I(G; G_{sub})$ using Eq. 9, we design a statistics network based on modern GNN architectures, as shown in Figure 1: first we use a GNN to extract embeddings from both $G$ and $G_{sub}$ (with parameters shared with the subgraph generator, which will be elaborated in Section 4.3), then concatenate the $G$ and $G_{sub}$ embeddings and feed them into an MLP, which finally produces the real number. In conjunction with the sampling method to approximate $p(G, G_{sub})$, $p(G)$ and $p(G_{sub})$, we reach the following optimization problem to approximate $I(G; G_{sub})$ (see Footnote 1):
$$\max_{\theta_2} \mathcal{L}_{MI}(\theta_2, G_{sub}) = \frac{1}{N}\sum_{i=1}^{N} f_{\theta_2}(G_i, G_{sub_i}) - \log \frac{1}{N}\sum_{i=1, j \neq i}^{N} e^{f_{\theta_2}(G_i, G_{sub_j})}. \quad (10)$$
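Below is a minimal PyTorch sketch of the statistics network and the estimator of Eq. 10 (our illustration; the embedding dimension and MLP width are assumptions, and in the full model the graph and subgraph embeddings come from the shared GNN):

```python
import torch
import torch.nn as nn

class StatisticsNetwork(nn.Module):
    """f_{theta_2}(G, G_sub): scores a (graph, subgraph) embedding pair with an MLP."""
    def __init__(self, emb_dim, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * emb_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, e_g, e_sub):                  # both of shape (N, emb_dim)
        return self.mlp(torch.cat([e_g, e_sub], dim=-1)).squeeze(-1)

def mi_lower_bound(f, e_g, e_sub):
    """L_MI of Eq. 10: aligned pairs estimate the joint term; shuffling the
    subgraph embeddings gives samples from the product of the marginals."""
    n = e_g.size(0)
    joint = f(e_g, e_sub).mean()
    shuffled = e_sub[torch.randperm(n)]             # pairs (G_i, G_sub_j), j != i w.h.p.
    marginal = torch.logsumexp(f(e_g, shuffled), dim=0) - torch.log(torch.tensor(float(n)))
    return joint - marginal
```

The inner loop maximizes mi_lower_bound over the statistics network, while the outer loop treats the resulting value as a penalty to be minimized.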
With this approximation to the MI for graph data, we combine Eq. 6, Eq. 8 and Eq. 10 and formulate the optimization process of GIB as a tractable bi-level optimization problem:
$$\min_{G_{sub}, \theta_1} \mathcal{L}(G_{sub}, \theta_1, \theta_2^*) = \mathcal{L}_{cls}(q_{\theta_1}(y|G_{sub}), y_{gt}) + \beta\, \mathcal{L}_{MI}(\theta_2^*, G_{sub}) \quad (11)$$
$$\text{s.t.} \quad \theta_2^* = \arg\max_{\theta_2} \mathcal{L}_{MI}(\theta_2, G_{sub}). \quad (12)$$
We first derive a sub-optimal $\theta_2$, notated $\theta_2^*$, by optimizing Eq. 12 for T steps in the inner loop. After the T-step optimization of the inner loop ends, Eq. 10 serves as a proxy for MI minimization in the GIB objective in the outer loop. Then, the parameter $\theta_1$ and the subgraph $G_{sub}$ are optimized to yield the IB-subgraph. However, in the outer loop, the discrete nature of $G$ and $G_{sub}$ hinders applying gradient-based methods to optimize the bi-level objective and find the IB-subgraph.

Footnote 1: Notice that the MINE estimator (Belghazi et al., 2018) straightforwardly uses the DONSKER-VARADHAN representation to derive an MI estimator between the regular input data and its vectorized representation/encoding. It cannot be applied to estimate the mutual information between $G$ and $G_{sub}$, since both $G$ and $G_{sub}$ are irregular graph data.

Table 1: Classification accuracy. The pooling methods yield pooling aggregation while the backbones yield mean aggregation. The proposed GIB method with backbones yields subgraph embeddings by aggregating the nodes in subgraphs.

Method | MUTAG | PROTEINS | IMDB-BINARY | DD
SortPool | 0.844±0.141 | 0.747±0.044 | 0.712±0.047 | 0.732±0.087
ASAPool | 0.743±0.077 | 0.721±0.043 | 0.715±0.044 | 0.717±0.037
DiffPool | 0.839±0.097 | 0.727±0.046 | 0.709±0.053 | 0.778±0.030
EdgePool | 0.759±0.077 | 0.723±0.044 | 0.728±0.044 | 0.736±0.040
AttPool | 0.721±0.086 | 0.728±0.041 | 0.722±0.047 | 0.711±0.055
GCN | 0.743±0.110 | 0.719±0.041 | 0.707±0.037 | 0.725±0.046
GraphSAGE | 0.743±0.077 | 0.721±0.042 | 0.709±0.041 | 0.729±0.041
GIN | 0.825±0.068 | 0.707±0.056 | 0.732±0.048 | 0.730±0.033
GAT | 0.738±0.074 | 0.714±0.040 | 0.713±0.042 | 0.695±0.045
GAT+DropEdge | 0.743±0.081 | 0.711±0.043 | 0.710±0.041 | 0.717±0.035
GCN+GIB | 0.776±0.075 | 0.748±0.046 | 0.722±0.039 | 0.765±0.050
GraphSAGE+GIB | 0.760±0.074 | 0.734±0.043 | 0.719±0.052 | 0.781±0.042
GIN+GIB | 0.839±0.064 | 0.749±0.051 | 0.737±0.070 | 0.747±0.039
GAT+GIB | 0.749±0.097 | 0.737±0.044 | 0.729±0.046 | 0.769±0.040
GAT+GIB+DropEdge | 0.754±0.085 | 0.737±0.037 | 0.731±0.003 | 0.776±0.034

Table 2: The mean and standard deviation of the absolute property bias between the graphs and the corresponding subgraphs.

Method | QED | DRD2 | HLM-CLint | MLM-CLint
GCN+Att05 | 0.48±0.07 | 0.20±0.13 | 0.90±0.89 | 0.92±0.61
GCN+Att07 | 0.41±0.07 | 0.16±0.11 | 1.18±0.60 | 1.69±0.88
GCN+GIB | 0.38±0.12 | 0.06±0.09 | 0.37±0.30 | 0.72±0.55
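Returning to the bi-level problem of Eq. 11-12, the following runnable toy sketch (reusing StatisticsNetwork and mi_lower_bound from the previous snippet) shows the inner/outer structure; as a simplification of ours, the subgraph embedding is treated as a free tensor rather than the output of the generator of Section 4.3:

```python
import torch
import torch.nn as nn

N, d, beta, T_inner = 64, 16, 0.1, 5
e_g = torch.randn(N, d)                              # graph embeddings (fixed here)
e_sub = torch.randn(N, d, requires_grad=True)        # stand-in for g(G; phi)
y = torch.randint(0, 2, (N,))
classifier = nn.Linear(d, 2)                         # q_{theta_1}
stat_net = StatisticsNetwork(d)                      # f_{theta_2}
inner_opt = torch.optim.Adam(stat_net.parameters(), lr=1e-3)
outer_opt = torch.optim.Adam([e_sub] + list(classifier.parameters()), lr=1e-3)

for _ in range(T_inner):                             # inner loop: Eq. 12
    inner_opt.zero_grad()
    (-mi_lower_bound(stat_net, e_g, e_sub.detach())).backward()
    inner_opt.step()

outer_opt.zero_grad()                                # one outer step: Eq. 11
loss = nn.functional.cross_entropy(classifier(e_sub), y) \
       + beta * mi_lower_bound(stat_net, e_g, e_sub)
loss.backward()
outer_opt.step()
```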
The representation of Gsub, which is employedfor predicting the graph label, can be obtained by taking the first row of STXl.Connectivity loss: However, poor initialization will cause p(Vi2GsubjVi)andp(Vi2GsubjVi)to be close. This will either lead the model to assign all nodes to Gsub/Gsub, or result that therepresentations ofGsubcontain much information from the redundant nodes. These two scenarioswill cause the training process to be unstable. On the other hand, we suppose our model to havean inductive bias to better leverage the topological information while Soutputs the subgraph at anode-level. Therefore, we propose the following connectivity loss:Lcon=jjNorm(STAS)I2jjF; (14)where Norm()is the row-wise normalization, jjjjFis the Frobenius norm, and I2is a22identitymatrix.Lconnot only leads to distinguishable node assignment, but also encourage the subgraph tobe compact. Take (STAS)1:for example, denote a11;a12the element 1,1 and the element 1,2 of6Published as a conference paper at ICLR 2021Table 3: Ablation study on LconandLMI. Note that we try several initiations for GIB w/o LconandLMIto get the current results due to the instability of optimization process.Method QED DRD2 HLM-CLint MLM-CLintGIB w/oLcon 0.460.07 0.150.12 0.450.37 1.580.86GIB w/oLMI 0.430.15 0.210.13 0.480.34 1.200.97GIB 0.380.12 0.060.09 0.370.30 0.720.55STAS,a11=Xi;jAijp(Vi2GsubjVi)p(Vj2GsubjVj);a12=Xi;jAijp(Vi2GsubjVi)p(Vj2GsubjVj):(15)MinimizingLconresults ina11a11+a12!1. This occurs if Viis inGsub, the elements ofN(Vi)have a high probability in Gsub. MinimizingLconalso encouragesa12a11+a12!0. This encouragesp(Vi2 GsubjVi)!0=1and less cuts between GsubandGsub. This also holds for Gsubwhenanalyzinga21anda22.In a word,Lconencourages distinctive Sto stabilize the training process and a compact topology inthe subgraph. Therefore, the overall loss is:min;1L(;1;2) =Lcls(q1(g(G;));ygt) +Lcon(g(G;)) +LMI(2;Gsub)s.t.2= arg max2LMI(2;Gsub):(16)We provide the pseudo code in the Appendix to better illustrate how to optimize the above objective.5 E XPERIMENTSIn this section, we evaluate the proposed GIB method on three scenarios, including improvement ofgraph classification, graph interpretation and graph denoising.5.1 B ASELINES AND SETTINGSImprovement of graph classification: For improvement of graph classification, GIB generatesgraph representation by aggregating the subgraph information. We plug GIB into various backbonesincluding GCN (Kipf & Welling, 2017), GAT (Velickovic et al., 2017), GIN (Xu et al., 2019) andGraphSAGE (Hamilton et al., 2017). We compare the proposed method with the mean/sum ag-gregation (Kipf & Welling, 2017; Velickovic et al., 2017; Hamilton et al., 2017; Xu et al., 2019)and pooling aggregation (Zhang et al., 2018; Ranjan et al., 2020; Ying et al., 2018; Diehl, 2019) interms of classification accuracy. Moreover, we apply DropEdge (Rong et al., 2020) to GAT, namelyGAT+DropEdge, which randomly drop 30% edges in message-passing at node-level. Similarly, weapply GIB to GAT+DropEdge, resulting in GAT+GIB+DropEdge. For fare comparisions, all thebackbones for different methods consist of the same 2-layer GNN with 16 hidden-size.Graph interpretation: The goal of graph interpretation is to find the substructure which shares themost similar property to the molecule. If the substructure is disconnected, we evaluate its largestconnected part. We compare GIB with the attention mechanism (Li et al., 2019). That is, we atten-tively aggregate the node information for graph prediction. 
The interpretable subgraph is generatedby choosing the nodes with top 50% and70% attention scores, namely Att05 and Att07. GIB outputsthe interpretation with the IB-subgraph. Then, we evaluate the absolute property bias (the absolutevalue of the difference between the property of graph and subgraph) between the graph and its in-terpretation. Similarly, for fare comparisons, all the backbones for different methods consist of thesame 2-layer GNN with 16 hidden-size.Graph denoising: We translate the permuted graph into the line-graph and use GIB and attentionto 1) infer the real structure of graph, 2) classify the permuted graph via the inferred structure. Wefurther compare the performance of GCN and DiffPool on the permuted graphs.7Published as a conference paper at ICLR 2021Table 4: Quantitative results on graph denoising. We report the classification accuracy (Acc), num-ber of real edges over total real edges (Recall) and number of real edges over total edges in subgraphs(Precision) on the test setMethod GCN DiffPool GCN+Att05 GCN+Att07 GCN+GIBRecall - - 0.226 0.047 0.3240.049 0.4930.035Precision - - 0.638 0.141 0.6750.104 0.6920.061Acc 0.617 0.658 0.649 0.667 0.684Figure 2: The molecules with their interpretable subgraphs discovered by different methods. Thesesubgraphs exhibit similar chemical properties compared to the molecules on the left.5.2 DATASETSImprovement of graph classification: We evaluate different methods on the datasets of MUTAG(Rupp et al., 2012), PROTEINS (Borgwardt et al., 2005), IMDB-BINARY andDD(Rossi &Ahmed, 2015) datasets.2. The statistics of the datasets are available in Table 7 of Appendix.Graph interpretation: We construct the datasets for graph interpretation on four molecule prop-erties based on ZINC dataset, which contains 250K molecules. QED measures the drug likenessof a molecule, which is bounded within the range (0;1:0).DRD2 measures the probability that amolecule is active against dopamine type 2 receptor, which is bounded with (0;1:0).HLM-CLintandMLM-CLint are estimated values of in vitro human and mouse liver microsome metabolic sta-bility (base 10 logrithm of mL/min/g). We sample the molecules with QED 0:85, DRD20:50,HLM-CLint2, MLM-CLint2for each task. We use 85% of these molecules for training, 5%for validating and 10% for testing. The statistics of the datasets are available in Table 8 of Appendix.Graph denoising: We generate a synthetic dataset by adding 30% redundant edges for each graphinMUTAG dataset. We use 70% of these graphs for training, 5%for validating and 25% for testing.5.3 R ESULTSImprovement of Graph Classification: In Table 1, we comprehensively evaluate the proposedmethod and baselines on improvement of graph classification. We train GIB on various back-bones and aggregate the graph representations only from the subgraphs. We compare the perfor-mance of our framework with the mean/sum aggregation and pooling aggregation. This showsthat GIB improves the graph classification by reducing the redundancies in the graph structure.Table 5: Average number of disconnected substruc-tures per graph selected by different methodsMethod QED DRD2 HLM MLMGCN+Att05 3.38 1.94 3.11 5.16GCN+Att07 2.04 1.76 2.75 3.00GCN+GIB 1.57 1.08 2.29 2.06Graph interpretation: Table 2 showsthe quantitative performance of differentmethods on the graph interpretation task.GIB is able to generate precise graph inter-pretation (IB-subgraph), as the substruc-tures found by GIB has the most similarproperty to the input molecules. In Fig. 
2,GIB generates more compact and reason-able interpretation to the property of molecules confirmed by chemical experts. More results are2We follow the protocol in https://github.com/rusty1s/pytorch geometric/tree/master/benchmark/kernel8Published as a conference paper at ICLR 2021Table 6: The influence of the hyper-parameter ofLconto the size of subgraphs. 1 3 5 10All 0.4830.143 0.4960.150 0.4940.147 0.4660.150Max 0.3870.173 0.4130.169 0.4110.169 0.3910.172provided in the Appendix. In Table 5, we compare the average number of disconnected substruc-tures per graph since a compact subgraph should preserve more topological information. GIB gen-erates more compact subgraphs to better interpret the graph property. Moreover, compared to thebaselines, GIB does not require a hyper-parameter to control the sizes of subgraphs, thus being moreadaptive to different tasks. Please refer to Table 9 and Table 10 of Appendix for details. The trainingdynamic is shown in Fig. 7. We provide results with other MI estimators in Table 11 in Appendix.Graph denoising: Table 4 shows the performance of different methods on noisy graph classifica-tion. GIB outperforms the baselines on classification accuracy by a large margin due to the superiorproperty of IB-subgraph. Moreover, GIB is able to better reveal the real structure of permuted graphsin terms of precision and recall rate of true edges.5.4 A BLATION STUDYTo further understand the rolls of LconandLMI, we derive two variants of our method by deletingLconandLMI, namely GIB w/o Lconand GIB w/oLMI. Note that GIB w/o LMIis similarto InfoGraph (Sun et al., 2019) and GNNExplainer (Ying et al., 2019), as they only consider tomaximize MI between latent embedding and global summarization and ignore compression. Whenadapted to sub graph recognition, it is likely to be G=Gsub. We evaluate the variants with 2-layerGCN and 16 hidden size on graph interpretation. In practice, we find that the training process ofGIB w/oLconis unstable as discussed in Section 4.3. Moreover, we find that GIB w/o LMIis verylikely to outputGsub=G, as it does not consider compression. Therefore, we try several initiationsfor GIB w/oLconandLMIto get the current results. As shown in Table 3, GIB also outperformsthe variants, and thus indicates that every part of our model does contribute to the improvement ofperformance.5.5 M ORE DISCUSSION ON CONNECTIVITY LOSSLconis proposed for stabilizing the training process and resulting in compact subgraphs. As it posesregularization for the subgraph generation, we are interested in its potential influence on the sizes ofthe chosen IB-subgraph. Therefore, we show the influence of different hyper-parameters of Lcontothe sizes of the chosen IB-subgraph. We implement the experiments with varies inf1;3;5;10gon QED dataset and compute the mean and standard deviation of the sizes of IB-subgraph (All)and their largest connected parts (Max). As shown in Table 6, we observe that different result insimilar sizes of IB-subgraph. Therefore, its influence on the size of chosen subgraphs is weak.6 C ONCLUSIONIn this paper, we have studied a subgraph recognition problem to infer a maximally informativeyet compressed subgraph. We define such a subgraph as IB-subgraph and propose the graph in-formation bottleneck (GIB) framework for effectively discovering an IB-subgraph. We derive theGIB objective from a mutual information estimator for irregular graph data, which is optimized bya bi-level learning scheme. 
6 CONCLUSION

In this paper, we have studied the subgraph recognition problem of inferring a maximally informative yet compressed subgraph. We define such a subgraph as the IB-subgraph and propose the graph information bottleneck (GIB) framework for effectively discovering an IB-subgraph. We derive the GIB objective from a mutual information estimator for irregular graph data, which is optimized by a bi-level learning scheme. A connectivity loss is further used to stabilize the learning process. We evaluate our GIB framework on the improvement of graph classification, graph interpretation and graph denoising. Experimental results verify the superior properties of IB-subgraphs.

ACKNOWLEDGEMENTS

This work is partially funded by Beijing Natural Science Foundation (Grant No. JQ18017) and the Youth Innovation Promotion Association CAS (Grant No. Y201929).
0J7VemrR9ze
A good paper with a clear theoretical contribution and rigorous empirical evaluation - clear accept recommendation with medium reviewer confidence
8: Top 50% of accepted papers, clear accept
# Summary

The paper introduces the Graph Information Bottleneck (GIB) which aims to learn the most-informative compressed representation $Z$ given graph $G$ with associated label $Y$. Further, it defines GIB-Subgraph which aims to learn the compressed representation as the subgraph $G_{sub}$ which maximizes the mutual information within the family of subgraphs ${\cal G}_{sub}$ of $G$. The paper introduces a bi-level optimization objective which has the following parts: (a) optimizing the mutual information loss $L_{cls}$ between the subgraph representation $G_{sub}$ and the graph label $Y$ using the backbone GNN followed by aggregation of subgraph node embeddings $X_{sub}$ and cross-entropy loss when comparing to graph labels; (b) approximating the mutual information $L_{MI}$ between the original graph and a subgraph, $I(G, G_{sub})$, using a statistics network $f_{\phi}$ which uses the backbone GNN to obtain graph embeddings (using mean/sum or pooling over node embeddings) followed by an MLP over concatenated embeddings of $G$ and $G_{sub}$. The procedure retrains the graph-subgraph mutual information estimator in the inner loop for each step (eqn. 10) before updating the parameters of the backbone GNN and the subgraph selection MLP, and finally updating the subgraph-label MI estimator ($L_{cls}$). In order to obtain compact subgraphs, the paper introduces a regularization term $L_{con}$ closely related to graph cut. The paper shows empirically on the downstream task of graph classification that adding the GIB objective improves classification accuracy. Further, on the graph interpretation task, the authors show that the GIB objective improves the similarity of the retrieved subgraphs using domain-specific metrics. The authors also evaluate on graph denoising on the MUTAG dataset.

# Recommendation

I vote for a strong accept. This paper is well-written, makes a clear theoretical contribution to the field as well as provides sufficient empirical evaluation.

# Questions to the authors

- I would have liked to see in the supplementary material an example of the algorithm on a toy graph example (similar to case study A).
- I wonder whether the initialization has an influence on the final chosen subgraph nodes. Does $S$ (node-assignment) (always/almost always?) saturate as mentioned on page 5?
- What is the influence of ${\cal L}_{con}$ on the size of the final chosen subgraph? A table showing the size of final subgraphs (in terms of the output of MLP $\theta_2$ in Figure 1) might be helpful, though this is partially addressed in Table 4.
- For completeness, it would be good to provide in the supplementary material the properties of the datasets used e.g., number of graphs, mean/max/min number of nodes, edges, dimension of node features, dimension of edge features (if any), etc.
- It would have been good to see plots showing the convergence of the different losses as part of the bi-level optimization iterations.
- [optional] On the graph denoising experiment, it might be good to add more concrete evaluation both on larger graphs e.g. on graph families such as Power-Law, SBM as well as non-uniform edge addition.
2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper
B1xGGTEtDH
ICLR.cc/2020/Conference
2020
Universal Approximation with Deep Narrow Networks
["Patrick Kidger", "Terry Lyons"]
The classical Universal Approximation Theorem certifies that the universal approximation property holds for the class of neural networks of arbitrary width. Here we consider the natural `dual' theorem for width-bounded networks of arbitrary depth. Precisely, let $n$ be the number of input neurons, $m$ be the number of output neurons, and let $\rho$ be any nonaffine continuous function, with a continuous nonzero derivative at some point. Then we show that the class of neural networks of arbitrary depth, width $n + m + 2$, and activation function $\rho$, exhibits the universal approximation property with respect to the uniform norm on compact subsets of $\mathbb{R}^n$. This covers every activation function possible to use in practice; in particular this includes polynomial activation functions, making this genuinely different to the classical case. We go on to consider extensions of this result. First we show an analogous result for a certain class of nowhere differentiable activation functions. Second we establish an analogous result for noncompact domains, by showing that deep narrow networks with the ReLU activation function exhibit the universal approximation property with respect to the $p$-norm on $\mathbb{R}^n$. Finally we show that width of only $n + m + 1$ suffices for `most' activation functions.
["deep learning", "universal approximation", "deep narrow networks"]
ABSTRACT

The classical Universal Approximation Theorem certifies that the universal approximation property holds for the class of neural networks of arbitrary width. Here we consider the natural 'dual' theorem for width-bounded networks of arbitrary depth. Precisely, let n be the number of input neurons, m be the number of output neurons, and let ρ be any nonaffine continuous function, with a continuous nonzero derivative at some point. Then we show that the class of neural networks of arbitrary depth, width n + m + 2, and activation function ρ, exhibits the universal approximation property with respect to the uniform norm on compact subsets of R^n. This covers every activation function possible to use in practice; in particular this includes polynomial activation functions, making this genuinely different to the classical case. We go on to consider extensions of this result. First we show an analogous result for a certain class of nowhere differentiable activation functions. Second we establish an analogous result for noncompact domains, by showing that deep narrow networks with the ReLU activation function exhibit the universal approximation property with respect to the p-norm on R^n. Finally we show that a width of only n + m + 1 suffices for 'most' activation functions.

1 INTRODUCTION

The Universal Approximation Theorem (Cybenko, 1989; Hornik, 1991; Pinkus, 1999) states that universal approximation holds for the class of neural networks with a single hidden layer of arbitrary width, with any continuous nonpolynomial activation function:

Theorem 1.1. Let ρ : R → R be any continuous function. Let N^n_ρ represent the class of neural networks with activation function ρ, with n neurons in the input layer, one neuron in the output layer, and one hidden layer with an arbitrary number of neurons. Let K ⊆ R^n be compact. Then N^n_ρ is dense in C(K) if and only if ρ is nonpolynomial.

What if arbitrary width is replaced with arbitrary depth? Put more precisely, can networks of bounded width and arbitrary depth provide universal approximation? In some sense this poses a question 'dual' to the problem answered by the classical Universal Approximation Theorem. We refer to networks of this type as deep, narrow networks.

Furthermore we might ask how narrow the network may be, and what activation functions may be admitted. We provide a near-complete answer to these various questions.

Universal approximation may be established with respect to more than one topology. Continuous activation functions beget networks representing continuous functions. Thus when working with respect to the uniform norm, it is natural to seek density in C(K; R^m) for K ⊆ R^n. When working with respect to the p-norm, it is natural to seek density in L^p(R^n; R^m) for p ∈ [1, ∞). In this latter case we may hope to generalise to noncompact domains, as functions in L^p(R^n; R^m) must exhibit some sort of decay.

The primary motivation for this work stems from the work of Lu et al. (2017), who study this question in the special case of the popular ReLU activation function, and who establish density in L^1(R^n). The other notable result we are aware of is the work of Hanin & Sellke (2017), who show another special case: they also consider the ReLU activation function, and establish density in C(K; R^m) for K ⊆ R^n compact.
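To make the classical setting of Theorem 1.1 concrete in the ReLU case, the following sketch builds a width-N one-hidden-layer network realising the piecewise-linear interpolant of a target function, so the uniform error on a compact interval shrinks as N grows. This is a standard construction; the names and the choice of target are ours.

```python
# Width-N single-hidden-layer ReLU network as a linear spline.
import numpy as np

def relu_spline(f, a, b, N):
    xs = np.linspace(a, b, N + 1)                      # knots
    ys = f(xs)
    slopes = np.diff(ys) / np.diff(xs)                 # slope on each segment
    betas = np.concatenate(([slopes[0]], np.diff(slopes)))  # kink coefficients
    def net(x):
        x = np.asarray(x, dtype=float)
        hidden = np.maximum(x[..., None] - xs[:-1], 0.0)    # N ReLU units
        return ys[0] + hidden @ betas
    return net

net = relu_spline(np.sin, 0.0, 2 * np.pi, N=64)
grid = np.linspace(0.0, 2 * np.pi, 1001)
print(np.max(np.abs(net(grid) - np.sin(grid))))  # uniform error, roughly 1e-3
```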
This article demonstrates generalisations of these results, in particular to general activation functions, without relying on the strong algebraic and analytic properties of the ReLU activation function. This also improves certain results specific to the ReLU.

The rest of the paper is laid out as follows. Section 2 discusses existing work. Section 3 provides a brief summary of our results; these are then presented in detail in Section 4. Section 5 is the conclusion. Several proofs are deferred to the appendices, due to length and technical content.

2 CONTEXT

Some positive results have been established showing that particular classes of networks are dense in certain spaces. Some negative results have also been established, showing that insufficiently wide networks will fail to be dense.

Hanin & Sellke (2017) have shown that deep narrow networks with the ReLU activation function exhibit the universal approximation property in C(K; R^m) for K ⊆ R^n compact.

Lu et al. (2017) have shown that deep narrow networks with the ReLU activation function exhibit the universal approximation property in L^1(R^n), whilst Lin & Jegelka (2018) have shown that a particular description of residual networks, with the ReLU activation function, also exhibits the universal approximation property in this space. We are not aware of any results for the general case of L^p(R^n; R^m) for p ∈ [1, ∞).

We do not know of any positive results applying to activation functions other than the ReLU.

Regarding widths insufficient for a class of deep narrow networks to exhibit the universal approximation property, consider the case of a network with n input neurons and a single output neuron. For certain activation functions, Johnson (2019) shows that width n is insufficient to give density in C(K). For the ReLU activation function, Lu et al. (2017) show that width n is insufficient to give density in L^1(R^n), and that width n − 1 is insufficient in L^1([−1, 1]^n). For the ReLU activation function, Hanin & Sellke (2017) show that width n is insufficient to give density in C(K), and in fact that this is the greatest possible width not achieving universal approximation in this context. The precise minimum width for activation functions other than the ReLU, or for multiple output neurons, remains unknown.

Everything discussed so far is in the most general case of approximating functions on Euclidean space: in the language of machine learning, they are regression tasks. There has been some related work in the special case of classification tasks, for example Beise et al. (2018); Szymanski & McCane (2012); Rojas (2003); Nguyen et al. (2018). There has also been some related work in the special case of certain finite domains; Le Roux & Bengio (2010) show that networks with sigmoid activation function and width n can approximate any distribution on {0, 1}^n. See also Sutskever & Hinton (2008). Montúfar (2014) considers the analogous scenario for distributions on {0, 1, ..., q − 1}^n.

3 SUMMARY OF RESULTS

Definition 3.1. Let ρ : R → R and n, m, k ∈ N. Then let NN^ρ_{n,m,k} represent the class of functions R^n → R^m described by neural networks with n neurons in the input layer, m neurons in the output layer, k neurons in each hidden layer, and an arbitrary number of hidden layers, such that every neuron in every hidden layer has activation function ρ, and every neuron in the output layer has the identity activation function.

Our central result is the following theorem.

Theorem 3.2. Let ρ : R → R be any continuous function which is continuously differentiable at at least one point, with nonzero derivative at that point. Let K ⊆ R^n be compact. Then NN^ρ_{n,m,n+m+2} is dense in C(K; R^m).
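For concreteness, here is a minimal sketch of a member of the class NN^ρ_{n,m,k} of Definition 3.1: constant hidden width k, activation ρ on every hidden neuron, identity activation on the output layer. The random weights are placeholders; the point is only the shape of the computation.

```python
# Forward pass of a deep narrow network of constant width k.
import numpy as np

def deep_narrow_forward(x, hidden_params, output_params, rho):
    h = x
    for W, b in hidden_params:        # arbitrary number of hidden layers
        h = rho(W @ h + b)            # width k, activation rho
    W_out, b_out = output_params
    return W_out @ h + b_out          # identity activation on the output

# Example: n = 3 inputs, m = 1 output, width k = n + m + 2 = 6, depth 4.
rng = np.random.default_rng(0)
n, m, k, depth = 3, 1, 6, 4
hidden = [(rng.normal(size=(k, n if i == 0 else k)), rng.normal(size=k))
          for i in range(depth)]
out = (rng.normal(size=(m, k)), rng.normal(size=m))
print(deep_narrow_forward(rng.normal(size=n), hidden, out, np.tanh))
```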
The technical condition is very weak; in particular it is satisfied by every piecewise-C¹ function not identically zero. Thus any activation function that one might practically imagine using on a computer must satisfy this property.

Theorem 3.2 is proved by handling particular classes of activation functions as special cases. First we have the result for nonpolynomial activation functions, for which the width can be made slightly smaller.

Theorem 4.4. Let ρ : R → R be any continuous nonpolynomial function which is continuously differentiable at at least one point, with nonzero derivative at that point. Let K ⊆ R^n be compact. Then NN^ρ_{n,m,n+m+1} is dense in C(K; R^m).

We observe a corollary for noncompact domains, which generalises Lu et al. (2017, Theorem 1) to multiple output neurons, a narrower width, and L^p for p > 1 instead of just p = 1.

Corollary 4.6. Let ρ be the ReLU activation function. Let p ∈ [1, ∞). Then NN^ρ_{n,m,n+m+1} is dense in L^p(R^n; R^m).

Moving on to polynomial activation functions, the smaller width of n + m + 1 also suffices for a large class of polynomials.

Theorem 4.8. Let ρ : R → R be any polynomial for which there exists a point α ∈ R such that ρ′(α) = 0 and ρ″(α) ≠ 0. Let K ⊆ R^n be compact. Then NN^ρ_{n,m,n+m+1} is dense in C(K; R^m).

The simplest example of such a ρ is x ↦ x². Note that in the classical arbitrary-width case it is both necessary and sufficient that the activation function be nonpolynomial. Here, however, the same restriction does not hold. Polynomial activation functions are a reasonable choice in this context.

The technical restrictions on the polynomial may be lifted by allowing the full n + m + 2 neurons per hidden layer.

Theorem 4.10. Let ρ : R → R be any nonaffine polynomial. Let K ⊆ R^n be compact. Then NN^ρ_{n,m,n+m+2} is dense in C(K; R^m).

It is clear that Theorems 4.4 and 4.10 together imply Theorem 3.2.

Finally we observe that even pathological cases not satisfying the technical condition of Theorem 3.2 may exhibit the universal approximation property.

Proposition 4.13. Let w : R → R be any bounded continuous nowhere differentiable function. Let ρ(x) = sin(x) + w(x)e^{−x}, which will also be nowhere differentiable. Let K ⊆ R^n be compact. Then NN^ρ_{n,m,n+m+1} is dense in C(K; R^m).

Whilst not of direct practical application, this result exemplifies that little necessarily needs to be assumed about an activation function to understand the corresponding class of neural networks.

Remark 3.3. Every proof in this article is constructive, and can in principle be traced so as to determine how depth changes with approximation error. We have instead chosen to focus on quantifying the width necessary for universal approximation. In fact there are places in our arguments where we have used a deeper network over a shallower one, when the deeper network is more easily explained.

Remark 3.4. An understanding of universal approximation in deep narrow networks is applicable to an understanding of bottlenecks, when information must be discarded due to space constraints, for example in autoencoders (Bengio et al., 2006). This article demonstrates that certain narrow networks will not constitute a bottleneck; a converse example is Johnson (2019), who demonstrates that networks of insufficient width are forced to maintain certain topological invariants.
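The hypothesis of Theorem 4.8 is easy to test numerically for a given polynomial: find a root α of ρ′ at which ρ″ does not vanish. A small sketch of such a check (our code, not the paper's); note that ρ(x) = x³ fails the test, since its only critical point is degenerate, although Theorem 4.10 still covers it.

```python
# Check the hypothesis of Theorem 4.8: find alpha with rho'(alpha) = 0
# and rho''(alpha) != 0, for a polynomial given by its coefficients.
import numpy as np
from numpy.polynomial import Polynomial

def theorem_48_point(rho: Polynomial):
    d1, d2 = rho.deriv(1), rho.deriv(2)
    for alpha in d1.roots():
        if np.isreal(alpha) and not np.isclose(d2(alpha.real), 0.0):
            return alpha.real
    return None  # hypothesis not satisfied

print(theorem_48_point(Polynomial([0, 0, 1])))      # rho(x) = x^2      -> 0.0
print(theorem_48_point(Polynomial([0, -3, 0, 1])))  # rho(x) = x^3 - 3x -> -1.0 (either of +-1 works)
print(theorem_48_point(Polynomial([0, 0, 0, 1])))   # rho(x) = x^3      -> None
```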
4 UNIVERSAL APPROXIMATION

4.1 PRELIMINARIES

Remark 4.1. A neuron is usually defined as an activation function composed with an affine function. For ease, we shall extend the definition of a neuron to allow it to represent a function of the form ψ ∘ ρ ∘ φ, where φ and ψ are affine functions, and ρ is the activation function. This does not increase the representational power of the network, as the new affine functions may be absorbed into the affine parts of the next layer, but it will make the neural representation of many functions easier to present. We refer to these as enhanced neurons. It is similarly allowable to take affine combinations of multiple enhanced neurons; we will use this fact as well.

One of the key ideas behind our constructions is that most reasonable activation functions can be taken to approximate the identity function. Indeed, this is essentially the notion that differentiability captures: that a function is locally affine. This makes it possible to treat neurons as 'registers', in which information may be stored and preserved through the layers. This allows for preserving the input values between layers, which is crucial to performing computations in a memory-bounded regime. Thus our constructions have strong overtones of space-limited algorithm design in traditional computer science settings.

Lemma 4.2. Let ρ : R → R be any continuous function which is continuously differentiable at at least one point, with nonzero derivative at that point. Let L ⊆ R be compact. Then a single enhanced neuron with activation function ρ may uniformly approximate the identity function ι : R → R on L, with arbitrarily small error.

Proof. By assumption, there exists [a, b] ⊆ R with a ≠ b, on some neighbourhood of which ρ is differentiable, and α ∈ (a, b) at which ρ′ is continuous and for which ρ′(α) is nonzero.

For h ∈ R \ {0}, let φ_h(x) = hx + α, and let

ψ_h(x) = (x − ρ(α)) / (h ρ′(α)).

Then ι_h = ψ_h ∘ ρ ∘ φ_h is of the form that an enhanced neuron can represent. For all u ∈ [a, b], by the Mean Value Theorem there exists ξ_u between u and α such that

ρ(u) = ρ(α) + (u − α) ρ′(ξ_u),

and hence

ι_h(x) = (ψ_h ∘ ρ ∘ φ_h)(x) = ψ_h(ρ(α) + hx ρ′(ξ_{hx+α})) = x ρ′(ξ_{hx+α}) / ρ′(α)

for h sufficiently small that φ_h(L) ⊆ [a, b].

Now let ρ′ have modulus of continuity ω on [a, b]. Let ι : R → R represent the identity function. Then for all x ∈ L,

|ι_h(x) − ι(x)| = |x| · |ρ′(ξ_{hx+α}) − ρ′(α)| / |ρ′(α)| ≤ (|x| / |ρ′(α)|) ω(|hx|),

and so ι_h → ι uniformly over L.

Notation. Throughout the rest of this paper ι_h will be used to denote such an approximation to the identity function, where ι_h → ι uniformly as h → 0.

An enhanced neuron may be described as performing (for example) the computation x ↦ ι_h(4x + 3). This is possible as the affine transformation x ↦ 4x + 3 and the affine transformation φ_h (from the description of ι_h) may be combined together into a single affine transformation.
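A quick numerical check of this construction, with ρ the logistic sigmoid, α = 0 and ρ′(0) = 1/4 (our choice of example; any point with a continuous nonzero derivative works):

```python
# Verify Lemma 4.2 numerically: iota_h = psi_h . rho . phi_h approximates
# the identity on the compact set L = [-1, 1] as h -> 0.
import numpy as np

def rho(x):
    return 1.0 / (1.0 + np.exp(-x))   # logistic sigmoid

alpha = 0.0
rho_prime_alpha = 0.25                # sigmoid derivative at 0

def iota(x, h):
    phi = h * x + alpha                                      # phi_h(x) = hx + alpha
    return (rho(phi) - rho(alpha)) / (h * rho_prime_alpha)   # psi_h applied to rho

L = np.linspace(-1.0, 1.0, 2001)
for h in (1e-1, 1e-2, 1e-3):
    print(h, np.max(np.abs(iota(L, h) - L)))  # uniform error shrinks with h
```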
Each iis ofthe form ii, where iandiare affine functions and is the activation function.See Appendix A for the proof.A simplified depiction of the proof of the Register Model is shown in Figure 1, for the special caseofm= 1. It usesnneurons in each layer as registers to preserve the input values. A single neuronin each layer performs a computation based off of the input values, which were preserved in theprevious layer. The remaining neuron in each layer also acts a register, gradually summing up theresults of the computation neurons. The computation neurons may be shown to exist by the classicalUniversal Approximation Theorem.The Register Model is similar to Hanin & Sellke (2017), who have a related construction specific tothe ReLU. The idea of the Register Model may also be thought of as thematically similar to residualnetworks, as in Lin & Jegelka (2018): in both cases the network is almost applying the identitytransformation at each layer, with only a small amount of nonlinearity.Theorem 4.4. Let:R!Rbe any continuous nonpolynomial function which is continuouslydifferentiable at at least one point, with nonzero derivative at that point. Let KRnbe compact.ThenNNn;m;n +m+1is dense inC(K;Rm).Proof. Letf2C(K;Rm)and">0. Set up a neural network as in the Register Model (Proposition4.3), approximating fto within"=2. Every neuron requiring an identity activation function in theRegister Model will instead approximate the identity, in the manner of Lemma 4.2.Uniform continuity preserves uniform convergence, compactness is preserved by continuous func-tions, and a composition of two uniformly convergent sequences of functions with uniformly contin-uous limits is again uniformly convergent. So as a neural network is a layer-by-layer composition offunctions then the new model can be taken within "=2of the Register Model, with respect to kk1inK, by takinghsufficiently small.Remark 4.5. This of course implies approximation in Lp(K;Rm)forp2[1;1). However, whenis the ReLU activation function, then the next corollary shows that in fact the result may begeneralised to unbounded domains.Corollary 4.6. Letbe the ReLU activation function. Let p2[1;1). ThenNNn;m;n +m+1isdense inLp(Rn;Rm).See Appendix B for the proof.Given some f2Lp(Rn;Rm), the essential idea of the proof is to choose a compact set KRnon whichfplaces most of its mass, and find a neural approximation to fonKin the manner ofTheorem 4.4. Once this is done, a cut-off function is applied outside the set, so that the networktakes the value zero in RnnK. The interesting bit is finding a neural representation of such cut-offbehaviour.In particular the ‘obvious’ thing to do – multiply by a cut-off function – does not appear to havea suitable neural representation, as merely approximating the multiplication operation is not neces-sarily enough on an unbounded domain. Instead the strategy is to take a maximum and a minimumwith suitable cut-off functions.5Under review as a conference paper at ICLR 20204.3 P OLYNOMIAL ACTIVATION FUNCTIONSFor the classical Universal Approximation Theorem, it was necessary that the activation function benonpolynomial. However that turns out to be unnecessary here; deep narrow networks are differentto shallow wide networks, and polynomial activations functions are reasonable choices.We begin with the simplest possible nonaffine polynomial, namely (x) =x2.Proposition 4.7 (Square Model) .Let(x) =x2. LetKRnbe compact. ThenNNn;m;n +m+1isdense inC(K;Rm).See Appendix C for the proof. 
4.3 POLYNOMIAL ACTIVATION FUNCTIONS

For the classical Universal Approximation Theorem, it was necessary that the activation function be nonpolynomial. However that turns out to be unnecessary here; deep narrow networks are different to shallow wide networks, and polynomial activation functions are reasonable choices.

We begin with the simplest possible nonaffine polynomial, namely $\rho(x) = x^2$.

Proposition 4.7 (Square Model). Let $\rho(x) = x^2$. Let $K \subseteq \mathbb{R}^n$ be compact. Then $\mathcal{NN}^\rho_{n,m,n+m+1}$ is dense in $C(K; \mathbb{R}^m)$.

See Appendix C for the proof. As might be expected, density is established with the help of the Stone-Weierstrass theorem, reducing the problem to the approximation of arbitrary polynomials.

We remark that it is actually straightforward to find a construction showing that $\mathcal{NN}^\rho_{n,m,n+m+2}$ is dense in $C(K; \mathbb{R}^m)$ when $\rho(x) = x^2$; note the increased width. This is because the square activation function can be used to perform multiplication, via $xy = ((x+y)^2 - (x-y)^2)/4$, and this makes it easy to construct arbitrary polynomials. In fact this is what is done in the proof of Proposition 4.7 for finding $m - 1$ of the $m$ outputs, when there is still a 'spare' neuron in each layer. It is computing the final output that actually requires the bulk of the work. The key to this argument is a width-efficient approximation to division.

It is a consequence of Proposition 4.7 that any (polynomial) activation function which can approximate the square activation function, in a suitable manner, is also capable of universal approximation.

Theorem 4.8. Let $\rho \colon \mathbb{R} \to \mathbb{R}$ be any polynomial for which there exists a point $\alpha \in \mathbb{R}$ such that $\rho'(\alpha) = 0$ and $\rho''(\alpha) \neq 0$. Let $K \subseteq \mathbb{R}^n$ be compact. Then $\mathcal{NN}^\rho_{n,m,n+m+1}$ is dense in $C(K; \mathbb{R}^m)$.

Proof. Let $h \in \mathbb{R} \setminus \{0\}$. Define $\rho_h \colon \mathbb{R} \to \mathbb{R}$ by
$$\rho_h(x) = \frac{\rho(\alpha + hx) - \rho(\alpha)}{h^2 \rho''(\alpha)/2}.$$
Then, taking a Taylor expansion around $\alpha$ and recalling that $\rho'(\alpha) = 0$,
$$\rho_h(x) = \frac{\rho(\alpha) + hx\,\rho'(\alpha) + h^2x^2\rho''(\alpha)/2 + O(h^3x^3) - \rho(\alpha)}{h^2\rho''(\alpha)/2} = x^2 + O(hx^3).$$
Let $s(x) = x^2$. Then $\rho_h \to s$ uniformly over any compact set as $h \to 0$.

Now set up a network as in the Square Model (Proposition 4.7), with every neuron using the square activation function. Call this network $N$. Create a network $N_h$ by copying $N$ and giving every neuron in the network the activation function $\rho_h$ instead.

Uniform continuity preserves uniform convergence, compactness is preserved by continuous functions, and a composition of two uniformly convergent sequences of functions with uniformly continuous limits is again uniformly convergent. So as a neural network is a layer-by-layer composition of functions, the difference between $N$ and $N_h$, with respect to $\|\cdot\|_\infty$ on $K$, may be taken arbitrarily small by taking $h$ arbitrarily small.

Furthermore note that $\rho_h$ is just $\rho$ pre- and post-composed with affine functions. (Note that there is only one term in the definition of $\rho_h(x)$ which depends on $x$.) This means that any network which may be represented with activation function $\rho_h$ may be precisely represented with activation function $\rho$, by combining the affine transformations involved.

Remark 4.9. That $\rho$ is polynomial is never really used in the proof of Theorem 4.8. Only a certain amount of differentiability is required, and all such nonpolynomial functions are already covered by Theorem 4.4, as a nonzero second derivative at $\alpha$ implies a nonzero first derivative somewhere close to $\alpha$. Nonetheless in principle this provides another possible construction by which certain networks may be shown to exhibit universal approximation.

Note that the converse strategy (applying nonpolynomial techniques to the polynomial case) fails. This is because the Register Model requires nonpolynomial activation functions, due to its dependence on the classical Universal Approximation Theorem.
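The convergence $\rho_h \to s$ in the proof of Theorem 4.8 can be verified numerically. The sketch below uses an illustrative polynomial of our choosing, $\rho(x) = x^3 - 3x$, which satisfies the hypotheses with $\alpha = 1$ (since $\rho'(1) = 0$ and $\rho''(1) = 6$); here $\rho_h(x) = x^2 + hx^3/3$ exactly, so the sup-norm error scales linearly in $h$.

```python
import numpy as np

rho = lambda x: x**3 - 3*x   # example polynomial: rho'(1) = 0, rho''(1) = 6
alpha, ddrho = 1.0, 6.0      # alpha and rho''(alpha)

def rho_h(x, h):
    # (rho(alpha + h x) - rho(alpha)) / (h^2 rho''(alpha) / 2)  ->  x^2
    return (rho(alpha + h * x) - rho(alpha)) / (h**2 * ddrho / 2)

x = np.linspace(-2.0, 2.0, 401)
for h in [0.1, 0.01, 0.001]:
    err = np.max(np.abs(rho_h(x, h) - x**2))
    print(f"h = {h:g}: sup |rho_h - x^2| = {err:.2e}")   # O(h) decay
```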
[Figure 2: a single layer of width $n+m+1$ computing $\phi_1(\zeta)^2, \dots, \phi_{n+m+1}(\zeta)^2$ is expanded into $n+m+1$ layers of the same width. The first new layer applies the affine maps $\phi_i$; thereafter the $i$th new layer squares its $i$th coordinate while every other neuron applies the identity.]

Figure 2: A layer with square activation functions is equivalent to multiple layers with only a single square activation function in each layer. The other neurons use the identity activation function, denoted $\iota$.

Theorem 4.10. Let $\rho \colon \mathbb{R} \to \mathbb{R}$ be any nonaffine polynomial. Let $K \subseteq \mathbb{R}^n$ be compact. Then $\mathcal{NN}^\rho_{n,m,n+m+2}$ is dense in $C(K; \mathbb{R}^m)$.

Proof. Fix $\alpha \in \mathbb{R}$ such that $\rho''(\alpha) \neq 0$, which exists as $\rho$ is nonaffine. Now let $h \in (0, 1)$. Define $\rho_h \colon \mathbb{R} \to \mathbb{R}$ by
$$\rho_h(x) = \frac{\rho(\alpha + hx) - 2\rho(\alpha) + \rho(\alpha - hx)}{h^2 \rho''(\alpha)}.$$
Then Taylor expanding $\rho(\alpha + hx)$ and $\rho(\alpha - hx)$ around $\alpha$, the first-order terms cancel and
$$\rho_h(x) = \frac{\bigl(\rho(\alpha) + hx\,\rho'(\alpha) + \tfrac{h^2x^2}{2}\rho''(\alpha)\bigr) - 2\rho(\alpha) + \bigl(\rho(\alpha) - hx\,\rho'(\alpha) + \tfrac{h^2x^2}{2}\rho''(\alpha)\bigr) + O(h^3x^3)}{h^2\rho''(\alpha)} = x^2 + O(hx^3).$$
Observe that $\rho_h$ needs precisely two applications of $\rho$ to (affine transformations of) $x$, and so may be computed by two enhanced neurons with activation function $\rho$. Thus the operation of a single enhanced neuron with square activation function may be approximated by two enhanced neurons with activation function $\rho$.

Let $N$ be a network as in the Square Model (Proposition 4.7) with every neuron using the square activation function. Let $\ell$ be any hidden layer of $N$; it contains $n+m+1$ neurons. Let $\zeta$ be a vector of the values of the neurons of the previous layer. Let $\phi_i$ be the affine part of the $i$th neuron of $\ell$, so that $\ell$ computes $\phi_1(\zeta)^2, \dots, \phi_{n+m+1}(\zeta)^2$. Then this may equivalently be calculated with $n+m+1$ layers of $n+m+1$ neurons each, with $n+m$ of the neurons in each of these new layers using the identity function, and one neuron using the square activation function. The first of these new layers applies the $\phi_i$, and the $i$th layer squares the value of the $i$th neuron. See Figure 2.

Apply this procedure to every layer of $N$; call the resulting network $\widetilde{N}$. It will compute exactly the same function as $N$, and will have $n+m+1$ times as many layers, but will use only a single squaring operation in each layer.

Create a copy of $\widetilde{N}$, call it $\widetilde{N}_h$. Replace its identity activation functions with approximations in the manner of Lemma 4.2, using activation function $\rho$. Replace its square activation functions (one in each layer) by approximations in the manner described above with $\rho_h$; this requires an extra neuron in each hidden layer, so that the network is now of width $n+m+2$. Thus $\widetilde{N}_h$ uses the activation function $\rho$ throughout.

Uniform continuity preserves uniform convergence, compactness is preserved by continuous functions, and a composition of two uniformly convergent sequences of functions with uniformly continuous limits is again uniformly convergent. So as a neural network is a layer-by-layer composition of functions, the difference between $\widetilde{N}_h$ and $\widetilde{N}$, with respect to $\|\cdot\|_\infty$ on $K$, may be taken arbitrarily small by taking $h$ arbitrarily small.

Remark 4.11. It is possible to construct shallower networks analogous to $\widetilde{N}$. The proof of Proposition 4.7 in Appendix C uses most of the network's neurons to approximate the identity anyway; only a few in each layer are used to square a value that is desired to be squared. These are the only neurons that actually require the procedure used in Figure 2 and the proof of Theorem 4.10.
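The layer-expansion step of the proof (Figure 2) is mechanical but easy to get wrong, so a quick sanity check is worthwhile. The toy sketch below (width $k = 4$ and a random affine map are placeholders) confirms that squaring all $k$ coordinates at once agrees exactly with squaring one coordinate per layer while the remaining neurons pass values through the identity.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 4                                 # layer width n + m + 1 (placeholder)
W, b = rng.normal(size=(k, k)), rng.normal(size=k)

def square_layer(z):
    return (W @ z + b) ** 2           # all k neurons square at once

def expanded_layers(z):
    z = W @ z + b                     # first new layer applies the affine maps
    for i in range(k):                # layer i squares coordinate i only;
        z[i] = z[i] ** 2              # identity activation elsewhere
    return z

z = rng.normal(size=k)
print(np.allclose(square_layer(z), expanded_layers(z)))  # True
```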
4.4 NONDIFFERENTIABLE ACTIVATION FUNCTIONS

Although not of direct practical application, results for nondifferentiable activation functions demonstrate how certain pathological cases are still capable of being handled.

Lemma 4.12. Let $w \colon \mathbb{R} \to \mathbb{R}$ be any bounded continuous nowhere differentiable function. Let $\rho(x) = \sin(x) + w(x)e^{-x}$. Let $L \subseteq \mathbb{R}$ be compact. Then a single enhanced neuron with activation function $\rho$ may uniformly approximate the identity function $\iota \colon \mathbb{R} \to \mathbb{R}$ on $L$, with arbitrarily small error.

Proof. For $h \in \mathbb{R} \setminus \{0\}$ and $A \in 2\pi\mathbb{N}$, let $\phi_{h,A}(x) = hx + A$, and let $\psi(x) = x/h$. Let $\iota_{h,A} = \psi \circ \rho \circ \phi_{h,A}$, which is of the form that an enhanced neuron can represent. Then jointly taking $h$ small enough and $A$ large enough, it is clear that $\iota_{h,A}$ may be taken uniformly close to $\iota$ on $L$: the sine term contributes $\sin(hx)/h \approx x$, while the term $w(hx + A)e^{-(hx+A)}/h$ is crushed by the exponential decay.

Proposition 4.13. Let $w \colon \mathbb{R} \to \mathbb{R}$ be any bounded continuous nowhere differentiable function. Let $\rho(x) = \sin(x) + w(x)e^{-x}$, which will also be nowhere differentiable. Let $K \subseteq \mathbb{R}^n$ be compact. Then $\mathcal{NN}^\rho_{n,m,n+m+1}$ is dense in $C(K; \mathbb{R}^m)$.

Proof. As the proof of Theorem 4.4, except substituting Lemma 4.12 for Lemma 4.2.

This manner of proof may be extended to other nondifferentiable activation functions as well.

5 CONCLUSION

There is a large literature on theoretical properties of neural networks, but much of it deals only with the ReLU.¹ However how to select an activation function remains a poorly understood topic, and many other options have been proposed: leaky ReLU, PReLU, RReLU, ELU, SELU and other more exotic activation functions as well.²

Our central contribution is to provide results for universal approximation using general activation functions (Theorems 3.2, 4.4, 4.8 and 4.10). In contrast to previous work, these results do not rely on the nice properties of the ReLU, and in particular do not rely on its explicit description. The techniques we use are straightforward, and robust enough to handle even the pathological case of nondifferentiable activation functions (Proposition 4.13).

We also consider approximation in $L^p$ norm (Remark 4.5), and generalise previous work to smaller widths, multiple output neurons, and $p > 1$ in place of $p = 1$ (Corollary 4.6).

In contrast to much previous work, every result we show also handles the general case of multiple output neurons.

ACKNOWLEDGEMENTS

(Redacted from anonymised submission)

¹ See for example Hanin & Sellke (2017); Petersen & Voigtlaender (2018); Gühring et al.; Daubechies et al. (2019); Arora et al. (2018).
² See Maas et al. (2013); He et al. (2015); Xu et al. (2015); Clevert et al. (2016); Klambauer et al. (2017); Molina et al. (2019); Krizhevsky (2012) respectively.
ByevCWg15H
Official Blind Review #1
6: Weak Accept
- This paper complements the fundamental Universal Approximation Theorem variants.
- It is based on the Register Model, which the authors seem to have developed themselves from scratch; elegant, although non-obvious.
- The proof is straightforward, although the pictorial description could be enhanced. In reality the width neurons are unfolded horizontally into the n+m+1 layers.
- As of now, a function that is merely continuously differentiable at a single point does not seem to have been proven sufficient to build universal single-layer approximator networks in the 1999 paper. It is unclear how the authors prove that part of their theorems.
- The third part of the paper relies on the Stone-Weierstrass theorem and manipulations around the concept of "enhanced neurons", carefully constructed to fit within the n+m+1 and n+m+2 budgets.
- However, the proof relaxing the polynomial constraint in Theorem 4.8 is not entirely clear. While it seems to be a two-stage proof (convergence for the nonlinear function x^2, then convergence of a class of polynomial functions to x^2), it is unclear how the $\rho_h$ neurons can be assembled within the register neuron budget from the initial polynomial function.
- Although inspired by prior work, the authors' contribution is novel, original and important.

Overall, I find this paper highly useful, elegant and, to the extent of my knowledge and understanding, properly proved. My main suggestions to the authors are with regards to clarifying the proof of Theorem 4.8, after which I could increase my score.

Acknowledging the rebuttal: thank you for your clarification and for updating the paper accordingly.
ByxPYjC5KQ
ICLR.cc/2019/Conference
2019
Improving Generalization and Stability of Generative Adversarial Networks
["Hoang Thanh-Tung", "Truyen Tran", "Svetha Venkatesh"]
Generative Adversarial Networks (GANs) are one of the most popular tools for learning complex high dimensional distributions. However, generalization properties of GANs have not been well understood. In this paper, we analyze the generalization of GANs in practical settings. We show that discriminators trained on discrete datasets with the original GAN loss have poor generalization capability and do not approximate the theoretically optimal discriminator. We propose a zero-centered gradient penalty for improving the generalization of the discriminator by pushing it toward the optimal discriminator. The penalty guarantees the generalization and convergence of GANs. Experiments on synthetic and large scale datasets verify our theoretical analysis.
["GAN", "generalization", "gradient penalty", "zero centered", "convergence"]
ABSTRACT

Generative Adversarial Networks (GANs) are one of the most popular tools for learning complex high dimensional distributions. However, generalization properties of GANs have not been well understood. In this paper, we analyze the generalization of GANs in practical settings. We show that discriminators trained on discrete datasets with the original GAN loss have poor generalization capability and do not approximate the theoretically optimal discriminator. We propose a zero-centered gradient penalty for improving the generalization of the discriminator by pushing it toward the optimal discriminator. The penalty guarantees the generalization and convergence of GANs. Experiments on synthetic and large scale datasets verify our theoretical analysis.

1 INTRODUCTION

GANs (Goodfellow et al., 2014) are one of the most popular tools for modeling high dimensional data. The original GAN is, however, highly unstable and often suffers from mode collapse. Much recent research has focused on improving the stability of GANs (Radford et al., 2015; Arjovsky et al., 2017; Heusel et al., 2017; Miyato et al., 2018; Karras et al., 2018). On the theoretical side, Nagarajan & Kolter (2017) proved that gradient based training of the original GAN is locally stable. Heusel et al. (2017) further proved that GANs trained with the Two Timescale Update Rule (TTUR) converge to local equilibria. However, the generalization of GANs at local equilibria is not discussed in depth in these papers.

Arora et al. (2017) showed that the generator can win by remembering a polynomial number of training examples. The result implies that a low capacity discriminator cannot detect the lack of diversity. Therefore, it cannot teach the generator to approximate the target distribution. In Section 4, we discuss the generalization capability of high capacity discriminators. We show that high capacity discriminators trained with the original GAN loss tend to overfit to the mislabeled samples in the training dataset, guiding the generator toward collapsed equilibria (i.e. equilibria where the generator has mode collapse).

Arora et al. (2018) proposed to measure the generalization capability of GANs by estimating the number of modes in the model distribution using the birthday paradox. Experiments on several datasets showed that the number of modes in the model distribution is several times greater than the number of training examples. The authors concluded that although GANs might not be able to learn distributions, they do exhibit some level of generalization. Our analysis shows that poor generalization comes from the mismatch between discriminators trained on discrete finite datasets and the theoretically optimal discriminator. We propose a zero-centered gradient penalty for improving the generalization capability of (high capacity) discriminators. Our zero-centered gradient penalty pushes the discriminator toward the optimal one, making GANs converge to equilibria with good generalization capability.

Our contributions are as follows:

1. We show that discriminators trained with the original GAN loss have poor generalization capability. Poor generalization in the discriminator prevents the generator from learning the target distribution.

2. We show that the original GAN objective encourages gradient exploding in the discriminator. Gradient exploding in the discriminator can lead to mode collapse in the generator.
3. We propose a zero-centered gradient penalty (0-GP) for improving the generalization capability of the discriminator. We show that non-zero centered GPs and the zero-centered GP proposed in Mescheder et al. (2018) cannot make the discriminator generalize. Our 0-GP helps GANs to converge to generalizable equilibria. Theoretical results are verified on real world datasets.

4. We show that 0-GP helps the discriminator to distribute its capacity more equally between regions of the space, effectively preventing mode collapse. Experiments on synthetic and real world datasets verify that 0-GP can prevent mode collapse. GANs with 0-GP are much more robust to changes in hyperparameters, optimizers, and network architectures than the original GAN and GANs with other gradient penalties.

Table 1 compares the key properties of our 0-GP with the one-centered GP (1-GP) (Gulrajani et al., 2017) and the zero-centered GP on real/fake samples only (0-GP-sample) (Mescheder et al., 2018).

NOTATIONS

- $p_r$: the target distribution
- $p_g$: the model distribution
- $p_z$: the noise distribution
- $d_x$: the dimensionality of a data sample (real or fake)
- $d_z$: the dimensionality of a noise sample
- $\mathrm{supp}(p)$: the support of distribution $p$
- $x \sim p_r$: a real sample
- $z \sim p_z$: a noise vector drawn from the noise distribution $p_z$
- $y = G(z)$: a generated sample
- $\mathcal{D}_r = \{x_1, \dots, x_n\}$: the set of $n$ real samples
- $\mathcal{D}_g^{(t)} = \{y_1^{(t)}, \dots, y_m^{(t)}\}$: the set of $m$ generated samples at step $t$
- $\mathcal{D}^{(t)} = \mathcal{D}_r \cup \mathcal{D}_g^{(t)}$: the training dataset at step $t$

2 RELATED WORKS

Gradient penalties are widely used in the GANs literature. There is a plethora of works on using gradient penalties to improve the stability of GANs (Mescheder et al., 2018; Gulrajani et al., 2017; Petzka et al., 2018; Roth et al., 2017; Qi, 2017). However, these works mostly focused on making the training of GANs stable and convergent. Our work aims to improve the generalization capability of GANs via gradient regularization.

Arora et al. (2018) showed that the number of modes in the model distribution grows linearly with the size of the discriminator. The result implies that higher capacity discriminators are needed for better approximation of the target distribution. Zhang et al. (2018) studied the tradeoff between generalization and discrimination in GANs. The authors showed that generalization is guaranteed if the discriminator set is small enough. In practice, rich discriminators are usually used for better discriminative power. Our GP makes rich discriminators generalizable while remaining discriminative.

Although less mode collapse is not exactly the same as generalization, the ability to produce more diverse samples implies better generalization. There are a large number of papers on preventing mode collapse in GANs. Radford et al. (2015); Salimans et al. (2016) introduced a number of empirical tricks to help stabilize GANs. Arjovsky & Bottou (2017) showed the importance of divergences in GAN training, leading to the introduction of Wasserstein GAN (Arjovsky et al., 2017). The use of weak divergences is further explored by Mroueh & Sercu (2017); Mroueh et al. (2018). Lucas et al. (2018) advocated the use of mixed-batches, mini-batches of real and fake data, to smooth out the loss surface. The method exploits the distributional information in a mini-batch to prevent mode collapse.

| GP | Formula | Improve generalization | Prevent grad. exploding | Convergence guarantee |
| Our 0-GP | $\mathbb{E}_{v \in C}[\|(\nabla D)_v\|^2]$, $C$ from $y$ to $x$ | ✓ | ✓ | ✓ |
| 1-GP | $\mathbb{E}_{\tilde{x}}[(\|(\nabla D)_{\tilde{x}}\| - 1)^2]$, where $\tilde{x} = \alpha x + (1-\alpha)y$ | ✗ | ✓ | ✗ |
| 0-GP-sample | $\mathbb{E}_{v \in \mathcal{D}}[\|(\nabla D)_v\|^2]$ | ✗ | ✗ | ✓ |

Table 1: Summary of different gradient penalties
VEEGAN (Srivastava et al., 2017) uses an inverse of the generator to map the data to the prior distribution. The mismatch between the inverse mapping and the prior is used to detect mode collapse. If the generator can remember the entire training set, then the inverse mapping can be arbitrarily close to the prior distribution. It suggests that VEEGAN might not be able to help GANs to generalize outside of the training dataset. Our method helps GANs to discover unseen regions of the target distribution, significantly improving the diversity of generated samples.

3 BACKGROUND

In the original GAN, the discriminator $D$ maximizes the following objective
$$L = \mathbb{E}_{x \sim p_r}[\log(D(x))] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))] \qquad (1)$$

Goodfellow et al. (2014) showed that if the density functions $p_g$ and $p_r$ are known, then for a fixed generator $G$ the optimal discriminator is
$$D^*(v) = \frac{p_r(v)}{p_r(v) + p_g(v)}, \quad \forall v \in \mathrm{supp}(p_r) \cup \mathrm{supp}(p_g) \qquad (2)$$

In the beginning of the training, $p_g$ is very different from $p_r$ so we have $p_r(x) \gg p_g(x)$ for $x \in \mathcal{D}_r$, and $p_g(y) \gg p_r(y)$ for $y \in \mathcal{D}_g$. Therefore, in the beginning of the training $D^*(x) \approx 1$ for $x \in \mathcal{D}_r$, and $D^*(y) \approx 0$ for $y \in \mathcal{D}_g$. As the training progresses, the generator will bring $p_g$ closer to $p_r$. The game reaches the global equilibrium when $p_r = p_g$. At the global equilibrium, $D^*(v) = \frac{1}{2}$, $\forall v \in \mathrm{supp}(p_r) \cup \mathrm{supp}(p_g)$. One important result of the original paper is that, if the discriminator is optimal at every step of the GAN algorithm, then $p_g$ converges to $p_r$.

In practice, density functions are not known and the optimal discriminator is approximated by optimizing the classification performance of a parametric discriminator $D(\cdot; \theta_D)$ on a discrete finite dataset $\mathcal{D} = \mathcal{D}_r \cup \mathcal{D}_g$. We call a discriminator trained on a discrete finite dataset an empirical discriminator. The empirically optimal discriminator is denoted by $\hat{D}^*$.

Arora et al. (2017) defined the generalization of a divergence $d$ as follows: a divergence $d$ is said to have generalization error $\epsilon$ if
$$|d(\mathcal{D}_g, \mathcal{D}_r) - d(p_g, p_r)| \leq \epsilon \qquad (3)$$

A discriminator $D$ defines a divergence between two distributions. The performance of a discriminator with good generalization capability on the training dataset should be similar to that on the entire data space. In practice, the generalization capability of $D$ can be estimated by measuring the difference between its performance on the training dataset and a held-out dataset.

4 GENERALIZATION CAPABILITY OF DISCRIMINATORS

4.1 THE EMPIRICALLY OPTIMAL DISCRIMINATOR DOES NOT APPROXIMATE THE THEORETICALLY OPTIMAL DISCRIMINATOR

It has been observed that if the discriminator is too good at discriminating real and fake samples, the generator cannot learn effectively (Goodfellow et al., 2014; Arjovsky & Bottou, 2017). The phenomenon suggests that $\hat{D}^*$ does not well approximate $D^*$, and does not guarantee the convergence of $p_g$ to $p_r$. In the following, we clarify the mismatch between $\hat{D}^*$ and $D^*$, and its implications.

[Figure 1: six heatmaps of discriminator value surfaces over a 2D domain.]

Figure 1: Value surfaces of discriminators trained for 10,000 iterations with different gradient penalties, on samples from two Gaussian distributions. The discriminator is a 2 hidden layer MLP with 64 hidden neurons. (a) No GP. (b) No GP with more samples. (c) One-centered GP (1-GP) with $\lambda = 1$. (d) Zero-centered GP on real/fake samples only (0-GP-sample) with $\lambda = 1$. (e) Our zero-centered GP with $\lambda = 1$. (f) Theoretically optimal discriminator computed using Eqn. 2.
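For concreteness, the discriminator objective in Eqn. 1, which is the loss analyzed throughout the next section, can be written in a few lines of PyTorch. This is a minimal sketch, not the authors' implementation; D and G are assumed to be arbitrary torch.nn.Module discriminator/generator, with D producing sigmoid outputs in (0, 1).

```python
import torch

def discriminator_loss(D, G, x, z, eps=1e-8):
    """Negative of Eqn. 1, so that minimizing it maximizes L."""
    d_real = D(x)                # D(x) in (0, 1); sigmoid output assumed
    d_fake = D(G(z).detach())    # detach: the generator is held fixed here
    return -(torch.log(d_real + eps).mean()
             + torch.log(1.0 - d_fake + eps).mean())
```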
Proposition 1. The two datasets $\mathcal{D}_r$ and $\mathcal{D}_g^{(t)}$ are disjoint with probability 1, regardless of how close the two distributions $p_r$ and $p_g^{(t)}$ are.

Proof. See Appendix A.

$\mathcal{D}_r$ and $\mathcal{D}_g^{(t)}$ are disjoint with probability 1 even when $p_g$ and $p_r$ are exactly the same. $\hat{D}^*$ perfectly classifies the real and the fake datasets, and $\hat{D}^*(x) = 1$, $\forall x \in \mathcal{D}_r$, $\hat{D}^*(y) = 0$, $\forall y \in \mathcal{D}_g^{(t)}$. The value of $\hat{D}^*$ on $\mathcal{D}^{(t)}$ does not depend on the distance between the two distributions and does not reflect the learning progress. The value of $\hat{D}^*$ on the training dataset approximates that of $D^*$ in the beginning of the learning process but not when the two distributions are close. When trained using gradient descent on a discrete finite dataset with the loss in Eqn. 1, the discriminator $D$ is pushed toward $\hat{D}^*$, not $D^*$. This behavior does not depend on the size of the training set (see Fig. 1a, 1b), implying that the original GAN is not guaranteed to converge to the target distribution even when given enough data.

4.2 EMPIRICAL DISCRIMINATORS HAVE POOR GENERALIZATION CAPABILITY

When the generator gets better, generated samples are more similar to samples from the target distribution. However, regardless of their quality, generated samples are still labeled as fake in Eqn. 1. The training dataset $\mathcal{D}$ is a bad dataset as it contains many mislabeled examples. A discriminator trained on such a dataset will overfit to the mislabeled examples and has poor generalization capability. It will misclassify unseen samples and cannot teach the generator to generate these samples.

Figures 1a and 1b demonstrate the problem on a synthetic dataset consisting of samples from two Gaussian distributions. The discriminator in Fig. 1a overfits to the small dataset and does not generalize to new samples in Fig. 1b. Although the discriminator in Fig. 1b was trained on a larger dataset which is sufficient to characterize the two distributions, it still overfits to the data and its value surface is very different from that of the theoretically optimal discriminator in Fig. 1f.

An overfitted discriminator does not guide the model distribution toward the target distribution but toward the real samples in the dataset. This explains why the original GAN usually exhibits mode collapse behavior. Finding the empirically optimal discriminator using gradient descent usually requires many iterations. Heuristically, overfitting can be alleviated by limiting the number of discriminator updates per generator update. Goodfellow et al. (2014) recommended updating the discriminator once every generator update. In the next subsection, we show that limiting the number of discriminator updates per generator update prevents the discriminator from overfitting.
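The overfitting behaviour described above is cheap to reproduce: label two samples drawn from the same distribution as 'real' and 'fake' and fit a flexible classifier. The sketch below is a rough illustration under assumed settings (the classifier, sample sizes, and architecture are placeholders); any sufficiently rich model shows the same gap between near-perfect training accuracy and near-chance held-out accuracy.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# "Real" and "fake" sets drawn from the *same* distribution: p_r = p_g.
make = lambda n: rng.normal(size=(n, 2))
X_tr = np.vstack([make(100), make(100)])
y_tr = np.array([1] * 100 + [0] * 100)           # the labels are pure noise
X_te = np.vstack([make(1000), make(1000)])
y_te = np.array([1] * 1000 + [0] * 1000)

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=5000).fit(X_tr, y_tr)
print("train acc:", clf.score(X_tr, y_tr))       # near 1.0: memorization
print("held-out acc:", clf.score(X_te, y_te))    # near 0.5: no generalization
```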
4.2.1 ε-OPTIMAL DISCRIMINATORS

$\hat{D}^*$ is costly to find and maintain. We consider here a weaker notion of optimality which can be achieved in practical settings.

Definition 1 (ε-optimal discriminator). Given two disjoint datasets $\mathcal{D}_r$ and $\mathcal{D}_g$, and a number $\epsilon > 0$, a discriminator $D$ is ε-optimal if
$$D(x) \geq \frac{1}{2} + \frac{\epsilon}{2}, \ \forall x \in \mathcal{D}_r; \qquad D(y) \leq \frac{1}{2} - \frac{\epsilon}{2}, \ \forall y \in \mathcal{D}_g$$

As observed in Goodfellow et al. (2014), $\hat{D}^*$ does not generate usable gradients for the generator. Goodfellow et al. proposed the non-saturating loss for the generator to circumvent this vanishing gradient problem. For an ε-optimal discriminator, if $\epsilon$ is relatively small, then the gradient of the discriminator w.r.t. fake datapoints might not vanish and can be used to guide the model distribution toward the target distribution.

Proposition 2. Given two disjoint datasets $\mathcal{D}_r$ and $\mathcal{D}_g$, and a number $\epsilon > 0$, an ε-optimal discriminator $D$ exists and can be constructed as a one hidden layer MLP with $O(d_x(m + n))$ parameters.

Proof. See Appendix B.

Because deep networks are more powerful than shallow ones, the size of a deep ε-optimal discriminator can be much smaller than $O(d_x(m + n))$. From the formula, the size of a shallow ε-optimal discriminator for real world datasets ranges from a few to hundreds of millions of parameters. That is comparable to the size of discriminators used in practice. Arjovsky & Bottou (2017) showed that even when the generator can generate realistic samples, a discriminator that can perfectly classify real and fake samples can be found easily using gradient descent. The experiment verified that ε-optimal discriminators can be found using gradient descent in practical settings.

We observe that the norm of the gradient w.r.t. the discriminator's parameters decreases as fake samples approach real samples. If the discriminator's learning rate is fixed, then the number of gradient descent steps that the discriminator has to take to reach an ε-optimal state should increase.

Proposition 3. Alternating gradient descent with the same learning rate for discriminator and generator, and a fixed number of discriminator updates per generator update (Fixed-Alt-GD), cannot maintain the (empirical) optimality of the discriminator.

Fixed-Alt-GD decreases the discriminative power of the discriminator to improve its generalization capability. The proof for the linear case is given in Appendix C.

In GANs trained with the Two Timescale Update Rule (TTUR) (Heusel et al., 2017), the ratio between the learning rate of the discriminator and that of the generator goes to infinity as the iteration number goes to infinity. Therefore, the discriminator can learn much faster than the generator and might be able to maintain its optimality throughout the learning process.

4.2.2 GRADIENT EXPLODING IN ε-OPTIMAL DISCRIMINATORS

Let's consider a simplified scenario where the real and the fake datasets each contain a single datapoint: $\mathcal{D}_r = \{x\}$, $\mathcal{D}_g^{(t)} = \{y^{(t)}\}$. Updating the generator according to the gradient from the discriminator will push $y^{(t)}$ toward $x$. The absolute value of the directional derivative of $D$ in the direction $u = x - y^{(t)}$, at $x$, is
$$|(\nabla_u D)_x| = \lim_{y^{(t)} \overset{u}{\to} x} \frac{D(x) - D(y^{(t)})}{\|x - y^{(t)}\|}$$
If $D$ is always ε-optimal, then $D(x) - D(y^{(t)}) \geq \epsilon$, $\forall t \in \mathbb{N}$, and
$$|(\nabla_u D)_x| \geq \lim_{y^{(t)} \overset{u}{\to} x} \frac{\epsilon}{\|x - y^{(t)}\|} = \infty$$

The directional derivative of the ε-optimal discriminator explodes as the fake datapoint approaches the real datapoint. Directional derivative exploding implies gradient exploding at datapoints on the line segment connecting $x$ and $y^{(t)}$.
If in the next iteration the generator produces a sample in a region where the gradient explodes, then the gradient w.r.t. the generator's parameters explodes.

Let's consider the following line integral
$$\int_C (\nabla D)_v \cdot ds = D(x) - D(y^{(t)}) \qquad (4)$$
where $C$ is the line segment from $y^{(t)}$ to $x$. As the model distribution gets closer to the target distribution, the length of $C$ should be non-increasing. Therefore, maximizing $D(x) - D(y^{(t)})$, i.e. the discriminative power of $D$, leads to the maximization of the directional derivative of $D$ in the direction $ds$. The original GAN loss makes $D$ maximize its discriminative power, encouraging gradient exploding to occur.

Gradient exploding happens in the discriminator trained with TTUR in Fig. 2c and 2d. Because TTUR can help the discriminator maintain its optimality, gradient exploding happens and persists throughout the training process. Without TTUR, the discriminator cannot maintain its optimality, so gradient exploding can happen sometimes during training but does not persist (Fig. 2a and 2b).

Because of the saturated regions in the sigmoid function used in neural-network-based discriminators, the gradient w.r.t. datapoints in the training set could vanish. However, gradient exploding must happen at some datapoints on the path between a pair of samples, where the sigmoid function does not saturate. In Fig. 1a, gradient exploding happens near the decision boundary.

In practice, $\mathcal{D}_r$ and $\mathcal{D}_g$ contain many datapoints and the generator is updated using the average of the gradients of the discriminator w.r.t. the fake datapoints in the mini-batch. If a fake datapoint $y_0$ is very close to a real datapoint $x_0$, the gradient $(\nabla D)_{y_0}$ might explode. When the average gradient is computed over the mini-batch, $(\nabla D)_{y_0}$ outweighs the other gradients. The generator updated with this average gradient will move many fake datapoints in the direction of $(\nabla D)_{y_0}$, toward $x_0$, making mode collapse visible.

5 IMPROVING GENERALIZATION CAPABILITY OF EMPIRICAL DISCRIMINATORS

Although the theoretically optimal discriminator $D^*$ is generalizable, the original GAN loss does not push empirical discriminators toward $D^*$. We aim to improve the generalization capability of empirical discriminators by pushing them toward $D^*$.

5.1 PUSHING EMPIRICAL DISCRIMINATORS TOWARD $D^*$

For any input $v \in \text{supp}(p_r) \cup \text{supp}(p_g)$, the value of $D^*(v)$ goes to $\frac{1}{2}$ and the gradient $(\nabla D^*)_v$ goes to $\mathbf{0}$ as $p_g$ approaches $p_r$. Consider again the line integral in Eqn. 4. As $D^*(x)$ and $D^*(y)$ approach $\frac{1}{2}$ for all $x \in \text{supp}(p_r)$ and $y \in \text{supp}(p_g)$, we have
$$D^*(x) - D^*(y) = \int_C (\nabla D^*)_v \cdot ds \to 0 \qquad (5)$$
for all pairs of $x$ and $y$ and all paths $C$ from $y$ to $x$. That means the discriminative power of $D^*$ must decrease as the two distributions become more similar.

To push an empirical discriminator $D$ toward $D^*$, we force $D$ to satisfy two requirements:
1. $(\nabla D)_v \to \mathbf{0}, \;\forall v \in \text{supp}(p_r) \cup \text{supp}(p_g)$
2. $D(x) - D(y) = \int_C (\nabla D)_v \cdot ds \to 0, \;\forall x \sim p_r, \; y \sim p_g, \; C$ from $y$ to $x$

5.2 ZERO-CENTERED GRADIENT PENALTY

The first requirement can be implemented by sampling some datapoints $v \in \text{supp}(p_r) \cup \text{supp}(p_g)$ and forcing $(\nabla D)_v$ to be $\mathbf{0}$. The second requirement can be implemented by sampling pairs of real and fake datapoints $(x, y)$ and forcing $D(x) - D(y)$ to be 0. The two requirements can be added to the discriminator's objective as follows:
$$\hat{L} = L - \lambda_1 \mathbb{E}_v\left[\|(\nabla D)_v\|^2\right] - \lambda_2 \mathbb{E}_{x,y}\left[(D(x) - D(y))^2\right]$$
where $L$ is the objective in Eqn. 1. However, as discussed in section 4.2.2, an $\epsilon$-optimal discriminator can have zero gradient on the training dataset and gradient exploding outside of the training dataset.
The gradient norm could go to infinity even when $D(x) - D(y)$ is small. Regulating the difference between $D(x)$ and $D(y)$ is therefore not an efficient way to prevent gradient exploding.

We want to prevent gradient exploding on every path in $\text{supp}(p_r) \cup \text{supp}(p_g)$. Because $(\nabla D^*)_v \to \mathbf{0}$ for all $v \in \text{supp}(p_r) \cup \text{supp}(p_g)$ as $p_g$ approaches $p_r$, we could push the gradient w.r.t. every datapoint on every path $C \subset \text{supp}(p_r) \cup \text{supp}(p_g)$ toward $\mathbf{0}$. We note that if $(\nabla D)_v \to \mathbf{0}, \;\forall v \in C$, then $\int_C (\nabla D)_v \cdot ds \to 0$. Therefore, the two requirements can be enforced by a single zero-centered gradient penalty of the form
$$\mathbb{E}_{v \in C}\left[\|(\nabla D)_v\|^2\right].$$
The remaining problem is how to find a path $C$ from a fake to a real sample which lies inside $\text{supp}(p_r) \cup \text{supp}(p_g)$. Because we do not have access to the full supports of $p_r$ and $p_g$, and the supports of the two distributions could be disjoint at the beginning of the training process, finding a path which lies completely inside the support is infeasible.

In the current implementation, we approximate $C$ with the straight line connecting a pair of samples, although there is no guarantee that all datapoints on that straight line are in $\text{supp}(p_r) \cup \text{supp}(p_g)$. That results in the following objective
$$L_{0\text{-}GP} = L - \lambda \,\mathbb{E}_{\tilde{x}}\left[\|(\nabla D)_{\tilde{x}}\|^2\right] \qquad (6)$$
where $\tilde{x} = \alpha x + (1 - \alpha)y$, $x \sim p_r$, $y \sim p_g$, and $\alpha \sim U(0, 1)$.^1 We describe a more sophisticated way of finding a better path in appendix F.

^1 Wu et al. (2018) independently proposed the Wasserstein divergence for WGAN, which uses a gradient penalty of similar form. Although the two penalties have a similar approximate form, they have different motivations and address different problems in GANs.

The larger $\lambda$ is, the more strongly $(\nabla D)_{\tilde{x}}$ is pushed toward $\mathbf{0}$. If $\lambda$ is 0, then the discriminator will only focus on maximizing its discriminative power. If $\lambda$ approaches infinity, then the discriminator has maximum generalization capability and no discriminative power. $\lambda$ controls the tradeoff between discrimination and generalization in the discriminator.
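To make Eqn. 6 concrete, here is a minimal PyTorch sketch of the penalty term. It is our own illustration rather than the authors' released code; the helper name `zero_centered_gp` and the assumption of flattened `(batch, features)` inputs are ours.

```python
import torch

def zero_centered_gp(D, real, fake):
    """Zero-centered gradient penalty of Eqn. 6, estimated at random points
    x~ = a*x + (1-a)*y on the segments between paired real/fake samples.
    Assumes `real` and `fake` are (batch, features) tensors."""
    a = torch.rand(real.size(0), 1, device=real.device)  # one alpha per pair
    x_tilde = (a * real + (1 - a) * fake).detach().requires_grad_(True)
    grads = torch.autograd.grad(outputs=D(x_tilde).sum(), inputs=x_tilde,
                                create_graph=True)[0]    # dD/dx~ per sample
    return grads.pow(2).sum(dim=1).mean()                # E[||grad||^2]

# Discriminator step: maximize L of Eqn. 1 minus lam * penalty, i.e. minimize
#   loss_d = -(torch.log(D(real)) + torch.log(1 - D(fake))).mean() \
#            + lam * zero_centered_gp(D, real, fake)
```

`create_graph=True` keeps the penalty differentiable with respect to the discriminator's parameters, so it can be minimized jointly with the classification loss.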
5.3 GENERALIZATION CAPABILITY OF DIFFERENT GRADIENT PENALTIES

Mescheder et al. (2018) proposed to force the gradient w.r.t. datapoints in the real and/or fake dataset(s) to be $\mathbf{0}$ to make the training of GANs convergent. In section 4, we showed that for a discrete training dataset, an empirically optimal discriminator $\hat{D}$ always exists and can be found by gradient descent. Although $(\nabla \hat{D})_v = \mathbf{0}, \;\forall v \in \mathcal{D}$, $\hat{D}$ does not satisfy the requirement in Eqn. 5 and has gradient exploding when some fake datapoints approach a real datapoint. The discriminators in Fig. 1a, 1b, 1d, 2c and 2d have vanishingly small gradients on datapoints in the training dataset and very large gradients outside. They have poor generalization capability and cannot teach the generator to generate unseen real datapoints. Therefore, a zero-centered gradient penalty on samples from $p_r$ and $p_g$ only cannot help improve the generalization of the discriminator.

Non-zero-centered GPs do not push an empirical discriminator toward $D^*$ because the gradient does not converge to $\mathbf{0}$. A commonly used non-zero-centered GP is the one-centered GP (1-GP) (Gulrajani et al., 2017), which has the following form
$$\mathbb{E}_{\tilde{x}}\left[(\|(\nabla D)_{\tilde{x}}\| - 1)^2\right] \qquad (7)$$
where $\tilde{x} = \alpha x + (1 - \alpha)y$, $x \sim p_r$, $y \sim p_g$, and $\alpha \sim U(0, 1)$. Although the initial goal of 1-GP was to enforce a Lipschitz constraint on the discriminator,^2 Fedus et al. (2018) found that 1-GP prevents gradient exploding, making the original GAN more stable. 1-GP forces the norms of the gradients w.r.t. datapoints on the line segment connecting $x$ and $y$ to be 1. If all gradients on the line segment have norm 1, then the line integral in Eqn. 4 could be as large as $\|x - y\|$. Because the distance between random samples grows with the dimensionality, in a high-dimensional space $\|x - y\|$ is greater than 1 with high probability. The discriminator could maximize the value of the line integral without violating the Lipschitz constraint. The discriminator trained with 1-GP, therefore, can overfit to the training data and have poor generalization capability.

^2 Petzka et al. (2018) pointed out that 1-GP is based on the wrong intuition that the gradient of the optimal critic must be 1 everywhere under $p_r$ and $p_g$. The corrected GP is based on the definition of Lipschitzness.

5.4 CONVERGENCE ANALYSIS FOR ZERO-CENTERED GRADIENT PENALTY

Mescheder et al. (2018) showed that a zero-centered GP on real and/or fake samples (0-GP-sample) makes GANs convergent. The penalty is based on the convergence analysis for the Dirac GAN, a 1-dimensional linear GAN which learns the Dirac distribution. The intuition is that when $p_g$ is the same as $p_r$, the gradient of the discriminator w.r.t. the fake datapoints (which are also real datapoints) should be $\mathbf{0}$ so that the generator will not move away when being updated using this gradient. If the gradient from the discriminator is not $\mathbf{0}$, then the generator will oscillate around the equilibrium.

Our GP forces the gradient w.r.t. all datapoints on the line segment between a pair of samples (including the two endpoints) to be $\mathbf{0}$. As a result, our GP also prevents the generator from oscillating. Therefore, our GP has the same convergence guarantee as 0-GP-sample.

5.5 ZERO-CENTERED GRADIENT PENALTY IMPROVES CAPACITY DISTRIBUTION

Discriminators trained with the original GAN loss tend to focus on the regions where fake samples are close to real samples, ignoring other regions. The phenomenon can be seen in Fig. 2a, 2b, 2c, 2d, 2h and 2i. Gradients in the region where fake samples are concentrated are large, while gradients in other regions, including regions where real samples are located, are very small. The generator cannot discover and generate real datapoints in regions where the gradient vanishes.

When trained with the objective in Eqn. 6, the discriminator has to balance between maximizing $L$ and minimizing the GP. For finite $\lambda$, the GP term will not be exactly 0. Let $\gamma = \mathbb{E}_{\tilde{x}}[\|(\nabla D)_{\tilde{x}}\|^2]$. Among discriminators with the same value of $\gamma$, gradient descent will find the discriminator that maximizes $L$. As discussed in section 4.2.2, maximizing $L$ leads to the maximization of the norms of the gradients on the path from $y$ to $x$, i.e. the discriminator should maximize the value $\delta = \mathbb{E}_{\tilde{x}}[\|(\nabla D)_{\tilde{x}}\|]$. If $\gamma$ is fixed, then $\delta$ is maximized when $\|(\nabla D)_{\tilde{x}^{(i)}}\| = \|(\nabla D)_{\tilde{x}^{(j)}}\|, \;\forall i, j$ (Cauchy-Schwarz inequality). Therefore, our zero-centered GP encourages the gradients at different regions of the real data space to have the same norm. The capacity of $D$ is distributed more equally between regions of the real data space, effectively reducing mode collapse. The effect can be seen in Fig. 2e and 2f.

1-GP encourages $|\|(\nabla D)_{\tilde{x}^{(i)}}\| - 1| = |\|(\nabla D)_{\tilde{x}^{(j)}}\| - 1|, \;\forall i, j$. That allows gradient norms to be smaller than 1 in some regions and larger than 1 in other regions. The problem can be seen in Fig. 2h.

6 EXPERIMENTS

The code is made available at https://github.com/htt210/GeneralizationAndStabilityInGANs.

6.1 ZERO-CENTERED GRADIENT PENALTY PREVENTS OVERFITTING

To test the effectiveness of gradient penalties in preventing overfitting, we designed a dataset with real and fake samples coming from two Gaussian distributions and trained an MLP-based discriminator on that dataset. The result is shown in Fig. 1.
As predicted in section 5.3, 0-GP-sample does not help to improve generalization. 1-GP does help to improve generalization: the value surface in Fig. 1c is smoother than that in Fig. 1a. However, as discussed in section 5.3, 1-GP cannot help much in higher-dimensional spaces where the pairwise distances are large. The discriminator trained with our 0-GP has the best generalization capability, with a value surface that is the most similar to that of the theoretically optimal one.

We increased the number of discriminator updates per generator update to 5 to see the effect of GPs in preventing overfitting. On the MNIST dataset, GANs without GP and with the other GPs cannot learn anything after 10,000 iterations. The GAN with our 0-GP can still learn normally and starts to produce recognizable digits after only 1,000 iterations. The result confirms that our GP is effective in preventing overfitting in the discriminator.

6.2 ZERO-CENTERED GRADIENT PENALTY IMPROVES GENERALIZATION AND ROBUSTNESS OF GANS

SYNTHETIC DATA

We tested different gradient penalties on a number of synthetic datasets to compare their effectiveness. The first dataset is a mixture of 8 Gaussians. The dataset is scaled up by a factor of 10 to simulate the situation in a high-dimensional space where random samples are far from each other. The result is shown in Fig. 2. GANs with the other gradient penalties all fail to learn the distribution and exhibit the mode collapse problem to different extents. The GAN with our 0-GP (GAN-0-GP) can successfully learn the distribution. Furthermore, GAN-0-GP can generate datapoints on the circle, demonstrating good generalization capability. The original GAN collapses to some disconnected modes and cannot perform smooth interpolation between modes: a small change in the input results in a large, unpredictable change in the output. The GAN with a zero-centered GP on real/fake samples only also exhibits the same "mode jumping" behavior. The behavior suggests that these GANs tend to remember the training dataset and have poor generalization capability. Fig. 9 in appendix D demonstrates the problem on the MNIST dataset.

We observe that GAN-0-GP behaves similarly to Wasserstein GAN as it first learns the overall structure of the distribution and then focuses on the modes. An evolution sequence of GAN-0-GP is shown in Fig. 5 in appendix D. Results on other synthetic datasets are shown in appendix D.

MNIST DATASET

The result on the MNIST dataset is shown in Fig. 3. After 1,000 iterations, all other GANs exhibit mode collapse or cannot learn anything. GAN-0-GP is robust to changes in hyper-parameters such as learning rate and optimizer.

[Figure 3: Result on MNIST. The networks have the same architectures as the networks used in the synthetic experiment. Batch normalization (Ioffe & Szegedy, 2015) was not used. The Adam optimizer (Kingma & Ba, 2014) with $\beta_1 = 0.5, \beta_2 = 0.9$ was used. (a) No GP, iter. 1,000. (b) 0-GP-sample, $\lambda = 100$, iter. 1,000. (c) 1-GP, $\lambda = 100$, iter. 1,000. (d), (e) 0-GP, $\lambda = 100$, iter. 1,000 and 10,000.]

[Figure 4: Inception score (Salimans et al., 2016) on ImageNet of GAN-0-GP, GAN-0-GP-sample, and WGAN-GP. The code for this experiment is adapted from Mescheder et al. (2018). We used $\lambda = 10$ for all GANs as recommended by Mescheder et al. The critic in WGAN-GP was updated 5 times per generator update. To improve convergence, we used TTUR with learning rates of 0.0001 and 0.0003 for the generator and discriminator, respectively.]
When Adam is initialized with a large $\beta_1$, e.g. 0.9, GANs with the other GPs cannot learn anything after many iterations. More samples are given in appendix D.

We observe that a higher value of $\lambda$ improves the diversity of the generated samples. For $\lambda = 50$, we observe some similar-looking samples in the generated data. This is consistent with our conjecture that a larger $\lambda$ leads to better generalization.

IMAGENET

When trained on ImageNet (Deng et al., 2009), GAN-0-GP can produce high-quality samples from all 1,000 classes. We compared our method with the GAN with 0-GP-sample and with WGAN-GP. GAN-0-GP-sample is able to produce samples of state-of-the-art quality without using the progressive growing trick (Karras et al., 2018). The result in Fig. 4 shows that our method consistently outperforms GAN-0-GP-sample. GAN-0-GP and GAN-0-GP-sample outperform WGAN-GP by a large margin. Image samples are given in appendix D.

7 CONCLUSION

In this paper, we clarify the reason behind the poor generalization capability of GANs. We show that the original GAN loss does not guide the discriminator and the generator toward a generalizable equilibrium. We propose a zero-centered gradient penalty which pushes empirical discriminators toward the optimal discriminator with good generalization capability. Our gradient penalty provides better generalization and convergence guarantees than other gradient penalties. Experiments on diverse datasets verify that our method significantly improves the generalization and stability of GANs.
Bkgv8HHx3Q
An interesting read on the convergence of GANs with gradient penalties, lacking comparisons to WGAN-GP
6: Marginally above acceptance threshold
Summary: The paper proposes to add to the original GAN (2014) loss a zero-centered gradient penalty like the one defined in the WGAN-GP paper. It also provides an analysis of the mode collapse and lack of stability of classical GANs. The authors compare results using their penalty on a few synthetic examples and on ImageNet dog generations to results using the classical GAN loss with or without gradient penalties.

Positive points: The paper is interesting to read and well illustrated. An experiment on ImageNet illustrates the progress that can be achieved by the proposed penalty.

Points to improve: If I understood correctly, the main contribution resides in the application of the GP proposed by WGAN-GP to the original setting. Why not compare results to WGAN-GP in this case? Since the proposal of GANs, many papers have addressed the mode collapse problem: WGAN-GP, VEEGAN, or Lucas et al., arXiv:1806.07185, ICML 2018, to name only a few. The related work section looks incomplete, with some missing related references as mentioned above, and a copy of a segment that appears in the introduction. The submission could maybe be improved by segmenting the work into intro / related / background (with clear equations presenting the existing GPs) / analysis / approach / experiments. The experiments on synthetic data could be improved: for reproducibility, many works on GANs used the same synthetic data as VEEGAN. The ImageNet experiment lacks details.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
Sk0pHeZAW
ICLR.cc/2018/Conference
2018
Sparse Regularized Deep Neural Networks For Efficient Embedded Learning
["Jia Bi"]
Deep learning is becoming more widespread in its application due to its power in solving complex classification problems. However, deep learning models often require large memory and energy consumption, which may prevent them from being deployed effectively on embedded platforms, limiting their applications. This work addresses the problem by proposing the method {\em Weight Reduction Quantisation} for compressing the memory footprint of the models, including reducing the number of weights and the number of bits to store each weight. Besides, combined with sparsity-inducing regularization, our work focuses on speeding up stochastic variance reduced gradient (SVRG) optimization on non-convex problems. Our method, mini-batch SVRG with $\ell_1$ regularization on non-convex problems, has faster and smoother convergence rates than SGD by using adaptive learning rates. Experimental evaluation of our approach uses the MNIST and CIFAR-10 datasets on the LeNet-300-100 and LeNet-5 models, showing our approach can reduce the memory requirements both in the convolutional and fully connected layers by up to 60$\times$ without affecting their test accuracy.
["Sparse representation", "Compression Deep Learning Models", "L1 regularisation", "Optimisation."]
Under review as a conference paper at ICLR 2018

Sparse Regularized Deep Neural Networks For Efficient Embedded Learning

Anonymous authors
Paper under double-blind review

Abstract

Deep learning is becoming more widespread in its application due to its power in solving complex classification problems. However, deep learning models often require large memory and energy consumption, which may prevent them from being deployed effectively on embedded platforms, limiting their applications. This work addresses the problem by proposing the method Weight Reduction Quantisation for compressing the memory footprint of the models, including reducing the number of weights and the number of bits to store each weight. Besides, combined with sparsity-inducing regularization, our work focuses on speeding up stochastic variance reduced gradient (SVRG) optimization on non-convex problems. Our method, mini-batch SVRG with $\ell_1$ regularization on non-convex problems, has faster and smoother convergence rates than SGD by using adaptive learning rates. Experimental evaluation of our approach uses the MNIST and CIFAR-10 datasets on the LeNet-300-100 and LeNet-5 models, showing our approach can reduce the memory requirements both in the convolutional and fully connected layers by up to 60$\times$ without affecting their test accuracy.

1 Introduction

Artificial intelligence is finding wider application across a number of domains where computational resources can vary from large data centres to mobile devices. However, state-of-the-art techniques such as deep learning (LeCun et al., 2015) require significant resources, including large memory requirements and energy consumption. Reducing the size of a deep learning model to a compact model that has a small memory footprint without compromising its performance is a desirable research aim, addressing the challenges of deploying these leading approaches on mobile devices. $\ell_1$ regularization can be used as a penalty during training to prevent the model from over-fitting the training data. Beyond that, $\ell_1$ regularization is a powerful compression technique that penalizes some weights to be exactly zero. As a result, our research focuses on improving methods based on $\ell_1$ regularization to reduce memory requirements. Moreover, as deep neural network optimization is a non-convex problem, the optimization can get stuck in local minima, which can reduce performance. To address this problem, we improve SGD optimization for non-convex functions, enhancing the sparse representations obtained with $\ell_1$ regularization. In this paper, we propose our compression method Weight Reduction Quantisation, which reduces both the number of weights and the bit-depth of a model without sacrificing accuracy. To reduce the number of weights, our method employs sparsity-inducing $\ell_1$ regularization to encourage many connections in both convolutional and fully connected layers to be zero during the training process. Formally, in this paper we consider the following unconstrained minimization problem. Given training labels $y_1, y_2, \ldots, y_N$ as correct outputs for input data $x_1, x_2, \ldots, x_N$, the optimization problem to estimate the weights in all layers, $W$, is defined by
$$\min_W \frac{1}{N}\sum_{i=1}^{N} L(y_i, f(x_i; W)) + \lambda r(W), \qquad (1)$$
where $\lambda$ is a hyper-parameter controlling the degree of regularization and $W$ denotes the weights in all layers. Problem 1 can be strongly convex or possibly non-convex (Allen-Zhu & Yuan, 2016).
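As a small illustration of Problem 1 with $r(W) = \|W\|_1$ (the sparsity-inducing choice used later in the paper), the following NumPy sketch is our own; the function and argument names are hypothetical shorthand, not code from the paper.

```python
import numpy as np

def regularized_objective(per_example_losses, W, lam):
    """Problem (1) with r(W) = ||W||_1: mean loss plus an l1 penalty.
    `per_example_losses` stands in for L(y_i, f(x_i; W)) over the dataset;
    `W` is a flat weight vector and `lam` is the regularization strength."""
    return per_example_losses.mean() + lam * np.abs(W).sum()
```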
The mini-batch SGD method with ℓ1 regularization is a popular approach to this optimization; its weight update rule is

$$w_j^{k+1} = w_j^{k} - \eta_k \frac{\partial}{\partial w_j}\left[\frac{1}{B}\sum_{i=1}^{B} L(y_i, f(x_i; W)) + \frac{\lambda}{M}\sum_{j=1}^{M} |w_j|\right], \qquad (2)$$

where $w_j$ denotes an individual weight of the network, $M$ is the total number of weights, $k$ is the iteration counter, $\eta_k$ is the learning rate, and $B$ is the mini-batch size ($1 < B < N$) used to approximate the full gradient. However, SGD optimization with ℓ1 regularization faces two challenges: first, it is inefficient at driving weights to zero because of the fluctuations generated by SGD (Tsuruoka et al., 2009); second, SGD converges slowly because of the high variance of its gradient estimates. Cumulative ℓ1 regularization and SVRG address these two challenges respectively.

Cumulative ℓ1 regularization. Tsuruoka et al. (2009) proposed accumulating the ℓ1 penalties to resolve the first problem. The method clips the regularized update at zero, which avoids the derivative $\frac{\partial}{\partial w_j}\sum_{j=1}^{M}\frac{\lambda}{M}|w_j|$ being non-differentiable at $w_j = 0$ and provides more stable convergence of the weights. Moreover, the cumulative penalty drives weights to zero more quickly.

Mini-batch SVRG. Since SGD converges slowly asymptotically due to gradient noise, Johnson & Zhang (2013) proposed SVRG, which efficiently reduces the variance of the gradients:

$$w_j^{k+1} = w_j^{k} - \eta_k\left[\frac{1}{B}\sum_{i=1}^{B}\left(\nabla_{y_i}(w_j^{k}) - \nabla_{y_i}(\tilde{w}_j)\right) + \tilde{\mu}_j\right], \qquad (3)$$

where $\tilde{\mu}_j$ is the average gradient at the sub-optimal weights $\tilde{w}_j$, the weights kept after every $m$ SGD iterations:

$$\tilde{\mu}_j = \frac{1}{N}\sum_{i=1}^{N}\frac{\partial L(y_i, f(x_i; \tilde{W}))}{\partial w_j} = \frac{1}{N}\sum_{i=1}^{N}\nabla_{y_i}(\tilde{w}_j), \qquad (4)$$

with $\tilde{W}$ the sub-optimal weights of all layers after $m$ SGD iterations. For succinctness we also write $\nabla_{y_i}(w_j^{k}) = \frac{\partial L(y_i, f(x_i; W))}{\partial w_j}$. Johnson & Zhang (2013) showed that variance reduction keeps the iterates close to the global minimum from the initial weights $w_0$ onward, boosting the convergence rate of SGD on strongly convex problems.
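As an illustration of Equations (3)-(4), here is a hedged NumPy sketch of SVRG on a toy least-squares problem; the quadratic objective and the names `grad` and `svrg_epoch` are illustrative stand-ins rather than the paper's implementation.

```python
# A sketch of SVRG (Equations 3-4) on a toy least-squares problem.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 20))
b = rng.normal(size=200)
N = A.shape[0]

def grad(w, idx):
    """Mean gradient of 0.5*(a_i.w - b_i)^2 over the samples in idx."""
    r = A[idx] @ w - b[idx]
    return A[idx].T @ r / len(idx)

def svrg_epoch(w, eta=0.05, B=10, m=20):
    w_tilde = w.copy()
    mu = grad(w_tilde, np.arange(N))   # full gradient at the snapshot (Eq. 4)
    for _ in range(m):
        idx = rng.choice(N, size=B, replace=False)
        # variance-reduced direction (Eq. 3)
        w = w - eta * (grad(w, idx) - grad(w_tilde, idx) + mu)
    return w

w = np.zeros(20)
for _ in range(30):
    w = svrg_epoch(w)
print(np.linalg.norm(grad(w, np.arange(N))))  # gradient norm shrinks
```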
Johnson & Zhang (2013) further showed theoretically that the performance of SGD degrades with mini-batching: for batch size $B$, SGD has a $1/\sqrt{B}$ dependence on the batch size, whereas SVRG in a parallel setting has a $1/B$ dependence, which is much better. Hence SVRG allows more efficient mini-batching. However, for non-strongly convex problems, global minimization of a non-convex function is NP-hard (Allen-Zhu & Hazan, 2016). Johnson & Zhang (2013) conjectured that SVRG can also be applied to neural networks to accelerate the local convergence rate of SGD. Further, Allen-Zhu & Hazan (2016) proved non-asymptotic convergence rates of SVRG for non-convex optimization and proposed an improved SVRG that is provably faster than SGD. A promising approach is therefore to use mini-batch SVRG instead of SGD together with cumulative ℓ1 regularization.

Main contributions. We summarize our main contributions below.

1. Reducing memory requirements:
1.1 We analyse a method that combines SVRG with cumulative ℓ1 regularization to reduce the number of weights, and propose Delicate-SVRG-cumulative-ℓ1, which can reduce the number of weights by up to 25× without affecting test accuracy. To our knowledge, ours is the first work to combine mini-batch SVRG with cumulative ℓ1 regularization for non-convex optimization.
1.2 To further reduce the memory requirements of models, we also reduce the number of bits used to store each weight. The full compression method, Weight Reduction Quantisation, which combines weight reduction and bit-depth reduction, cuts memory footprints by up to 60× without affecting accuracy.
2. Accelerating convergence rates:
2.1 We analyse non-convex stochastic variance reduced gradient (SVRG). Based on the results of Reddi et al. (2016), we give a condition under which SVRG converges faster than SGD.
2.2 We show empirically that the modified SVRG in our method converges faster than ordinary SVRG and SGD.

2 Related Works

Different methods have been proposed to remove redundancy in deep learning models. Sparse representation is a good approach to reducing the number of parameters. Han et al. mainly explored pruning, a direct approach that removes connections with small values and keeps the important connections with large weights in all layers of the network; a disadvantage is that the network needs to be retrained after pruning. Ideas from matrix factorization can also compress the parameters of a model by finding a low-rank approximation of the weight matrix (Denton et al., 2014); in practice, however, whilst this improves computational performance, it does not significantly reduce memory requirements.

Weight sharing approximates several weights by a single weight. Chen et al. proposed HashedNets, which bins network connections into hash buckets uniformly at random using a hash function. As part of a three-stage compression pipeline, Han et al. use k-means clustering to identify the shared weights of each layer of a trained network.

Weight quantization, which reduces the bit-width used to store each weight, is another approach to reducing the memory requirements of models. Gysel et al. condense CaffeNet and SqueezeNet to 8 bits with only a slight accuracy loss. Han et al. quantize the sparse weight matrix into an index encoded in 8 bits for convolutional layers and 5 bits for fully connected layers. Rastegari et al. used binary operations to find the best approximations of the convolutions, reducing the bit-size to 1 bit.

Another family of approaches uses regularization to induce sparsity. Hinton et al. proposed "dropout", which drops neurons from the visible and hidden layers of a neural network during training and can be viewed as a kind of regularization. Collins & Kohli applied ℓ1 regularization and shrinkage operators during training, but reduced the weights by only 4× with inferior accuracy. Tsuruoka et al. improved on this with cumulative ℓ1 regularization and superior compression, but their method uses SGD and thus converges slowly asymptotically due to the inherent gradient variance (Johnson & Zhang, 2013).
3 Mini-batch Non-convex SVRG

For Problem (1), a stochastic iterative learning algorithm estimates a stationary point $x$ and achieves $\epsilon$-accuracy in finitely many iterations when $\|\nabla f(x)\|^2 \le \epsilon$; such an $x$ is termed an $\epsilon$-accurate solution. For a non-convex problem, the goal is to find a reasonable local minimum; the challenge is that gradient descent easily gets stuck at saddle points or poor local minima. Algorithms therefore aim to help the iterates escape saddle points and local minima; e.g., Ge et al. (2015) demonstrated that adding noise can help an algorithm escape saddle points. To the best of our knowledge, there is no theoretical proof guaranteeing that SVRG converges faster than SGD. Reddi et al. (2016) compared the Incremental First-order Oracle (IFO) complexity (Agarwal & Bottou, 2015) of SGD and SVRG on non-convex problems: $O(1/\epsilon^2)$ and $O(n + n^{2/3}/\epsilon)$, respectively. In our analysis, whether non-convex SVRG can efficiently approach a reasonable local minimum depends on the number of training samples. Suppose $f_i$ is non-convex for $i \in [n]$ and $f$ has $\epsilon$-bounded gradients; then the IFO complexity of mini-batch SGD with an adaptive learning rate is $O(1/\epsilon^2)$, while that of mini-batch SVRG with a fixed learning rate is $O(n + n^{2/3}/\epsilon)$. For a fixed $\epsilon$, the relative speed of SVRG therefore depends on the number of training samples: when $n$ is small, SVRG is faster than SGD for non-convex optimization, and vice versa. Our experimental results in Figure 1 and Figure 5 support this view.

3.1 Mini-batch Non-convex SVRG on Sparse Representation

In our case, SVRG is applied on a sparse representation. Directly combining mini-batch non-convex SVRG with cumulative ℓ1 regularization (called SVRG-cumulative-ℓ1) works as follows. Let $u_k$ be the accumulated ℓ1 penalty,

$$u_k = \frac{\lambda}{M}\sum_{t=1}^{k}\eta_t. \qquad (5)$$

For each training sample, the weights used in the current sample are updated as

$$w_j^{k+\frac{1}{2}} = w_j^{k} - \eta_k\left[\frac{1}{B}\sum_{i=1}^{B}\left(\nabla_{y_i}(w_j^{k}) - \nabla_{y_i}(\tilde{w}_j)\right) + \tilde{\mu}_j\right], \qquad (6)$$

$$\text{if } w_j^{k+\frac{1}{2}} > 0:\; w_j^{k+1} = \max\!\left(0,\, w_j^{k+\frac{1}{2}} - (u_k + q_j^{k-1})\right); \quad \text{else if } w_j^{k+\frac{1}{2}} < 0:\; w_j^{k+1} = \min\!\left(0,\, w_j^{k+\frac{1}{2}} + (u_k - q_j^{k-1})\right), \qquad (7)$$

where $q_j^{k}$ is the total difference between the weights after the gradient update and after the ℓ1-regularization update,

$$q_j^{k} = \sum_{t=1}^{k}\left(w_j^{t+1} - w_j^{t+\frac{1}{2}}\right), \qquad (8)$$

with $t$ indexing the accumulation of $q$. This algorithm has two problems. (1) As mentioned above, SVRG on sparse representations is not guaranteed to be faster than SGD: Figure 1 shows that on a small dataset (e.g. MNIST) SVRG converges faster than SGD, but on a larger dataset (e.g. CIFAR-10) it converges more slowly. (2) SVRG-cumulative-ℓ1 trades off variance reduction against the sparsity induced by the cumulative ℓ1 regularization.

[Figure 1: With cumulative ℓ1 regularization, we compare the convergence rates of SGD and SVRG (train and test loss per epoch at compression rates of 10%, 50%, and 90%): (a) MNIST on the LeNet-300-100 model; (b) CIFAR-10 on the LeNet-5 model. SVRG-cumulative-ℓ1 converges faster in Figure 1(a); however, in Figure 1(b), SGD-cumulative-ℓ1 converges to significantly lower loss than SVRG-cumulative-ℓ1 at compression rates of 50% and 90%.]
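The update in Equations (5)-(8) can be sketched as follows; the variance-reduced gradient is replaced by a random stand-in and `cumulative_l1_step` is a hypothetical name, so this is a minimal sketch of the clipping mechanics rather than the paper's code.

```python
# A sketch of the SVRG-cumulative-l1 update: an SVRG half-step (Eq. 6)
# followed by the cumulative l1 clipping of Tsuruoka et al. (2009).
import numpy as np

def cumulative_l1_step(w, q, u, vr_grad, eta):
    """w: weights; q: cumulative applied penalty per weight (Eq. 8);
    u: accumulated l1 penalty (Eq. 5); vr_grad: variance-reduced gradient
    (the bracketed term of Eq. 6); eta: learning rate."""
    w_half = w - eta * vr_grad                           # Eq. (6)
    w_new = np.where(
        w_half > 0, np.maximum(0.0, w_half - (u + q)),   # Eq. (7), w > 0
        np.where(w_half < 0,
                 np.minimum(0.0, w_half + (u - q)),      # Eq. (7), w < 0
                 0.0))
    q = q + (w_new - w_half)                             # Eq. (8)
    return w_new, q

# toy usage with a random stand-in for the variance-reduced gradient
rng = np.random.default_rng(0)
w, q, u, lam_over_M = rng.normal(size=5), np.zeros(5), 0.0, 1e-2
for k in range(100):
    eta = 0.1
    u += lam_over_M * eta                                # Eq. (5) accumulation
    w, q = cumulative_l1_step(w, q, u, rng.normal(size=5), eta)
print(w)  # coordinates are clipped exactly to zero when the penalty dominates
```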
Regarding the second problem: after the variance of the gradient is reduced by SVRG in Equation (6), the absolute value of the updated weight $w_j^{k+\frac{1}{2}}$ is larger than under SGD, which makes SVRG work against the sparsity induced by ℓ1 regularization. In comparison with ordinary SVRG, Reddi et al. (2016) proposed MSVRG, an extension of SVRG that adapts the learning rate and guarantees performance equal to or better than SGD. Similarly to MSVRG, our method therefore provides separate adaptive learning rates for SVRG-cumulative-ℓ1, which we empirically demonstrate converges faster than SGD.

3.2 Delicate-SVRG-cumulative-ℓ1

To reduce the number of weights, we introduce our compression method Delicate-SVRG-cumulative-ℓ1, which has two main improvements.

(1) Separate adaptive learning rates. Learning rates play an important role in the convergence of the optimization during training and must be chosen carefully: fast enough to converge quickly, but not so aggressive that the algorithm becomes unstable. Reddi et al. argue that adaptive learning rates combined with variance reduction give faster convergence on non-convex optimization; the convergence rate can thus be improved by updating the learning rate adaptively. Our algorithm uses three learning-rate schedules to give finer control over the convergence of the gradients and the ℓ1 regularization. First, the learning rate $\gamma_k$ follows the schedule of Collins et al.:

$$\gamma_k = \frac{\eta_0}{1 + p\,(k/N)}, \qquad (9)$$

where $\eta_0$ is a large initial learning rate. We tuned the parameters of the three schedules over a range of values; $p = 0.6$ was found to be efficient. This schedule emphasises, at the beginning of training, the large distance between the gradient at the current iteration and the sub-optimal solution computed after every $m$ iterations, which prevents the current gradient from getting stuck in a local minimum at the start: convergence is fast initially and slows down as the iterate approaches a local minimum.

The second learning rate, $\beta_k$, reduces the variance of SVRG-cumulative-ℓ1 and better balances the trade-off between SVRG and the cumulative ℓ1 regularization. It is chosen such that $\beta_k > \gamma_k$, with slower decay:

$$\beta_k = \frac{\eta_0}{1 + \alpha\,(k/N)^{q}}, \qquad (10)$$

here with $\alpha = 0.75$; our experiments were best with $q = 3$, which keeps a relatively large weight on the average gradient. During the weight update this efficiently prevents the absolute value of a weight from being inflated by SVRG, reducing SVRG's adverse effect on the ℓ1 regularization and on sparsity.

We retain the learning rate $\eta_k$ of Tsuruoka et al. (2009) for the cumulative ℓ1 regularization:

$$\eta_k = \eta_0\, \alpha^{k/N}. \qquad (11)$$

The exponential decay ensures that the learning rates do not drop too fast at the beginning or too slowly at the end.
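A small sketch of the three schedules in Equations (9)-(11), using the quoted values p = 0.6 and q = 3, and reading the quoted 0.75 as α; η0 and N below are toy values, not the paper's settings.

```python
# The three learning-rate schedules of Equations (9)-(11).
import numpy as np

def gamma(k, N, eta0=0.1, p=0.6):
    """Eq. (9): rate for the current-gradient term."""
    return eta0 / (1.0 + p * (k / N))

def beta(k, N, eta0=0.1, alpha=0.75, q=3):
    """Eq. (10): slower-decaying rate for the average-gradient term."""
    return eta0 / (1.0 + alpha * (k / N) ** q)

def eta(k, N, eta0=0.1, alpha=0.75):
    """Eq. (11): exponentially decaying rate for the cumulative l1 penalty."""
    return eta0 * alpha ** (k / N)

N = 1000
for k in (0, N, 5 * N, 20 * N):
    print(k, gamma(k, N), beta(k, N), eta(k, N))
```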
(2) Bias-based pruning. To further reduce the number of weights, we add a bias-based pruning threshold $\tilde{b}$ after the ℓ1 regularization in each iteration. The pruning rule is based on the following heuristic (Fonseca & Fleming, 1995): connections (weights) in a layer are removed if their value is smaller than the network's minimal bias. If the absolute value of a weight is smaller than the absolute value of the smallest bias of the entire network in each batch, that connection contributes least to its node and can be removed. In practice, bias-based pruning has no effect on train or test loss.

Consequently, Delicate-SVRG-cumulative-ℓ1 incorporates the adaptive learning-rate schedules and bias-based pruning as

$$w_j^{k+\frac{1}{2}} = w_j^{k} - \left[\frac{\gamma_k}{N}\sum_{i=1}^{N}\left(\nabla_{y_i}(w_j^{k}) - \nabla_{y_i}(\tilde{w}_j)\right) + \beta_k\,\tilde{\mu}_j\right];$$
$$\text{if } w_j^{k+\frac{1}{2}} > 0:\; w_j^{k+1} = \max\!\left(0,\, w_j^{k+\frac{1}{2}} - (u_k + q_j^{k-1} + \tilde{b})\right); \quad \text{else if } w_j^{k+\frac{1}{2}} < 0:\; w_j^{k+1} = \min\!\left(0,\, w_j^{k+\frac{1}{2}} + (u_k - q_j^{k-1} - \tilde{b})\right). \qquad (12)$$

The pseudo-code of our method is given as Algorithm 1 in the Appendix.

4 Weight Quantization for Bit-depth Reduction

To compress the model further, weight quantization significantly reduces memory requirements by lowering the bit precision. After reducing the number of weights with Delicate-SVRG-cumulative-ℓ1, we quantize to 3 bits for convolutional layers and encode 5 bits for fully connected layers. Together these stages form our final compression method, Weight Reduction Quantisation.
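Putting the pieces together, the sketch below performs one Delicate-SVRG-cumulative-ℓ1 step (Equation 12) followed by a simple uniform bit-depth quantization, one possible reading of Section 4 (the paper does not spell out the quantizer). All names and the toy data are hypothetical, and the gradient terms are random stand-ins.

```python
# A sketch of one Delicate-SVRG-cumulative-l1 step (Eq. 12) plus a
# hypothetical uniform quantizer for the bit-depth reduction of Section 4.
import numpy as np

def delicate_step(w, q, u, g_cur, g_snap, mu, gamma_k, beta_k, b_tilde):
    """Eq. (12): adaptive-rate SVRG half-step, then cumulative-l1 clipping
    widened by the bias-based pruning threshold b_tilde."""
    w_half = w - (gamma_k * (g_cur - g_snap) + beta_k * mu)
    w_new = np.where(
        w_half > 0, np.maximum(0.0, w_half - (u + q + b_tilde)),
        np.where(w_half < 0,
                 np.minimum(0.0, w_half + (u - q - b_tilde)), 0.0))
    return w_new, q + (w_new - w_half)

def quantize(w, bits):
    """Uniformly quantize the nonzero weights to 2**bits levels."""
    nz = w != 0
    if not nz.any():
        return w
    lo, hi = w[nz].min(), w[nz].max()
    scale = (hi - lo) / (2 ** bits - 1) if hi > lo else 1.0
    wq = w.copy()
    wq[nz] = np.round((w[nz] - lo) / scale) * scale + lo
    return wq

rng = np.random.default_rng(0)
w, q, u = rng.normal(size=8), np.zeros(8), 0.05
w, q = delicate_step(w, q, u, rng.normal(size=8), rng.normal(size=8),
                     rng.normal(size=8), gamma_k=0.05, beta_k=0.08,
                     b_tilde=0.01)
print(quantize(w, bits=3))   # conv layers: 3 bits; fc layers: bits=5
```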
Table 1: Per-layer comparison of the compression results of the pruning method of Han et al. (2016) and our method, trained and tested on MNIST with the LeNet-300-100 model (a) and the LeNet-5 model (b). D is Delicate-SVRG-cumulative-ℓ1 and Q is weight quantization.

(a) MNIST with LeNet-300-100:
Layer | Original network | #Weights (D) | Memory (D+Q) | Compress rate (D) | Compress rate (D+Q) | Deep compression (Han et al., 2016)
ip1 | 235K (940KB) | 8.0K | 14.36KB | 3% | 1.63% | 2.32%
ip2 | 30K (120KB) | 2.5K | 3.392KB | 8.3% | 2.82% | 3.04%
ip3 | 1K (4KB) | 0.3K | 0.308KB | 30% | 7.7% | 12.70%
Total | 266K (1070KB) | 10.8K | 18.06KB | 4% (25×) | 1.68% (60×) | 2.49% (40×)
Top-1 Error | 1.64% | - | - | 1.58% | 1.57% | 1.58%

(b) MNIST with LeNet-5:
Layer | Original network | #Weights (D) | Memory (D+Q) | Compress rate (D) | Compress rate (D+Q) | Deep compression (Han et al., 2016)
conv1 | 0.5K (2KB) | 0.33K | 1.16KB | 78% | 58% | 67.85%
conv2 | 25K (100KB) | 3K | 2.42KB | 12% | 2.42% | 5.28%
ip1 | 400K (1600KB) | 32K | 24KB | 3.7% | 1.5% | 2.45%
ip2 | 5K (40KB) | 0.95K | 2.112KB | 17% | 5.28% | 6.13%
Total | 431K (1720KB) | 35K | 30KB | 4.5% (22×) | 1.8% (57×) | 2.55% (39×)
Top-1 Error | 0.80% | - | - | 0.74% | 0.737% | 0.74%

5 Experiments

To estimate and compare the effect of our compression method on different topologies, e.g. fully connected networks and convolutional networks, we select a deep neural network (DNN) and a convolutional neural network (CNN). The DNN is LeNet-300-100, which has two fully connected hidden layers with 300 and 100 neurons respectively. The CNN is LeNet-5, which has two convolutional layers and two fully connected layers. We evaluate the performance of our new compression method on the MNIST and CIFAR-10 benchmarks. MNIST (LeCun et al., 2001) is a commonly used dataset of handwritten digits with 60,000 training examples and 10,000 test samples; each image is grey-scale with 28×28 pixels. CIFAR-10 has 10 classes with 5,000 training images and 1,000 test images per class, i.e. 50,000 training images and 10,000 test images in total, each RGB with 32×32 pixels. Two error rates are used to measure model performance: top-1 and top-5. We report top-1 error on MNIST and top-5 error on CIFAR-10, because many CIFAR images are small and ambiguous. Our compression method was implemented in Caffe (a deep learning framework; source code available at http://caffe.berkeleyvision.org).

5.1 Comparison with leading results

Applying Weight Reduction Quantisation to the MNIST dataset, we choose the results with the best combination of compression and error rate for comparison. Our method reduces the memory requirements by 98% with a 1.57% test error on the LeNet-300-100 model, and removes 98% of the parameters with a 0.74% test error on the LeNet-5 model. Table 1 summarises the compression pipeline with per-layer weight statistics alongside the method of Han et al. (2016). For our first stage, Delicate-SVRG-cumulative-ℓ1, which focuses on reducing the number of weights, we compare against the pruning stage of Han et al. (2016). The two tables show that both Delicate-SVRG-cumulative-ℓ1 and Han et al.'s pruning remove many weights in the fully connected layers. For LeNet-300-100, the first fully connected layer (ip1) contains about 88% of the total number of weights, and Delicate-SVRG-cumulative-ℓ1 compresses it by 97%. Furthermore, the two methods achieve very similar compression rates in the convolutional layers (conv1 and conv2) of LeNet-5, but Delicate-SVRG-cumulative-ℓ1 is more effective at sparsifying the two fully connected layers (ip1 and ip2). Both methods achieve lower test error than the uncompressed models, whilst delivering overall compression rates of up to 25× and 12× respectively.

[Figure 2: Test error versus number of weights for four ℓ1-regularization compression methods (SGD-C-L1, SVRG-C-L1, D-SVRG-C-L1, D-SVRG-C-L1 without bias pruning) on the LeNet-300-100 and LeNet-5 models: (a) MNIST (top-1 error), with the Deep compression operating points (92%, 0.0158) and (92%, 0.0074) marked; (b) CIFAR-10 (top-5 error).]

The second stage further compresses the model by bit-depth reduction. Table 1 shows that our method Weight Reduction Quantisation, which combines Delicate-SVRG-cumulative-ℓ1 with bit-depth reduction, achieves a 1.56% error rate on MNIST and a 0.737% error rate on CIFAR-10, both lower than those of the original models. The compression rates reach 60× on LeNet-300-100 and 57× on LeNet-5.
5.2 Evaluating the Trade-off Between Memory Requirements and Performance

Focusing on Delicate-SVRG-cumulative-ℓ1, we examine its performance at different compression rates controlled by the threshold $\lambda$, comparing different ℓ1-regularization-based model compressions over a range of memory requirements. Figure 2 shows how the test error rate and weight sparsity vary as the regularization parameter $\lambda$ is adjusted. Where Pareto fronts are not available for comparison, we compare against a single trade-off point and judge relative performance by which side of the Pareto front the point lies on.

LeNet on MNIST. Figure 2(a) shows LeNet on MNIST. Compared with SVRG-cumulative-ℓ1, SGD-cumulative-ℓ1 compresses better, but its error rate is higher due to the variance generated by the SGD optimiser. Replacing SGD with SVRG, SVRG-cumulative-ℓ1 reduces the test error but also reduces the compression. The Delicate-SVRG-cumulative-ℓ1 method has the fewest weights and the best performance, with the lowest test error at almost every compression level. Its performance is similar to the variant without bias-based pruning, which shows that adding bias-based pruning further reduces the number of weights without side effects on performance. The pink box in Figure 2(a) marks results that are better than the pink (Deep compression) point.

LeNet on CIFAR-10. Figure 2(b) shows LeNet on CIFAR-10, a larger and more complicated dataset than MNIST. SVRG-cumulative-ℓ1 sometimes achieves lower test error than SGD-cumulative-ℓ1, but is not consistently better. Delicate-SVRG-cumulative-ℓ1 outperforms the other methods, and its performance is further enhanced by bias-based pruning. Consequently, Delicate-SVRG-cumulative-ℓ1 can be applied effectively to the LeNet-300-100 and LeNet-5 models without accuracy loss on MNIST and CIFAR-10.

5.3 Combining Delicate-SVRG-cumulative-ℓ1 and Weight Quantization

Figure 3 shows the test error at different compression rates for Delicate-SVRG-cumulative-ℓ1 and weight quantization. Individually, weight quantization reduces more memory before the test error rises significantly than Delicate-SVRG-cumulative-ℓ1 on the MNIST dataset, with the reverse result on CIFAR-10. Combined, however, the approach consistently outperforms either alone.

[Figure 3: Test error versus compression rate under the different compression methods, where D is Delicate-SVRG-cumulative-ℓ1 and Q is weight quantization: (a) MNIST on LeNet-300-100 (left) and LeNet-5 (right); (b) CIFAR-10 on LeNet-300-100 (left) and LeNet-5 (right). Combining D with Q achieves the best performance.]
5.4 Comparison of Convergence Rates

To confirm the theoretical insight that our method does not hurt the convergence rate, i.e. that it converges as fast as SGD-cumulative-ℓ1 or SVRG-cumulative-ℓ1, we track the training loss of the two LeNet models on the MNIST and CIFAR datasets over the training iterations. In Figure 4(a), all methods have similar convergence rates on LeNet-300-100. In all of our experiments, Delicate-SVRG-cumulative-ℓ1 has the same or lower training loss and faster convergence than the other methods, meaning that the adaptive learning rates help SVRG with cumulative ℓ1 regularization escape local minima at the beginning and quickly converge to a good local minimum within finitely many training iterations. Moreover, Delicate-SVRG-cumulative-ℓ1 without bias-based pruning has similar training loss, which shows that adding bias-based pruning to the ℓ1 regularization has no obvious adverse effect on the convergence of the weights. Consequently, with adaptive learning rates, Delicate-SVRG-cumulative-ℓ1 is an efficient compression method for neural networks.

6 Discussion

In this paper, we proposed Weight Reduction Quantisation, which efficiently compresses neural networks without sacrificing accuracy. Our method has two stages: reducing the number of weights and reducing the number of bits used to store each weight. We showed that SVRG and cumulative ℓ1 regularization can improve over SGD and plain ℓ1 regularization. By combining them, we presented a new compression method, Delicate-SVRG-cumulative-ℓ1, that efficiently reduces the number of parameters through separate adaptive learning rates. The three adaptive learning rates are applied to SVRG and the cumulative ℓ1 penalty, providing high accuracy with a reduced number of weights. Our method also improves SVRG so that it can be used on non-convex problems with a fast convergence rate. In our experiments on LeNet-300-100 and LeNet-5, our method reduces the memory requirements by up to 60× without accuracy loss. After compression by our method, a compact deep neural network can be efficiently deployed on an embedded device while matching the performance of the original model.

References

Alekh Agarwal and Leon Bottou. A lower bound for the optimization of finite sums. In Proceedings of the 32nd International Conference on Machine Learning (ICML), pp. 78-86, 2015. URL http://jmlr.org/proceedings/papers/v37/agarwal15.pdf

Zeyuan Allen-Zhu and Elad Hazan. Variance reduction for faster non-convex optimization. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pp. 699-707, 2016. URL http://jmlr.org/proceedings/papers/v48/allen-zhua16.html

Zeyuan Allen-Zhu and Yang Yuan. Improved SVRG for non-strongly-convex or sum-of-non-convex objectives. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pp. 1080-1089, 2016. URL http://dl.acm.org/citation.cfm?id=3045390.3045505

Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. CoRR, abs/1504.04788, 2015. URL http://arxiv.org/abs/1504.04788

Maxwell D. Collins and Pushmeet Kohli. Memory bounded deep convolutional networks. CoRR, abs/1412.1442, 2014. URL http://arxiv.org/abs/1412.1442
Michael Collins, Amir Globerson, Terry Koo, Xavier Carreras, and Peter L. Bartlett. Exponentiated gradient algorithms for conditional random fields and max-margin Markov networks. Journal of Machine Learning Research, 9:1775-1822, 2008. URL http://jmlr.csail.mit.edu/papers/v9/collins08a.html

Emily Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. CoRR, abs/1404.0736, 2014. URL http://arxiv.org/abs/1404.0736

Carlos M. Fonseca and Peter J. Fleming. An overview of evolutionary algorithms in multiobjective optimization. Evolutionary Computation, 3(1):1-16, 1995. doi: 10.1162/evco.1995.3.1.1. URL http://dx.doi.org/10.1162/evco.1995.3.1.1

Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points: online stochastic gradient for tensor decomposition. CoRR, abs/1503.02101, 2015. URL http://arxiv.org/abs/1503.02101

Philipp Gysel, Mohammad Motamedi, and Soheil Ghiasi. Hardware-oriented approximation of convolutional neural networks. CoRR, abs/1604.03168, 2016. URL http://arxiv.org/abs/1604.03168

Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In International Conference on Learning Representations (ICLR), 2016.

Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580, 2012. URL http://arxiv.org/abs/1207.0580

Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems 26, pp. 315-323. Curran Associates, Inc., 2013.

Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. In Intelligent Signal Processing, pp. 306-351. IEEE Press, 2001.

Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, 2015. URL http://dx.doi.org/10.1038/nature14539

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. CoRR, abs/1603.05279, 2016. URL http://arxiv.org/abs/1603.05279

Sashank J. Reddi, Ahmed Hefny, Suvrit Sra, Barnabás Póczós, and Alex Smola. Stochastic variance reduction for nonconvex optimization. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pp. 314-323, 2016. URL http://dl.acm.org/citation.cfm?id=3045390.3045425

Yoshimasa Tsuruoka, Jun'ichi Tsujii, and Sophia Ananiadou. Stochastic gradient descent training for L1-regularized log-linear models with cumulative penalty. In Proceedings of ACL-IJCNLP 2009, pp. 477-485. Association for Computational Linguistics, 2009. URL http://dl.acm.org/citation.cfm?id=1687878.1687946
7 Appendix

7.1 The Delicate-SVRG-cumulative-ℓ1 algorithm

Algorithm 1: Delicate-SVRG-cumulative-ℓ1: stochastic descent training with a cumulative ℓ1 penalty.

procedure Train(λ)
  u ← 0; μ̃ ← 0
  initialize w_j and q_j to zero for all M weights
  for k = 0 to maximal iterations do
    γ ← η0 / (1 + p(k/N))
    β ← η0 / (1 + α(k/N)^3)
    for t = 0 to k do
      η ← η0 α^(t/N)
    end for
    u ← u + ηλ/M
  end for
  for j ∈ features used in sample i do
    randomly select m features from the training samples
    w_j ← w_j − ( (γ/N) Σ_{i=1}^{N} (∇_{y_i}(w_j) − ∇_{y_i}(w̃_j)) + β μ̃ )
    ∇_{y_i}(w̃_j) ← ∇_{y_i}(w_j)
    if w_j and w̃_j converge to the same weights then μ̃ ← 0 end if
    μ̃ ← μ̃ + (1/N) ∇_{y_i}(w̃)
  end for
end procedure

procedure ApplyPenalty(j)        ▷ b̃ is the minimal bias over all layers
  z ← w_j
  if w_j > 0 then
    w_j ← max(0, w_j − (u + q_j + b̃))
  else if w_j < 0 then
    w_j ← min(0, w_j + (u − q_j − b̃))
  end if
  q_j ← q_j + (w_j − z)
end procedure

7.2 Comparison of the convergence rates of our method, SVRG, and SGD when combined with ℓ1 regularization

The results are shown in Figure 4.

[Figure 4: Convergence of the four compression methods (Delicate-SVRG-cumulative-ℓ1, the same without bias-based pruning, SVRG-cumulative-ℓ1, and SGD-cumulative-ℓ1) on the LeNet-300-100 and LeNet-5 models with the MNIST and CIFAR-10 datasets, at a 90% compression rate: (a) train loss per epoch for MNIST on LeNet-5 (left), CIFAR-10 on LeNet-300-100 (middle), and CIFAR-10 on LeNet-5 (right); (b) the corresponding test loss. For the MNIST dataset we did not notice any appreciable difference in train or test loss on the LeNet-300-100 model between the four methods.]

7.3 Using multiple initializations to compare the performance of our method and the other three methods

The experiments were run with multiple initializations, and there was some small variability in the results. However, the relative performance of our method is always better than SVRG and SGD combined with cumulative ℓ1 regularization. The results are shown in Figure 5.
[Figure 5: Test error versus number of weights for three different weight initializations, comparing our method against the other three methods: (a) MNIST on LeNet-300-100; (b) MNIST on LeNet-5; (c) CIFAR-10 on LeNet-300-100; (d) CIFAR-10 on LeNet-5. D-SVRG-C-L1 and D-SVRG-C-L1 (without bias pruning) are always better than the other two methods. The experiment also supports our view that whether SVRG outperforms SGD depends on the number of training samples: on the small dataset (MNIST) SVRG is better than SGD, whereas on the relatively large dataset (CIFAR-10) SVRG is worse than SGD.]
Hku4bLqgM
The authors use l-1 regularized SVRG to promote sparsity in the trained model. However, the paper lacks comparisons with some key literature, and experimentally the benefit of SVRG over SGD does not seem substantial.
4: Ok but not good enough - rejection
The authors present an l-1 regularized SVRG based training algorithm that is able to force many weights of the network to be 0, hence leading to good compression of the model. The motivation for l-1 regularization is clear as it promotes sparse models, which lead to lower storage overheads during inference. The use of SVRG is motivated by the fact that it can, in some cases, provide faster convergence than SGD. Unfortunately, the authors do not compare with some key literature. For example there has been several techniques that use sparsity, and group sparsity [1,2,3], that lead to the same conclusion as the paper here: models can be significantly sparsified while not affecting the test accuracy of the trained model. Then, the novelty of the technique presented is also unclear, as essentially the algorithm is simply SVRG with l1 regularization and then some quantization. The experimental evaluation does not strongly support the thesis that the presented algorithm is much better than SGD with l1 regularization. In the presented experiments, the gap between the performance of SGD and SVRG is small (especially in terms of test error), and overall the savings in terms of the number of weights is similar to Deep compression. Hence, it is unclear how the use of SVRG over SGD improves things. Eg in figure 2 the differences in top-1 error of SGD and SVRG, for the same number of weights is very similar (it’s unclear also why Fig 2a uses top-1 and Fig 2b uses top-5 error). I also want to note that all experiments were run on LeNet, and not on state of the art models (eg ResNets). Finally, the paper is riddled with typos. I attach below some of the ones I found in pages 1 and 2 Overall, although the topic is very interesting, the contribution of this paper is limited, and it is unclear how it compares with other similar techniques that use group sparsity regularization, and whether SVRG offers any significant advantages over l1-SGD. typos: “ This work addresses the problem by proposing methods Weight Reduction Quantisation” -> This work addresses the problem by proposing a Weight Reduction Quantisation method “Beside, applying with sparsity-inducing regularization” -> Beside, applying sparsity-inducing regularization “Our method that minibatch SVRG with l-1 regularization on non-convex problem” -> Our minibatch SVRG with l-1 regularization method on non-convex problem “As well as providing,l1 regularization is a powerful compression techniques to penalize some weights to be zero” -> “l1 regularization is a powerful compression technique that forces some weights to be zero” The problem 1 can -> The problem in Eq.(1) can “it inefficiently encourages weight” -> “it inefficiently encourages weights” ———— [1] Learning Structured Sparsity in Deep Neural Networks http://papers.nips.cc/paper/6504-learning-structured-sparsity-in-deep-neural-networks.pdf [2] Fast ConvNets Using Group-wise Brain Damage https://arxiv.org/pdf/1506.02515.pdf [3] Sparse Convolutional Neural Networks https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Liu_Sparse_Convolutional_Neural_2015_CVPR_paper.pdf
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Sparse Regularized Deep Neural Networks For Efficient Embedded Learning ### Paper Abstract Deep learning is becoming more widespread in its application due to its power in solving complex classification problems. However, deep learning models often require large memory and energy consumption, which may prevent them from being deployed effectively on embedded platforms, limiting their applications. This work addresses the problem by proposing methods {\em Weight Reduction Quantisation} for compressing the memory footprint of the models, including reducing the number of weights and the number of bits to store each weight. Beside, applying with sparsity-inducing regularization, our work focuses on speeding up stochastic variance reduced gradients (SVRG) optimization on non-convex problem. Our method that mini-batch SVRG with $\ell$1 regularization on non-convex problem has faster and smoother convergence rates than SGD by using adaptive learning rates. Experimental evaluation of our approach uses MNIST and CIFAR-10 datasets on LeNet-300-100 and LeNet-5 models, showing our approach can reduce the memory requirements both in the convolutional and fully connected layers by up to 60$\times$ without affecting their test accuracy. ### Paper Keywords ["Sparse representation", "Compression Deep Learning Models", "L1 regularisation", "Optimisation."] ### Paper Content Under review as a conference paper at ICLR 2018Sparse Regularized Deep Neural NetworksFor Efficient Embedded LearningAnonymous authorsPaper under double-blind reviewAbstractDeep learning is becoming more widespread in its application due to itspower in solving complex classification problems. However, deep learningmodels often require large memory and energy consumption, which mayprevent them from being deployed effectively on embedded platforms, limit-ing their applications. This work addresses the problem by proposing meth-odsWeight Reduction Quantisation for compressing the memory footprintof the models, including reducing the number of weights and the number ofbits to store each weight. Beside, applying with sparsity-inducing regular-ization, our work focuses on speeding up stochastic variance reduced gradi-ents (SVRG) optimization on non-convex problem. Our method that mini-batch SVRG with /lscript1 regularization on non-convex problem has faster andsmootherconvergenceratesthanSGDbyusingadaptivelearningrates. Ex-perimental evaluation of our approach uses MNIST and CIFAR-10 datasetson LeNet-300-100 and LeNet-5 models, showing our approach can reducethe memory requirements both in the convolutional and fully connectedlayers by up to 60 ×without affecting their test accuracy.1 IntroductionArtificial intelligence is finding wider application across a number of domains where com-putational resources can vary from large data centres to mobile devices. However, state-of-the-art techniques such as deep learning (LeCun et al., 2015) require significant resources,including large memory requirements and energy consumption. Reducing the size of thedeep learning model to a compact model that has small memory footprint without compro-mising its performance is a desirable research aim to address the challenges for deployingthese leading approaches on mobile devices. /lscript1 regularization can be used as a penalty totrain models to prevent the model from over-fitting the training data. 
As well as providing,/lscript1 regularization is a powerful compression techniques to penalize some weights to be zero.As the results, our research focus on improving the method based on /lscript1 regularization to re-duce memory requirements. Moreover, as deep neural network optimization is a non-convexproblem, the optimization can be stuck in local-minimal, which can reduce the performance.Toaddresstheproblem, weimproveSGDoptimizationfornon-convexfunctiontoenhancingsparse representations obtained with /lscript1 regularization. In this paper, we propose our com-pression method Weight Reduction Quantisation which reduces both the number of weightsand bits-depth of model without sacrificing accuracy. To reduces the number of weights, ourmethod employs sparsity-inducing /lscript1 regularization to encourage many connections in bothconvolutional and fully connected layers to be zero during the training process. Formally,in this paper we consider the following unconstrained minimization problem, Given traininglabels y1, y2, ..., yNas correct outputs for input data x1, x2, ..., x N, the optimization problemto estimate the weights in all layers, W, is defined byminW1NN/summationdisplayi=1L(yi, f(x i; W)) + lr(W), (1)where lis a hyper-parameter controlling the degree of regularization and the weights inall layers is given by W. The problem 1 can be strongly convex or possibly non-convex1Under review as a conference paper at ICLR 2018(Allen-Zhu & Yuan, 2016). Following update rule, the mini-batch SGD method with /lscript1regularization is a popular approach for performing the optimization, and the weight updaterule is given bywk+1j= wkj–hk¶¶wj1BB/summationdisplayi=1L(yi,f(xi; W)) +lMM/summationdisplayj=1|wj|, (2)where each weight of network can be represented by wj,the total number of weights is M.kis the iteration counter and hkis the learning rate and B is mini-batch size ( 1<B<N) usedto approximate the full gradient. However, SGD optimization with /lscript1 regularization has twochallenges: firstly, it inefficiently encourages weight to be zero due to fluctuations generatedby SGD (Tsuruoka et al., 2009). Secondly, SGD optimization slowing down convergencerate due to the high variance of gradients. The two methods of cumulative /lscript1 regularizationandSVRGcan solve the two challenges respectively:Cumulative /lscript1 regularization Tsuruoka et al. (2009) proposed a method cumulatingthe/lscript1 penalties to resolve the problem. The method clips regularization at zero, whichavoids the derivative¶¶wj/summationtextMj=1(lM|wj|)being non-differentiable when wj= 0and provides amore stable convergence for the weights. Moreover, the cumulative penalty can reduce theweight to zero more quickly.Mini-batch SVRG As SGD optimization has slow convergence asymptotically due tonoise, Johnson & Zhang (2013) proposed SVRG that can efficiently decrease the noise ofSGD by reducing the variance of gradients by:wk+1j= wkj–hk/parenleftBigg1BB/summationdisplayi=1(∇yi(wkj) –∇yi( ̃wj)) + ̃mj/parenrightBigg, (3)where ̃mjis the average gradient of sub-optimal weights ̃wjwhich is the weight after everymSGD iterations ̃mj=1NN/summationdisplayi=1¶L(yi, f(x i; ̃W))¶wj=1NN/summationdisplayi=1∇yi( ̃wj),(4)where ̃Wis the sub-optimal weights after mSGD iterations in all layers. For succinctnesswe also write∇yi(wkj) =¶L(yi,f(xi;W))¶wj. They determined that reduction of variance helpsinitial weights w0close to global minima at the beginning in order to boost the convergencerate of SGD in strongly convex problems. 
Johnson & Zhang (2013) further prove that theperformance of SGD degrades with mini-batching by the theoretical result of complexity.Specifically, for batch size of B, SGD has a 1/√Bdependence on the batch size. In contrast,SVRG in a parallel setting has 1/Bdependence on the batch size which is much better thanSGD. Hence, SVRG allows more efficient mini-batching. However, for non-strongly convexproblems, globalminimizationofnon-convexfunctionisNP-hard(AllenZhu&Hazan,2016).Johnson & Zhang (2013) have a assumption that SVRG can also be applied in neuralnetworks to accelerate the local convergence rate of SGD. Further, Allen Zhu & Hazan(2016) prove non-asymptotic rates of convergence of SVRG for non-convex optimization andproposed improved SVRG that is provably faster than SGD. Hence, a promising approachis to use mini-batch SVRG instead of SGD with cumulative /lscript1 regularization.Main Contributions We summarize our main contributions below:1.Reducing memory requirements :1.1 We analyse a method that combines SVRG with cumulative /lscript1 regular-ization to reduce the number of weights, and propose our method Delicate-SVRG-cumulative- /lscript1which can significantly reduce the number of weights by up to 25 ×2Under review as a conference paper at ICLR 2018without affecting their test accuracy. To our knowledge, ours is the first work thatto combine mini-batch SVRG with cumulative /lscript1 regularization for non-convex op-timization.1.2 To further reduce the memory requirements of models, we aim to reducesthe number of bits to store each weight. Compression method Weight ReductionQuantisation , including both reducing number of weights and bit-depth, can reducethe memory footprints up to 60 ×without affecting accuracy.2.Accelerating convergence rates :2.1Weanalysenon-convexstochasticvariancereducedgradient(SVRG).Basedon the results from (Reddi et al., 2016), we provide the condition when SVRG hasfaster rates of convergence than SGD.2.2 We empirically show that modified SVRG in our method have faster ratesof convergence than ordinary SVRG and SGD.2 Related WorksDifferent methods have been proposed to remove redundancy in deep learning models.Sparse representation is a good approach to reduce the number of parameters. Han et al.mainly explored pruning which is a direct approach to remove small values of connection andfocuses on the important connections with large weight values in all layers of the network.However, a disadvantage is that after pruning the needs networks to be retrained. One ideafrom matrix factorization can be applied to compressed parameters in models by findinga low rank approximation of the weight matrix Denton et al. (2014). However, in prac-tice whilst it improves computation performance, it dose not significantly reduce memoryrequirements.Weight sharing aims to approximate weights by a single weight. Chen et al. proposedHashedNets binning network connections into hash buckets uniformly at random by a hashfunction. As part of a three stage compression pipeline, Han et al. use k-means clusteringto identify the shared weights for each layer of a trained network.Weight quantization for reducing the bit-width to store each weight is an other approach toreduce memory requirements of models. Gysel et al. can successfully condense CaffeNet andSqueezeNet to 8 bits with only slight accuracy loss. Han et al. quantizes the sparse weightsmatrix to be an index which encodes in 8-bit for convolutional layers and 5-bit for fullyconnected layers. Rastegari et al. 
used binary operations to find the best approximationsof the convolutions, in which the bit-size can be reduced to 1-bit.Another type of approach uses regularization to induce sparsity. Hinton et al. proposed"dropout" that refers to dropping out neurons that are from visible and hidden layers inneural network during training, which can be shown to be a kind of regularization. Collins& Kohli applied /lscript1 regularization and shrinkage operators in the training process. However,it only reduced the weights by only 4 ×with inferior accuracy. Tsuruoka et al. improved onthis with /lscript1 regularization with superior compression, but the methods use SGD and hasslow asymptotical convergence due to the inherent variance Johnson & Zhang (2013).3 Mini-batch Non-convex SVRGFor Problem 1, a stochastic iterative learning algorithm estimate a stationary point xandachieve e-accuracy in finite iterations satisfying ||/triangleinv f(x)||2≤e, which is termed of the e-accurate solution. For a non-convex problem, the goal is to find a reasonable local minimum.However, the challenge is that gradients are easy to be stuck into saddle-point or a localminimum. As a result, such an algorithm aims to help gradients escape from saddle-point orlocal-minimal, e.g.(Ge et al., 2015) demonstrated that adding additional noise can help thealgorithm escape from saddle points. To our best knowledge, there is no theoretically proofthat can guarantee SVRG has faster rates of convergence than SGD. (Reddi et al., 2016)compared the Incremental First-order Oracle (IFO) complexity Agarwal & Bottou (2015)of SGD and SVRG on non-convex problem, O/parenleftbig1/e2/parenrightbigandO/parenleftBign + (n23/e)/parenrightBigrespectively. For3Under review as a conference paper at ICLR 2018our analysis, whether non-convex SVRG can be efficiently close to reasonable optimal localminimum depends on the number of training samples. Suppose fiis non-convex for i∈[n]and fhase-bounded gradients, the IFO complexity of mini-batch SGD with a adaptivelearning rate isO/parenleftbig1/e2/parenrightbigand for mini-batch SVRG with a fixed learning rate O/parenleftBign + (n23/e)/parenrightBig.If the value of eis constant, the speed of convergence rates of SVRG depends on the numberof training samples: when nis small, SVRG is faster than SGD for non-convex optimizationand vice versa. Our experiment results showed in Figure1 and Figure5can support our view.3.1 Mini-batch Non-convex SVRG on Sparse RepresentationIn our case, SVRG is applied on sparse representation. However, if directly combining mini-batch non-convex SVRG with cumulative /lscript1 regularization (called SVRG-cumulative- /lscript1):letukbe the average value of the total /lscript1 penalty given byuk=lMk/summationdisplayt=1ht. (5)At each training sample, weights that are used in current sample can be updated aswk+12j= wkj–hk/parenleftBigg1BB/summationdisplayi=1(∇yi(wkj) –∇yi( ̃wj)) + ̃mj/parenrightBigg(6)ifwk+12j>0thenwk+1j= max(0, wk+12j– (u k+ qk–1j)),else if wk+12j<0thenwk+1j= min(0, wk+12j+ (u k– qk–1j)),(7)where, qkjis the total difference of two weights between the SGD update and the /lscript1 regu-larization update,qkj=k/summationdisplayt=1(wt+1j– wt+12j), (8)where tis an index to calculate cumulative value of q, the algorithm has two problems: (1)As we mentioned, SVRG on sparse representation cannot guarantee to be faster than SGD.Figure 1 shows that for small dataset (e.g. MNIST) the convergence of SVRG is fasterthan SGD but slower than SGD using a larger dataset (e.g. 
CIFAR-10), (2) The trade-off5 10 15 2000.20.40.60.811.2LossLoss per EpochTrain(SGD-C-L1)Train(SVRG-C-L1)Test(SGD-C-L1)Test(SVRG-C-L1)5 10 15 200.20.40.60.811.21.41.6Loss per EpochTrain(SGD-C-L1)Train(SVRG-C-L1)Test(SGD-C-L1)Test(SVRG-C-L1)5 10 15 2000.20.40.60.811.2Loss per EpochTrain(SGD-C-L1)Train(SVRG-C-L1)Test(SGD-C-L1)Test(SVRG-C-L1)Compression Rate=90% Compression Rate=10% Compression Rate=50%(a) MNIST dataset on LeNet-300-100 model0 10 20 30 40 5000.511.522.5LossLoss per EpochTrain(SGD-C-L1)Train(SVRG-C-L1)Test(SGD-C-L1)Test(SVRG-C-L1)0 10 20 30 40 5000.511.522.5Loss per EpochTrain(SGD-C-L1)Train(SVRG-C-L1)Test(SGD-C-L1)Test(SVRG-C-L1)0 10 20 30 40 500.811.21.41.61.822.22.4Loss per EpochTrain(SGD-C-L1)Train(SVRG-C-L1)Test(SGD-C-L1)Test(SVRG-C-L1)Compression Rate=90% Compression Rate=50%Compression Rate=10% (b) CIFAR-10 dataset on LeNet-5 modelFigure 1: With cumulative /lscript1 regularization, we compare the convergence rates of SGD andSVRG. SVRG-cumulative- /lscript1has faster convergence rate in Figure 1(a). However, in Figure1(b), SGD-cumulative- /lscript1cansignificantlyconvergeintolowerlossthan SVRG-cumulative- /lscript1when compression rate equal 50% and 90%.ofSVRG-cumulative- /lscript1in the variance reduction versus the sparsity of the cumulative /lscript1regularization. After the variance of the gradient is reduced by SVRG in Equation 6, the ab-solute value of the updated weight wk+12jis higher than that using SGD, which causes SVRG4Under review as a conference paper at ICLR 2018to have an adverse effect on the sparsity of /lscript1-regularization. Compared to ordinary SVRG,(Reddi et al., 2016) proposed an extension of SVRG: MSVRG that introduces adapts thelearning rate, which guarantee that their method has equal or better than SGD. Therefore,similar to the method MSVRG, our method provides separate adaptive learning rates forSVRG-cumulative- /lscript1, which empirically demonstrates that it has faster convergence ratethan SGD.3.2 Delicate-SVRG-cumulative- /lscript1To reduce the number of weights, we introduce our compression method Delicate-SVRG-cumulative- /lscript1that have two main improvements :(1) Separate Adaptive Learning Rate Learning rates play an important rule in effect-ing the convergence rate of optimization during the training process which must be chosencarefully to ensure that the convergence rate is fast, but not too aggressive in which case thealgorithms may become unstable. Reddi et al. believe that adaptive learning rates can beapplied with reduced variance to provide faster convergence rates on nonconvex optimiza-tion. As a result, the convergence rate of the algorithms can be improved if the learning rateis adaptively updated. Our algorithm includes three parameters to provide greater fidelityin controlling the convergence of gradients for implementation of the /lscript1 regularization.Firstly, the learning rate gkis chosen based on the learning rate from Collins et al. shownas,gk=h01 +p(k/N)(9)where h0is an initial learning rate with large value. Our experiments determined the param-eters in three learning rates are over range of values, and a value of p=0.6 as determined tobe efficient. The learning rate schedule can emphasis the large distance between the gradientin the current optimization iteration and the sub-optimal solutions after every miterationin the beginning, which avoids the current gradient being stuck in a local minimum at thestart. 
This gives a fast convergence rate to start with, which decreases over time as the iterates settle toward a local minimum.

The second learning rate, $b_k$, reduces the variance of SVRG-cumulative-ℓ1 and better balances the trade-off between SVRG and the cumulative ℓ1 regularization. $b_k$ is chosen such that $b_k > \gamma_k$, with a slower decay:

$b_k = \frac{\eta_0}{1 + a\,(k/N)^q},$   (10)

here $a = 0.75$, and the experimental results are best when $q = 3$, which keeps a relatively large penalty on the averaged gradients. During the weight update, this efficiently prevents the absolute value of the weight from being increased by the SVRG correction term, which would otherwise weaken the effect of the ℓ1 regularization and reduce sparsity.

We retain the same learning rate $\eta_k$ for the cumulative ℓ1 regularization as Tsuruoka et al. (2009):

$\eta_k = \eta_0\, a^{k/N}$   (11)

The exponential decay ensures that the learning rate neither drops too fast at the beginning nor too slowly at the end.

(2) Bias-based Pruning. To further reduce the number of weights, we add a bias-based pruning term $\tilde{b}$ after the ℓ1 regularization in each iteration. The pruning rule is based on the following heuristic (Fonseca & Fleming, 1995): connections (weights) in each layer are removed if their value is smaller than the network's minimal bias. If the absolute value of a weight connection is smaller than the absolute value of the smallest bias of the entire network in each batch, that connection contributes least to its node and can be removed. In practice, bias-based pruning has no effect on the train and test loss.

Consequently, Delicate-SVRG-cumulative-ℓ1 incorporates the adaptive learning-rate schedules and bias-based pruning as

$w_j^{k+\frac{1}{2}} = w_j^k - \left( \frac{\gamma_k}{N} \sum_{i=1}^{N} \left( \nabla y_i(w_j^k) - \nabla y_i(\tilde{w}_j) \right) + b_k \tilde{\mu}_j \right)$
$\text{if } w_j^{k+\frac{1}{2}} > 0: \quad w_j^{k+1} = \max\!\left(0,\; w_j^{k+\frac{1}{2}} - (u_k + q_j^{k-1} + \tilde{b})\right),$
$\text{else if } w_j^{k+\frac{1}{2}} < 0: \quad w_j^{k+1} = \min\!\left(0,\; w_j^{k+\frac{1}{2}} + (u_k - q_j^{k-1} - \tilde{b})\right).$   (12)

The pseudo code of our method is given as Algorithm 1 in the Appendix.
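To make the combined update concrete, here is a minimal NumPy sketch of Equations (9)-(12). It is an illustrative reading of the method, not the authors' Caffe implementation: the per-example gradient arrays, the snapshot w̃, the full gradient μ̃, and the pruning threshold b̃ are all assumed to be supplied by the surrounding training loop.

```python
import numpy as np

def learning_rates(k, N, eta0, p=0.6, a=0.75, q=3):
    """Adaptive schedules of Eqs. (9)-(11); p, a, q as reported in the text."""
    gamma_k = eta0 / (1 + p * (k / N))         # Eq. (9): variance-reduced step
    b_k = eta0 / (1 + a * (k / N) ** q)        # Eq. (10): full-gradient step
    eta_k = eta0 * a ** (k / N)                # Eq. (11): l1-penalty rate
    return gamma_k, b_k, eta_k

def delicate_svrg_step(w, grads, grads_snap, mu_snap,
                       gamma_k, b_k, u_k, q_acc, b_tilde):
    """One Delicate-SVRG-cumulative-l1 update (Eq. 12) over a weight vector.

    grads, grads_snap : per-example gradients at w and at the snapshot w~,
                        both of shape (batch, n_weights)
    mu_snap           : full gradient mu~ from the last snapshot pass
    u_k               : accumulated average l1 penalty (Eq. 5)
    q_acc             : accumulated update differences q_j (Eq. 8)
    b_tilde           : bias-based pruning threshold (smallest |bias|)
    """
    # Variance-reduced gradient step with the two separate learning rates.
    w_half = w - (gamma_k * (grads - grads_snap).mean(axis=0) + b_k * mu_snap)

    # Cumulative l1 clipping extended with the pruning term b~ (Eq. 12);
    # a weight is never allowed to cross zero, which is what creates sparsity.
    w_new = np.where(w_half > 0,
                     np.maximum(0.0, w_half - (u_k + q_acc + b_tilde)),
                     np.minimum(0.0, w_half + (u_k - q_acc - b_tilde)))
    w_new = np.where(w_half == 0, 0.0, w_new)  # exactly-zero weights stay zero

    # Track the difference between the plain SVRG update and the penalized
    # update, as in Eq. (8).
    q_acc = q_acc + (w_new - w_half)
    return w_new, q_acc
```

The clipping in the second step is what drives weights to exactly zero and yields the sparsity that the compression stage exploits.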
4 Weight Quantization for Bit-depth Reduction

To further compress the model, weight quantization can significantly reduce the memory requirement by reducing bit precision. After reducing the number of weights with Delicate-SVRG-cumulative-ℓ1, we quantize the convolutional layers to 3 bits and encode the fully connected layers with 5 bits. Consequently, we propose our final compression method, Weight Reduction Quantisation.

Table 1: Comparison of the per-layer compression results of the pruning method from Han et al. (2016) and our method, trained and tested on the MNIST dataset with the LeNet-300-100 model (a) and the LeNet-5 model (b). D is Delicate-SVRG-cumulative-ℓ1 and Q is weight quantization.

(a) MNIST dataset with the LeNet-300-100 model.

Layer | Original network | #Weights (D) | Memory (D+Q) | Compress rate (D) | Compress rate (D+Q) | Deep compression (Han et al., 2016)
ip1   | 235K (940KB)     | 8.0K  | 14.36KB | 3%       | 1.63%       | 2.32%
ip2   | 30K (120KB)      | 2.5K  | 3.392KB | 8.3%     | 2.82%       | 3.04%
ip3   | 1K (4KB)         | 0.3K  | 0.308KB | 30%      | 7.7%        | 12.70%
Total | 266K (1070KB)    | 10.8K | 18.06KB | 4% (25×) | 1.68% (60×) | 2.49% (40×)
Top-1 Error: original network 1.64%; D 1.58%; D+Q 1.57%; Deep compression 1.58%.

(b) MNIST dataset with the LeNet-5 model.

Layer | Original network | #Weights (D) | Memory (D+Q) | Compress rate (D) | Compress rate (D+Q) | Deep compression (Han et al., 2016)
conv1 | 0.5K (2KB)       | 0.33K | 1.16KB  | 78%        | 58%        | 67.85%
conv2 | 25K (100KB)      | 3K    | 2.42KB  | 12%        | 2.42%      | 5.28%
ip1   | 400K (1600KB)    | 32K   | 24KB    | 3.7%       | 1.5%       | 2.45%
ip2   | 5K (40KB)        | 0.95K | 2.112KB | 17%        | 5.28%      | 6.13%
Total | 431K (1720KB)    | 35K   | 30KB    | 4.5% (22×) | 1.8% (57×) | 2.55% (39×)
Top-1 Error: original network 0.80%; D 0.74%; D+Q 0.737%; Deep compression 0.74%.

5 Experiments

In order to estimate and compare the effect of our compression method on different topologies, e.g. fully connected networks and convolutional networks, we select a deep neural network (DNN) and a convolutional neural network (CNN). The DNN chosen is LeNet-300-100, which has two fully connected hidden layers with 300 and 100 neurons, respectively. The CNN chosen is LeNet-5, which has two convolutional layers and two fully connected layers. We evaluate the performance of our new compression method using MNIST and CIFAR-10 as benchmarks. MNIST (LeCun et al., 2001) is a set of handwritten digits commonly used in machine learning; it has 60,000 training examples and 10,000 test samples, and each image is grey-scale with 28×28 pixels. CIFAR-10 has 10 classes with 5,000 training images and 1,000 test images per class; in total, it contains 50,000 training images and 10,000 test images of 32×32 pixels, in RGB. Two types of error rate are used to measure the performance of the models: top-1 and top-5. Here, we report top-1 error on MNIST and top-5 error on CIFAR-10, because many images in CIFAR are small and ambiguous. Our compression method was implemented using Caffe.¹

¹ Caffe is a deep learning framework. Source code: http://caffe.berkeleyvision.org

5.1 Comparison with leading results

Applying Weight Reduction Quantisation to the MNIST dataset, we choose the results with the best combination of compression and error rate for comparison. Our method can reduce the memory requirements by 98% with a 1.57% test error rate on the LeNet-300-100 model, and remove 98% of the parameters with a 0.74% test error on the LeNet-5 model. Table 1 summarises the compression pipeline with weight statistics in comparison to the method from Han et al. (2016). For our first stage, Delicate-SVRG-cumulative-ℓ1, which focuses on reducing the number of weights, we compare against the pruning method of Han et al. (2016), the first stage of their compression method. The two tables show that both Delicate-SVRG-cumulative-ℓ1 and the Han et al. pruning method can remove many weights from the fully connected layers. For the LeNet-300-100 model, the first fully connected layer (ip1) contains about 88% of the total number of weights, and it can be compressed by 97% by Delicate-SVRG-cumulative-ℓ1. Furthermore, the two methods have very similar compression rates in the convolutional layers (conv1 and conv2) of the LeNet-5 model, but Delicate-SVRG-cumulative-ℓ1 is more effective at sparsifying the two fully connected layers (ip1 and ip2). Both Delicate-SVRG-cumulative-ℓ1 and the Han et al. pruning method achieve lower test error than the uncompressed models, while delivering overall compression rates of up to 25× and 12×, respectively.
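The quantization stage compared above fixes only the bit widths (3 bits for convolutional and 5 bits for fully connected layers) and does not spell out the quantization scheme itself. The sketch below therefore assumes codebook quantization with k-means shared weights in the spirit of Deep Compression (Han et al., 2016); the function and its initialization are illustrative rather than the paper's exact procedure.

```python
import numpy as np

def quantize_layer(weights, n_bits, iters=20):
    """Cluster the non-zero weights of one layer into 2**n_bits shared
    values with a few Lloyd (k-means) iterations; pruned zeros stay zero."""
    flat = weights.ravel()
    nz = flat != 0
    vals = flat[nz]
    k = 2 ** n_bits
    # Initialize the codebook linearly over the weight range.
    codebook = np.linspace(vals.min(), vals.max(), k)
    for _ in range(iters):
        assign = np.argmin(np.abs(vals[:, None] - codebook[None, :]), axis=1)
        for c in range(k):
            members = vals[assign == c]
            if members.size:
                codebook[c] = members.mean()
    quantized = flat.copy()
    quantized[nz] = codebook[assign]
    return quantized.reshape(weights.shape), codebook

# Hypothetical usage on pruned layers:
#   conv_q, conv_cb = quantize_layer(conv_weights, n_bits=3)
#   fc_q, fc_cb = quantize_layer(fc_weights, n_bits=5)
```

Only the 2**n_bits codebook values and the per-weight indices then need to be stored, which is where the additional memory reduction reported in Table 1 comes from.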
[Figure 2 plots test error against the number of weights for the four ℓ1-regularization compression methods (SGD-C-L1, SVRG-C-L1, and D-SVRG-C-L1 with and without bias-based pruning); the Deep compression operating points (92%, 0.0158) and (92%, 0.0074) are marked for reference. (a) MNIST dataset on the LeNet-300-100 model (left) and LeNet-5 (right): error rate is top-1 error. (b) CIFAR-10 dataset on the LeNet-300-100 model (left) and LeNet-5 (right): error rate is top-5 error.]

Figure 2: Four ℓ1 regularization compression methods evaluated on two deep learning models, LeNet-300-100 and LeNet-5, using the MNIST and CIFAR-10 datasets.

The second stage further compresses the model by bit-depth reduction. Table 1 shows that our method Weight Reduction Quantisation, which combines Delicate-SVRG-cumulative-ℓ1 with bit-depth reduction, achieves a 1.57% error rate on the LeNet-300-100 model and a 0.737% error rate on the LeNet-5 model, both lower than the errors of the corresponding original models. The compression rates are up to 60× for the LeNet-300-100 model and up to 57× for the LeNet-5 model.

5.2 Evaluation of the Trade-off Between Memory Requirements and Performance

Focusing on Delicate-SVRG-cumulative-ℓ1, we examine the performance of the method at different compression rates controlled by the threshold $\lambda$, and compare different ℓ1-regularization-based model-compression methods over a range of memory requirements. Figure 2 shows how the test error rate and the weight sparsity vary as the regularization parameter $\lambda$ is adjusted. Where Pareto fronts are not available for comparison, we compare with a single trade-off point and determine the relative performance by which side of the Pareto front the point lies on.

LeNet on MNIST. Figure 2(a) shows LeNet on MNIST. Compared with SVRG-cumulative-ℓ1, SGD-cumulative-ℓ1 compresses better, but its error rate is higher due to the variance of the SGD optimiser. Replacing SGD with SVRG, SVRG-cumulative-ℓ1 reduces the test error, but the compression ability is also reduced. The Delicate-SVRG-cumulative-ℓ1 method has the fewest weights and the best performance, with the lowest test error at almost every compression value. Its performance is similar to that of the variant without bias-based pruning, which means that adding bias-based pruning can further reduce the number of weights without a side effect on performance. The pink box in Figure 2(a) indicates that the results within the box are better than the pink point.

LeNet on CIFAR-10. Figure 2(b) shows LeNet on CIFAR-10, a larger and more complicated dataset than MNIST. SVRG-cumulative-ℓ1 sometimes achieves a lower test error than SGD-cumulative-ℓ1, but its performance is not guaranteed to always be better. The Delicate-SVRG-cumulative-ℓ1 method performs better than the other methods, and its performance is further enhanced by adding bias-based pruning.
Consequently, Delicate-SVRG-cumulative-ℓ1 can be effectively applied to the LeNet-300-100 and LeNet-5 models without accuracy loss on MNIST and CIFAR-10.

5.3 Combining Delicate-SVRG-cumulative-ℓ1 and Weight Quantization

Figure 3 shows the test error at different compression rates for Delicate-SVRG-cumulative-ℓ1 and weight quantization. Individually, weight quantization can reduce more memory than Delicate-SVRG-cumulative-ℓ1 before the test error increases significantly on the MNIST dataset, while the reverse holds on the CIFAR-10 dataset. However, when the two are combined, the approach consistently outperforms either one alone.

[Figure 3 plots test error against compression rate for D, Q, and D+Q. (a) MNIST dataset on the LeNet-300-100 model (left) and LeNet-5 (right). (b) CIFAR-10 dataset on the LeNet-300-100 model (left) and LeNet-5 (right).]

Figure 3: The test error at different compression rates under different compression methods. D is Delicate-SVRG-cumulative-ℓ1, Q is weight quantization. Combining Delicate-SVRG-cumulative-ℓ1 with weight quantization achieves the best performance.

5.4 Comparison of Convergence Rates

To confirm the theoretical insight that our method has no adverse effect on the convergence rate, i.e., that it achieves similarly fast convergence to SGD-cumulative-ℓ1 or SVRG-cumulative-ℓ1, we track the training loss of the two LeNet models on the MNIST and CIFAR datasets over increasing iterations. In Figure 4(a), all methods have similar convergence rates on LeNet-300-100. In all of our experiments, Delicate-SVRG-cumulative-ℓ1 has the same or lower training loss and a faster convergence rate than the other methods, meaning that the adaptive learning rates can help SVRG with cumulative ℓ1 regularization escape local minima in the beginning and quickly converge to a good local minimum within a finite number of training iterations. Moreover, Delicate-SVRG-cumulative-ℓ1 without bias-based pruning has a similar training loss, which illustrates that adding bias-based pruning to the ℓ1 regularization has no obvious adverse effect on the convergence of the weights. Consequently, with its adaptive learning rates, Delicate-SVRG-cumulative-ℓ1 is an efficient compression method for neural network problems.

6 Discussion

In this paper, we proposed Weight Reduction Quantisation, which efficiently compresses neural networks without sacrificing accuracy. Our method has two stages: reducing the number of weights and reducing the number of bits used to store each weight. We show that SVRG and cumulative ℓ1 regularization can improve over SGD and plain ℓ1 regularization. By combining them, we have presented a new compression method, Delicate-SVRG-cumulative-ℓ1, that can efficiently reduce the number of parameters through its separate adaptive learning rates. The three adaptive learning rates are applied to SVRG and the cumulative ℓ1 penalty, which provides high accuracy with a reduced number of weights. Besides, our method improves SVRG so that it can be used on non-convex problems with a fast convergence rate. In our experiments on LeNet-300-100 and LeNet-5, our method can significantly reduce the memory requirements, by up to 60×, without accuracy loss.
After compression by our method, a compact deep neural network can be efficiently deployed on an embedded device with the performance of the original model.

References

Alekh Agarwal and Leon Bottou. A lower bound for the optimization of finite sums. In David Blei and Francis Bach (eds.), Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 78–86. JMLR Workshop and Conference Proceedings, 2015. URL http://jmlr.org/proceedings/papers/v37/agarwal15.pdf.

Zeyuan Allen-Zhu and Elad Hazan. Variance reduction for faster non-convex optimization. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pp. 699–707, 2016. URL http://jmlr.org/proceedings/papers/v48/allen-zhua16.html.

Zeyuan Allen-Zhu and Yang Yuan. Improved SVRG for non-strongly-convex or sum-of-non-convex objectives. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML'16, pp. 1080–1089. JMLR.org, 2016. URL http://dl.acm.org/citation.cfm?id=3045390.3045505.

Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. CoRR, abs/1504.04788, 2015. URL http://arxiv.org/abs/1504.04788.

Maxwell D. Collins and Pushmeet Kohli. Memory bounded deep convolutional networks. CoRR, abs/1412.1442, 2014. URL http://arxiv.org/abs/1412.1442.

Michael Collins, Amir Globerson, Terry Koo, Xavier Carreras, and Peter L. Bartlett. Exponentiated gradient algorithms for conditional random fields and max-margin Markov networks. Journal of Machine Learning Research, 9:1775–1822, 2008. URL http://jmlr.csail.mit.edu/papers/v9/collins08a.html.

Emily Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. CoRR, abs/1404.0736, 2014. URL http://arxiv.org/abs/1404.0736.

Carlos M. Fonseca and Peter J. Fleming. An overview of evolutionary algorithms in multiobjective optimization. Evol. Comput., 3(1):1–16, March 1995. ISSN 1063-6560. doi: 10.1162/evco.1995.3.1.1. URL http://dx.doi.org/10.1162/evco.1995.3.1.1.

Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points - online stochastic gradient for tensor decomposition. CoRR, abs/1503.02101, 2015. URL http://arxiv.org/abs/1503.02101.

Philipp Gysel, Mohammad Motamedi, and Soheil Ghiasi. Hardware-oriented approximation of convolutional neural networks. CoRR, abs/1604.03168, 2016. URL http://arxiv.org/abs/1604.03168.

Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. International Conference on Learning Representations (ICLR), 2016.

Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580, 2012. URL http://arxiv.org/abs/1207.0580.

Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 26, pp. 315–323. Curran Associates, Inc., 2013.

Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. In Intelligent Signal Processing, pp. 306–351. IEEE Press, 2001.

Yann LeCun, Yoshua Bengio, and Geoffrey Hinton.
Deep learning. Nature, 521(7553):436–444, 05 2015. URL http://dx.doi.org/10.1038/nature14539.

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. CoRR, abs/1603.05279, 2016. URL http://arxiv.org/abs/1603.05279.

Sashank J. Reddi, Ahmed Hefny, Suvrit Sra, Barnabás Póczós, and Alex Smola. Stochastic variance reduction for nonconvex optimization. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML'16, pp. 314–323. JMLR.org, 2016. URL http://dl.acm.org/citation.cfm?id=3045390.3045425.

Yoshimasa Tsuruoka, Jun'ichi Tsujii, and Sophia Ananiadou. Stochastic gradient descent training for l1-regularized log-linear models with cumulative penalty. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1, ACL '09, pp. 477–485, Stroudsburg, PA, USA, 2009. Association for Computational Linguistics. ISBN 978-1-932432-45-9. URL http://dl.acm.org/citation.cfm?id=1687878.1687946.

Algorithm 1 Delicate-SVRG-cumulative-ℓ1: stochastic descent training with a cumulative ℓ1 penalty

procedure Train(λ)
  u ← 0; μ̃ ← 0
  Initialize w_j and q_j to zero for all M weights
  for k = 0 to maximal iterations do
    γ ← η0 / (1 + p (k/N))
    b ← η0 / (1 + a (k/N)^3)
    for t = 0 to k do
      η ← η0 a^(t/N)
    end for
    u ← u + η λ / M
  end for
  for j ∈ features used in sample i do
    randomly select m features from the training samples
    w_j ← w_j − ( (γ_k/N) Σ_{i=1}^{N} (∇y_i(w_j) − ∇y_i(w̃_j)) + b_k μ̃ )
    ∇y_i(w̃_j) ← ∇y_i(w_j)
    if w_j and w̃_j converge to the same weights then
      μ̃ ← 0
    end if
    μ̃ ← μ̃ + (1/N) ∇y_i(w̃)
  end for
end procedure

procedure ApplyPenalty(j)
  z ← w_j
  (b̃ is the minimal bias over all layers.)
  if w_j > 0 then
    w_j ← max(0, w_j − (u + q_j + b̃))
  else if w_j < 0 then
    w_j ← min(0, w_j + (u − q_j − b̃))
  end if
  q_j ← q_j + (w_j − z)
end procedure

7 Appendix

7.1 The algorithm of Delicate-SVRG-cumulative-ℓ1

7.2 Comparison of the convergence rates between our method and SVRG and SGD when combined with ℓ1 regularization

The results are shown in Figure 4.

7.3 Using multiple initializations to compare the performance of our method and the other three methods

The experiments were run with multiple initializations and there was some small variability in the results. However, the relative performance of our method is always better than SVRG and SGD combined with cumulative ℓ1 regularization. The results are shown in Figure 5.
TheresultsshowedinFigure511Under review as a conference paper at ICLR 201810 20 30 40 50# Epoch10-210-1100101Train Loss (Log)Loss per EpochSGD-C-L1SVRG-C-L1D-SVRG-C-L1(wo BiasPruning)D-SVRG-C-L110 20 30 40 50# Epoch1.11.21.31.41.51.61.7Loss per EpochSGD-C-L1SVRG-C-L1D-SVRG-C-L1(wo BiasPruning)D-SVRG-C-L110 20 30 40 50# Epoch0.10.91.7Loss per EpochSGD-C-L1SVRG-C-L1D-SVRG-C-L1(wo BiasPruning)D-SVRG-C-L1(a) The train loss: MNIST dataset on LeNet-5 (left) and CIFAR-10 dataset on LeNet-300-100(middle) and LeNet-5 (right)0 10 20 30 40 50# Epoch0.020.040.060.080.1Test Loss (Log)Loss per EpochSGD-C-L1SVRG-C-L1D-SVRG-C-L1(wo BiasPruning)D-SVRG-C-L10 20 40 50# Epoch1.31.41.51.61.7Loss per EpochSGD-C-L1SVRG-C-L1D-SVRG-C-L1(wo BiasPruning)D-SVRG-C-L10 20 40 50# Epoch11.52Loss per EpochSGD-C-L1SVRG-C-L1D-SVRG-C-L1(wo BiasPruning)D-SVRG-C-L1(b) The test loss: MNIST dataset on LeNet-5 (left) and CIFAR-10 dataset on LeNet-300-100(middle) and LeNet-5 (right)Figure 4: Estimate the convergence rate when using four compression methods, includ-ing our method Delicate-SVRG-cumulative- /lscript1,Delicate-SVRG-cumulative- /lscript1 (without Bi-asPruning) that without bias-based pruning in /lscript1 regularization, SVRG-cumulative- /lscript1andSGD-cumulative- /lscript1, on LeNet-300-100 and LeNet-5 models with MNIST and CIFAR-10datasets. Here we choose the compression rate that equal 90% to observe training andtest loss. For MNIST dataset, we did not notice subtle difference train and test loss onLeNet-300-100 model generated by four methods.12Under review as a conference paper at ICLR 2018104105#Weights0.020.0250.030.0350.040.0450.05Test ErrorInitial weights 1SVRG-C-L1D-SVRG-C-L1D-SVRG-C-L1(wo BiasPruning)SGD-C-L1104105#Weights0.020.0250.030.0350.04Initial weights 2SVRG-C-L1D-SVRG-C-L1D-SVRG-C-L1(wo BiasPruning)SGD-C-L1104105#Weights0.0180.020.0220.0240.0260.0280.03Initial weights 3SVRG-C-L1D-SVRG-C-L1D-SVRG-C-L1(wo BiasPruning)SGD-C-L1(a) MNIST dataset on LeNet-300-100102104#Weights0.0080.0090.010.0110.0120.130.0140.0150.0160.0170.019Test ErrorSVRG-C-L1D-SVRG-C-L1D-SVRG-C-L1(wo BiasPruning)SGD-C-L15 10 15#Weights1040.0080.0090.010.0110.0120.0130.0140.0150.0160.017SVRG-C-L1D-SVRG-C-L1D-SVRG-C-L1(wo BiasPruning)SGD-C-L1105#Weights0.0090.010.0110.0120.0130.0140.0150.0160.0170.018SVRG-C-L1D-SVRG-C-L1D-SVRG-C-L1(wo BiasPruning)SGD-C-L1(b) MNIST dataset on LeNet-513Under review as a conference paper at ICLR 2018105106#Weights0.0620.0640.0660.0680.070.0720.0740.0760.0780.080.082SVRG-C-L1D-SVRG-C-L1D-SVRG-C-L1(wo BiasPruning)SGD-C-L1105106#Weights0.0620.0640.0660.0680.070.0720.0740.0760.0780.08SVRG-C-L1D-SVRG-C-L1D-SVRG-C-L1(wo BiasPruning)SGD-C-L1105106#Weights0.060.0620.0640.0660.0680.070.0720.0740.0760.078Test ErrorSVRG-C-L1D-SVRG-C-L1D-SVRG-C-L1(wo BiasPruning)SGD-C-L1(c) CIFAR-10 dataset on LeNet-300-100104#Weights0.030.050.070.090.10.130.150.170.190.250.30.40.5Test ErrorSVRG-C-L1D-SVRG-C-L1D-SVRG-C-L1(wo BiasPruning)SGD-C-L1104#Weights0.030.050.070.090.10.130.150.170.190.250.3 SVRG-C-L1D-SVRG-C-L1D-SVRG-C-L1(wo BiasPruning)SGD-C-L1102104#Weights0.030.050.070.10.130.150.170.190.250.30.4 SVRG-C-L1D-SVRG-C-L1D-SVRG-C-L1(wo BiasPruning)SGD-C-L1(d) CIFAR-10 dataset on LeNet-5Figure 5: Using three types of initial weights, we compare our method with other threemethods. D-SVRG-C-L1 and D-SVRG-C-L1(wo BiasPruning) are always better than othertwo methods. This experiment also can verify the our view that the performance of SVRGis better or worse than SGD that depends on the number of training samples. 
In our experiment, when choosing a small dataset (e.g. MNIST), SVRG is better than SGD; otherwise, when choosing a relatively large dataset (e.g. CIFAR-10), SVRG is worse than SGD.

### Review Title
The authors use l-1 regularized SVRG to promote sparsity in the trained model. However, the paper lacks comparisons with some key literature, and experimentally the benefit of SVRG over SGD does not seem substantial.

### Review Text
The authors present an l-1 regularized SVRG based training algorithm that is able to force many weights of the network to be 0, hence leading to good compression of the model. The motivation for l-1 regularization is clear, as it promotes sparse models, which lead to lower storage overheads during inference. The use of SVRG is motivated by the fact that it can, in some cases, provide faster convergence than SGD. Unfortunately, the authors do not compare with some key literature. For example, there have been several techniques that use sparsity and group sparsity [1,2,3] that lead to the same conclusion as the paper here: models can be significantly sparsified while not affecting the test accuracy of the trained model. The novelty of the technique presented is also unclear, as essentially the algorithm is simply SVRG with l1 regularization and then some quantization. The experimental evaluation does not strongly support the thesis that the presented algorithm is much better than SGD with l1 regularization. In the presented experiments, the gap between the performance of SGD and SVRG is small (especially in terms of test error), and overall the savings in terms of the number of weights are similar to Deep compression. Hence, it is unclear how the use of SVRG over SGD improves things. E.g., in Figure 2 the differences in top-1 error of SGD and SVRG for the same number of weights are very small (it is also unclear why Fig 2a uses top-1 and Fig 2b uses top-5 error). I also want to note that all experiments were run on LeNet, and not on state-of-the-art models (e.g. ResNets). Finally, the paper is riddled with typos; I attach below some of the ones I found in pages 1 and 2. Overall, although the topic is very interesting, the contribution of this paper is limited, and it is unclear how it compares with other similar techniques that use group sparsity regularization, and whether SVRG offers any significant advantages over l1-SGD.
typos:
“This work addresses the problem by proposing methods Weight Reduction Quantisation” -> This work addresses the problem by proposing a Weight Reduction Quantisation method
“Beside, applying with sparsity-inducing regularization” -> Beside, applying sparsity-inducing regularization
“Our method that minibatch SVRG with l-1 regularization on non-convex problem” -> Our minibatch SVRG with l-1 regularization method on non-convex problem
“As well as providing,l1 regularization is a powerful compression techniques to penalize some weights to be zero” -> “l1 regularization is a powerful compression technique that forces some weights to be zero”
The problem 1 can -> The problem in Eq.(1) can
“it inefficiently encourages weight” -> “it inefficiently encourages weights”

————
[1] Learning Structured Sparsity in Deep Neural Networks http://papers.nips.cc/paper/6504-learning-structured-sparsity-in-deep-neural-networks.pdf
[2] Fast ConvNets Using Group-wise Brain Damage https://arxiv.org/pdf/1506.02515.pdf
[3] Sparse Convolutional Neural Networks https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Liu_Sparse_Convolutional_Neural_2015_CVPR_paper.pdf

### Review Rating
4: Ok but not good enough - rejection

### Review Confidence
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
oGvpaRgaAd
NoDaLiDa/2023/Conference
2023
Question Answering and Question Generation for Finnish
["Ilmari Kylli\u00e4inen", "Roman Yangarber"]
Recent advances in the field of language modeling have improved the state-of-the-art in question answering (QA) and question generation (QG). However, the development of modern neural models, their benchmarks, and datasets for training them has mainly focused on English. Finnish, like many other languages, faces a shortage of large QA/QG model training resources, which has prevented experimenting with state-of-the-art QA/QG fine-tuning methods. We present the first neural QA and QG models that work with Finnish. To train the models, we automatically translate the SQuAD dataset and then use normalization methods to reduce the amount of problematic data created during the translation. Using the synthetic data, together with the Finnish partition of the TyDi-QA dataset, we fine-tune several transformer-based models to both QA and QG and evaluate their performance. To the best of our knowledge, the resulting dataset is the first large-scale QA/QG resource for Finnish. This paper also sets the initial benchmarks for Finnish-language QA and QG.
["computational linguistics", "question answering", "question generation", "deep learning", "transformer models"]
Question Answering and Question Generation for Finnish

Ilmari Kylliäinen and Roman Yangarber
University of Helsinki, Finland
Department of Digital Humanities
first.last@helsinki.fi

Abstract

Recent advances in the field of language modeling have improved the state-of-the-art in question answering (QA) and question generation (QG). However, the development of modern neural models, their benchmarks, and datasets for training them has mainly focused on English. Finnish, like many other languages, faces a shortage of large QA/QG model training resources, which has prevented experimenting with state-of-the-art QA/QG fine-tuning methods. We present the first neural QA and QG models that work with Finnish. To train the models, we automatically translate the SQuAD dataset and then use normalization methods to reduce the amount of problematic data created during the translation. Using the synthetic data, together with the Finnish partition of the TyDi-QA dataset, we fine-tune several transformer-based models to both QA and QG and evaluate their performance. To the best of our knowledge, the resulting dataset is the first large-scale QA/QG resource for Finnish. This paper also sets the initial benchmarks for Finnish-language QA and QG.

1 Introduction

The purpose of question answering (QA) systems is to help users find information more efficiently. QA systems come in many forms and offer help in everything from database querying to complex information search from the entire World Wide Web. Recently, much attention has been directed toward developing extractive QA models that can draw answers directly from spans of text. Popular approaches have emerged that integrate components that first retrieve documents relevant to a question with models for reading comprehension that pinpoint the answers in the retrieved documents.

A task closely related to QA, yet less researched, is question generation (QG), where the object is to generate natural and grammatical questions that can be answered by a specific answer using some given context. QG can be used to, e.g., automatically create reading comprehension tasks, or to improve the interactivity of virtual assistants. It can also be used as a data augmentation tool, to create new training data for QA systems.

Recently, the focus for both tasks has moved to neural language models utilizing transfer learning, e.g., BERT (Devlin et al., 2019) or XLNet (Yang et al., 2019), at least for languages such as English. Despite the advances in QA and QG, the lack of training datasets has hindered the use of state-of-the-art deep learning methods to develop modern QA and QG models for Finnish. Finnish, like many languages, lacks the resources to train models for the two tasks. In fact, no monolingual Finnish QA or QG models have been reported to exist at all.

In order to fine-tune models for Finnish extractive QA and answer-aware QG, we first create a Finnish QA dataset by automatically translating the SQuAD (Stanford Question Answering Dataset) dataset (Rajpurkar et al., 2016) from English to Finnish, and then use automatic normalization to clean up problematic data. We use the synthetic data to train several transformer-based models for QA and QG and evaluate their performance. We release the data to the research community to support future research.¹

The paper is organized as follows: in Section 2 we review prior work on QA, QG, and generation of synthetic resources. In Section 3, we review the dataset creation, and introduce additional datasets used to train and evaluate the models.
Section 4 reviews the fine-tuning methods, and Section 5 discusses the results of the experiments. Section 6 concludes and offers directions for future work.

¹ https://huggingface.co/datasets/ilmariky/SQuAD_v2_fi

2 Related Work

2.1 QA and QG for Other Languages

Approaches to both question answering and question generation have significantly evolved throughout their history. More recently, along with new datasets and novel deep learning methods, neural approaches have become the state of the art for both tasks.

It has become popular for information retrieval-based QA systems to incorporate a neural machine reading comprehension (MRC) component that extracts answers from a set of retrieved documents. After the introduction of the transformer architecture, models like BERT (Devlin et al., 2019) have become a popular tool for the answer extraction task. Many models have already surpassed human performance on the SQuAD1.1 dataset (Yamada et al., 2020; Yang et al., 2019), and some models can also predict whether the passage contains the answer to the question at all (Zhang et al., 2020). Lee et al. (2019) presented a unified end-to-end architecture capable of both retrieving and reading.

Since the mid-2010s, many RNN-based approaches have been proposed for QG (Zhou et al., 2017; Du et al., 2017; Zhao et al., 2018). However, the Transformer architecture (Vaswani et al., 2017) solved many problems that RNNs have, and has also become a popular architecture for QG models. The QG system by Wang et al. (2020) employs the encoder and the decoder from the Transformer. They combine the question generation and answer selection process in a joint model and treat the answers as a hidden pivot for question generation. Durmus et al. (2020) fine-tune a pre-trained BART model (Lewis et al., 2020) to generate questions from sentences. Chan and Fan (2019b) fine-tune a BERT model to work in a sequential manner to generate questions from paragraphs of text. Their model achieved state-of-the-art results in paragraph-level QG.

2.2 QA and QG for Finnish

Very little research on Finnish QA exists to date. Aunimo et al. (2004) presented two cross-lingual QA systems, Tikka and Varis, that took Finnish questions as input and found answers to them from a collection of English-language documents. Tikka is a simple baseline model, while Varis is more sophisticated. The pipelines of both systems start with defining the question type with the use of syntactic information and then translating the question into English. Varis also tries to extract the answer type of the question using a named entity recognizer. Tikka and Varis could correctly answer 22.5% and 29.0% of the questions presented to them, respectively.

No previous work is found on monolingual or cross-lingual QG systems that work with Finnish. Therefore, to the best of our knowledge, the results reported in this paper are the first ones for Finnish-language question generation.

2.3 Generation of Synthetic QA Corpora

Large annotated corpora are essential for fine-tuning pre-trained deep architectures but, unfortunately, they are also scarce for Finnish. In the context of QA, generation of synthetic corpora often means creation of a dataset via, e.g., automatic or semiautomatic translation of an existing QA dataset, or automatic data extraction from raw unlabeled data.

Recently, there have been several attempts to create synthetic datasets for QA. Carrino et al. (2020) translated an English QA dataset automatically to Spanish using a method called Translate-Align-Retrieve.
The method is based on MT and an unsupervised alignment algorithm. Alberti et al. (2019) combined QG and answer extraction models with a technique they refer to as roundtrip-consistency-ensuring filtering to automatically create a synthetic English QA dataset from unlabeled text passages. Abadani et al. (2021) translated the SQuAD2.0 QA dataset (Rajpurkar et al., 2018) automatically into Persian, and then finalized the data into two datasets, of which one is corrected manually and the other automatically. The automatically corrected one is many times bigger and also yielded better results. The SQuAD dataset has also been automatically translated to Swedish (Okazawa, 2021) and French (Kabbadj, 2018).

3 Data

3.1 SQuAD

SQuAD is a large English QA dataset created for training machine learning models for the extractive QA task. It is one of the most popular QA datasets, and many other QA datasets have followed its methodology (Clark et al., 2020; d'Hoffschmidt et al., 2020; Lim et al., 2019). SQuAD has also been a popular resource for answer-aware neural question generation (NQG) (Chan and Fan, 2019a; Du et al., 2017; Klein and Nabi, 2019).

English:
Passage: The capital, Brazzaville, is located on the Congo River, in the south of the country, immediately across from Kinshasa, the capital of the Democratic Republic of the Congo.
Question: What country does Kinshasa serve as capital of?
Answer: Democratic Republic of the Congo

Finnish translation:
Passage: Pääkaupunki Brazzaville sijaitsee Kongo-joen varrella maan eteläosassa, vastapäätä Kongon demokraattisen tasavallan pääkaupunkia Kinshasaa.
Question: Minkä maan pääkaupunki Kinshasa on?
Answer: Kongon demokraattinen tasavalta

Table 1: An example of problematic data resulting from translating passages and answers separately. The translated answer (in the nominative case) is not found within the translated passage (where it appears in the genitive case), which is required for extractive QA.

The first version of SQuAD (SQuAD1.1) contains over 100K passage-question-answer triplets that crowdworkers extracted from 536 Wikipedia articles. Each article is divided into several passages, and each passage has several questions related to its contents. Each question is linked with an answer (a substring of the passage) and the position of the answer's first character in the passage. The second version of the dataset, SQuAD2.0, contains an additional 50K questions, similar to the first version's questions but impossible to answer with the given passage. The extension's idea was to enable the development of models that can identify unanswerable questions.

3.2 Dataset Translation and Normalization

We translated all the text data in SQuAD2.0 into Finnish using Google NMT (Wu et al., 2016) with the Google Translate API. The passage, questions, and answers were translated separately, which led to many of the translated answers not being substrings of the translated passage. That was sometimes caused by translation errors, but one major factor was that the data was translated from a weakly inflected, analytic language to a highly inflected, agglutinative language. In other words, the MT system has no way of knowing how to inflect the words in the translation without any context. The SQuAD format requires the answer to be a substring of the passage, as it is an extractive QA dataset. The problem is illustrated in Table 1. Okazawa (2021) used a simple highlighting technique to tackle this problem when translating SQuAD2.0 into Swedish.
Rather than translating the passage and the answer separately, they put special markers ([0]) around the answer substring before the translation, and afterward simply extracted the translated answer span between the markers and then removed the markers. However, using this technique would have required translating the same passages multiple times with different answers marked, since passages are linked with several questions. This was not feasible, simply because using Google NMT via the API is not free.

After translation, we used simple normalization methods to identify the answer substring in the translated passage whenever it did not contain the separately translated answer. In total, there were four normalization steps: regular expressions, lemmatization, stemming, and using the English answer. The script started with the first one and moved to the next one if necessary.

In the first step, a set of regular expressions was used to fix some inconsistencies (in, e.g., white spaces and punctuation) that were found to occasionally occur in the translations. In the next step, both the passage and the answer were lemmatized, and the script checked whether the now lemmatized answer was included in the lemmatized passage. If lemmatization did not lead to a match, the script moved to the next step: stemming. Stemming was done because the lemmatizer was observed to not recognize many of the passage words, as they were often proper nouns. If no match was found after stemming, the last step was to check whether the English answer was included in the translated passage; if it was, it was used as the answer, with the assumption that the English answer had been mistakenly translated. This was often the case with, e.g., English song and movie names when they were translated with no context. If no match was found after all normalization, the question-answer pair was discarded from the final dataset.

If there was a match at any normalization step, the script proceeded to search for its location in the passage. The answer search started from the English answer's relative position in the translated passage and continued to neighboring positions until the answer was found. This was done to reduce the chance of choosing the starting position of a wrong occurrence, as some passages contain the answer string multiple times in different positions. After finding the answer start position, the question-answer pair was added to the final dataset.

With the normalization procedure, roughly 32K answers were modified to match the passage strings. The data consists of 101,120 passage-question-answer triplets that are valid in the sense that the answers are included in the passages. 66K of them are answerable (from SQuAD1.1), and 34K are unanswerable with the given passage (from SQuAD2.0). This means that roughly 28% of the data included in the publicly available partition of SQuAD1.1 (92K questions) had to be discarded. The amount is approximately the same when also taking into account the "unanswerable" questions of SQuAD2.0.
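The following is a minimal sketch of this four-step fallback chain. It is illustrative only: the regular-expression step is reduced to whitespace/punctuation cleanup, `lemmatize` stands in for whichever Finnish lemmatizer is used (the paper does not name one), and a full implementation would map a lemma- or stem-level match back to a surface span and its character offset rather than return the normalized string.

```python
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("finnish")

def normalize_match(passage_fi, answer_fi, answer_en, lemmatize):
    """Return a string that occurs in the translated passage, or None if
    all four normalization steps fail (the pair is then discarded)."""
    # Step 1: trivial whitespace/punctuation fixes (stand-in for regexes).
    candidate = " ".join(answer_fi.split()).strip(" .,;:")
    if candidate in passage_fi:
        return candidate
    # Step 2: lemma-level match.
    lemmas_p = " ".join(lemmatize(w) for w in passage_fi.split())
    lemmas_a = " ".join(lemmatize(w) for w in answer_fi.split())
    if lemmas_a in lemmas_p:
        return lemmas_a
    # Step 3: stem-level match (helps with proper nouns the lemmatizer
    # does not recognize).
    stems_p = " ".join(stemmer.stem(w) for w in passage_fi.split())
    stems_a = " ".join(stemmer.stem(w) for w in answer_fi.split())
    if stems_a in stems_p:
        return stems_a
    # Step 4: fall back to the English answer (e.g. untranslated titles).
    if answer_en in passage_fi:
        return answer_en
    return None
```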
3.3 Finnish TyDi-QA Corpus

TyDi-QA, Typologically Diverse Question Answering (Clark et al., 2020), consists of two QA datasets covering 11 typologically diverse languages with 204K question-answer pairs. The data was collected from Wikipedia articles by human annotators. Unlike with SQuAD, the question writers formed questions without knowing the answers to them. The authors chose this strategy to reduce lexical overlap between questions and passages, which could be exploited by machine learning systems.

One of the two datasets TyDi-QA consists of is in the SQuAD data format, which makes it ideal to combine with the SQuAD data. In total, it contains 7,635 Finnish questions. That is not much compared to SQuAD but, to the best of our knowledge, it is the only dataset that contains any Finnish data for extractive QA purposes. Consequently, we decided to include the Finnish partition of the TyDi-QA dataset in our experimental dataset.

3.4 The QA100-fi Corpus

Because most of the data used to train, validate, and test the models is synthetically generated, we decided to also create an additional small Finnish dataset, QA100-fi, for evaluation purposes only. One option would have been to simply use the Finnish TyDi-QA data for evaluation. However, it would not have been feasible due to the possible differences from SQuAD questions caused by the TyDi-QA annotators not knowing the answers to the questions they formed.

The QA100-fi dataset contains 100 questions related to Finnish Wikipedia articles. It is in the SQuAD format, and there are 10 questions for each category identified by Rajpurkar et al. (2016). We did not use any popularity-based ranking method to select the articles, like the authors of SQuAD did. Instead, we simply selected articles that appeared to be of good quality and had a length of at least three paragraphs. The dataset is tiny compared to actual QA test sets, but it still gives an impression of the models' performance on purely native text data collected by a native speaker.

3.5 Data Split

To train and evaluate models, we use data consisting of the answerable questions of the translated SQuAD1.1 data and the Finnish TyDi-QA data. Mimicking the methodology of Du et al. (2017), who used SQuAD data for English QG, we shuffled and split the data at the article level into training, validation, and testing partitions. We call the resulting dataset SQuADTyDi-fi. The same SQuADTyDi-fi splits were used to train, validate, and evaluate both QA and QG models. We also use QA100-fi as an additional evaluation dataset. The split sizes are shown in Table 2.

Dataset      | Split | Q-A Pairs | Articles
SQuADTyDi-fi | Train | 64,604    | 6,977
SQuADTyDi-fi | Dev   | 4,903     | 567
SQuADTyDi-fi | Test  | 4,822     | 567
QA100-fi     | Test  | 100       | 67

Table 2: Dataset splits. Q-A Pairs refers to the number of question-answer pairs in the corresponding split, and Articles tells how many Wikipedia articles the split has data from.

4 Model Fine-tuning

We train three models for QA and four models for QG. As the base models for fine-tuning, we use the Finnish GPT-2² (Radford et al., 2019), FinBERT³ (Virtanen et al., 2019), and the multilingual M-BERT (Devlin et al., 2019).

² https://huggingface.co/Finnish-NLP/gpt2-medium-finnish
³ We use bert-base-finnish-cased-v1, the cased variant.

4.1 BERT Question Answering

To use BERT for extractive QA, we employ the method described in Devlin et al. (2019). BERT is fine-tuned to "highlight" the answer when given a question and a passage that contains the answer as input.
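As a minimal illustration of this setup (formalized below), the following sketch uses the transformers question-answering head with the pre-trained FinBERT checkpoint as a placeholder for the fine-tuned model. Note that the library default encodes the question before the passage, whereas Equation (1) below places the passage first; the segment order here is an assumption of the sketch, not the paper's exact input layout.

```python
import torch
from transformers import BertForQuestionAnswering, BertTokenizer

# Placeholder checkpoint; the paper fine-tunes FinBERT and M-BERT for this.
name = "TurkuNLP/bert-base-finnish-cased-v1"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForQuestionAnswering.from_pretrained(name)

def extract_answer(question, passage):
    """Pick the most probable answer span from the start/end token logits."""
    inputs = tokenizer(question, passage, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    start = int(out.start_logits.argmax())
    end = int(out.end_logits.argmax())
    ids = inputs["input_ids"][0][start:end + 1]
    return tokenizer.decode(ids, skip_special_tokens=True)
```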
In practice, the model's task is to output two types of probabilities for each input token: 1) the probability of being the answer span start, and 2) the probability of being the last token of the answer span.

The input consists of a passage and a question, separated with the [SEP] token:

$X = (\text{[CLS]}, \langle P \rangle, \text{[SEP]}, \langle Q \rangle)$   (1)

where $\langle P \rangle$ is the input passage sequence and $\langle Q \rangle$ is the question sequence.

4.2 BERT Question Generation

The BERT models are fine-tuned for QG using the BERT-HLSQG (Highlight Sequential Question Generation) method originally presented by Chan and Fan (2019b). In BERT-HLSQG, the previous decoding results are considered when decoding the next token. Tokens are generated one by one, using a strategy that modifies BERT into generating text in an autoregressive manner. Another key idea in HLSQG is to highlight the answer in the input passage with special tokens, to tackle any ambiguity caused by the answer appearing multiple times in the passage.

At inference, the input $X$ for an HLSQG model is in the following format:

$X = (\text{[CLS]}, P_{HL}, \text{[SEP]}, \hat{Q}, \text{[MASK]})$   (2)

where $P_{HL}$ is the highlighted passage sequence and $\hat{Q}$ is the predicted question sequence.

At the first inference step, the highlighted passage is followed only by a [MASK] token, as the predicted question sequence $\hat{Q} = [\hat{q}_1, \hat{q}_2, ..., \hat{q}_{|\hat{Q}|}]$ is empty at the start. The passage highlighting is done by placing special [HL] tokens around the answer in the passage:

$P_{HL} = (p_1, ..., \text{[HL]}, p_s, ..., p_e, \text{[HL]}, ..., p_{|P|})$   (3)

where $p_n$ is the $n$th passage token, $p_s$ and $p_e$ are the answer start and end tokens, and $|P|$ is the passage length.

During each step, the whole input is fed to the model, and it outputs a prediction for the [MASK] token. That prediction is considered the next token in the question sequence, and a new [MASK] token is placed after it. The same procedure goes on, with the input updated with the newly predicted question tokens, until a [SEP] token is predicted. At that point, the question is considered ready.

4.3 GPT-2 Question Answering

To fine-tune a GPT-2 model for QA (GPT-2-QA), we use a prompt to encourage the model to generate answers relevant to the given passage and question. The model should learn the pattern of the prompt and also the relation between the two input sections (passage and question) in the prompt.

During fine-tuning, the prompt consists of three lines. Each line starts with a word that describes the content of the line and is followed by a matching sequence. For example, the first two lines start with Context: and Question: and continue with the passage and question sequences. During training, the language modeling loss is computed only on the section where the model should output the answer. The fine-tuning prompt is:

X = Context: ⟨P⟩
    Question: ⟨Q⟩
    Answer: ⟨A⟩

where ⟨P⟩ is the passage sequence, ⟨Q⟩ is the question sequence, and ⟨A⟩ is the answer sequence. During inference, the answer sequence is omitted from the prompt, as the model's task is to fill it in.

4.4 GPT-2 Question Generation

We train two GPT-2-based QG models, GPT-2-QG and GPT-2-HLQG. The training and inference prompts of the GPT-2-QG model are the same as for GPT-2-QA, but the order of the last two rows is reversed. The QG models should learn to use the passage to generate a question that the second line's sequence answers. The training procedure is the same as with GPT-2-QA, but instead of the answers, the training loss is computed on the generated questions. The two QG models differ in their prompts: GPT-2-HLQG also highlights the answer in the passage with [HL] tokens.
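A minimal sketch of the two input-construction schemes and the sequential HLSQG decoding loop is given below. The checkpoint name is a placeholder standing in for the fine-tuned model, decoding is simplified to greedy argmax, and the [HL] handling assumes the token was added to the vocabulary during fine-tuning.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

def gpt2_qg_prompt(passage, answer):
    """The GPT-2-QG prompt with the last two rows reversed; at inference
    the final line is left for the model to fill in."""
    return f"Context: {passage}\nAnswer: {answer}\nQuestion:"

# Placeholder checkpoint standing in for the fine-tuned HLSQG model.
name = "TurkuNLP/bert-base-finnish-cased-v1"
tokenizer = BertTokenizer.from_pretrained(name)
tokenizer.add_special_tokens({"additional_special_tokens": ["[HL]"]})
model = BertForMaskedLM.from_pretrained(name)
model.resize_token_embeddings(len(tokenizer))

def hlsqg_generate(passage, ans_start, ans_end, max_len=30):
    """Greedy HLSQG decoding: predict one [MASK] at a time (Eqs. 2-3)."""
    p_hl = (passage[:ans_start] + " [HL] " +
            passage[ans_start:ans_end] + " [HL] " + passage[ans_end:])
    question_ids = []
    for _ in range(max_len):
        text = p_hl + " [SEP] " + tokenizer.decode(question_ids) + " [MASK]"
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        # Fill in the last [MASK] position with the most probable token.
        mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[-1]
        next_id = int(logits[0, mask_pos].argmax(-1))
        if next_id == tokenizer.sep_token_id:   # [SEP] ends the question
            break
        question_ids.append(next_id)
    return tokenizer.decode(question_ids)
```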
The motivation for the highlighting is the same as with BERT-HLSQG: to reduce the possible ambiguity caused by the answer appearing multiple times in the passage.

4.5 Implementation

All the pre-trained models were accessed via the transformers⁴ Python package by Hugging Face (Wolf et al., 2020). The fine-tuning scripts were implemented using the same package along with PyTorch.⁵ For fine-tuning the BERT-HLSQG models, we modified and used open-source code by Lin (2020).⁶

We fine-tune the models using two Nvidia Volta V100 GPUs and AdamW optimization with an initial learning rate of 5×10⁻⁵. The batch size varied from 2 to 24, depending on the task and the model architecture. All the models were trained for six epochs, and a validation set was used to keep track of the training performance and thus select the best model for evaluation on the test sets. The QA BERT models (FinBERT-QA and M-BERT-QA) had the best validation results after two epochs, whereas all the other models had the best validation performance after six epochs. More details regarding the fine-tuning are included in Appendix A.

⁴ https://github.com/huggingface/transformers. Version 3.0.2 for the BERT-HLSQG models and 4.8.1 for the other models.
⁵ Version 1.5.0+cu101 for the BERT-HLSQG models and 1.9.0+cu111 for the other models.
⁶ https://github.com/chris4540/StudyMaskedLMForQG

5 Results

5.1 QA Results

The evaluation results for the QA models are in Table 3. The scores are multiplied by 100 to mimic the style of the official SQuAD leaderboard.⁷ On both testing datasets, FinBERT-QA obtains the best results. However, the fine-tuned M-BERT model comes close, with EM scores 2-3% worse and F1 scores 2.8-4.5 points behind FinBERT-QA. The GPT-2-based QA model also achieves moderately good results, but both its EM and F1 scores are at least 20 points worse on both test sets.

⁷ https://rajpurkar.github.io/SQuAD-explorer/

Dataset      | Model      | Exact Match | F1 score
SQuADTyDi-fi | FinBERT-QA | 58.0        | 69.9
SQuADTyDi-fi | M-BERT-QA  | 56.0        | 67.1
SQuADTyDi-fi | GPT-2-QA   | 37.2        | 46.9
QA100-fi     | FinBERT-QA | 67.0        | 83.7
QA100-fi     | M-BERT-QA  | 64.0        | 79.2
QA100-fi     | GPT-2-QA   | 43.0        | 56.0

Table 3: Evaluation of the QA models on the two test sets.

The GPT-2-QA model obtained the worst results on both datasets. With an EM score of 37.2 and an F1
And sometimes, it even mod-ifies the date, as seen in Table 4.Predicted answer Target answerKenji Vatanabe Kenji Watanabe20. lokakuuta 2000 21. lokakuuta 2000Kypylän Midnan kypärän3 vuotta kolme vuottaTable 4: Examples of GPT-2-QA outputs that arenot substrings of the input passage.The other QA models, FinBERT-QA andM-BERT-QA , perform much better. They come inquite close to each other as FinBERT-QA outper-forms M-BERT-QA by 2-3 points on SQuADTyDi-fi data with its EM and F1 scores of 58.0 and 69.9,respectively. The difference between the scoresofFinBERT-QA andM-BERT-QA is slightlybigger with the QA100-fi test data, with whichFinBERT-QA obtains an EM score of 67.0 andan F1 score of 83.7. Using only Finnish dataand a lot larger amount of it in pre-training seemsto have been beneficial for FinBERT-QA . LikeGPT-2-QA , also M-BERT-QA seems to occasion-ally struggle when the question is phrased verydifferently compared to the input passage.As with GPT-2-QA , the longer the ground truthanswer, the more likely the BERT-based modelsseem to predict it incorrectly. However, rather thanchoosing a completely wrong span, FinBERT-QAandM-BERT-QA often seemed only to pick toofew words. This is also reflected in the biggerdifferences between EM and F1 scores of the othertwo models, compared to GPT-2-QA . Other thanquestions with longer answers, it is challenging toidentify any specific question/answer types withwhich FinBERT-QA andM-BERT-QA have themost difficulties. Additional examples of outputsof the QA models are included in Appendix A.The results of all QA models are better with theQA100-fi test dataset. It is possible that because thepassages, questions, and answers in QA100-fi arenot machine-translated, they could be closer to theFinnish language with which the models were pre-trained. Another factor might be the lengths of thepassages, questions, and answers. Their averagelengths are shown in Table 5. The passages andquestions in the test partition of SQuADTyDi-fi arelonger on average, but the answers are longer inQA100-fi. Longer passages are more challengingfor the models as there are more tokens from whichto choose the answer span start and end tokens.However, the test sets are so different in size that itis hard to say how much that affects the results.Passage Question AnswerSQuADTyDi-fi (test) 74.5 6.6 2.5QA100-fi 62.2 5.9 3.2Table 5: Average word counts in the test partitionof SQuADTyDi-fi and QA100-fi.As there are no other Finnish QA models tocompare with, we can gain some perspective bycomparing the results with English models trainedon a similar dataset. The top EM and F1 scoresfor single BERT models in the English SQuAD1.1leaderboard8are around 85 and 90, respectively.The overall best single model results are fromother transformer-based models, like LUKE (Ya-mada et al., 2020) and XLNet (Yang et al., 2019),which both obtain EM and F1 scores over 908Webpage mirroring SQuAD1.1 leaderboard:https://paperswithcode.com/sota/question-answering-on-squad11and 95, respectively. The best Finnish results(byFinBERT-QA ) are quite far from the best-performing English models. However, it is worthnoting that the Finnish models were fine-tuned us-ing a smaller dataset which is probably of poorerquality, as it has been automatically translated.Finnish being a highly inflective language mightalso make the QA task generally more challenging.5.2 QG ResultsThe evaluation results for the QG models are in Ta-ble 6. The FinBERT-based models obtain the bestresults. 
As in the QA task, the results of the Fin-BERT and M-BERT-based models are quite closeto each other, whereas the GPT-2 models are muchworse.Dataset Model BLEU-4 METEORSQuADTyDi-fi FinBERT-HLSQG 0.11 0.17M-BERT-HLSQG 0.10 0.16GPT-2-QG 0.04 0.10GPT-2-HLQG 0.04 0.10QA100-fi FinBERT-HLSQG 0.18 0.22M-BERT-HLSQG 0.13 0.20GPT-2-QG 0.04 0.13GPT-2-HLQG 0.04 0.11Table 6: BLEU-4 and METEOR scores of QG mod-els. Results on additional metrics in Appendix A.BothGPT-2-QG andGPT-2-HLQG achieve aBLEU-4 score of 0.04 on both datasets. Unlike inChan and Fan (2019b), using an answer highlighttechnique in the passage did not lead to an increasein the performance as the results of the two modelsare nearly identical. This indicates that ambiguitywas not the root cause of the inferior performanceof the models.Looking at the outputs of the GPT-2-based QGmodels, it is clear that the models learn the gen-eral structure of a question. The outputs mostlystart with the correct interrogative word and endwith a question mark. The questions also seemmostly grammatical. The biggest problems seemto be related to semantic validity and generatingquestions that can be answered using the input an-swer. However, the models occasionally seem togenerate questions that can be answered with theinput answer, but they are very different from theground-truth questions. They are good examplesof why using automatic, n-gram-based evaluationmetrics to assess QG systems can be problematic.Compared to the GPT-2-based QG models, theBERT-based QG models perform roughly twiceas well on every metric. FinBERT-HLSQG andM-BERT-HLSQG seem to output questions thatmake more sense and have more common wordswith the target question. For example, with tar-get question Kuinka korkeaksi puu yleensä kasvaaavoimilla alueilla? (“How tall does the tree usu-ally grow in open areas?”), FinBERT-HLSQG out-puts Minkä korkuinen on jousisoihtupuu avoimillaalueilla? (“How tall is the pink trumpet treein open areas?”) and GPT-2-HLQG outputsMinkä kokoisia puutalot ovat metsäalueiden ko-rkeilta tasoilta? (“What size are the woodenhouses from the high levels of the forest areas?”).GPT-2-HLQG ’s output is nonsensical yet gram-matical, whereas FinBERT-HLSQG ’s output canbe considered correct, though the phrasing is quitedifferent from the target question. All models per-form better with shorter passages and struggle atinflecting rare words. Additional examples of theoutputs of all QG models are shown in Appendix A.As on the QA task, the FinBERT-based modelachieves slightly better scores on the SQuADTyDi-fi test set than the multilingual variant. How-ever, in QG, the difference between the perfor-mance of BERT-based models is bigger whenevaluating on the QA100-fi dataset. For ex-ample, FinBERT-HLSQG obtains a BLEU-4score of 0.18, while M-BERT-HLSQG yields 0.13.Checking the outputs on QA100-fi, it seems thatM-BERT-HLSQG has more problems inflectingwords, and it occasionally uses word order andphrasings that sound a bit unnatural in Finnish. Itis possible that these problems were exacerbatedwhen the model was tested on QA100-fi, whichconsists of data collected by a native speaker.Chan and Fan (2019b), who initially presentedthe BERT-HLSQG method, report a BLEU-4 scoreof 0.20 for their English QG model that wasfine-tuned on roughly 73K question-answer pairs.FinBERT-HLSQG ’s BLEU-4 score (0.11) on theSQuADTyDi-fi test set is quite far from that,whereas the BLEU-4 score on the smaller QA100-fitest set (0.18) is a lot closer. 
Chan and Fan (2019b), who initially presented the BERT-HLSQG method, report a BLEU-4 score of 0.20 for their English QG model, which was fine-tuned on roughly 73K question-answer pairs. FinBERT-HLSQG's BLEU-4 score on the SQuADTyDi-fi test set (0.11) is quite far from that, whereas its score on the smaller QA100-fi test set (0.18) is much closer. It is likely that the shorter average length of the passages and questions in QA100-fi has a positive effect on the model's performance on that dataset; Chan and Fan (2019a) likewise conclude that their BERT-HLSQG model works better with shorter passages. As with the QA task, the smaller amount of training data and its poorer quality, together with the more complex Finnish morphology, may partly explain the differences from the English models.

6 Conclusion and Future Work

We have proposed an MT-based method for creating a Finnish QA dataset, and used it to train and evaluate several transformer-based QA and QG models. On both tasks, fine-tuned monolingual BERT models obtain the best results. The multilingual variants came close, while the fine-tuned GPT-2 models were found to underperform. Pre-training with only Finnish data seems to give the models an edge in both QA and QG.

To the best of our knowledge, these are the first monolingual Finnish QA and QG models. They set a fair baseline for further research in Finnish QA and QG. All data used in the experiments is released to the research community to support future research, and the models are released as benchmarks. We believe that this is a valuable contribution, since suitable datasets created by native Finnish speakers are not yet available.

Given the promising initial results, we plan to pursue several directions. (1) As the SQuAD2.0 data with the unanswerable questions was also translated, it could be used to train the first Finnish QA models that can also identify unanswerable questions. (2) Lower-level natural language processing (NLP) components can be employed to study and improve performance. For example, we can use syntactic parsing to check for ungrammatical questions and to analyze the created synthetic dataset, and named-entity recognition to improve QA results (Yadav and Bethard, 2019; Piskorski et al., 2019). (3) Real-world applications, such as language-learning systems (Katinskaia et al., 2018, 2017), can benefit from QA and QG by automatically generating reading-comprehension questions from arbitrary authentic text. To integrate QG into such applications, a separate model should be developed for choosing appropriate input answers. (4) To support (3), it is important to study in detail which types of questions and answers the QA and QG models handle especially well or especially poorly.

References

Negin Abadani, Jamshid Mozafari, Afsaneh Fatemi, Mohammad Ali Nematbakhsh, and Arefeh Kazemi. 2021. ParSQuAD: Machine translated SQuAD dataset for Persian question answering. In 2021 7th International Conference on Web Research (ICWR), pages 163–168.

Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, and Michael Collins. 2019. Synthetic QA corpora generation with roundtrip consistency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6168–6173, Florence, Italy. Association for Computational Linguistics.

Lili Aunimo, Juha Makkonen, and Reeta Kuuskoski. 2004. Cross-language question answering for Finnish. In Proceedings of the Web Intelligence Symposium, Finnish Artificial Intelligence Conference, pages 35–49.

Casimiro Pio Carrino, Marta R. Costa-jussà, and José A. R. Fonollosa. 2020. Automatic Spanish translation of SQuAD dataset for multi-lingual question answering. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 5515–5523, Marseille, France. European Language Resources Association.

Ying-Hong Chan and Yao-Chung Fan. 2019a. BERT for question generation. In Proceedings of the 12th International Conference on Natural Language Generation, pages 173–177, Tokyo, Japan. Association for Computational Linguistics.
Ying-Hong Chan and Yao-Chung Fan. 2019b. A recurrent BERT-based model for question generation. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 154–162, Hong Kong, China. Association for Computational Linguistics.

Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics, 8:454–470. arXiv:2003.05002.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Martin d'Hoffschmidt, Wacim Belblidia, Quentin Heinrich, Tom Brendlé, and Maxime Vidal. 2020. FQuAD: French question answering dataset. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1193–1208, Online. Association for Computational Linguistics.

Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1342–1352, Vancouver, Canada.

Esin Durmus, He He, and Mona Diab. 2020. FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5055–5070, Online. Association for Computational Linguistics.

Ali Kabbadj. 2018. Something new in French text mining and information extraction (universal chatbot): Largest Q&A French training dataset (110 000+). [Online; posted 11-November-2018].

Anisia Katinskaia, Javad Nouri, and Roman Yangarber. 2017. Revita: a system for language learning and supporting endangered languages. In Proceedings of the joint workshop on NLP for Computer Assisted Language Learning and NLP for Language Acquisition, pages 27–35, Gothenburg, Sweden. LiU Electronic Press.

Anisia Katinskaia, Javad Nouri, and Roman Yangarber. 2018. Revita: a language-learning platform at the intersection of ITS and CALL. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).

T. Klein and Moin Nabi. 2019. Learning to answer by learning to ask: Getting the best of GPT-2 and BERT worlds. ArXiv, abs/1911.02365.

Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096, Florence, Italy. Association for Computational Linguistics.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, Online. Association for Computational Linguistics. arXiv:1910.13461.

Seungyoung Lim, Myungji Kim, and Jooyoul Lee. 2019. KorQuAD1.0: Korean QA dataset for machine reading comprehension. arXiv:1909.07005 [cs].

Chun Hung Lin. 2020. Automatic Question Generation with Pre-trained Masked Language Models. Ph.D. thesis, KTH Royal Institute of Technology, Stockholm, Sweden.

Susumu Okazawa. 2021. Swedish translation of SQuAD2.0. GitHub repository (Accessed: 6 March 2022).

Jakub Piskorski, Laska Laskova, Michał Marcińczuk, Lidia Pivovarova, Pavel Přibáň, Josef Steinberger, and Roman Yangarber. 2019. The second cross-lingual challenge on recognition, normalization, classification, and linking of named entities across Slavic languages. In Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing. ACL.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8).

Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789, Melbourne, Australia. Association for Computational Linguistics.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.

Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: BERT for Finnish. arXiv:1912.07076 [cs].

Bingning Wang, Xiaochuan Wang, Ting Tao, Qi Zhang, and Jingfang Xu. 2020. Neural question generation with answer pivot. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):9138–9145.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. HuggingFace's Transformers: State-of-the-art natural language processing. arXiv:1910.03771 [cs].

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv:1609.08144 [cs].

Vikas Yadav and Steven Bethard. 2019. A survey on recent advances in named entity recognition from deep learning models. CoRR, abs/1910.11470.
Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. LUKE: Deep contextualized entity representations with entity-aware self-attention. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6442–6454, Online. Association for Computational Linguistics.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.

Zhuosheng Zhang, Junjie Yang, and Hai Zhao. 2020. Retrospective reader for machine reading comprehension. arXiv:2001.09694 [cs].

Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. 2018. Paragraph-level neural question generation with maxout pointer and gated self-attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3901–3910, Brussels, Belgium. Association for Computational Linguistics.

Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2017. Neural question generation from text: A preliminary study. arXiv:1704.01792 [cs].

A Appendix

Model            Epochs (best model)  Batch size
FinBERT-QA       2                    16
M-BERT-QA        2                    16
GPT-2-QA         6                    2
FinBERT-HLSQG    6                    24
M-BERT-HLSQG     6                    16
GPT-2-QG         6                    2
GPT-2-HLQG       6                    2

Table 7: Training hyperparameters. With all models, we use the AdamW optimization algorithm with an initial learning rate of 5 × 10^-5.

Dataset        Model            BLEU-1  BLEU-2  BLEU-3  BLEU-4  METEOR  ROUGE-L
SQuADTyDi-fi   FinBERT-HLSQG    0.29    0.21    0.15    0.11    0.17    0.33
               M-BERT-HLSQG     0.29    0.20    0.14    0.10    0.16    0.31
               GPT-2-QG         0.18    0.11    0.06    0.04    0.10    0.20
               GPT-2-HLQG       0.18    0.10    0.06    0.04    0.10    0.20
QA100-fi       FinBERT-HLSQG    0.39    0.30    0.22    0.18    0.22    0.41
               M-BERT-HLSQG     0.36    0.25    0.18    0.13    0.20    0.36
               GPT-2-QG         0.22    0.12    0.07    0.04    0.13    0.22
               GPT-2-HLQG       0.19    0.11    0.07    0.04    0.11    0.20

Table 8: All evaluation results of the QG models.

Input passage: Ulkomuodoltaan hylkeet ovat sileitä ja pulleita. Ruumiinrakenne soveltuu sulavaan vedessä liikkumiseen. Ranteesta ja kämmenestä ovat muodostuneet etuevät ja nilkasta ja jalkaterästä takaevät. Evät ovat heikot eikä niitä voi käyttää apuna maalla liikkumiseen. Hylkeet liikkuvatkin maalla siten, että ne siirtävät painoa rinnan ja vatsan varaan. Erotuksena lähisukulaisistaan korvahylkeistä, joihin kuuluvat muun muassa merileijonat, varsinaisilla hylkeillä ei ole ulkoisia korvalehtiä. Varsinaisten hylkeiden uiminen tapahtuu evien ja ruumiin takaosan sivuttaissuuntaista liikettä apuna käyttäen.
Input question: Mihin hylkeiden evät eivät sovellu? (What are seal fins not suitable for?)
Target answer: maalla liikkumiseen (to move on land)

Model        Predicted answer
FinBERT-QA   maalla liikkumiseen. (to move on land.)
M-BERT-QA    vedessä (in the water)
GPT-2-QA     ui maalla (swim/swims on land)

Table 9: Output examples of the QA models. The ground truth answer is highlighted in the input passage.

Input passage: Jättiläismetsäkarju eli jättiläismetsäsika eli jättisika (Hylochoerus meinertzhageni) on keskisen ja läntisen Afrikan metsissä elävä elinvoimainen sorkkaeläinlaji. Se on sukunsa Hylochoerus ainoa laji. Jättiläismetsäkarjut ovat suurimpia luonnonvaraisia sikoja. Ne voivat kasvaa jopa 210 senttimetriä pitkiksi ja painaa 275 kilogrammaa. Niiden ruumis on tanakka ja pää leveä, mutta jalat ovat lyhyet. Nahkaa peittävät pitkät ja karkeat karvat, jotka nousevat pystyyn eläimen kiihtyessä.
Input answer: 210 senttimetriä (210 centimeters)
Target question: Kuinka pitkiksi jättiläismetsäkarjut voivat kasvaa? (How long can giant forest hogs grow?)

Model            Generated question
FinBERT-HLSQG    Kuinka pitkäksi jättiläismetsäkarju voi kasvaa? (How long can a giant forest hog grow?)
M-BERT-HLSQG     Kuinka pitkiä jättiläismetsäkarjat voivat kasvaa? * (How long can giant forest cattles grow?)
GPT-2-QG         Miten pitkäksi afrikkalainen jättiläismetsäkarju voi kasvaa? (How long can an African giant forest hog grow?)
GPT-2-HLQG       Kuinka pitkä on jättiläismetsäkarjun pituus? (How long is the length of a giant forest hog?)

Table 10: Output examples from the QG models. The input answer is highlighted in the input passage. Outputs marked with * contain inflection errors, but they are ignored in the translation.
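To make the hyperparameters in Table 7 concrete, the sketch below shows one way the FinBERT-QA fine-tuning setup could be reproduced with the HuggingFace transformers Trainer (which uses AdamW by default). The model hub id, the dataset placeholders, and the evaluation schedule are assumptions for illustration, not the authors' exact training script.

```python
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hub id assumed for the cased FinBERT checkpoint named in the paper.
MODEL_NAME = "TurkuNLP/bert-base-finnish-cased-v1"

def build_qa_trainer(train_dataset, dev_dataset):
    """train_dataset / dev_dataset: placeholders for the tokenized
    SQuADTyDi-fi splits, with input_ids, attention_mask,
    start_positions, and end_positions features."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForQuestionAnswering.from_pretrained(MODEL_NAME)
    args = TrainingArguments(
        output_dir="finbert-qa",
        learning_rate=5e-5,              # AdamW initial LR (Table 7)
        per_device_train_batch_size=16,  # FinBERT-QA batch size (Table 7)
        num_train_epochs=2,              # best FinBERT-QA checkpoint (Table 7)
        evaluation_strategy="epoch",     # assumption: validate once per epoch
    )
    return Trainer(model=model, args=args, tokenizer=tokenizer,
                   train_dataset=train_dataset, eval_dataset=dev_dataset)

# Usage: trainer = build_qa_trainer(train_dataset, dev_dataset)
#        trainer.train()
```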
Z8GgzfQZuy-
Question Answering and Question Generation for Finnish
8: Top 50% of accepted papers, clear accept
This paper presents (mostly automatically translated) Finnish question answering and question generation datasets, and experiments that fine-tune Finnish LMs on these tasks. Creating datasets and benchmarks for new languages is very important and is the most significant contribution of this work. In general, I found the work well structured, interesting, informative, and easy to read. The information was presented systematically and the authors set a new benchmark for future work. While I'm not an expert in this particular task, I wonder if the authors should have also considered translating the test set instead and using English models, given their superior performance. The authors also used Google's translation API to create their dataset. Unfortunately, they mentioned some compromises in their methodology due to the cost of the translation service, which is surprising considering that there are also many open-source NMT models available for Finnish. Would better results be achieved if the API cost were not a factor?
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Question Answering and Question Generation for Finnish ### Paper Abstract Recent advances in the field of language modeling have improved the state-of-the-art in question answering (QA) and question generation (QG). However, the development of modern neural models, their benchmarks, and datasets for training them has mainly focused on English. Finnish, like many other languages, faces a shortage of large QA/QG model training resources, which has prevented experimenting with state-of-the-art QA/QG fine-tuning methods. We present the first neural QA and QG models that work with Finnish. To train the models, we automatically translate the SQuAD dataset and then use normalization methods to reduce the amount of problematic data created during the translation. Using the synthetic data, together with the Finnish partition of the TyDi-QA dataset, we fine-tune several transformer-based models to both QA and QG and evaluate their performance. To the best of our knowledge, the resulting dataset is the first large-scale QA/QG resource for Finnish. This paper also sets the initial benchmarks for Finnish-language QA and QG. ### Paper Keywords ["computational linguistics", "question answering", "question generation", "deep learning", "transformer models"] ### Paper Content Question Answering and Question Generation for FinnishIlmari Kylliäinen and Roman YangarberUniversity of Helsinki, FinlandDepartment of Digital Humanitiesfirst.last@helsinki.fiAbstractRecent advances in the field of languagemodeling have improved the state-of-the-art in question answering (QA) and ques-tion generation (QG). However, the devel-opment of modern neural models, theirbenchmarks, and datasets for training themhas mainly focused on English. Finnish,like many other languages, faces a shortageof large QA/QG model training resources,which has prevented experimenting withstate-of-the-art QA/QG fine-tuning meth-ods. We present the first neural QA andQG models that work with Finnish. Totrain the models, we automatically translatethe SQuAD dataset and then use normal-ization methods to reduce the amount ofproblematic data created during the trans-lation. Using the synthetic data, togetherwith the Finnish partition of the TyDi-QAdataset, we fine-tune several transformer-based models to both QA and QG and eval-uate their performance. To the best of ourknowledge, the resulting dataset is the firstlarge-scale QA/QG resource for Finnish.This paper also sets the initial benchmarksfor Finnish-language QA and QG.1 IntroductionThe purpose of question answering (QA) systemsis to help users find information more efficiently.QA systems come in many forms and offer help ineverything from database querying to complex in-formation search from the entire World Wide Web.Recently, much attention has been directed towarddeveloping extractive QA models that can drawanswers directly from spans of text. Popular ap-proaches have emerged that integrate componentsthat first retrieve documents relevant to a question,with models for reading comprehension that pin-point the answers in the retrieved documents.A task closely related to QA, yet less researched,is question generation (QG), where the object isto generate natural and grammatical questions thatcan be answered by a specific answer using somegiven context. 
QG can be used to, e.g., automat-ically create reading comprehension tasks, or toimprove the interactivity of virtual assistants. Itcan also be used as a data augmentation tool—tocreate new training data for QA systems.Recently, the focus for both tasks has moved toneural language models utilizing transfer learning—e.g., BERT (Devlin et al., 2019) or XLNet (Yanget al., 2019), at least for languages such as English.Despite the advances in QA and QG, the lack oftraining datasets has hindered the use of state-of-the-art deep learning methods to develop modernQA and QG models for Finnish. Finnish, like manylanguages, lacks the resources to train models forthe two tasks. In fact, no monolingual Finnish QAor QG models have been reported to exist at all.In order to fine-tune models for Finnish extrac-tive QA and answer-aware QG, we first create aFinnish QA dataset by automatically translating theSQuAD— Stanford Question Answering Datasetdataset (Rajpurkar et al., 2016), from English toFinnish, and then use automatic normalization toclean up problematic data. We use the syntheticdata to train several transformer-based models forQA and QG and evaluate their performance. We re-lease the data to the research community to supportfuture research.1The paper is organized as follows: in Section (2)we review prior work on QA, QG, and generationof synthetic resources. In Section 3, we review thedataset creation, and introduce additional datasetsused to train and evaluate the models. Section 4reviews the fine-tuning methods, and Section 5discusses the results of the experiments. Section 6concludes and offers directions for future work.1https://huggingface.co/datasets/ilmariky/SQuAD_v2_fi2 Related Work2.1 QA and QG for Other LanguagesApproaches to both question answering and ques-tion generation have significantly evolved through-out their history. More recently, along with newdatasets and novel deep learning methods, neuralapproaches have become the state of the art forboth tasks.It has become popular for information retrieval-based QA systems to incorporate a neural machinereading comprehension (MRC) component that ex-tracts answers from a set of retrieved documents.After the introduction of the transformer architec-ture, models like BERT (Devlin et al., 2019) havebecome a popular tool for the answer extractiontask. Many models have already surpassed humanperformance on the SQuAD1.1 dataset (Yamadaet al., 2020; Yang et al., 2019) and some modelscan also predict whether the passage contains theanswer to the question at all (Zhang et al., 2020).Lee et al. (2019) presented a unified end-to-endarchitecture capable of both retrieving and reading.Since the mid-2010s, many RNN-based ap-proaches have been proposed to QG (Zhou et al.,2017; Du et al., 2017; Zhao et al., 2018). How-ever, the Transformer architecture (Vaswani et al.,2017) solved many problems that RNNs have, andhas also become a popular architecture for QGmodels. The QG system by Wang et al. (2020)employs the encoder and the decoder from theTransformer. They combine the question gener-ation and answer selection process in a joint modeland treat the answers as a hidden pivot for ques-tion generation. Durmus et al. (2020) fine-tune apre-trained BART model (Lewis et al., 2020) togenerate questions from sentences. Chan and Fan(2019b) fine-tune a BERT model to work in a se-quential manner to generate questions from para-graphs of text. 
Their model achieved state-of-the-art results in paragraph-level QG.2.2 QA and QG for FinnishVery little research on Finnish QA exists to date.Aunimo et al. (2004) presented two cross-lingualQA systems, Tikka andVaris , that took Finnishquestions as input and found answers to themfrom a collection of English-language documents.Tikka is a simple baseline model, while Varisis more sophisticated. The pipelines of both sys-tems start with defining the question type with theuse of syntactic information and then translatingthe question into English. Varis also tries to ex-tract the answer type of the question using a namedentity recognizer. Tikka andVaris could cor-rectly answer 22.5% and 29.0% of the questionspresented to them, respectively.No previous work is found on monolingual orcross-lingual QG systems that work with Finnish.Therefore, to the best of our knowledge, the resultsreported in this paper are the first ones for Finnish-language question generation.2.3 Generation of Synthetic QA CorporaLarge annotated corpora are essential for fine-tuning pre-trained deep architecture but, unfortu-nately, they are also scarce for Finnish. In thecontext of QA, generation of synthetic corporaoften means creation of a dataset via, e.g., auto-matic or semiautomatic translation of an existingQA dataset, or automatic data extraction from rawunlabeled data.Recently, there have been several attempts to cre-ate synthetic datasets for QA. Carrino et al. (2020)translated an English QA dataset automatically toSpanish using a method called Translate-Align-Retrieve. The method is based on MT and anunsupervised alignment algorithm. Alberti et al.(2019) combined QG and answer extraction mod-els with a technique they refer to as roundtripconsistency-ensuring filtering to automatically cre-ate a synthetic English QA dataset from unlabeledtext passages. Abadani et al. (2021) translated theSQuAD2.0 QA dataset (Rajpurkar et al., 2018) au-tomatically into Persian, and then finalized the datainto two datasets, of which one is corrected manu-ally and the other automatically. The automaticallycorrected one is many times bigger and also yieldedbetter results. The SQuAD dataset has also been au-tomatically translated to Swedish (Okazawa, 2021)and French (Kabbadj, 2018).3 Data3.1 SQuADSQuAD is a large English QA dataset created fortraining machine learning models for the extractiveQA task. It is one of the most popular QA datasets,and many other QA datasets have followed itsmethodology (Clark et al., 2020; d’Hoffschmidtet al., 2020; Lim et al., 2019). SQuAD has alsobeen a popular resource for answer-aware neuralquestion generation (NQG) (Chan and Fan, 2019a;Du et al., 2017; Klein and Nabi, 2019).English Finnish translationPassage The capital, Brazzaville, is located on the CongoRiver, in the south of the country, immediatelyacross from Kinshasa, the capital of the Demo-cratic Republic of the Congo.Pääkaupunki Brazzaville sijaitsee Kongo-joenvarrella maan eteläosassa, vastapäätäKongon demokraattisen tasavallanpääkaupunkia Kinshasaa.Question What country does Kinshasa serve as capital of? Minkä maan pääkaupunki Kinshasa on?Answer Democratic Republic of the Congo Kongon demokraattinen tasavaltaTable 1: An example of problematic data resulting from translating passages and answers separately. 
Thetranslated answer (in the nominative case) is not found within the translated passage (where it appears inthe genitive case) which is required for extractive QA.The first version of SQuAD (SQuAD1.1) con-tains over 100K passage-question-answer tripletsthat crowdworkers extracted from 536 Wikipediaarticles. Each article is divided into several pas-sages, and each passage has several questions re-lated to its contents. Each question is linked withan answer (a substring of the passage) and the posi-tion of the answer’s first character in the passage.The second version of the dataset, SQuAD2.0, con-tains additional 50K questions, similar to the firstversion’s questions but impossible to answer withthe given passage. The extension’s idea was to en-able the development of models that can identifyunanswerable questions.3.2 Dataset Translation and NormalizationWe translated all the text data in the SQuAD2.0 intoFinnish using Google NMT (Wu et al., 2016) withthe Google Translate API. The passage, questions,and answers were translated separately, which ledto many of the translated answers not being sub-strings of the translated passage. That was some-times caused by translation errors, but one ma-jor factor was that the data was translated froma weakly inflected, analytic language to a highlyinflected, agglutinative language. In other words,the MT system has no way of knowing how toinflect the words in the translation without any con-text. The SQuAD format requires the answer tobe a substring of the passage as it is an extrac-tive QA dataset. The problem is illustrated in Ta-ble 1. Okazawa (2021) used a simple highlightingtechnique to tackle this problem when translatingSQuAD2.0 into Swedish. Rather than translatingthe passage and the answer separately, they put spe-cial markers ( [0]) around the answer substring be-fore the translation and afterward simply extractedthe translated answer span between the markersand then removed the markers. However, using itwould have required translating the same passagesmultiple times with different answers marked sincepassages are linked with several questions. Thiswas not feasible solely because using Google NMTvia API is not free.After translation, we used simple normaliza-tion methods to identify the answer substring inthe translated passage whenever it did not containthe separately translated answer. In total, therewere four normalization steps: regular expressions,lemmatization, stemming, and using the Englishanswer.. The script started with the first one andmoved to the next one if necessary.In the first step, a set of regular expressions wasused to fix some inconsistencies (in, e.g., whitespaces and punctuation) that were found to occa-sionally occur in the translations. In the next step,both the passage and the answer were lemmatized,and the script checked whether the now lemmatizedanswer was included in the lemmatized passage. Iflemmatization did not lead to a match, the scriptmoved to the next step: stemming. Stemming wasdone because the lemmatizer was observed to notrecognize many of the passage words as they wereoften proper nouns. If no match was found afterstemming, the last step was to check whether theEnglish answer was included in the translated pas-sage; if it was, it was used as the answer with theassumption that the English answer was mistak-enly translated. This was often the case with, e.g.,English song and movie names when they weretranslated with no context. 
If no match was foundafter all normalization, the question-answer pairwas discarded from the final dataset.If there was a match at any normalization step,the script proceeded to search its location in thepassage. The answer search started from the En-glish answer’s relative position in the translatedpassage and continued to neighboring positions un-til the answer was found. This was done to reducethe chance of choosing the starting position of awrong occurrence, as some passages contain theanswer string multiple times in different positions.After finding the answer start position, the question-answer pair was added to the final dataset.With the normalization procedure, roughly 32Kanswers were modified to match the passage strings.The data consists of 101,120 passage-question-answer triplets that are valid in the sense that the an-swers are included in the passages. 66K of them areanswerable (from SQuAD1.1), and 34K are unan-swerable with the given passage (from SQuAD2.0).This means that roughly 28% of the data includedin the publicly available partition of SQuAD1.1(92K questions) had to be discarded. The amountis approximately the same when taking into accountalso the “unanswerable” questions of SQuAD2.0.3.3 Finnish TyDi-QA CorpusTyDi-QA— Typologically Diverse QuestionAnswering (Clark et al., 2020), consists of twoQA datasets, covering 11 typologically diverselanguages with 204K question-answer pairs.The data was collected from Wikipedia articlesby human annotators. Unlike with SQuAD,the question writers formed questions withoutknowing the answers to them. The authors chosethis strategy to reduce lexical overlapping betweenquestions and passages, which could be exploitedby machine learning systems.One of the two datasets TyDi-QA consists of isin the SQuAD data format, which makes it ideal tocombine with the SQuAD data. In total, it contains7,635 Finnish questions. It is not much comparedto SQuAD, but to the best of our knowledge, it isthe only dataset that contains any Finnish data forextractive QA purposes. Consequently, we decidedto include the Finnish partition of the TyDi-QAdataset in our experimental dataset.3.4 The QA100-fi CorpusBecause most of the data used to train, validate, andtest the models are synthetically generated, we de-cided to also create an additional small Finnishdataset for evaluation purposes only, QA100-fi.One option would have been to simply use theFinnish TyDi-QA data for evaluation. However, itwould not have been feasible due to the possibledifferences with SQuAD questions caused by theTyDi-QA annotators not knowing the answers totheir formed questions.The QA100-fi dataset contains 100 questionsrelated to Finnish Wikipedia articles. It is in theSQuAD format, and there are 10 questions for eachcategory identified by Rajpurkar et al. (2016). Wedid not use any popularity-based ranking method toselect the articles, like the authors of SQuAD did.Instead, we simply selected articles that appearedto be of good quality and had a length of at leastthree paragraphs. The dataset is tiny compared toactual QA test sets, but it still gives an impressionof the models’ performance on purely native textdata collected by a native speaker.3.5 Data SplitTo train and evaluate models, we use data consist-ing of the answerable questions of the translatedSQuAD1.1 data and the Finnish TyDi-QA data.Mimicking the methodology of Du et al. (2017),who used SQuAD data for English QG, we shuffledand split the data on article level into training, vali-dation, and testing partitions. 
We call the resultingdataset SQuADTyDi-fi. The same SQuADTyDi-fisplits were used to train, validate, and evaluate bothQA and QG models. We also use QA100-fi as anadditional evaluation dataset. The split sizes areillustrated in Table 2.Dataset Split Q-A Pairs ArticlesTrain 64,604 6,977SQuADTyDi-fi Dev 4,903 567Test 4,822 567QA100-fi Test 100 67Table 2: Dataset splits. Q-A Pairs refers tothe number of question-answer pairs in the cor-responding split, and Articles tells how manyWikipedia articles the split has data from.4 Model Fine-tuningWe train three models for QA and four models forQG. As the base models for fine-tuning, we use theFinnish GPT-22(Radford et al., 2019), FinBERT3(Virtanen et al., 2019), and multilingual M-BERT,(Devlin et al., 2019).2https://huggingface.co/Finnish-NLP/gpt2-medium-finnish3We use bert-base-finnish-cased-v1 , the casedvariant.4.1 BERT Question AnsweringTo use BERT for extractive QA, we employ themethod described in Devlin et al., 2019. BERT isfine-tuned to “highlight” the answer when givena question and a passage that contains the answeras input. In practice, the model’s task is to outputtwo types of probabilities for each input token: 1)being the answer span start 2)being the last tokenof the answer span.The input consists of a passage and a question,separated with the [SEP] token:X= ([CLS], ⟨P⟩, [SEP], ⟨Q⟩)(1)where⟨P⟩is the input passage sequence and ⟨Q⟩isthe question sequence.4.2 BERT Question GenerationThe BERT models are fine-tuned for QG usingthe BERT-HLSQG (Highlight Sequential QuestionGeneration) method originally presented by Chanand Fan, 2019b. In BERT-HLSQG, the previousdecoding results are considered when decodingthe next token. Tokens are generated one by oneusing a strategy to modify BERT into generatingtext in an autoregressive manner. Another key ideain HLSQG is to highlight the answer in the inputpassage with special tokens to tackle any ambiguitycaused by the answer appearing multiple times inthe passage.At inference, the input Xfor an HLSQG modelis in the following format:X= ([CLS] , PHL,[SEP] ,ˆQ,[MASK] )(2)where PHLis the highlighted passage sequenceandˆQis the predicted question sequence.At the first inference step, the highlighted pas-sage is followed only by a [MASK] token, as thepredicted question sequence ˆQ= [ˆq1,ˆq2, ...,ˆq|ˆQ|]is empty at the start. The passage highlighting isdone by placing special [HL] tokens around theanswer in the passage:PHL= (p1, ...,[HL] , ps, ..., p e,[HL] , ..., p |P|)(3)where pnis the nth passage token, psandpeare the answer start and end tokens, and |P|is thepassage length.During each step, the whole input is fed to themodel, and it outputs a prediction for the [MASK]token. That prediction is considered the next tokenin the question sequence, and a new [MASK] tokenis placed after it. The same procedure goes on withinputs updated with the newly predicted questiontokens until a [SEP] token is predicted. At thatpoint, the question is considered ready.4.3 GPT-2 Question AnsweringTo fine-tune a GPT-2 model for QA ( GPT-2-QA ),we use a prompt to encourage the model to generateanswers relevant to the given passage and question.The model should learn the pattern of the promptand also the relation between the two input sections(passage and question) in the prompt.During fine-tuning, the prompt consists of threelines. Each line starts with a word that describes thecontent of the line and is followed by a matchingsequence. 
For example, the first two lines startwith Context: andQuestion: and continue with thepassage and question sequences. During training,language modeling loss is computed only on thesection where the model should output the answer.The fine-tuning prompt is:X=Context :⟨P⟩Question :⟨Q⟩Answer :⟨A⟩where⟨P⟩is the passage sequence, ⟨Q⟩is the ques-tion sequence, and ⟨A⟩is the answer sequence. Dur-ing inference, the answer sequence is omitted fromthe prompt, as the model’s task is to fill it in.4.4 GPT-2 Question GenerationWe train two GPT-2-based QG models,GPT-2-QG and GPT-2-HLQG . The train-ing and inference prompts of the GPT-2-QGmodel are the same as the GPT-2-QA , but theorder of the last two rows is reversed. TheQG models should learn to use the passage togenerate a question that the second line’s sequenceanswers. The training procedure is the same aswith GPT-2-QA , but instead of answers, thetraining loss is computed on the generated ques-tions. The two QG models differ in the prompts.GPT-2-HLQG also highlights the answer in thepassage with [HL] tokens. The motivation for thatis the same as with BERT-HLSQG: to reduce thepossible ambiguity caused by the answer appearingmultiple times in the passage.4.5 ImplementationAll the pre-trained models were accessed via thetransformers4Python package by Hugging4https://github.com/huggingface/transformers . Version 3.0.2 for BERT-HLSQGFace (Wolf et al., 2020). The fine-tuning scriptswere implemented using the same package alongwith PyTorch.5. For fine-tuning BERT-HLSQGmodels, we modified and used open-source codeby Lin (2020).6We fine-tune the models using two Nvidia V oltav100 GPUs and AdamW optimization with initiallearning rate 5×10−5. The batch size varied from2 to 24, depending on the task and the model ar-chitecture. All the models were trained for sixepochs, and a validation set was used to keep trackof the training performance and thus select the bestmodel for evaluation on the test sets. QA BERTmodels ( FinBERT-QA andM-BERT-QA ) had thebest validation results after two epochs, whereasall the other models had the best validation perfor-mance after six epochs. More details regarding thefine-tuning are included in Appendix A.5 Results5.1 QA ResultsThe evaluation results for the QA models are inTable 3. The scores are multiplied by 100 to mimicthe style of the official SQuAD leaderboard.7Withboth testing datasets, FinBERT-QA obtains thebest results. However, the fine-tuned M-BERTmodel comes close, with EM scores 2-3% worseand F1 scores 2.8-4.5 points behind FinBERT-QA.The GPT-2 -based QA model achieves moderatelygood results also, but both EM and F1 scores are atleast 20 points worse with both test sets.Dataset Model Exact Match F1 scoreFinBERT-QA 58.0 69.9SQuADTyDi-fi M-BERT-QA 56.0 67.1GPT-2-QA 37.2 46.9FinBERT-QA 67.0 83.7QA100-fi M-BERT-QA 64.0 79.2GPT-2-QA 43.0 56.0Table 3: Evaluation of QA models on two test sets.GPT-2-QA model obtained the worst results onboth datasets. With an EM score of 37.2 and an F1models and 4.8.1 for other models.5Version 1.5.0+cu101 for BERT-HLSQG models and1.9.0+cu111 for other models.6https://github.com/chris4540/StudyMaskedLMForQG7https://rajpurkar.github.io/SQuAD-explorer/score of 46.9 on SQuADTyDi-fi data, it is appar-ent that fine-tuning has contributed to the model’sability to answer questions. The model outputs rel-atively short answers as expected, and it also seemsto have quite well learned the expected answer typefor each interrogative in the question. 
For exam-ple, the model mostly seems to answer questionsstarting with kuka (“who”) with names/people andquestions starting with montako (“how many”) withnumeral phrases. However, the results are still farbehind the best-performing models.When the question contains very different vocab-ulary than the passage (e.g., synonyms or idiomaticexpressions), GPT-2-QA seems to perform partic-ularly poorly. A closer look at the results showsthat the GPT-2-QA model’s outputs occasionallycontain words that are slightly modified versionsof the ones in the passage. This problem is uniqueto GPT-2 in the experiments as it is the only au-toregressive model. Some other examples of sucherrors are shown in Table 4. However, most ofthe answers seem to be substrings of the input pas-sages, as expected. GPT-2-QA seems to often failto “understand” what specifically is being asked.Even when it seems to understand that the questionshould be answered with a date and the answershould be a substring of the passage, it often seemsto pick just any date. And sometimes, it even mod-ifies the date, as seen in Table 4.Predicted answer Target answerKenji Vatanabe Kenji Watanabe20. lokakuuta 2000 21. lokakuuta 2000Kypylän Midnan kypärän3 vuotta kolme vuottaTable 4: Examples of GPT-2-QA outputs that arenot substrings of the input passage.The other QA models, FinBERT-QA andM-BERT-QA , perform much better. They come inquite close to each other as FinBERT-QA outper-forms M-BERT-QA by 2-3 points on SQuADTyDi-fi data with its EM and F1 scores of 58.0 and 69.9,respectively. The difference between the scoresofFinBERT-QA andM-BERT-QA is slightlybigger with the QA100-fi test data, with whichFinBERT-QA obtains an EM score of 67.0 andan F1 score of 83.7. Using only Finnish dataand a lot larger amount of it in pre-training seemsto have been beneficial for FinBERT-QA . LikeGPT-2-QA , also M-BERT-QA seems to occasion-ally struggle when the question is phrased verydifferently compared to the input passage.As with GPT-2-QA , the longer the ground truthanswer, the more likely the BERT-based modelsseem to predict it incorrectly. However, rather thanchoosing a completely wrong span, FinBERT-QAandM-BERT-QA often seemed only to pick toofew words. This is also reflected in the biggerdifferences between EM and F1 scores of the othertwo models, compared to GPT-2-QA . Other thanquestions with longer answers, it is challenging toidentify any specific question/answer types withwhich FinBERT-QA andM-BERT-QA have themost difficulties. Additional examples of outputsof the QA models are included in Appendix A.The results of all QA models are better with theQA100-fi test dataset. It is possible that because thepassages, questions, and answers in QA100-fi arenot machine-translated, they could be closer to theFinnish language with which the models were pre-trained. Another factor might be the lengths of thepassages, questions, and answers. Their averagelengths are shown in Table 5. The passages andquestions in the test partition of SQuADTyDi-fi arelonger on average, but the answers are longer inQA100-fi. 
Longer passages are more challengingfor the models as there are more tokens from whichto choose the answer span start and end tokens.However, the test sets are so different in size that itis hard to say how much that affects the results.Passage Question AnswerSQuADTyDi-fi (test) 74.5 6.6 2.5QA100-fi 62.2 5.9 3.2Table 5: Average word counts in the test partitionof SQuADTyDi-fi and QA100-fi.As there are no other Finnish QA models tocompare with, we can gain some perspective bycomparing the results with English models trainedon a similar dataset. The top EM and F1 scoresfor single BERT models in the English SQuAD1.1leaderboard8are around 85 and 90, respectively.The overall best single model results are fromother transformer-based models, like LUKE (Ya-mada et al., 2020) and XLNet (Yang et al., 2019),which both obtain EM and F1 scores over 908Webpage mirroring SQuAD1.1 leaderboard:https://paperswithcode.com/sota/question-answering-on-squad11and 95, respectively. The best Finnish results(byFinBERT-QA ) are quite far from the best-performing English models. However, it is worthnoting that the Finnish models were fine-tuned us-ing a smaller dataset which is probably of poorerquality, as it has been automatically translated.Finnish being a highly inflective language mightalso make the QA task generally more challenging.5.2 QG ResultsThe evaluation results for the QG models are in Ta-ble 6. The FinBERT-based models obtain the bestresults. As in the QA task, the results of the Fin-BERT and M-BERT-based models are quite closeto each other, whereas the GPT-2 models are muchworse.Dataset Model BLEU-4 METEORSQuADTyDi-fi FinBERT-HLSQG 0.11 0.17M-BERT-HLSQG 0.10 0.16GPT-2-QG 0.04 0.10GPT-2-HLQG 0.04 0.10QA100-fi FinBERT-HLSQG 0.18 0.22M-BERT-HLSQG 0.13 0.20GPT-2-QG 0.04 0.13GPT-2-HLQG 0.04 0.11Table 6: BLEU-4 and METEOR scores of QG mod-els. Results on additional metrics in Appendix A.BothGPT-2-QG andGPT-2-HLQG achieve aBLEU-4 score of 0.04 on both datasets. Unlike inChan and Fan (2019b), using an answer highlighttechnique in the passage did not lead to an increasein the performance as the results of the two modelsare nearly identical. This indicates that ambiguitywas not the root cause of the inferior performanceof the models.Looking at the outputs of the GPT-2-based QGmodels, it is clear that the models learn the gen-eral structure of a question. The outputs mostlystart with the correct interrogative word and endwith a question mark. The questions also seemmostly grammatical. The biggest problems seemto be related to semantic validity and generatingquestions that can be answered using the input an-swer. However, the models occasionally seem togenerate questions that can be answered with theinput answer, but they are very different from theground-truth questions. They are good examplesof why using automatic, n-gram-based evaluationmetrics to assess QG systems can be problematic.Compared to the GPT-2-based QG models, theBERT-based QG models perform roughly twiceas well on every metric. FinBERT-HLSQG andM-BERT-HLSQG seem to output questions thatmake more sense and have more common wordswith the target question. For example, with tar-get question Kuinka korkeaksi puu yleensä kasvaaavoimilla alueilla? (“How tall does the tree usu-ally grow in open areas?”), FinBERT-HLSQG out-puts Minkä korkuinen on jousisoihtupuu avoimillaalueilla? (“How tall is the pink trumpet treein open areas?”) and GPT-2-HLQG outputsMinkä kokoisia puutalot ovat metsäalueiden ko-rkeilta tasoilta? 
(“What size are the woodenhouses from the high levels of the forest areas?”).GPT-2-HLQG ’s output is nonsensical yet gram-matical, whereas FinBERT-HLSQG ’s output canbe considered correct, though the phrasing is quitedifferent from the target question. All models per-form better with shorter passages and struggle atinflecting rare words. Additional examples of theoutputs of all QG models are shown in Appendix A.As on the QA task, the FinBERT-based modelachieves slightly better scores on the SQuADTyDi-fi test set than the multilingual variant. How-ever, in QG, the difference between the perfor-mance of BERT-based models is bigger whenevaluating on the QA100-fi dataset. For ex-ample, FinBERT-HLSQG obtains a BLEU-4score of 0.18, while M-BERT-HLSQG yields 0.13.Checking the outputs on QA100-fi, it seems thatM-BERT-HLSQG has more problems inflectingwords, and it occasionally uses word order andphrasings that sound a bit unnatural in Finnish. Itis possible that these problems were exacerbatedwhen the model was tested on QA100-fi, whichconsists of data collected by a native speaker.Chan and Fan (2019b), who initially presentedthe BERT-HLSQG method, report a BLEU-4 scoreof 0.20 for their English QG model that wasfine-tuned on roughly 73K question-answer pairs.FinBERT-HLSQG ’s BLEU-4 score (0.11) on theSQuADTyDi-fi test set is quite far from that,whereas the BLEU-4 score on the smaller QA100-fitest set (0.18) is a lot closer. It is likely that the pas-sages and questions in QA100-fi being shorter onaverage has a positive effect on the model’s perfor-mance on the dataset. Chan and Fan (2019a) alsoconclude that their BERT-HLSQG model worksbetter with shorter passages. As with the QA task,it is possible that the smaller amount of trainingdata and its poorer quality, together with the morecomplex Finnish morphology, partly explain thedifferences that occur when compared to the En-glish models.6 Conclusion and Future WorkWe have proposed an MT-based method for creat-ing a Finnish QA dataset, and used it to train andevaluate several transformer-based QA and QGmodels. On both tasks, fine-tuned monolingualBERT models obtain the best results. The multi-lingual variants came close, while the fine-tunedGPT-2 models were found to underperform. Pre-training with only Finnish data seems to give themodels an edge in both QA and QG.To the best of our knowledge, these are the firstmonolingual Finnish QA and QG models. Theyset a fair baseline for further research in FinnishQA and QG. All data used in the experiments is re-leased to the research community, to support futureresearch, and the models are released as bench-marks. We believe that this is a valuable contri-bution, since suitable datasets created by nativeFinnish speakers are not yet available.Given the promising initial results, we plan topursue several directions. (1) As the SQuAD2.0data with the unanswerable questions was alsotranslated, it could be used to train the first FinnishQA models that can also identify unanswerablequestions. (2) Lower-level natural language pro-cessing (NLP) components can be employed tostudy and improve performance. For example,we can use syntactic parsing to check for ungram-matical questions, to analyze the created syntheticdataset; we can use name recognition to improveQA results (Yadav and Bethard, 2019; Piskorskiet al., 2019), etc. 
(3) Real-world applications, suchas language learning systems, e.g., (Katinskaiaet al., 2018, 2017), can benefit from QA and QG—by automatically generating reading comprehen-sion questions from arbitrary authentic text. To in-tegrate QG into such applications, a separate modelshould be developed for choosing the appropriateinput answers. (4) To support (3), it is importantto study in detail on what types questions and an-swers the QA and QG models do especially well orespecially poorly.ReferencesNegin Abadani, Jamshid Mozafari, Afsaneh Fatemi,Mohammd Ali Nematbakhsh, and Arefeh Kazemi.2021. Parsquad: Machine translated SQuAD datasetfor Persian question answering. In 2021 7th Interna-tional Conference on Web Research (ICWR) , pages163–168.Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin,and Michael Collins. 2019. Synthetic QA corporageneration with roundtrip consistency. In Proceed-ings of the 57th Annual Meeting of the Association forComputational Linguistics , pages 6168–6173, Flo-rence, Italy. Association for Computational Linguis-tics.Lili Aunimo, Juha Makkonen, and Reeta Kuuskoski.2004. Cross-language question answering forFinnish. In Proceedings of the Web Intelligence Sym-posium, Finnish Artificial Intelligence Conference ,pages 35–49.Casimiro Pio Carrino, Marta R. Costa-jussà, and JoséA. R. Fonollosa. 2020. Automatic Spanish trans-lation of SQuAD dataset for multi-lingual questionanswering. In Proceedings of the 12th LanguageResources and Evaluation Conference , pages 5515–5523, Marseille, France. European Language Re-sources Association.Ying-Hong Chan and Yao-Chung Fan. 2019a. BERTfor question generation. In Proceedings of the 12thInternational Conference on Natural Language Gen-eration , pages 173–177, Tokyo, Japan. Associationfor Computational Linguistics.Ying-Hong Chan and Yao-Chung Fan. 2019b. A recur-rent BERT-based model for question generation. InProceedings of the 2nd Workshop on Machine Read-ing for Question Answering , pages 154–162, HongKong, China. Association for Computational Linguis-tics.Jonathan H. Clark, Eunsol Choi, Michael Collins, DanGarrette, Tom Kwiatkowski, Vitaly Nikolaev, andJennimaria Palomaki. 2020. TyDi QA: A benchmarkfor information-seeking question answering in typo-logically diverse languages. Transactions of the As-sociation for Computational Linguistics , 8:454–470.ArXiv: 2003.05002.Jacob Devlin, Ming-Wei Chang, Kenton Lee, andKristina Toutanova. 2019. BERT: Pre-training ofdeep bidirectional transformers for language under-standing. In Proceedings of the 2019 Conference ofthe North American Chapter of the Association forComputational Linguistics: Human Language Tech-nologies, Volume 1 (Long and Short Papers) , pages4171–4186, Minneapolis, Minnesota. Association forComputational Linguistics.Martin d’Hoffschmidt, Wacim Belblidia, QuentinHeinrich, Tom Brendlé, and Maxime Vidal. 2020.FQuAD: French question answering dataset. In Find-ings of the Association for Computational Linguistics:EMNLP 2020 , pages 1193–1208, Online. Associationfor Computational Linguistics.Xinya Du, Junru Shao, and Claire Cardie. 2017. Learn-ing to ask: Neural question generation for readingcomprehension. In Proceedings of the 55th AnnualMeeting of the Association for Computational Lin-guistics (Volume 1: Long Papers) , pages 1342–1352,Vancouver, Canada.Esin Durmus, He He, and Mona Diab. 2020. FEQA: Aquestion answering evaluation framework for faith-fulness assessment in abstractive summarization. 
InProceedings of the 58th Annual Meeting of the Asso-ciation for Computational Linguistics , pages 5055–5070, Online. Association for Computational Lin-guistics.Ali Kabbadj. 2018. Something new in French text min-ing and information extraction (universal chatbot):Largest Q&A French training dataset (110 000+).[Online; posted 11-November-2018].Anisia Katinskaia, Javad Nouri, and Roman Yangarber.2017. Revita: a system for language learning and sup-porting endangered languages. In Proceedings of thejoint workshop on NLP for Computer Assisted Lan-guage Learning and NLP for Language Acquisition ,pages 27–35, Gothenburg, Sweden. LiU ElectronicPress.Anisia Katinskaia, Javad Nouri, and Roman Yangarber.2018. Revita: a language-learning platform at theintersection of ITS and CALL. In Proceedings ofthe Eleventh International Conference on LanguageResources and Evaluation (LREC 2018) , Miyazaki,Japan. European Language Resources Association(ELRA).T. Klein and Moin Nabi. 2019. Learning to answer bylearning to ask: Getting the best of GPT-2 and BERTworlds. ArXiv , abs/1911.02365.Kenton Lee, Ming-Wei Chang, and Kristina Toutanova.2019. Latent retrieval for weakly supervised opendomain question answering. In Proceedings of the57th Annual Meeting of the Association for Computa-tional Linguistics , pages 6086–6096, Florence, Italy.Association for Computational Linguistics.Mike Lewis, Yinhan Liu, Naman Goyal, MarjanGhazvininejad, Abdelrahman Mohamed, Omer Levy,Ves Stoyanov, and Luke Zettlemoyer. 2020. Bart:Denoising sequence-to-sequence pre-training for nat-ural language generation, translation, and comprehen-sion. In Proceedings of the 58th Annual Meeting ofthe Association for Computational Linguistics , pages7871–7880, Online. Association for ComputationalLinguistics. ArXiv: 1910.13461.Seungyoung Lim, Myungji Kim, and Jooyoul Lee. 2019.KorQuAD1.0: Korean QA dataset for machine read-ing comprehension. arXiv:1909.07005 [cs] . ArXiv:1909.07005.Chun Hung Lin. 2020. Automatic Question Generationwith Pre-trained Masked Language Models . Ph.D.thesis, KTH Royal Institute of Technology, Stock-holm, Sweden.Susumu Okazawa. 2021. Swedish translation ofSQuAD2.0. GitHub repository (Accessed: 6 March2022).Jakub Piskorski, Laska Laskova, Michał Marci ́nczuk,Lidia Pivovarova, Pavel P ˇribáˇn, Josef Steinberger,and Roman Yangarber. 2019. The second cross-lingual challenge on recognition, normalization, clas-sification, and linking of named entities across Slaviclanguages. In Proceedings of the 7th Workshop onBalto-Slavic Natural Language Processing . ACL.Alec Radford, Jeffrey Wu, Rewon Child, David Luan,Dario Amodei, and Ilya Sutskever. 2019. Languagemodels are unsupervised multitask learners. OpenAIblog, 1(8).Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018.Know what you don’t know: Unanswerable ques-tions for SQuAD. In Proceedings of the 56th AnnualMeeting of the Association for Computational Lin-guistics (Volume 2: Short Papers) , pages 784–789,Melbourne, Australia. Association for ComputationalLinguistics.Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, andPercy Liang. 2016. Squad: 100,000+ questions formachine comprehension of text. In Proceedings ofthe 2016 Conference on Empirical Methods in Natu-ral Language Processing , pages 2383–2392, Austin,Texas. Association for Computational Linguistics.Ashish Vaswani, Noam Shazeer, Niki Parmar, JakobUszkoreit, Llion Jones, Aidan N Gomez, ŁukaszKaiser, and Illia Polosukhin. 2017. Attention is allyou need. In Advances in Neural Information Pro-cessing Systems , volume 30. 
Curran Associates, Inc.

Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: BERT for Finnish. arXiv:1912.07076 [cs]. ArXiv: 1912.07076.

Bingning Wang, Xiaochuan Wang, Ting Tao, Qi Zhang, and Jingfang Xu. 2020. Neural question generation with answer pivot. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):9138–9145.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. HuggingFace's Transformers: State-of-the-art Natural Language Processing. Technical Report arXiv:1910.03771, arXiv.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv:1609.08144 [cs].

Vikas Yadav and Steven Bethard. 2019. A survey on recent advances in named entity recognition from deep learning models. CoRR, abs/1910.11470.

Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. LUKE: Deep contextualized entity representations with entity-aware self-attention. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6442–6454, Online. Association for Computational Linguistics.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.

Zhuosheng Zhang, Junjie Yang, and Hai Zhao. 2020. Retrospective Reader for Machine Reading Comprehension. Technical Report arXiv:2001.09694, arXiv.

Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. 2018. Paragraph-level neural question generation with maxout pointer and gated self-attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3901–3910, Brussels, Belgium. Association for Computational Linguistics.

Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2017. Neural question generation from text: A preliminary study. arXiv:1704.01792 [cs].

A Appendix

| Model         | Epochs (best model) | Batch size |
|---------------|---------------------|------------|
| FinBERT-QA    | 2                   | 16         |
| M-BERT-QA     | 2                   | 16         |
| GPT-2-QA      | 6                   | 2          |
| FinBERT-HLSQG | 6                   | 24         |
| M-BERT-HLSQG  | 6                   | 16         |
| GPT-2-QG      | 6                   | 2          |
| GPT-2-HLQG    | 6                   | 2          |

Table 7: Training hyperparameters.
With all models, we use the AdamW optimization algorithm with an initial learning rate of 5 × 10^-5.

| Dataset        | Model         | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | METEOR | ROUGE-L |
|----------------|---------------|--------|--------|--------|--------|--------|---------|
| SQuAD + TyDi-fi| FinBERT-HLSQG | 0.29   | 0.21   | 0.15   | 0.11   | 0.17   | 0.33    |
|                | M-BERT-HLSQG  | 0.29   | 0.20   | 0.14   | 0.10   | 0.16   | 0.31    |
|                | GPT-2-QG      | 0.18   | 0.11   | 0.06   | 0.04   | 0.10   | 0.20    |
|                | GPT-2-HLQG    | 0.18   | 0.10   | 0.06   | 0.04   | 0.10   | 0.20    |
| QA100-fi       | FinBERT-HLSQG | 0.39   | 0.30   | 0.22   | 0.18   | 0.22   | 0.41    |
|                | M-BERT-HLSQG  | 0.36   | 0.25   | 0.18   | 0.13   | 0.20   | 0.36    |
|                | GPT-2-QG      | 0.22   | 0.12   | 0.07   | 0.04   | 0.13   | 0.22    |
|                | GPT-2-HLQG    | 0.19   | 0.11   | 0.07   | 0.04   | 0.11   | 0.20    |

Table 8: All evaluation results of the QG models.

Input passage: Ulkomuodoltaan hylkeet ovat sileitä ja pulleita. Ruumiinrakenne soveltuu sulavaan vedessä liikkumiseen. Ranteesta ja kämmenestä ovat muodostuneet etuevät ja nilkasta ja jalkaterästä takaevät. Evät ovat heikot eikä niitä voi käyttää apuna maalla liikkumiseen. Hylkeet liikkuvatkin maalla siten, että ne siirtävät painoa rinnan ja vatsan varaan. Erotuksena lähisukulaisistaan korvahylkeistä, joihin kuuluvat muun muassa merileijonat, varsinaisilla hylkeillä ei ole ulkoisia korvalehtiä. Varsinaisten hylkeiden uiminen tapahtuu evien ja ruumiin takaosan sivuttaissuuntaista liikettä apuna käyttäen.
Input question: Mihin hylkeiden evät eivät sovellu? (What are seal fins not suitable for?)
Target answer: maalla liikkumiseen (to move on land)

| Model      | Predicted Answer                        |
|------------|-----------------------------------------|
| FinBERT-QA | maalla liikkumiseen. (to move on land.) |
| M-BERT-QA  | vedessä (in the water)                  |
| GPT-2-QA   | ui maalla (swim/swims on land)          |

Table 9: Output examples of the QA models. The ground truth answer is highlighted in the input passage.

Input passage: Jättiläismetsäkarju eli jättiläismetsäsika eli jättisika (Hylochoerus meinertzhageni) on keskisen ja läntisen Afrikan metsissä elävä elinvoimainen sorkkaeläinlaji. Se on sukunsa Hylochoerus ainoa laji. Jättiläismetsäkarjut ovat suurimpia luonnonvaraisia sikoja. Ne voivat kasvaa jopa 210 senttimetriä pitkiksi ja painaa 275 kilogrammaa. Niiden ruumis on tanakka ja pää leveä, mutta jalat ovat lyhyet. Nahkaa peittävät pitkät ja karkeat karvat, jotka nousevat pystyyn eläimen kiihtyessä.
Input answer: 210 senttimetriä (210 centimeters)
Target question: Kuinka pitkiksi jättiläismetsäkarjut voivat kasvaa? (How long can giant forest hogs grow?)

| Model         | Generated question                                                                                           |
|---------------|--------------------------------------------------------------------------------------------------------------|
| FinBERT-HLSQG | Kuinka pitkäksi jättiläismetsäkarju voi kasvaa? (How long can a giant forest hog grow?)                       |
| M-BERT-HLSQG  | Kuinka pitkiä jättiläismetsäkarjat voivat kasvaa? * (How long can giant forest cattles grow?)                 |
| GPT-2-QG      | Miten pitkäksi afrikkalainen jättiläismetsäkarju voi kasvaa? (How long can an African giant forest hog grow?) |
| GPT-2-HLQG    | Kuinka pitkä on jättiläismetsäkarjun pituus? (How long is the length of a giant forest hog?)                  |

Table 10: Output examples from the QG models. The input answer is highlighted in the input passage. Outputs marked with * contain inflection errors, but they are ignored in the translation.

### Review Title
Question Answering and Question Generation for Finnish

### Review Text
This paper presents (mostly automatically translated) Finnish question answering and question generation datasets and experiments to fine-tune Finnish LMs to the task. Creating datasets and benchmarks for new languages is very important and the most significant contribution of this work. In general, I found the work well structured, interesting, informative and easy to read. The information was presented systematically and the authors set a new benchmark for future work.
While I'm not an expert in this particular task, I wonder if the authors should have also considered translating the test set instead and using English models, given their superior performance. The authors also used Google's translation API to create their dataset. Unfortunately, they mentioned some compromises in their methodology due to the cost of the translation service, which is surprising considering that there are also many open-source NMT models available for Finnish. Would better results be achieved if the API cost was not a factor?

### Review Rating
8: Top 50% of accepted papers, clear accept

### Review Confidence
3: The reviewer is fairly confident that the evaluation is correct
0MjC3uMthAb
ICLR.cc/2021/Conference
2021
Learning Flexible Classifiers with Shot-CONditional Episodic (SCONE) Training
["Eleni Triantafillou", "Vincent Dumoulin", "Hugo Larochelle", "Richard Zemel"]
Early few-shot classification work advocates for episodic training, i.e. training over learning episodes each posing a few-shot classification task. However, the role of this training regime remains poorly understood, and its usefulness is still debated. Standard classification training methods (``pre-training'') followed by episodic fine-tuning have recently achieved strong results. This work aims to understand the role of this episodic fine-tuning phase through an exploration of the effect of the ``shot'' setting (number of examples per class) that is used during fine-tuning. We discover that fine-tuning on episodes of a particular shot can specialize the pre-trained model to solving episodes of that shot at the expense of performance on other shots, in agreement with a trade-off recently observed in the context of end-to-end episodic training. To amend this, we propose a shot-conditional form of episodic fine-tuning, inspired from recent work that trains a single model on a distribution of losses. Our investigation shows that this improves overall performance, without suffering disproportionately on any shot. We also examine the usefulness of this approach on the large-scale Meta-Dataset benchmark where test episodes exhibit varying shots and imbalanced classes. We find that our flexible model improves performance in that challenging environment.
["few-shot classification", "few-shot learning", "episodic training", "meta-learning"]
ABSTRACT

Early few-shot classification work advocates for episodic training, i.e. training over learning episodes each posing a few-shot classification task. However, the role of this training regime remains poorly understood, and its usefulness is still debated. Standard classification training methods ("pre-training") followed by episodic fine-tuning have recently achieved strong results. This work aims to understand the role of this episodic fine-tuning phase through an exploration of the effect of the "shot" setting (number of examples per class) that is used during fine-tuning. We discover that fine-tuning on episodes of a particular shot can specialize the pre-trained model to solving episodes of that shot at the expense of performance on other shots, in agreement with a trade-off recently observed in the context of end-to-end episodic training. To amend this, we propose a shot-conditional form of episodic fine-tuning, inspired from recent work that trains a single model on a distribution of losses. Our investigation shows that this improves overall performance, without suffering disproportionately on any shot. We also examine the usefulness of this approach on the large-scale Meta-Dataset benchmark where test episodes exhibit varying shots and imbalanced classes. We find that our flexible model improves performance in that challenging environment.

1 INTRODUCTION

Few-shot classification is the problem of learning a classifier using only a few examples. Specifically, the aim is to utilize a training dataset towards obtaining a flexible model that has the ability to 'quickly' learn about new classes from few examples. Success is evaluated on a number of test episodes, each posing a classification task between previously-unseen test classes. In each such episode, we are given a few examples, or "shots", of each new class that can be used to adapt this model to the task at hand, and the objective is to correctly classify a held-out set of examples of the new classes.

A simple approach to this problem is to learn a classifier over the training classes, parameterized as a neural network feature extractor followed by a classification layer. While the classification layer is not useful at test time due to the class shift, the embedding weights that are learned during this "pre-training" phase evidently constitute a strong representation that can be used to tackle test tasks when paired with a simple "inference algorithm" (e.g. nearest-neighbour, logistic regression) to make predictions for each example in the test episode given the episode's small training set. Alternatively, early influential works on few-shot classification (Vinyals et al., 2016) advocate for episodic training, a regime where the training objective is expressed in terms of performance on a number of training episodes of the same structure as the test episodes, but with the classes sampled from the training set. It was hypothesized that this episodic approach captures a more appropriate inductive bias for the problem of few-shot classification and would thus lead to better generalization.

However, there is an ongoing debate about whether episodic training is in fact required for obtaining the best few-shot classification performance. Notably, recent work (Chen et al., 2019; Dhillon et al., 2020) proposed strong "pre-training" baselines that leverage common best practices for supervised training (e.g. normalization schemes, data augmentation) to obtain a powerful representation that works well for this task.
Interestingly, other recent work combines the pre-training of a single classifier with episodic fine-tuning by removing the classification head and continuing to train the embedding network using the episodic inference algorithm that will be applied at test time (Triantafillou et al., 2020; Chen et al., 2020). The success of this hybrid approach suggests that perhaps the two regimes have complementary strengths, but the role of this episodic fine-tuning is poorly understood: what is the nature of the modification it induces into the pre-trained solution? Under which conditions is it required in order to achieve the best performance?

As a step towards answering those questions, we investigate the effect of the shot used during episodic fine-tuning on the resulting model's performance on test tasks of a range of shots. We are particularly interested in understanding whether the shot of the training episodes constitutes a source of information that the model can leverage to improve its few-shot classification performance on episodes of that shot at test time. Our analysis reveals that indeed a particular functionality that this fine-tuning phase may serve is to specialize a pre-trained model to solving tasks of a particular shot, accomplished by performing the fine-tuning on episodes of that shot. However, perhaps unsurprisingly, we find that specializing to a given shot comes at the expense of hurting performance for other shots, in agreement with (Cao et al., 2020)'s theoretical finding in the context of Prototypical Networks (Snell et al., 2017) where inferior performance was reported when the shot at training time did not match the shot at test time.

Given those trade-offs, how can our newfound understanding of episodic fine-tuning as shot-specialization help us in practice? It is unrealistic to assume that we will always have the same number of labeled examples for every new class we hope to learn at test time, so we are interested in approaches that operate well on tasks of a range of shots. However, it is impractical to fine-tune a separate episodic model for every shot, and intuitively that seems wasteful as we expect that tasks of similar shots should require similar models. Motivated by this, we propose to train a single shot-conditional model for specializing the pre-trained solution to a wide spectrum of shots without suffering trade-offs. This leads to a compact but flexible model that can be conditioned to be made appropriate for the shot appearing in each test episode.

In what follows we provide some background on few-shot classification and episodic models and then introduce our proposed shot-conditioning approach and related work. We then present our experimental analysis on the effect of the shot chosen for episodic fine-tuning, and we observe that our shot-conditional training approach is beneficial for obtaining a general flexible model that does not suffer the trade-offs inherent in naively specializing to any particular shot. Finally, we experiment with our proposed shot-conditional approach in the large-scale Meta-Dataset benchmark for few-shot classification, and demonstrate its effectiveness in that challenging environment.

2 BACKGROUND

Problem definition. Few-shot classification aims to classify test examples of unseen classes from a small labeled training set. The standard evaluation procedure involves sampling classification episodes by picking N classes at random from a test set of classes C_test and sampling two disjoint sets of examples from the N chosen classes: a support set (or training set) of k labeled examples per class, and a query set (or test set) of unlabeled examples, forming N-way, k-shot episodes. The model is allowed to use the support set, in addition to knowledge acquired while training on a disjoint set of classes C_train, to make a prediction for examples in the query set, and is evaluated on its query set accuracy averaged over multiple test episodes.
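To make the episode structure concrete, the following is a minimal Python sketch of N-way, k-shot episode sampling as just described. It is illustrative only: the dict-of-lists dataset format and all names are our own assumptions, not anything from the paper.

```python
# A minimal, hypothetical sketch of N-way, k-shot episode sampling.
# Assumes `dataset` maps each class label to a list of examples.
import random

def sample_episode(dataset, num_ways, num_shots, num_queries):
    """Return (support, query) lists of (example, episode_label) pairs."""
    classes = random.sample(sorted(dataset.keys()), num_ways)
    support, query = [], []
    for episode_label, c in enumerate(classes):
        examples = random.sample(dataset[c], num_shots + num_queries)
        # The first `num_shots` examples form the support set; the rest
        # form the query set on which accuracy is measured.
        support += [(x, episode_label) for x in examples[:num_shots]]
        query += [(x, episode_label) for x in examples[num_shots:]]
    return support, query

# Example usage: a 5-way, 1-shot episode with 10 queries per class.
# support, query = sample_episode(train_data, 5, 1, 10)
```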
Episodic training. Early few-shot classification approaches (Vinyals et al., 2016) operate under the assumption that obtaining a model capable of few-shot classification requires training it on (mini-batches of) learning episodes, instead of (mini-batches of) individual examples as in standard supervised learning. These learning episodes are sampled in the same way as described above for test episodes, but with classes sampled from C_train this time. In other words, the model is trained to minimize a loss of the form:

$$\mathbb{E}_{S,Q \sim P^{N,k}_{\text{train}}}\left[\frac{1}{|Q|}\sum_{(x,y)\in Q} -\log p_\theta(y \mid x, S)\right] \quad (1)$$

where S and Q are support and query sets sampled from the distribution P^{N,k}_train of N-way, k-shot training episodes induced by C_train, and θ represents the model's parameters. This training regime is often characterized as meta-learning or learning to learn, i.e. learning over many episodes how to learn within an episode (from few labeled examples). Episodic models differ by their "inference algorithm", i.e. the manner in which p_θ(y | x, S) is computed to classify query examples based on the support set.

[Figure 1: SCONE conditions the feature extractor f_θ on an episode's shot distribution. The figure depicts a feature extractor whose sub-networks are interleaved with FiLM layers, with the FiLM parameters selected according to the episode's shot distribution.]

Prototypical Networks. Prototypical Networks (Snell et al., 2017) is a simple but effective episodic model which constructs a prototype μ_c for each class c in an episode as

$$\mu_c = \frac{1}{|S_c|}\sum_{x \in S_c} f_\theta(x), \quad (2)$$

where f_θ is an embedding function parametrized by θ and S_c represents the set of support examples belonging to class c, and classifies a given query example as

$$p_\theta(y = c \mid x, S) = \frac{\exp(-\|x - \mu_c\|_2^2)}{\sum_{c'} \exp(-\|x - \mu_{c'}\|_2^2)}. \quad (3)$$

3 SHOT CONDITIONAL EPISODIC (SCONE) TRAINING

In this section we introduce Shot CONditional Episodic (SCONE) training for the purpose of specializing a strong pre-trained model to solving few-shot classification tasks of a range of different shots, without suffering disproportionately for any shot.

Training objective. Training episodically involves minimizing the objective shown in Equation 1. We first sample an episode from P^{N,k}_train and compute a prediction p_θ(y | x, S) for each query example x. We then compute the cross-entropy loss on the query set using those predictions and perform a parameter update by backpropagating its gradient with respect to θ into the inference algorithm. In this work we concern ourselves with models that use an embedding function f_θ to obtain a representation for the support and query examples of each episode on top of which the inference algorithm is applied. In Prototypical Networks, for instance, f_θ contains all of the model's learnable parameters.
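As an illustration of this pipeline, here is a small NumPy sketch of the Prototypical Networks inference algorithm and the episodic loss of Equations 1-3. The identity `embed` function is a stand-in for a real backbone f_θ, and all names and shapes are our own assumptions rather than the authors' code.

```python
# A minimal NumPy sketch of the Prototypical Networks episode loss (Eqs. 1-3).
import numpy as np

def embed(x):
    return x  # identity stand-in for the backbone f_theta

def episode_log_probs(support_x, support_y, query_x, num_classes):
    """Log class probabilities for each query example (Eqs. 2-3)."""
    z_s, z_q = embed(support_x), embed(query_x)
    # Eq. 2: each prototype is the mean embedding of a class's support set.
    protos = np.stack([z_s[support_y == c].mean(axis=0)
                       for c in range(num_classes)])
    # Squared Euclidean distance of every query to every prototype.
    d2 = ((z_q[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    # Eq. 3: softmax over negative squared distances, in log space for stability.
    logits = -d2
    logits -= logits.max(axis=1, keepdims=True)
    return logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

def episode_loss(support_x, support_y, query_x, query_y, num_classes):
    """Eq. 1: average query-set cross-entropy for one episode."""
    log_p = episode_log_probs(support_x, support_y, query_x, num_classes)
    return -log_p[np.arange(len(query_y)), query_y].mean()
```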
SCONE trains on episodes of varying shots and conditions the model on each episode's shot distribution (Figure 1) by minimizing

$$\mathbb{E}_{k \sim P_k}\left[\mathbb{E}_{S,Q \sim P^{N,k}_{\text{train}}}\left[\frac{1}{|Q|}\sum_{(x,y)\in Q} -\log p_{\theta_k}(y \mid x, S)\right]\right], \quad (4)$$

where P_k is the distribution over shots at training time and θ_k depends on an episode's sampled shots. In the Appendix, we include an algorithm box outlining SCONE fine-tuning.

Conditioning mechanism. Rather than learning a separate set of model parameters for each shot setting, we modulate a subset of its parameters using FiLM (Perez et al., 2018), a simple conditioning mechanism which performs an affine feature-wise transformation of its input x based on conditioning information k (in our case, the episode's number of shots):

$$\text{FiLM}(x) = \gamma(k) \odot x + \beta(k). \quad (5)$$

The dependency of γ and β on k is handled by maintaining distinct values for each shot setting and selecting the appropriate γ and β based on an episode's shot. Equivalently, we can think of our approach as a compact representation of many shot-specific feature extractors which share all but their FiLM layer parameters.

More concretely, we maintain a set of FiLM parameters for each shot in the [1, MAX-SHOT] range (where MAX-SHOT is a hyperparameter) and let all shot settings greater than or equal to MAX-SHOT share the same FiLM parametrization. As is often the case in practice, instead of inserting FiLM layers in the network's architecture, we modulate the scaling and shifting parameter values of existing batch normalization layers (Dumoulin et al., 2017; De Vries et al., 2017). When performing episodic fine-tuning, we initialize all sets of FiLM parameters to those learned during pre-training (i.e. the learned batch normalization scaling and shifting coefficients). These different sets of FiLM parameters are then free to deviate from each other as a result of fine-tuning. We found it beneficial to penalize the L2-norm of β (regularizing the offset towards 0) and the L2 norm of γ − 1 (regularizing the scaling towards 1). For this purpose, we introduce a hyperparameter that controls the strength of this FiLM weight decay.

Handling class-imbalanced episodes. SCONE can also be used on imbalanced episodes, where different classes have different shots. In that case, instead of selecting a single set of FiLM parameters, we compute the FiLM parameters for an episode as the convex combination of the FiLM parameters associated with all shots found in the episode, where the weights of that combination are determined based on the frequency with which each shot appears in the episode.

Concretely, the episode's "shot distribution" s (a vector of length MAX-SHOT) is obtained by averaging the one-hot representations of the shots of the classes appearing in an episode. In the special case of a class-balanced episode, the resulting average will be exactly a one-hot vector. This shot distribution is then used for the purpose of selecting the episode's FiLM parameters. This can be thought of as an embedding lookup s^T F in a matrix F of FiLM parameters using a shot distribution s.

Smoothing the shot distribution. We expect similar shot values to require similar FiLM parameters, which we incorporate as an inductive bias by smoothing the shot distribution. We outline our SMOOTH-SHOT procedure in the Appendix in Algorithm 1, which receives the shot s of a class (an integer), and a smoothing hyperparameter m (a float in [0, 1]) and returns the smoothed shot for that class, which is a vector of length MAX-SHOT. Essentially, the result of smoothing is that the returned vector representation of s is not strictly one-hot with only the position corresponding to the observed shot s being 'on'. Instead, some entries surrounding that position are also non-zero. Specifically, the entries that are directly adjacent to s receive the value m, the entries two spots away from s the value m^2, and so on, with entries further away from s receiving exponentially-decaying values.
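The following NumPy sketch illustrates the conditioning mechanism described above: a per-shot bank of FiLM parameters combined through an episode's (here unsmoothed) shot distribution and applied as in Equation 5. It reflects our reading of the method; the names, shapes and identity initialization are assumptions, not the authors' code.

```python
# A minimal, hypothetical sketch of SCONE's shot-conditional FiLM modulation.
import numpy as np

MAX_SHOT = 40      # hyperparameter; the paper uses 40 or 200 depending on setup
NUM_FEATURES = 64  # channel count of the modulated layer (assumed)

# One (gamma, beta) row per shot setting; shots >= MAX_SHOT share the last row.
# The paper initializes these from the pre-trained batch-norm scale/offset;
# here we initialize to the identity transform (gamma = 1, beta = 0).
gamma_bank = np.ones((MAX_SHOT, NUM_FEATURES))
beta_bank = np.zeros((MAX_SHOT, NUM_FEATURES))

def shot_distribution(class_shots):
    """Average of the classes' one-hot shot representations (unsmoothed)."""
    s = np.zeros(MAX_SHOT)
    for shot in class_shots:
        s[min(shot, MAX_SHOT) - 1] += 1.0 / len(class_shots)
    return s

def episode_film_params(class_shots):
    """Convex combination of FiLM parameters: the lookup s^T F."""
    s = shot_distribution(class_shots)
    return s @ gamma_bank, s @ beta_bank

def film(x, gamma, beta):
    return gamma * x + beta  # Eq. 5, applied feature-wise

def film_weight_decay(strength):
    """Penalty pulling gamma towards 1 and beta towards 0, as in the paper."""
    return strength * (((gamma_bank - 1.0) ** 2).sum() + (beta_bank ** 2).sum())

# Example: an imbalanced 3-way episode whose classes have 1, 5 and 20 shots.
gamma, beta = episode_film_params([1, 5, 20])
activations = np.random.randn(8, NUM_FEATURES)  # toy features of 8 examples
modulated = film(activations, gamma, beta)
```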
4 RELATED WORK

Few-shot classification. A plethora of models have been recently proposed for few-shot classification, and we refer the reader to (Hospedales et al., 2020) for a broad survey. Before episodic training was introduced, few-shot classifiers often relied on metric learning (Koch et al., 2015; Triantafillou et al., 2017). This theme persisted in early episodic models like Matching Networks (Vinyals et al., 2016) and Prototypical Networks (Snell et al., 2017) where classification is made via nearest-neighbour comparisons in the embedding space. Matching Networks apply a soft k-NN algorithm where the label of a query example is predicted to be the weighted average of the (one-hot) support labels with the weights determined by the similarity of that query to each support example.

Gradient-based episodic models are another popular family of approaches following the influential MAML paper (Finn et al., 2017). To create a classifier for each given episode, this approach fine-tunes the embedding weights along with a linear classifier head using gradient descent on the support set. Intuitively, this results in learning an embedding space that serves as a useful starting point from which a few steps of gradient descent suffice to adapt the model to each episode's classification task. Proto-MAML (Triantafillou et al., 2020) is a simple extension that initializes the linear classifier for each episode from the prototypes of the classes appearing in that episode.

Recently, the field has shifted towards studying few-shot classification in more realistic environments like tiered-ImageNet (Ren et al., 2018) and Meta-Dataset (Triantafillou et al., 2020), which has encouraged research into newly-introduced challenges, such as accounting for multiple diverse datasets. Along these lines, Requeima et al. (2019); Bateni et al. (2019) proposed novel task conditioning approaches, Saikia et al. (2020) introduced an improved hyperparameter tuning approach, and Dvornik et al. (2020) proposed a method for selecting an appropriate set of features for each test episode out of a universal feature representation.

Understanding episodic learning. Our work inscribes itself in a recent line of work attempting to understand the differences between episodic and non-episodic learning. Goldblum et al. (2020) attempts to understand episodic learning from the perspective of how classes cluster in feature-space (for models that learn a final classification layer on top of a feature extractor) as well as from the perspective of local minima clusters (for gradient-based meta-learners). Huang et al. (2020); Chao et al. (2020) draw parallels between learning episodes and supervised learning examples, Bronskill et al. (2020) discusses batch normalization in episodic learning, drawing parallels from its use in non-episodic learning, and Chen et al. (2020) contrasts episodic and non-episodic learning in their ability to generalize to new examples of previously seen classes or new examples of unseen classes. Finally, Cao et al. (2020) theoretically investigates the role of the shot in Prototypical Networks to explain the observed performance drop when there is a mismatch between the shots at training and test time.
Instead, we empirically study the effect of the shot chosen during episodic fine-tuning of a pre-trained solution, in a larger-scale and more diverse environment.

Feature-wise conditioning. Feature-wise transformations such as FiLM (Perez et al., 2018) are used as a conditioning mechanism in a variety of problem settings; see Dumoulin et al. (2018) for a survey on the topic. (Shu et al., 2019) devise a loss re-weighting scheme that conditions on the loss at each time-step, which is a scalar, thus bearing similarity to our approach when conditioning on a scalar shot setting. In few-shot classification, (Sun et al., 2019) use feature-wise transformations as a means of transfer to new tasks. (Oreshkin et al., 2018; Requeima et al., 2019; Bateni et al., 2019) use FiLM to condition metric learners' backbones on the support set, while (Dvornik et al., 2020) uses it as a way to represent many pre-trained classifiers using a shared parametrization. FiLM has also been used successfully for class-incremental learning (Liu et al., 2020) and semi-supervised few-shot learning (Li et al., 2019). Notably, TADAM (Oreshkin et al., 2018), CNAPs (Requeima et al., 2019) and Simple-CNAPs (Bateni et al., 2019) also use task conditioning, but they use the mean of the support set for this and thus the 'shot' information is discarded. The purpose of our conditioning mechanism is instead to make the backbone shot-aware. The idea of shot-conditional learners is inspired by recent work that investigates loss-conditional training using feature-wise transformations (Dosovitskiy & Djolonga, 2020; Babaeizadeh & Ghiasi, 2020).

5 EXPERIMENTS

5.1 EXPLORING THE ROLE OF 'SHOTS' DURING EPISODIC FINE-TUNING

In this subsection, we examine the effect of the 'shot' that is used during the episodic fine-tuning phase, and in particular how it impacts the resulting model's ability to solve test episodes of different shots. We consider either using a fixed shot k throughout the fine-tuning phase, or fine-tuning on episodes of a distribution of shots. In the latter case, we explore both standard fine-tuning as well as SCONE fine-tuning that equips the model with the shot-conditioning mechanism described in the previous section. We also compare against EST (Cao et al., 2020).

Experimental setup. We ran this round of experiments on ImageNet using the class splits proposed in Meta-Dataset. First, we pre-trained a standard classifier on the set of training classes of ImageNet. We then removed the topmost classification layer, leaving us with a pre-trained backbone that we used as the initialization for the subsequent episodic fine-tuning round.
We ran the following variants of episodic fine-tuning: exclusively on 1-shot episodes ('Fine-tune on 1-shot'), exclusively on 5-shot episodes ('Fine-tune on 5-shot'), on episodes whose shot is drawn uniformly from the range [1, 40] ('Fine-tune on all shots'), and on episodes with that same shot distribution but using SCONE ('SCONE Fine-tune on all shots'), which additionally equips the backbone with the shot conditioning mechanism described in the previous section. We also consider 'Fine-tune on best k-shot', an additional baseline that fine-tunes exclusively on the shot k that is found to work best on average on the validation set (on the range of shots 1-40). For this, we trained models for k = 1, 5, 10, 15, 20, 30, 40 and found the best to be k = 15.

As mentioned in the previous section, when applying SCONE training, we penalize the L2 norm of FiLM parameters. For a fair comparison with the other models, we applied the same regularization to the batch normalization parameters of all models during the episodic fine-tuning phase, and we found this to be generally helpful. We tuned the strength of this regularization separately for each model and picked the variant that worked best on the validation set, which we report in the Appendix. We set SCONE's MAX-SHOT hyperparameter to be 40 for this experiment.

We also compare to EST (Cao et al., 2020), which is a theoretically-grounded method for building shot resiliency in Gaussian classifiers. This involves applying a linear transformation on top of the learned embeddings that aims to strike a good balance between maximizing the inter-class variance and minimizing the intra-class variance. In practice, that trade-off is controlled via a hyperparameter, which we tuned very extensively together with the hyperparameter d controlling the projection dimensionality. The values that we found worked best (selected on the validation set of ImageNet on the range of shots 1-40) are substantially different than those used in the original EST paper: d = 480 and a trade-off value of 5e-8 (versus the original d = 120 and 1e-3). We believe that this discrepancy may be due to our deeper backbones and larger range of shots. The EST configuration that worked best for us yields a minimal reduction in the embedding dimensionality, and primarily favours maximizing the inter-class variance, with the term that minimizes the intra-class variance having minimal effect.

In all cases, we fix the 'way' to 5. We use Prototypical Networks as the episodic model and we perform early stopping and model selection on the validation set of classes, where the validation performance of a variant is computed on episodes of the same (distribution of) shot(s) that it is trained on. All models are tested on a held-out test set of classes that is not seen during pre-training nor episodic fine-tuning, on 5-way episodes of different shot settings.

[Figure 2: Test accuracy on three different evaluation shots (1-shot, 5-shot and 40-shot panels; y-axis: accuracy; the compared variants are 'Fine-tune on 1-shot', 'Fine-tune on 5-shot', 'Fine-tune on 40-shot', 'Fine-tune on all shots', 'SCONE Fine-tune on all shots', 'EST Fine-tune on all shots' and 'Fine-tune on best k-shot'). Fine-tuning exclusively on a particular shot leads to the best test accuracy on that shot but poor accuracy on different shots. Fine-tuning on a range of shots is a reasonable general solution, but its performance can be improved when using SCONE, thanks to its conditioning mechanism that offers a compact form of shot specialization.]

Findings. We observe from Figure 2 that fine-tuning on a fixed shot yields the best results on test episodes of that shot. For example, the 1-shot accuracies show that 'Fine-tune on 1-shot' surpasses the performance of all other variants on 1-shot test episodes, with analogous findings in the 5-shot and 40-shot accuracies for 'Fine-tune on 5-shot' and 'Fine-tune on 40-shot', respectively.
Therefore, a particular functionality that the episodic fine-tuning phase may serve is to specialize the pre-trained model for performing well on tasks of a particular shot. However, as illustrated in all sub-plots of Figure 2, this shot specialization comes at the cost of severely reduced performance on tasks of very different shots. For instance, the model that is specialized for 40-shot tasks ('Fine-tune on 40-shot') performs very poorly on 1-shot test tasks and vice-versa. We also note that the 'Fine-tune on best k-shot' model does not suffice to perform well in all settings either: since k = 15 there, it performs really poorly on 1-shot, for instance. In practice, it may be desirable to perform well on more than a single shot setting at test time, without having to fine-tune multiple separate shot-specialized models. A reasonable approach to that is episodically fine-tuning on a range of shots, to obtain a general model. Indeed, Figure 2 shows that 'Fine-tune on all shots' does not perform too poorly on any shot but, perhaps unsurprisingly, in any given setting, it falls short of the performance of the corresponding shot-specialized model.

Prototypical Networks (ImageNet only)

| Dataset       | Standard      | L2 BN         | EST           | Best k-shot   | SCONE w/o S   | SCONE         |
|---------------|---------------|---------------|---------------|---------------|---------------|---------------|
| ILSVRC-2012   | 50.90 ± 1.12% | 51.81 ± 1.06% | 52.17 ± 1.09% | 52.36 ± 1.08% | 52.98 ± 1.09% | 52.51 ± 1.11% |
| Omniglot      | 63.12 ± 1.37% | 63.14 ± 1.32% | 66.07 ± 1.29% | 65.94 ± 1.33% | 64.71 ± 1.32% | 65.60 ± 1.34% |
| Aircraft      | 54.30 ± 0.97% | 53.26 ± 0.97% | 55.64 ± 0.88% | 56.03 ± 0.95% | 55.38 ± 0.96% | 55.38 ± 0.96% |
| Birds         | 68.22 ± 0.97% | 69.21 ± 1.01% | 67.17 ± 1.02% | 68.63 ± 1.11% | 68.98 ± 1.04% | 69.70 ± 1.01% |
| DTD           | 66.62 ± 0.90% | 68.33 ± 0.81% | 68.20 ± 0.77% | 69.61 ± 0.80% | 68.68 ± 0.78% | 69.58 ± 0.77% |
| Quickdraw     | 59.79 ± 0.98% | 59.17 ± 0.96% | 60.05 ± 0.97% | 60.68 ± 0.97% | 60.00 ± 1.00% | 60.81 ± 0.95% |
| Fungi         | 36.77 ± 1.07% | 38.96 ± 1.10% | 39.50 ± 1.12% | 37.96 ± 1.08% | 39.19 ± 1.15% | 39.66 ± 1.12% |
| VGG Flower    | 86.61 ± 0.87% | 87.70 ± 0.77% | 88.55 ± 0.65% | 87.45 ± 0.86% | 86.98 ± 0.80% | 88.03 ± 0.73% |
| Traffic Signs | 48.64 ± 1.06% | 46.54 ± 1.03% | 48.41 ± 1.07% | 50.26 ± 1.16% | 47.61 ± 1.05% | 48.24 ± 1.09% |
| MSCOCO        | 43.02 ± 1.09% | 43.11 ± 1.05% | 43.45 ± 1.05% | 43.20 ± 1.18% | 43.43 ± 1.08% | 44.25 ± 1.11% |
| Average       | 57.80%        | 58.12%        | 58.92%        | 59.21%        | 58.79%        | 59.38%        |
| Average ranks | 4.90          | 4.10          | 3.25          | 3.05          | 3.00          | 2.70          |

Table 1: Prototypical Networks fine-tuned on ImageNet ('Standard') with the addition of L2 regularization on the batch normalization weights ('L2 BN'), EST (Cao et al., 2020), the 'Fine-tune on best k-shot' baseline ('Best k-shot') and SCONE, including the ablation that omits the shot smoothing ('SCONE w/o S'). The reported numbers are query set accuracies averaged over 600 test episodes and 95% confidence intervals. We also show the average ranks (lower is better). We report details in the Appendix on rank computation and statistical testing.

Finally, we observe that SCONE fine-tuning outperforms its shot-unaware counterpart in all settings ('SCONE Fine-tune on all shots' vs 'Fine-tune on all shots'). This constitutes evidence that SCONE fine-tuning indeed leads to a more flexible model that can adapt to the shot of each episode via its conditioning mechanism, without suffering the trade-offs inherent in naively specializing a model exclusively to any particular shot. We can view a SCONE model as a very compact way of representing multiple shot-specialized models, where the information required for that specialization resides in the light-weight FiLM parameters.
SCONE also outperforms the EST approach in this setting, which also strives for shot resiliency, but does so by encouraging invariance to the shot setting rather than shot awareness as in SCONE.

5.2 LARGE-SCALE EXPERIMENTS ON META-DATASET

In what follows, we apply SCONE to the diverse and challenging Meta-Dataset benchmark for few-shot classification (Triantafillou et al., 2020). Meta-Dataset is comprised of ten distinct image datasets, including natural images, handwritten characters and sketches. It also defines a generative process for episodes that varies the way and shot across episodes, and within a particular episode varies the shot for different classes, introducing imbalance. The range of shots induced by this episode generator is also larger than what we considered in the previous section. It is a long-tailed distribution under which small and medium shots are more likely but it is possible to also encounter very large shots (e.g. >400), though this would happen very infrequently. We include histograms of the shot distributions of Meta-Dataset's training, validation and test episodes in the Appendix. These experiments aim to investigate whether SCONE is effective on this broader shot distribution and imbalanced episodes.

Prototypical Network on ImageNet. For our first set of experiments on Meta-Dataset, we explore different strategies of episodic fine-tuning of the pre-trained classifier's embedding weights using Prototypical Networks. For this, we use Meta-Dataset's sampling algorithm to draw training episodes of varying shots and ways from ImageNet. We compare standard episodic fine-tuning ('Standard') to SCONE episodic fine-tuning ('SCONE'). Since SCONE uses L2-regularization on the sets of FiLM parameters, for a fair comparison we include a variant of standard episodic fine-tuning with L2-regularization on the batch normalization parameters ('L2 BN'). We also include an ablation of our method that does not use any smoothing of the shot distribution. Finally, we compare to EST as well, where we computed the EST transformation on the 'L2 BN' instead of the 'Standard' Prototypical Network variant, since that worked best. We tuned EST's hyperparameters very extensively, as described in Section 5.1, this time model-selecting on the validation sets of all datasets of Meta-Dataset. The values that worked best in this case are d = 480 and a trade-off value of 5e-9.

Meta-Baseline (All datasets)

| Dataset       | Classifier-Baseline | Control       | SCONE         |
|---------------|---------------------|---------------|---------------|
| ILSVRC-2012   | 53.44 ± 0.82%       | 49.83 ± 0.80% | 53.69 ± 0.83% |
| Omniglot      | 81.66 ± 0.73%       | 89.28 ± 0.51% | 90.01 ± 0.49% |
| Aircraft      | 70.65 ± 0.62%       | 81.60 ± 0.49% | 78.27 ± 0.54% |
| Birds         | 76.99 ± 0.64%       | 78.75 ± 0.59% | 79.62 ± 0.58% |
| DTD           | 71.28 ± 0.56%       | 70.47 ± 0.58% | 71.89 ± 0.59% |
| Quickdraw     | 64.09 ± 0.67%       | 72.79 ± 0.59% | 71.95 ± 0.56% |
| Fungi         | 50.23 ± 0.81%       | 55.28 ± 0.73% | 57.04 ± 0.74% |
| VGG Flower    | 89.14 ± 0.44%       | 90.13 ± 0.43% | 91.09 ± 0.39% |
| Traffic Signs | 89.14 ± 0.44%       | 90.13 ± 0.43% | 91.09 ± 0.39% |
| MSCOCO        | 53.92 ± 0.78%       | 47.85 ± 0.81% | 52.94 ± 0.82% |
| Average       | 68.03%              | 70.63%        | 71.68%        |
| Average ranks | 2.55                | 2.00          | 1.45          |

Table 2: Our reproduction of the Classifier-Baseline (Chen et al., 2020) trained on all datasets, and two variants that freeze those weights and episodically fine-tune using Meta-Baseline (Chen et al., 2020) to optimize either only the batch norm parameters ('Control'), or only SCONE's parameters ('SCONE'). In all cases, the reported numbers are query set accuracies averaged over 1K test episodes and 95% confidence intervals. We also show the average ranks (lower is better).
As noted in Section 5.1, these are substantially different than those used in the original EST paper, likely due to our deeper backbones and significantly broader range of shots explored. Finally, we ran the same 'Fine-tune on best k-shot' baseline described in Section 5.1. In this case we found that the best k was 20.

We evaluate these models on the held-out set of ImageNet classes as well as the remaining 9 datasets. We set SCONE's MAX-SHOT to 200. We tune the learning rate and decay schedule separately for each variant and we perform model selection of SCONE's hyperparameters using the validation set. All additional details are reported in the Appendix, and we plan to open source our code upon publication.

Meta-Baseline on all datasets. Next, we experiment with the recent Meta-Baseline model (Chen et al., 2020). Meta-Baseline also consists of a pre-training phase ('Classifier-Baseline') followed by an episodic fine-tuning phase ('Meta-Baseline'). Classifier-Baseline refers to simply training a classifier on the set of training classes. This variant is evaluated on few-shot episodes by discarding the ultimate classification layer and utilizing a cosine similarity-based nearest-centroid inference algorithm on the learned embeddings. Meta-Baseline then fine-tunes Classifier-Baseline's pre-trained embeddings on the episodic objective of the aforementioned nearest-centroid algorithm.

[Figure 3: UMAP projection of the learned FiLM parameters for each "shot" setting, color-coded by shots.]

When training on all datasets of Meta-Dataset, they obtained strong results using their Classifier-Baseline, which is in this case trained in a multi-task setup with separate output heads for the different datasets. They found that episodically fine-tuning that solution on all datasets did not help in general (it improved performance on some datasets but hurt performance on a larger number of datasets).

Inspired by that finding, we experimented with a SCONE training phase on top of Classifier-Baseline's strong pre-trained solution, where we froze the embedding weights to that powerful representation and we optimized only the set of SCONE's FiLM parameters for shot conditioning. We performed this fine-tuning on training episodes from all datasets, using Meta-Baseline's nearest centroid method as the episodic model. As a control experiment, we performed the same episodic fine-tuning but without shot-conditioning, where we optimized only the batch normalization parameters, keeping the remainder of the embedding weights frozen ('Control'). This control can be thought of as a special case of SCONE where MAX-SHOT is set to 1.
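For reference, here is a small NumPy sketch of the cosine-similarity nearest-centroid inference used by Classifier-Baseline and Meta-Baseline, as we understand it from the description above; names are our own, and we omit details such as a learnable temperature that the actual method may include.

```python
# A minimal sketch of cosine-similarity nearest-centroid inference (assumed).
import numpy as np

def cosine_centroid_logits(support_z, support_y, query_z, num_classes):
    """Cosine similarity of each query embedding to each class centroid."""
    centroids = np.stack([support_z[support_y == c].mean(axis=0)
                          for c in range(num_classes)])
    q = query_z / np.linalg.norm(query_z, axis=1, keepdims=True)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    return q @ c.T  # [n_query, num_classes]; predict the argmax class
```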
[Figure 4: The shot distribution s produced according to our smoothing procedure for a hypothetical 4-way episode where the shots for the four classes are: 1, 10, 23, and 103. (x-axis: shot in [0, 200]; y-axis: the value of s.)]

Findings. The results of this investigation are shown in Table 1 and Table 2 (as well as their more heavily-annotated counterparts in the Appendix, Tables 4 and 5, that show the per-row rank computation). Following (Triantafillou et al., 2020), we run a test of statistical significance described in the Appendix to determine when to bold an entry. Table 1 shows that SCONE fine-tuning outperforms standard episodic fine-tuning in the context of Prototypical Networks. Interestingly, penalizing the L2-norm of batch normalization parameters during episodic fine-tuning is beneficial even when not using SCONE, but it does not reach the performance obtained by our shot-conditioning. The ablation of SCONE that does not use any smoothing of the shot distribution is also competitive, but performs worse than full SCONE. We also observe that EST is competitive in this setting, only slightly worse than SCONE, though we note that SCONE is a more general approach that is not tied to Gaussian classifiers. Similarly, in the context of Meta-Baseline, Table 2 shows that episodically fine-tuning the batch normalization parameters of the otherwise-frozen embedding is helpful ('Control'), but using SCONE to learn a separate set of FiLM parameters for each shot yields additional gains in this setting too. Overall, despite the simplicity of SCONE, these results demonstrate its effectiveness on different shot distributions, and in different backbones.

FiLM parameter visualization. Finally, as a sanity check, we perform a UMAP projection (McInnes et al., 2018) of the learned FiLM parameters for each shot setting (Figure 3). As expected, similar shot settings tend to learn similar sets of FiLM parameters, which is reflective of the fact that they rely on similar features for classification.

Example smoothed shot distribution. To gain an intuition on the effect of our smoothing procedure, we illustrate in Figure 4 the result of smoothing an example shot distribution using m = 1 − 1e−06, which is the value of the smoothing hyperparameter that we used for our Prototypical Network experiments on Meta-Dataset. For this, we consider a hypothetical 4-way episode where the shots for the four classes are: 1, 10, 23, and 103. We observe that the largest peak is in the range of small values, due to the first three shots of the episode, with the fourth shot causing a second peak around the value 103. As a reminder, this shot distribution defines the weights of the convex combination of FiLM parameters that will be used for the episode. In practice, therefore, we are activating 'blocks' of FiLM parameters that are relevant for each episode, instead of strictly activating only the FiLM parameters of the observed shots.
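To make the smoothing concrete, here is a small NumPy sketch applied to the Figure 4 example. It is our own illustrative reading of the SMOOTH-SHOT procedure, not the authors' Algorithm 1; in particular, the per-class normalization is an assumption on our part.

```python
# An illustrative sketch of shot-distribution smoothing (assumptions noted).
import numpy as np

MAX_SHOT = 200  # value used for the Meta-Dataset experiments

def smooth_shot(shot, m):
    """Vector of length MAX_SHOT with exponentially-decaying mass around `shot`."""
    s = min(max(shot, 1), MAX_SHOT)
    positions = np.arange(1, MAX_SHOT + 1)
    # The observed shot's entry gets 1; entries i steps away get m**i.
    v = m ** np.abs(positions - s)
    return v / v.sum()  # normalize each class's vector (our assumption)

def episode_shot_distribution(class_shots, m):
    """Average the smoothed per-class vectors, as in the one-hot case."""
    return np.mean([smooth_shot(k, m) for k in class_shots], axis=0)

# The Figure 4 example: a 4-way episode with shots 1, 10, 23 and 103,
# smoothed with the paper's Meta-Dataset value m = 1 - 1e-06. Smaller m
# would concentrate the mass more tightly around the observed shots.
s = episode_shot_distribution([1, 10, 23, 103], m=1 - 1e-06)
```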
6 CONCLUSION

In summary, we present an analysis aiming to understand the role of episodic fine-tuning on top of a pre-trained model for few-shot classification. We discover that this fine-tuning phase can be used to specialize the pre-trained model to episodes of a given shot, leading to strong performance on test episodes of that shot at the expense of inferior performance on other shots. To eliminate that trade-off, we propose a shot-conditional episodic training approach that trains a model on episodes of a range of shots and can be conditioned at test time to modify its behavior appropriately depending on the shot of the given test episode. Our experimental analysis suggests that our proposed shot-conditioning mechanism is beneficial both in smaller-scale experiments, as well as in the large-scale and diverse Meta-Dataset benchmark, in the context of two different episodic models. Future work could explore how to incorporate shot-awareness in other few-shot classification models. In addition to the architectural modification of FiLM conditioning on the shot distribution, are there algorithmic adjustments that can yield additional performance gains, such as a mechanism of determining the number of inner-loop updates to perform for gradient-based meta-learners based on the number of available shots?
mzfkA51ASJ_
Not familiar with this area, but assigned to review
5: Marginally below acceptance threshold
This paper aims to understand the role of this episodic fine-tuning phase and discovers that fine-tuning on episodes of a particular shot can specialize the pre-trained model to solving episodes of that shot at the expense of performance on other shots. It proposes a shot-conditional form of episodic fine-tuning. The main contribution of this work is the training objective, which varies the shots by introducing a distribution over shots $P_k$. In addition, the model parameters are not separately or independently maintained for different shots. The author proposed a conditioning mechanism using FiLM, where $k$ is a conditional variable or input to the FiLM network ($\gamma$ and $\beta$). The idea of shot-specific feature extractors is not new, and it is a common trick in amortized variational inference to reduce the number of model parameters. In the experimental section, I did not see other baselines from previous works. The comparisons are mostly against the two methods named Standard and L2 BN, which seem to be simple variations of Prototypical Networks. Since even the proposed SCONE is a variation of Prototypical Networks, this also reduces the novelty or originality. I am not familiar with this area or related research papers, but was assigned to review this paper. My evaluation is only based on my understanding of the contents included in the manuscript. I would like to see the comments from other reviewers in this domain.
1: The reviewer's evaluation is an educated guess
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Learning Flexible Classifiers with Shot-CONditional Episodic (SCONE) Training ### Paper Abstract Early few-shot classification work advocates for episodic training, i.e. training over learning episodes each posing a few-shot classification task. However, the role of this training regime remains poorly understood, and its usefulness is still debated. Standard classification training methods (``pre-training'') followed by episodic fine-tuning have recently achieved strong results. This work aims to understand the role of this episodic fine-tuning phase through an exploration of the effect of the ``shot'' setting (number of examples per class) that is used during fine-tuning. We discover that fine-tuning on episodes of a particular shot can specialize the pre-trained model to solving episodes of that shot at the expense of performance on other shots, in agreement with a trade-off recently observed in the context of end-to-end episodic training. To amend this, we propose a shot-conditional form of episodic fine-tuning, inspired from recent work that trains a single model on a distribution of losses. Our investigation shows that this improves overall performance, without suffering disproportionately on any shot. We also examine the usefulness of this approach on the large-scale Meta-Dataset benchmark where test episodes exhibit varying shots and imbalanced classes. We find that our flexible model improves performance in that challenging environment. ### Paper Keywords ["few-shot classification", "few-shot learning", "episodic training", "meta-learning"] ### Paper Content ABSTRACTEarly few-shot classification work advocates for episodic training, i.e. training overlearning episodes each posing a few-shot classification task. However, the role ofthis training regime remains poorly understood, and its usefulness is still debated.Standard classification training methods (“pre-training”) followed by episodic fine-tuning have recently achieved strong results. This work aims to understand the roleof this episodic fine-tuning phase through an exploration of the effect of the “shot”setting (number of examples per class) that is used during fine-tuning. We discoverthat fine-tuning on episodes of a particular shot can specialize the pre-trained modelto solving episodes of that shot at the expense of performance on other shots, inagreement with a trade-off recently observed in the context of end-to-end episodictraining. To amend this, we propose a shot-conditional form of episodic fine-tuning,inspired from recent work that trains a single model on a distribution of losses.Our investigation shows that this improves overall performance, without sufferingdisproportionately on any shot. We also examine the usefulness of this approach onthe large-scale Meta-Dataset benchmark where test episodes exhibit varying shotsand imbalanced classes. We find that our flexible model improves performance inthat challenging environment.1 I NTRODUCTIONFew-shot classification is the problem of learning a classifier using only a few examples. Specifically,the aim is to utilize a training dataset towards obtaining a flexible model that has the ability to ‘quickly’learn about new classes from few examples. Success is evaluated on a number of test episodes , eachposing a classification task between previously-unseen testclasses. 
In each such episode, we aregiven a few examples, or “shots”, of each new class that can be used to adapt this model to the task athand, and the objective is to correctly classify a held-out set of examples of the new classes.A simple approach to this problem is to learn a classifier over the training classes, parameterized asa neural network feature extractor followed by a classification layer. While the classification layeris not useful at test time due to the class shift, the embedding weights that are learned during this“pre-training” phase evidently constitute a strong representation that can be used to tackle test taskswhen paired with a simple “inference algorithm” (e.g. nearest-neighbour, logistic regression) to makepredictions for each example in the test episode given the episode’s small training set. Alternatively,early influential works on few-shot classification (Vinyals et al., 2016) advocate for episodic training ,a regime where the training objective is expressed in terms of performance on a number of trainingepisodes of the same structure as the test episodes, but with the classes sampled from the training set.It was hypothesized that this episodic approach captures a more appropriate inductive bias for theproblem of few-shot classification and would thus lead to better generalization.However, there is an ongoing debate about whether episodic training is in fact required for obtainingthe best few-shot classification performance. Notably, recent work (Chen et al., 2019; Dhillon et al.,2020) proposed strong “pre-training” baselines that leverage common best practices for supervisedtraining (e.g. normalization schemes, data augmentation) to obtain a powerful representation thatworks well for this task. Interestingly, other recent work combines the pre-training of a single classifierwith episodic fine-tuning by removing the classification head and continuing to train the embeddingnetwork using the episodic inference algorithm that will be applied at test time (Triantafillou et al.,2020; Chen et al., 2020). The success of this hybrid approach suggests that perhaps the two regimes1Under review as a conference paper at ICLR 2021have complementary strengths, but the role of this episodic fine-tuning is poorly understood: what isthe nature of the modification it induces into the pre-trained solution? Under which conditions is itrequired in order to achieve the best performance?As a step towards answering those questions, we investigate the effect of the shot used duringepisodic fine-tuning on the resulting model’s performance on test tasks of a range of shots. We areparticularly interested in understanding whether the shot of the training episodes constitutes a sourceof information that the model can leverage to improve its few-shot classification performance onepisodes of that shot at test time. Our analysis reveals that indeed a particular functionality thatthis fine-tuning phase may serve is to specialize a pre-trained model to solving tasks of a particularshot; accomplished by performing the fine-tuning on episodes of that shot. 
However, perhapsunsurprisingly, we find that specializing to a given shot comes at the expense of hurting performancefor other shots, in agreement with (Cao et al., 2020)’s theoretical finding in the context of PrototypicalNetworks (Snell et al., 2017) where inferior performance was reported when the shot at training timedid not match the shot at test time.Given those trade-offs, how can our newfound understanding of episodic fine-tuning as shot-specialization help us in practice? It is unrealistic to assume that we will always have the samenumber of labeled examples for every new class we hope to learn at test time, so we are interested inapproaches that operate well on tasks of a range of shots. However, it is impractical to fine-tune aseparate episodic model for every shot, and intuitively that seems wasteful as we expect that tasksof similar shots should require similar models. Motivated by this, we propose to train a singleshot-conditional model for specializing the pre-trained solution to a wide spectrum of shots withoutsuffering trade-offs. This leads to a compact but flexible model that can be conditioned to be madeappropriate for the shot appearing in each test episode.In what follows we provide some background on few-shot classification and episodic models andthen introduce our proposed shot-conditioning approach and related work. We then present ourexperimental analysis on the effect of the shot chosen for episodic fine-tuning, and we observe thatour shot-conditional training approach is beneficial for obtaining a general flexible model that doesnot suffer the trade-offs inherent in naively specializing to any particular shot. Finally, we experimentwith our proposed shot-conditional approach in the large-scale Meta-Dataset benchmark for few-shotclassification, and demonstrate its effectiveness in that challenging environment.2 B ACKGROUNDProblem definition Few-shot classification aims to classify test examples of unseen classes froma small labeled training set. The standard evaluation procedure involves sampling classificationepisodes by pickingNclasses at random from a test set of classes Ctestand sampling two disjointsets of examples from the Nchosen classes: a support set (or training set) of klabeled examplesper class, and a query set (or test set) of unlabeled examples, forming N-way,k-shot episodes. Themodel is allowed to use the support set, in addition to knowledge acquired while training on a disjointset of classesCtrain, to make a prediction for examples in the query set, and is evaluated on its queryset accuracy averaged over multiple test episodes.Episodic training Early few-shot classification approaches (Vinyals et al., 2016) operate underthe assumption that obtaining a model capable of few-shot classification requires training it on(mini-batches of) learning episodes, instead of (mini-batches of) individual examples as in standardsupervised learning. These learning episodes are sampled in the same way as described above fortest episodes, but with classes sampled from Ctrainthis time. In other words, the model is trained tominimize a loss of the form:ES;QPN;ktrain241jQjX(x;y)2Qlogp(yjx;S)35 (1)whereSandQare support and query sets sampled from the distribution PN;ktrain ofN-way,k-shottraining episodes induced by Ctrain, andrepresents the model’s parameters. This training regimeis often characterized as meta-learning orlearning to learn , i.e. learning over many episodes howto learn within an episode (from few labeled examples). 
Episodic models differ by their "inference algorithm", i.e. the manner in which $p_\theta(y \mid x, S)$ is computed to classify query examples based on the support set.

[Figure 1: SCONE conditions the feature extractor $f_\theta$ on an episode's shot distribution. Support and query examples $x \in S, Q$ pass through sub-networks interleaved with FiLM layers, whose parameters are selected from the episode's shot distribution.]

Prototypical Networks. Prototypical Networks (Snell et al., 2017) is a simple but effective episodic model which constructs a prototype $\mathbf{c}_c$ for each class $c$ in an episode as

$$\mathbf{c}_c = \frac{1}{|S_c|} \sum_{x \in S_c} f_\theta(x), \quad (2)$$

where $f_\theta$ is an embedding function parametrized by $\theta$ and $S_c$ represents the set of support examples belonging to class $c$, and classifies a given query example as

$$p_\theta(y = c \mid x, S) = \frac{\exp(-\|x - \mathbf{c}_c\|_2^2)}{\sum_{c'} \exp(-\|x - \mathbf{c}_{c'}\|_2^2)}. \quad (3)$$

3 SHOT CONDITIONAL EPISODIC (SCONE) TRAINING

In this section we introduce Shot CONditional Episodic (SCONE) training for the purpose of specializing a strong pre-trained model to solving few-shot classification tasks of a range of different shots, without suffering disproportionately for any shot.

Training objective. Training episodically involves minimizing the objective shown in Equation 1. We first sample an episode from $P^{N,k}_{\text{train}}$ and compute a prediction $p_\theta(y \mid x, S)$ for each query example $x$. We then compute the cross-entropy loss on the query set using those predictions and perform a parameter update by backpropagating its gradient with respect to $\theta$ into the inference algorithm. In this work we concern ourselves with models that use an embedding function $f_\theta$ to obtain a representation for the support and query examples of each episode on top of which the inference algorithm is applied. In Prototypical Networks, for instance, $f_\theta$ contains all of the model's learnable parameters.

SCONE trains on episodes of varying shots and conditions the model on each episode's shot distribution (Figure 1) by minimizing

$$\mathbb{E}_{k \sim P_k} \left[ \mathbb{E}_{S,Q \sim P^{N,k}_{\text{train}}} \left[ \frac{1}{|Q|} \sum_{(x,y) \in Q} -\log p_{\theta_k}(y \mid x, S) \right] \right], \quad (4)$$

where $P_k$ is the distribution over shots at training time and $\theta_k$ depends on an episode's sampled shots. In the Appendix, we include an algorithm box outlining SCONE fine-tuning.

Conditioning mechanism. Rather than learning a separate set of model parameters for each shot setting, we modulate a subset of its parameters using FiLM (Perez et al., 2018), a simple conditioning mechanism which performs an affine feature-wise transformation of its input $x$ based on conditioning information $k$ (in our case, the episode's number of shots):

$$\text{FiLM}(x) = \gamma(k) \odot x + \beta(k). \quad (5)$$

The dependency of $\gamma$ and $\beta$ on $k$ is handled by maintaining distinct values for each shot setting and selecting the appropriate $\gamma$ and $\beta$ based on an episode's shot. Equivalently, we can think of our approach as a compact representation of many shot-specific feature extractors which share all but their FiLM layer parameters.

More concretely, we maintain a set of FiLM parameters for each shot in the [1, MAX-SHOT] range (where MAX-SHOT is a hyperparameter) and let all shot settings greater than or equal to MAX-SHOT share the same FiLM parametrization. As is often the case in practice, instead of inserting FiLM layers in the network's architecture, we modulate the scaling and shifting parameter values of existing batch normalization layers (Dumoulin et al., 2017; De Vries et al., 2017). When performing episodic fine-tuning, we initialize all sets of FiLM parameters to those learned during pre-training (i.e. the learned batch normalization scaling and shifting coefficients).
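A minimal sketch of the per-shot FiLM conditioning of Equation 5, written as a standalone module; the module name, shapes, and the identity initialization matching the paper's description are our assumptions, not the released implementation.

```python
# A sketch of per-shot FiLM conditioning (Equation 5); names/shapes assumed.
import torch
import torch.nn as nn

class ShotFiLM(nn.Module):
    def __init__(self, num_channels, max_shot=200):
        super().__init__()
        # One (gamma, beta) row per shot in [1, MAX-SHOT]; shots >= MAX-SHOT
        # share the last row. Initialized to identity (gamma=1, beta=0).
        self.gamma = nn.Parameter(torch.ones(max_shot, num_channels))
        self.beta = nn.Parameter(torch.zeros(max_shot, num_channels))

    def forward(self, x, shot_dist):
        # shot_dist: [max_shot] tensor of convex weights over shot settings
        # (a one-hot vector for a class-balanced episode of a single shot).
        gamma = shot_dist @ self.gamma   # [num_channels]
        beta = shot_dist @ self.beta     # [num_channels]
        return gamma.view(1, -1, 1, 1) * x + beta.view(1, -1, 1, 1)
```

In practice the paper modulates existing batch normalization scale/shift coefficients rather than inserting new layers, but the arithmetic is the same feature-wise affine transformation shown here.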
These different sets of FiLM parameters are then free to deviate from each other as a result of fine-tuning. We found it beneficial to penalize the L2-norm of $\beta$ (regularizing the offset towards 0) and the L2-norm of $\gamma - 1$ (regularizing the scaling towards 1). For this purpose, we introduce a hyperparameter that controls the strength of this FiLM weight decay.

Handling class-imbalanced episodes. SCONE can also be used on imbalanced episodes, where different classes have different shots. In that case, instead of selecting a single set of FiLM parameters, we compute the FiLM parameters for an episode as the convex combination of the FiLM parameters associated with all shots found in the episode, where the weights of that combination are determined based on the frequency with which each shot appears in the episode.

Concretely, the episode's "shot distribution" $s$ (a vector of length MAX-SHOT) is obtained by averaging the one-hot representations of the shots of the classes appearing in an episode. In the special case of a class-balanced episode, the resulting average will be exactly a one-hot vector. This shot distribution is then used for the purpose of selecting the episode's FiLM parameters. This can be thought of as an embedding lookup $s^T F$ in a matrix $F$ of FiLM parameters using a shot distribution $s$.

Smoothing the shot distribution. We expect similar shot values to require similar FiLM parameters, which we incorporate as an inductive bias by smoothing the shot distribution. We outline our SMOOTH-SHOT procedure in the Appendix in Algorithm 1, which receives the shot $s$ of a class (an integer) and a smoothing hyperparameter $m$ (a float in $[0, 1]$) and returns the smoothed shot for that class, which is a vector of length MAX-SHOT. Essentially, the result of smoothing is that the returned vector representation of $s$ is not strictly one-hot with only the position corresponding to the observed shot $s$ being 'on'. Instead, some entries surrounding that position are also non-zero. Specifically, the entries that are directly adjacent to $s$ receive the value $m$, the entries two spots away from $s$ the value $m^2$, and so on, with entries further away from $s$ receiving exponentially-decaying values; a code sketch of this smoothing and the resulting lookup follows below.
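The following sketch of the smoothing and convex-combination lookup is our own reading of the description above, not the paper's code; in particular, whether each class's smoothed vector is normalized to sum to one is an assumption on our part.

```python
# A sketch (our assumptions) of SMOOTH-SHOT and the s^T F FiLM lookup.
import numpy as np

def smooth_shot(shot, m, max_shot=200):
    """One class's shot -> vector with exponentially-decaying mass around it."""
    idx = min(shot, max_shot) - 1          # shots >= MAX-SHOT share last entry
    v = m ** np.abs(np.arange(max_shot) - idx)
    return v / v.sum()                     # normalization is our assumption

def episode_shot_distribution(class_shots, m, max_shot=200):
    """Average the per-class smoothed vectors (handles imbalanced episodes)."""
    return np.mean([smooth_shot(s, m, max_shot) for s in class_shots], axis=0)

def film_lookup(shot_dist, F):
    """Embedding lookup s^T F: convex combination of per-shot FiLM rows.
    F has shape [max_shot, num_film_params]."""
    return shot_dist @ F
```

With $m = 0$ this reduces to the strict one-hot selection; larger $m$ spreads weight onto neighbouring shot settings, encoding the inductive bias that similar shots should share parameters.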
4 RELATED WORK

Few-shot classification. A plethora of models have been recently proposed for few-shot classification, and we refer the reader to (Hospedales et al., 2020) for a broad survey. Before episodic training was introduced, few-shot classifiers often relied on metric learning (Koch et al., 2015; Triantafillou et al., 2017). This theme persisted in early episodic models like Matching Networks (Vinyals et al., 2016) and Prototypical Networks (Snell et al., 2017) where classification is made via nearest-neighbour comparisons in the embedding space. Matching Networks apply a soft k-NN algorithm where the label of a query example is predicted to be the weighted average of the (one-hot) support labels with the weights determined by the similarity of that query to each support example. Gradient-based episodic models are another popular family of approaches following the influential MAML paper (Finn et al., 2017). To create a classifier for each given episode, this approach fine-tunes the embedding weights along with a linear classifier head using gradient descent on the support set. Intuitively, this results in learning an embedding space that serves as a useful starting point from which a few steps of gradient descent suffice to adapt the model to each episode's classification task. Proto-MAML (Triantafillou et al., 2020) is a simple extension that initializes the linear classifier for each episode from the prototypes of the classes appearing in that episode.

Recently, the field has shifted towards studying few-shot classification in more realistic environments like tiered-ImageNet (Ren et al., 2018) and Meta-Dataset (Triantafillou et al., 2020), which has encouraged research into newly-introduced challenges, such as accounting for multiple diverse datasets. Along these lines, Requeima et al. (2019); Bateni et al. (2019) proposed novel task conditioning approaches, Saikia et al. (2020) introduced an improved hyperparameter tuning approach, and Dvornik et al. (2020) proposed a method for selecting an appropriate set of features for each test episode out of a universal feature representation.

Understanding episodic learning. Our work inscribes itself in a recent line of work attempting to understand the differences between episodic and non-episodic learning. Goldblum et al. (2020) attempts to understand episodic learning from the perspective of how classes cluster in feature-space (for models that learn a final classification layer on top of a feature extractor) as well as from the perspective of local minima clusters (for gradient-based meta-learners). Huang et al. (2020); Chao et al. (2020) draw parallels between learning episodes and supervised learning examples, Bronskill et al. (2020) discusses batch normalization in episodic learning, drawing parallels from its use in non-episodic learning, and Chen et al. (2020) contrasts episodic and non-episodic learning in their ability to generalize to new examples of previously seen classes or new examples of unseen classes. Finally, Cao et al. (2020) theoretically investigates the role of the shot in Prototypical Networks to explain the observed performance drop when there is a mismatch between the shots at training and test time. Instead, we empirically study the effect of the shot chosen during episodic fine-tuning of a pre-trained solution, in a larger-scale and more diverse environment.

Feature-wise conditioning. Feature-wise transformations such as FiLM (Perez et al., 2018) are used as a conditioning mechanism in a variety of problem settings; see Dumoulin et al. (2018) for a survey on the topic. Shu et al. (2019) devise a loss re-weighting scheme that conditions on the loss at each time-step, which is a scalar, thus bearing similarity to our approach when conditioning on a scalar shot setting. In few-shot classification, Sun et al. (2019) use feature-wise transformations as a means of transfer to new tasks. Oreshkin et al. (2018); Requeima et al. (2019); Bateni et al. (2019) use FiLM to condition metric learners' backbones on the support set, while Dvornik et al. (2020) uses it as a way to represent many pre-trained classifiers using a shared parametrization. FiLM has also been used successfully for class-incremental learning (Liu et al., 2020) and semi-supervised few-shot learning (Li et al., 2019).
Notably, TADAM (Oreshkin et al., 2018), CNAPs (Requeima et al., 2019) and Simple-CNAPs (Bateni et al., 2019) also use task conditioning, but they use the mean of the support set for this and thus the 'shot' information is discarded. The purpose of our conditioning mechanism is instead to make the backbone shot-aware. The idea of shot-conditional learners is inspired by recent work that investigates loss-conditional training using feature-wise transformations (Dosovitskiy & Djolonga, 2020; Babaeizadeh & Ghiasi, 2020).

5 EXPERIMENTS

5.1 EXPLORING THE ROLE OF 'SHOTS' DURING EPISODIC FINE-TUNING

In this subsection, we examine the effect of the 'shot' that is used during the episodic fine-tuning phase, and in particular how it impacts the resulting model's ability to solve test episodes of different shots. We consider either using a fixed shot $k$ throughout the fine-tuning phase, or fine-tuning on episodes of a distribution of shots. In the latter case, we explore both standard fine-tuning as well as SCONE fine-tuning that equips the model with the shot-conditioning mechanism described in the previous section. We also compare against EST (Cao et al., 2020).

Experimental setup. We ran this round of experiments on ImageNet using the class splits proposed in Meta-Dataset. First, we pre-trained a standard classifier on the set of training classes of ImageNet. We then removed the topmost classification layer, leaving us with a pre-trained backbone that we used as the initialization for the subsequent episodic fine-tuning round.

[Figure 2: Test accuracy on three different evaluation shots (1-shot, 5-shot, and 40-shot), comparing 'Fine-tune on 1-shot', 'Fine-tune on 5-shot', 'Fine-tune on 40-shot', 'Fine-tune on all shots', 'SCONE Fine-tune on all shots', 'EST Fine-tune on all shots', and 'Fine-tune on best k-shot'. Fine-tuning exclusively on a particular shot leads to the best test accuracy on that shot but poor accuracy on different shots. Fine-tuning on a range of shots is a reasonable general solution, but its performance can be improved when using SCONE, thanks to its conditioning mechanism that offers a compact form of shot specialization.]

We ran the following variants of episodic fine-tuning: exclusively on 1-shot episodes ('Fine-tune on 1-shot'), exclusively on 5-shot episodes ('Fine-tune on 5-shot'), on episodes whose shot is drawn uniformly from the range [1, 40] ('Fine-tune on all shots'), and on episodes with that same shot distribution but using SCONE ('SCONE Fine-tune on all shots'), which additionally equips the backbone with the shot conditioning mechanism described in the previous section. We also consider 'Fine-tune on best k-shot', an additional baseline that fine-tunes exclusively on the shot $k$ that is found to work best on average on the validation set (on the range of shots 1-40). For this, we trained models for $k = 1, 5, 10, 15, 20, 30, 40$ and found the best to be $k = 15$.

As mentioned in the previous section, when applying SCONE training, we penalize the L2 norm of FiLM parameters.
For a fair comparison with the other models, we applied the same regularization to the batch normalization parameters of all models during the episodic fine-tuning phase, and we found this to be generally helpful. We tuned the strength of this regularization separately for each model and picked the variant that worked best on the validation set, which we report in the Appendix. We set SCONE's MAX-SHOT hyperparameter to be 40 for this experiment.

We also compare to EST (Cao et al., 2020), which is a theoretically-grounded method for building shot resiliency in Gaussian classifiers. This involves applying a linear transformation on top of the learned embeddings that aims to strike a good balance between maximizing the inter-class variance and minimizing the intra-class variance. In practice, that trade-off is controlled via a hyperparameter. We applied EST on top of the embeddings of 'Fine-tune on all shots' and we tuned that trade-off hyperparameter and the hyperparameter $d$ controlling the projection dimensionality very extensively. The values that we found worked best (selected on the validation set of ImageNet on the range of shots 1-40) are substantially different than those used in the original EST paper: $d = 480$ and a trade-off coefficient of 5e-8 (versus the original $d = 120$ and 1e-3). We believe that this discrepancy may be due to our deeper backbones and larger range of shots. The EST configuration that worked best for us yields a minimal reduction in the embedding dimensionality, and primarily favours maximizing the inter-class variance, with the term that minimizes the intra-class variance having minimal effect.

In all cases, we fix the 'way' to 5. We use Prototypical Networks as the episodic model and we perform early stopping and model selection on the validation set of classes, where the validation performance of a variant is computed on episodes of the same (distribution of) shot(s) that it is trained on. All models are tested on a held-out test set of classes that is not seen during pre-training nor episodic fine-tuning, on 5-way episodes of different shot settings.

Findings. We observe from Figure 2 that fine-tuning on a fixed shot yields the best results on test episodes of that shot. For example, 1-shot accuracies show that 'Fine-tune on 1-shot' surpasses the performance of all other variants on 1-shot test episodes, with the analogous findings in 5-shot and 40-shot accuracies for 5-shot and 40-shot fine-tuning, respectively. Therefore, a particular functionality that the episodic fine-tuning phase may serve is to specialize the pre-trained model for performing well on tasks of a particular shot. However, as illustrated in all sub-plots of Figure 2, this shot specialization comes at the cost of severely reduced performance on tasks of very different shots. For instance, the model that is specialized for 40-shot tasks ('Fine-tune on 40-shot') performs very poorly on 1-shot test tasks and vice-versa. We also note that the 'Fine-tune on best k-shot' model does not suffice to perform well in all settings either: since $k = 15$ there, it performs really poorly on 1-shot, for instance.

In practice, it may be desirable to perform well on more than a single shot setting at test time, without having to fine-tune multiple separate shot-specialized models.
A reasonable approach to that is episodically fine-tuning on a range of shots, to obtain a general model. Indeed, Figure 2 shows that 'Fine-tune on all shots' does not perform too poorly on any shot but, perhaps unsurprisingly, in any given setting, it falls short of the performance of the corresponding shot-specialized model.

Prototypical Networks (ImageNet only)
Dataset       | Standard    | L2 BN       | EST         | Best k-shot | SCONE w/o S | SCONE
ILSVRC-2012   | 50.90±1.12% | 51.81±1.06% | 52.17±1.09% | 52.36±1.08% | 52.98±1.09% | 52.51±1.11%
Omniglot      | 63.12±1.37% | 63.14±1.32% | 66.07±1.29% | 65.94±1.33% | 64.71±1.32% | 65.60±1.34%
Aircraft      | 54.30±0.97% | 53.26±0.97% | 55.64±0.88% | 56.03±0.95% | 55.38±0.96% | 55.38±0.96%
Birds         | 68.22±0.97% | 69.21±1.01% | 67.17±1.02% | 68.63±1.11% | 68.98±1.04% | 69.70±1.01%
DTD           | 66.62±0.90% | 68.33±0.81% | 68.20±0.77% | 69.61±0.80% | 68.68±0.78% | 69.58±0.77%
Quickdraw     | 59.79±0.98% | 59.17±0.96% | 60.05±0.97% | 60.68±0.97% | 60.00±1.00% | 60.81±0.95%
Fungi         | 36.77±1.07% | 38.96±1.10% | 39.50±1.12% | 37.96±1.08% | 39.19±1.15% | 39.66±1.12%
VGG Flower    | 86.61±0.87% | 87.70±0.77% | 88.55±0.65% | 87.45±0.86% | 86.98±0.80% | 88.03±0.73%
Traffic Signs | 48.64±1.06% | 46.54±1.03% | 48.41±1.07% | 50.26±1.16% | 47.61±1.05% | 48.24±1.09%
MSCOCO        | 43.02±1.09% | 43.11±1.05% | 43.45±1.05% | 43.20±1.18% | 43.43±1.08% | 44.25±1.11%
Average       | 57.80%      | 58.12%      | 58.92%      | 59.21%      | 58.79%      | 59.38%
Average ranks | 4.90        | 4.10        | 3.25        | 3.05        | 3.00        | 2.70

Table 1: Prototypical Networks fine-tuned on ImageNet ('Standard') with the addition of L2 regularization on the batch normalization weights ('L2 BN'), EST (Cao et al., 2020), the 'Fine-tune on best k-shot' baseline ('Best k-shot') and SCONE, including the ablation that omits the shot smoothing ('SCONE w/o S'). The reported numbers are query set accuracies averaged over 600 test episodes and 95% confidence intervals. We also show the average ranks (lower is better). We report details in the Appendix on rank computation and statistical testing.

Finally, we observe that SCONE fine-tuning outperforms its shot-unaware counterpart in all settings ('SCONE Fine-tune on all shots' vs 'Fine-tune on all shots'). This constitutes evidence that SCONE fine-tuning indeed leads to a more flexible model that can adapt to the shot of each episode via its conditioning mechanism, without suffering the trade-offs inherent in naively specializing a model exclusively to any particular shot. We can view a SCONE model as a very compact way of representing multiple shot-specialized models, where the information required for that specialization resides in the light-weight FiLM parameters. SCONE also outperforms the EST approach in this setting, which also strives for shot resiliency, but does so by encouraging invariance to the shot setting rather than shot awareness as in SCONE.

5.2 LARGE-SCALE EXPERIMENTS ON META-DATASET

In what follows, we apply SCONE to the diverse and challenging Meta-Dataset benchmark for few-shot classification (Triantafillou et al., 2020). Meta-Dataset is comprised of ten distinct image datasets, including natural images, handwritten characters and sketches. It also defines a generative process for episodes that varies the way and shot across episodes, and within a particular episode varies the shot for different classes, introducing imbalance. The range of shots induced by this episode generator is also larger than what we considered in the previous section. It is a long-tailed distribution under which small and medium shots are more likely but it is possible to also encounter very large shots (e.g. >400), though this would happen very infrequently.
We include histograms of the shot distributions of Meta-Dataset's training, validation and test episodes in the Appendix. These experiments aim to investigate whether SCONE is effective on this broader shot distribution and imbalanced episodes.

Prototypical Network on ImageNet. For our first set of experiments on Meta-Dataset, we explore different strategies of episodic fine-tuning of the pre-trained classifier's embedding weights using Prototypical Networks. For this, we use Meta-Dataset's sampling algorithm to draw training episodes of varying shots and ways from ImageNet. We compare standard episodic fine-tuning ('Standard') to SCONE episodic fine-tuning ('SCONE'). Since SCONE uses L2-regularization on the sets of FiLM parameters, for a fair comparison we include a variant of standard episodic fine-tuning with L2-regularization on the batch normalization parameters ('L2 BN'). We also include an ablation of our method that does not use any smoothing of the shot distribution. Finally, we compare to EST as well, where we computed the EST transformation on the 'L2 BN' instead of the 'Standard' Prototypical Network variant, since that worked best. We tuned EST's hyperparameters very extensively, as described in Section 5.1, this time model-selecting on the validation sets of all datasets of Meta-Dataset. The values that worked best in this case are $d = 480$ and a trade-off coefficient of 5e-9. As noted in Section 5.1, these are substantially different than those used in the original EST paper, likely due to our deeper backbones and significantly broader range of shots explored. Finally, we ran the same 'Fine-tune on best k-shot' baseline described in Section 5.1. In this case we found that the best $k$ was 20.

We evaluate these models on the held-out set of ImageNet classes as well as the remaining 9 datasets. We set SCONE's MAX-SHOT to 200. We tune the learning rate and decay schedule separately for each variant and we perform model selection of SCONE's hyperparameters using the validation set. All additional details are reported in the Appendix, and we plan to open source our code upon publication.

Meta-Baseline (All datasets)
Dataset       | Classifier-Baseline | Control     | SCONE
ILSVRC-2012   | 53.44±0.82%         | 49.83±0.80% | 53.69±0.83%
Omniglot      | 81.66±0.73%         | 89.28±0.51% | 90.01±0.49%
Aircraft      | 70.65±0.62%         | 81.60±0.49% | 78.27±0.54%
Birds         | 76.99±0.64%         | 78.75±0.59% | 79.62±0.58%
DTD           | 71.28±0.56%         | 70.47±0.58% | 71.89±0.59%
Quickdraw     | 64.09±0.67%         | 72.79±0.59% | 71.95±0.56%
Fungi         | 50.23±0.81%         | 55.28±0.73% | 57.04±0.74%
VGG Flower    | 89.14±0.44%         | 90.13±0.43% | 91.09±0.39%
Traffic Signs | 89.14±0.44%         | 90.13±0.43% | 91.09±0.39%
MSCOCO        | 53.92±0.78%         | 47.85±0.81% | 52.94±0.82%
Average       | 68.03%              | 70.63%      | 71.68%
Average ranks | 2.55                | 2           | 1.45

Table 2: Our reproduction of the Classifier-Baseline (Chen et al., 2020) trained on all datasets, and two variants that freeze those weights and episodically fine-tune using Meta-Baseline (Chen et al., 2020) to optimize either only the batch norm parameters ('Control'), or only SCONE's parameters ('SCONE'). In all cases, the reported numbers are query set accuracies averaged over 1K test episodes and 95% confidence intervals. We also show the average ranks (lower is better).

Meta-Baseline on all datasets. Next, we experiment with the recent Meta-Baseline model (Chen et al., 2020). Meta-Baseline also consists of a pre-training phase ('Classifier-Baseline') followed by an episodic fine-tuning phase ('Meta-Baseline'). Classifier-Baseline refers to simply training a classifier on the set of training classes.
This variant is evaluated on few-shot episodes by discarding the ultimate classification layer and utilizing a cosine similarity-based nearest-centroid inference algorithm on the learned embeddings. Meta-Baseline then fine-tunes Classifier-Baseline's pre-trained embeddings on the episodic objective of the aforementioned nearest-centroid algorithm.

[Figure 3: UMAP projection of the learned FiLM parameters for each "shot" setting, color-coded by shots.]

When training on all datasets of Meta-Dataset, they obtained strong results using their Classifier-Baseline, which is in this case trained in a multi-task setup with separate output heads for the different datasets. They found that episodically fine-tuning that solution on all datasets did not help in general (it improved performance on some datasets but hurt performance on a larger number of datasets).

Inspired by that finding, we experimented with a SCONE training phase on top of Classifier-Baseline's strong pre-trained solution where we froze the embedding weights to that powerful representation and we optimized only the set of SCONE's FiLM parameters for shot conditioning. We performed this fine-tuning on training episodes from all datasets, using Meta-Baseline's nearest centroid method as the episodic model. As a control experiment, we performed the same episodic fine-tuning but without shot-conditioning, where we optimized only the batch normalization parameters, keeping the remainder of the embedding weights frozen ('Control'). This control can be thought of as a special case of SCONE where MAX-SHOT is set to 1.

[Figure 4: The shot distribution s produced according to our smoothing procedure for a hypothetical 4-way episode where the shots for the four classes are: 1, 10, 23, and 103.]

Findings. The results of this investigation are shown in Table 1 and Table 2 (as well as their more heavily-annotated counterparts in the Appendix, Tables 4 and 5, that show the per-row rank computation). Following (Triantafillou et al., 2020), we run a test of statistical significance described in the Appendix to determine when to bold an entry. Table 1 shows that SCONE fine-tuning outperforms standard episodic fine-tuning in the context of Prototypical Networks. Interestingly, penalizing the L2-norm of batch normalization parameters during episodic fine-tuning is beneficial even when not using SCONE, but it does not reach the performance obtained by our shot-conditioning. The ablation of SCONE that does not use any smoothing of the shot distribution is also competitive, but performs worse than full SCONE. We also observe that EST is competitive in this setting, only slightly worse than SCONE, though we note that SCONE is a more general approach that is not tied to Gaussian classifiers. Similarly, in the context of Meta-Baseline, Table 2 shows that episodically fine-tuning the batch normalization parameters of the otherwise-frozen embedding is helpful ('Control'), but using SCONE to learn a separate set of FiLM parameters for each shot yields additional gains in this setting too. Overall, despite the simplicity of SCONE, these results demonstrate its effectiveness on different shot distributions, and in different backbones.

FiLM parameter visualization. Finally, as a sanity check, we perform a UMAP projection (McInnes et al., 2018) of the learned FiLM parameters for each shot setting (Figure 3).
As expected, similar shot settings tend to learn similar sets of FiLM parameters, which is reflective of the fact that they rely on similar features for classification.

Example smoothed shot distribution. To gain an intuition on the effect of our smoothing procedure, we illustrate in Figure 4 the result of smoothing an example shot distribution using $m = 1 - 1e{-6}$, which is the value of the smoothing hyperparameter that we used for our Prototypical Network experiments on Meta-Dataset. For this, we consider a hypothetical 4-way episode where the shots for the four classes are: 1, 10, 23, and 103. We observe that the largest peak is in the range of small values, due to the first three shots of the episode, with the fourth shot causing a second peak around the value 103. As a reminder, this shot distribution defines the weights of the convex combination of FiLM parameters that will be used for the episode. In practice therefore, we are activating 'blocks' of FiLM parameters that are relevant for each episode, instead of strictly activating only the FiLM parameters of the observed shots.

6 CONCLUSION

In summary, we present an analysis aiming to understand the role of episodic fine-tuning on top of a pre-trained model for few-shot classification. We discover that this fine-tuning phase can be used to specialize the pre-trained model to episodes of a given shot, leading to strong performance on test episodes of that shot at the expense of inferior performance on other shots. To eliminate that trade-off, we propose a shot-conditional episodic training approach that trains a model on episodes of a range of shots and can be conditioned at test time to modify its behavior appropriately depending on the shot of the given test episode. Our experimental analysis suggests that our proposed shot-conditioning mechanism is beneficial both in smaller-scale experiments, as well as in the large-scale and diverse Meta-Dataset benchmark, in the context of two different episodic models. Future work could explore how to incorporate shot-awareness in other few-shot classification models. In addition to the architectural modification of FiLM conditioning on the shot distribution, are there algorithmic adjustments that can yield additional performance gains, such as a mechanism of determining the number of inner-loop updates to perform for gradient-based meta-learners based on the number of available shots?<|im_end|> <|im_start|>assistant ### Review Title Not familiar with this area, but assigned to review ### Review Text This paper aims to understand the role of this episodic fine-tuning phase and discovers that fine-tuning on episodes of a particular shot can specialize the pre-trained model to solving episodes of that shot at the expense of performance on other shots. It proposes a shot-conditional form of episodic fine-tuning. The main contribution of this work is the training objective, which varies the shots by introducing a distribution over shots $P_k$. In addition, the model parameters are not separately or independently maintained for different shots. The author proposed a conditioning mechanism using FiLM, where $k$ is a conditional variable or input to the FiLM network ($\gamma$ and $\beta$). The idea of shot-specific feature extractors is not new, and it is a common trick in amortized variational inference to reduce the number of model parameters. In the experimental section, I did not see other baselines from previous works.
The comparisons are mostly against the two methods named Standard and L2 BN, which seem to be simple variations of Prototypical Networks. Since the proposed SCONE is itself a variation of Prototypical Networks, this also reduces the novelty or originality. I am not familiar with this area or the related research papers, but was assigned to review this paper. My evaluation is based only on my understanding of the contents included in the manuscript. I would like to see the comments from other reviewers in this domain. ### Review Rating 5: Marginally below acceptance threshold ### Review Confidence 1: The reviewer's evaluation is an educated guess<|im_end|>
BJlxm30cKm
ICLR.cc/2019/Conference
2019
An Empirical Study of Example Forgetting during Deep Neural Network Learning
["Mariya Toneva*", "Alessandro Sordoni*", "Remi Tachet des Combes*", "Adam Trischler", "Yoshua Bengio", "Geoffrey J. Gordon"]
Inspired by the phenomenon of catastrophic forgetting, we investigate the learning dynamics of neural networks as they train on single classification tasks. Our goal is to understand whether a related phenomenon occurs when data does not undergo a clear distributional shift. We define a ``forgetting event'' to have occurred when an individual training example transitions from being classified correctly to incorrectly over the course of learning. Across several benchmark data sets, we find that: (i) certain examples are forgotten with high frequency, and some not at all; (ii) a data set's (un)forgettable examples generalize across neural architectures; and (iii) based on forgetting dynamics, a significant fraction of examples can be omitted from the training data set while still maintaining state-of-the-art generalization performance.
["catastrophic forgetting", "sample weighting", "deep generalization"]
ABSTRACT

Inspired by the phenomenon of catastrophic forgetting, we investigate the learning dynamics of neural networks as they train on single classification tasks. Our goal is to understand whether a related phenomenon occurs when data does not undergo a clear distributional shift. We define a "forgetting event" to have occurred when an individual training example transitions from being classified correctly to incorrectly over the course of learning. Across several benchmark data sets, we find that: (i) certain examples are forgotten with high frequency, and some not at all; (ii) a data set's (un)forgettable examples generalize across neural architectures; and (iii) based on forgetting dynamics, a significant fraction of examples can be omitted from the training data set while still maintaining state-of-the-art generalization performance.

1 INTRODUCTION

Many machine learning models, in particular neural networks, cannot perform continual learning. They have a tendency to forget previously learnt information when trained on new tasks, a phenomenon usually called catastrophic forgetting (Kirkpatrick et al., 2017; Ritter et al., 2018). One of the hypothesized causes of catastrophic forgetting in neural networks is the shift in the input distribution across different tasks, e.g., a lack of common factors or structure in the inputs of different tasks might lead standard optimization techniques to converge to radically different solutions each time a new task is presented. In this paper, we draw inspiration from this phenomenon and investigate the extent to which a related forgetting process occurs as a model learns examples traditionally considered to belong to the same task.

Similarly to the continual learning setting, in stochastic gradient descent (SGD) optimization, each mini-batch can be considered as a mini-"task" presented to the network sequentially. In this context, we are interested in characterizing the learning dynamics of neural networks by analyzing (catastrophic) example forgetting events. These occur when examples that have been "learnt" (i.e., correctly classified) at some time $t$ in the optimization process are subsequently misclassified, or in other terms forgotten, at a time $t' > t$. We thus switch the focus from studying interactions between sequentially presented tasks to studying interactions between sequentially presented dataset examples during SGD optimization. Our starting point is to understand whether there exist examples that are consistently forgotten across subsequent training presentations and, conversely, examples that are never forgotten. We will call the latter unforgettable examples. We hypothesize that specific examples consistently forgotten between subsequent presentations, if they exist, must not share commonalities with other examples from the same task. We therefore analyze the proportion of forgettable/unforgettable examples for a given task and what effects these examples have on a model's decision boundary and generalization error.

The goal of our investigation is two-fold. First, we attempt to gain insight into the optimization process by analyzing interactions among examples during learning and their influence on the final decision boundary.

*Equal contribution. Correspondence: MT: mariya@cmu.edu, AS: alsordon@microsoft.com. †Work done while interning at Microsoft Research Montreal. Code available at https://github.com/mtoneva/example_forgetting
We are particularly interested in whether we can glean insight on the compressibility of a dataset, and thereby increase data efficiency without compromising generalization accuracy. It is a timely problem that has been the recent focus of few-shot learning approaches via meta-learning (Finn et al., 2017; Ravi & Larochelle, 2017). Second, we aim to characterize whether forgetting statistics can be used to identify "important" samples and detect outliers and examples with noisy labels (John, 1995; Brodley & Friedl, 1999; Sukhbaatar et al., 2014; Jiang et al., 2018).

Identifying important, or most informative, examples is an important line of work and was extensively studied in the literature. Techniques of note, among others, are predefined curricula of examples (Bengio & LeCun, 2007), self-paced learning (Kumar et al., 2010), and more recently meta-learning (Fan et al., 2017). These research directions usually define "hardness" or "commonality" of an example as a function of the loss on that particular example at some point during training (or possibly at convergence). They do not consider whether some examples are consistently forgotten throughout learning. Very recently, Chang et al. (2017) consider re-weighting examples by accounting for the variance of their predictive distribution. This is related to our definition of forgetting events, but the authors provide little analysis of the extent to which the phenomenon occurs in their proposed tasks. Our purpose is to study this phenomenon from an empirical standpoint and characterize its prevalence in different datasets and across different model architectures.

Our experimental findings suggest that: a) there exist a large number of unforgettable examples, i.e., examples that are never forgotten once learnt; those examples are stable across seeds and strongly correlated from one neural architecture to another; b) examples with noisy labels are among the most forgotten examples, along with images with "uncommon" features that are visually complicated to classify; c) training a neural network on a dataset where a very large fraction of the least forgotten examples have been removed still results in extremely competitive performance on the test set.

2 RELATED WORK

Curriculum Learning and Sample Weighting. Curriculum learning is a paradigm that favors learning along a curriculum of examples of increasing difficulty (Bengio et al., 2009). This general idea has found success in a variety of areas since its introduction (Kumar et al., 2010; Lee & Grauman, 2011; Schaul et al., 2015). Kumar et al. (2010) implemented their curriculum by considering easy the examples with a small loss. In our experiments, we empirically validate that unforgettable examples can be safely removed without compromising generalization. Zhao & Zhang (2015); Katharopoulos & Fleuret (2018) relate sample importance to the norm of its loss gradient with respect to the parameters of the network. Fan et al. (2017); Kim & Choi (2018); Jiang et al. (2018) learn a curriculum directly from data in order to minimize the task loss. Jiang et al. (2018) also study the robustness of their method in the context of noisy examples. This relates to a rich literature on outlier detection and removal of examples with noisy labels (John, 1995; Brodley & Friedl, 1999; Sukhbaatar et al., 2014; Jiang et al., 2018). We will provide evidence that noisy examples rank higher in terms of number of forgetting events.
Koh & Liang (2017) borrow influence functions from robust statistics to evaluate the impact of the training examples on a model's predictions.

Deep Generalization. The study of the generalization properties of deep neural networks when trained by stochastic gradient descent has been the focus of several recent publications (Zhang et al., 2016; Keskar et al., 2016; Chaudhari et al., 2016; Advani & Saxe, 2017). These studies suggest that the generalization error does not depend solely on the complexity of the hypothesis space. For instance, it has been demonstrated that over-parameterized models with many more parameters than training points can still achieve low test error (Huang et al., 2017; Wang et al., 2018) while being complex enough to fit a dataset with completely random labels (Zhang et al., 2016). A possible explanation for this phenomenon is a form of implicit regularization performed by stochastic gradient descent: deep neural networks trained with SGD have been recently shown to converge to the maximum margin solution in the linearly separable case (Soudry et al., 2017; Xu et al., 2018). In our work, we provide empirical evidence that generalization can be maintained when removing a substantial portion of the training examples and without restricting the complexity of the hypothesis class. This goes along the support vector interpretation provided by Soudry et al. (2017).

3 DEFINING AND COMPUTING EXAMPLE FORGETTING

Our general case study for example forgetting is a standard classification setting. Given a dataset $\mathcal{D} = (x_i, y_i)_i$ of observation/label pairs, we wish to learn the conditional probability distribution $p(y|x;\theta)$ using a deep neural network with parameters $\theta$. The network is trained to minimize the empirical risk $R = \frac{1}{|\mathcal{D}|} \sum_i L(p(y_i|x_i;\theta), y_i)$, where $L$ denotes the cross-entropy loss and $y_i \in \{1, \dots, k\}$. The minimization is performed using variations of stochastic gradient descent, starting from initial random parameters $\theta^0$, and by sampling examples at random from the dataset $\mathcal{D}$.

Forgetting and learning events. We denote by $\hat{y}_i^t = \arg\max_k p(y_{ik}|x_i;\theta^t)$ the predicted label for example $x_i$ obtained after $t$ steps of SGD. We also let $\text{acc}_i^t = \mathbb{1}_{\hat{y}_i^t = y_i}$ be a binary variable indicating whether the example is correctly classified at time step $t$. Example $i$ undergoes a forgetting event when $\text{acc}_i^t$ decreases between two consecutive updates: $\text{acc}_i^t > \text{acc}_i^{t+1}$. In other words, example $i$ is misclassified at step $t+1$ after having been correctly classified at step $t$. Conversely, a learning event has occurred if $\text{acc}_i^t < \text{acc}_i^{t+1}$. Statistics that will be of interest in the next sections include the distribution of forgetting events across examples and the first time a learning event occurs.

Classification margin. We will also be interested in analyzing the classification margin. Our predictors have the form $p(y_i|x_i;\theta) = \sigma(\beta(x_i))$, where $\sigma$ is a sigmoid (softmax) activation function in the case of binary (categorical) classification. The classification margin $m$ is defined as the difference between the logit of the correct class and the largest logit among the other classes, i.e. $m = \beta_k - \max_{k' \neq k} \beta_{k'}$, where $k$ is the index corresponding to the correct class.

Unforgettable examples. We qualify examples as unforgettable if they are learnt at some point and experience no forgetting events during the whole course of training: example $i$ is unforgettable if the first time it is learnt $t^*$ verifies $t^* < \infty$ and for all $k \geq t^*$, $\text{acc}_i^k = 1$. Note that, according to this definition, examples that are never learnt during training do not qualify as unforgettable.
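The quantities just defined are simple to compute from a per-example trace of predictions. Below is a minimal sketch of our own (not the authors' code) showing the classification margin $m$ and the counting of forgetting/learning events from a binary accuracy sequence.

```python
# A sketch of the Section 3 quantities: margin m and event counts.
import numpy as np

def classification_margin(logits, label):
    """m = logit of the correct class minus the largest other logit."""
    others = np.delete(logits, label)
    return logits[label] - others.max()

def count_events(acc_over_time):
    """acc_over_time: binary array, acc_i^t for one example across updates."""
    diffs = np.diff(acc_over_time.astype(int))
    forgetting = int((diffs == -1).sum())   # correct -> incorrect transitions
    learning = int((diffs == +1).sum())     # incorrect -> correct transitions
    # Index of the first learning event, or None if the example is never learnt.
    first_learnt = int(np.argmax(acc_over_time)) if acc_over_time.any() else None
    return forgetting, learning, first_learnt
```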
We refer to examples that have been forgotten at least once as forgettable.

3.1 PROCEDURAL DESCRIPTION AND EXPERIMENTAL SETTING

Following the previous definitions, monitoring forgetting events entails computing the prediction for all examples in the dataset at each model update, which would be prohibitively expensive. In practice, for each example, we subsample the full sequence of forgetting events by computing forgetting statistics only when the example is included in the current mini-batch; that is, we compute forgetting across presentations of the same example in subsequent mini-batches. This gives a lower bound on the number of forgetting events an example undergoes during training.

Algorithm 1: Computing forgetting statistics.
  initialize prev_acc_i = 0, for all i in D
  initialize forgetting T[i] = 0, for all i in D
  while not training done do
    B ~ D                       # sample a minibatch
    for example i in B do
      compute acc_i
      if prev_acc_i > acc_i then
        T[i] = T[i] + 1
      prev_acc_i = acc_i
    gradient update classifier on B
  return T

We train a classifier on a given dataset and record the forgetting events for each example when they are sampled in the current mini-batch. For the purposes of further analysis, we then sort the dataset's examples based on the number of forgetting events they undergo. Ties are broken at random when sampling from the ordered data. Samples that are never learnt are considered forgotten an infinite number of times for sorting purposes. Note that this estimate of example forgetting is computationally expensive; see Sec. 6 for a discussion of a cheaper method.

We perform our experimental evaluation on three datasets of increasing complexity: MNIST (LeCun et al., 1999), permuted MNIST (a version of MNIST that has the same fixed permutation applied to the pixels of all examples), and CIFAR-10 (Krizhevsky, 2009). We use various model architectures and training schemes that yield test errors comparable with the current state-of-the-art on the respective datasets. In particular, the MNIST-based experiments use a network comprised of two convolutional layers followed by a fully connected one, trained using SGD with momentum and dropout. This network achieves 0.8% test error. For CIFAR-10, we use a ResNet with cutout (DeVries & Taylor, 2017) trained using SGD and momentum with a particular learning rate schedule. This network achieves a competitive 3.99% test error. For full experimentation details, see the Supplementary.

[Figure 1: Histograms of forgetting events on (from left to right) MNIST, permuted MNIST and CIFAR-10. Insets show the zoomed-in y-axis.]

4 CHARACTERIZING EXAMPLE FORGETTING

Number of forgetting events. We estimate the number of forgetting events of all the training examples for the three different datasets (MNIST, permuted MNIST and CIFAR-10) across 5 random seeds. The histograms of forgetting events computed from one seed are shown in Figure 1. There are 55,012, 45,181 and 15,628 unforgettable examples common across 5 seeds; they represent respectively 91.7%, 75.3%, and 31.3% of the corresponding training sets. Note that datasets with less complexity and diversity of examples, such as MNIST, seem to contain significantly more unforgettable examples.
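Here is one way Algorithm 1 might look in PyTorch; a minimal sketch under our own assumptions (in particular, a DataLoader that yields each example's dataset index alongside the batch), not the authors' released code.

```python
# A PyTorch sketch of Algorithm 1; the (idx, x, y) loader format is assumed.
import torch

def train_with_forgetting_stats(model, loader, optimizer, num_examples, epochs):
    criterion = torch.nn.CrossEntropyLoss()
    prev_acc = torch.zeros(num_examples, dtype=torch.bool)
    forgetting = torch.zeros(num_examples, dtype=torch.long)
    ever_learnt = torch.zeros(num_examples, dtype=torch.bool)
    for _ in range(epochs):
        for idx, x, y in loader:             # idx: positions of x in the dataset
            logits = model(x)
            acc = logits.argmax(dim=1) == y  # acc_i for this presentation
            forgetting[idx] += (prev_acc[idx] & ~acc).long()  # 1 -> 0 transition
            prev_acc[idx] = acc
            ever_learnt[idx] |= acc
            loss = criterion(logits, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    # Never-learnt examples count as forgotten infinitely often for sorting.
    forgetting[~ever_learnt] = torch.iinfo(torch.long).max
    return forgetting
```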
Permuted MNIST exhibits a complexity balanced between MNIST (easiest) and CIFAR-10 (hardest). This finding seems to suggest a correlation between forgetting statistics and the intrinsic dimension of the learning problem, as recently formalized by Li et al. (2018).

Stability across seeds. To test the stability of our metric with respect to the variance generated by stochastic gradient descent, we compute the number of forgetting events per example for 10 different random seeds and measure their correlation. From one seed to another, the average Pearson correlation is 89.2%. When randomly splitting the 10 different seeds into two sets of 5, the cumulated number of forgetting events within those two sets shows a high correlation of 97.6%. We also ran the original experiment on 100 seeds to devise 95% confidence bounds on the average (over 5 seeds) number of forgetting events per example (see Appendix 13). The confidence interval of the least forgotten examples is tight, confirming that examples with a small number of forgetting events can be ranked confidently.

Forgetting by chance. In order to quantify the possibility of forgetting occurring by chance, we additionally analyze the distribution of forgetting events obtained under the regime of random update steps instead of the true SGD steps. In order to maintain the statistics of the random updates similar to those encountered during SGD, random updates are obtained by shuffling the gradients produced by standard SGD on a main network (more details are provided in Appendix 12). We report the histogram of chance forgetting events in Supplementary Figure 13: examples are forgotten by chance a small number of times, at most twice and most of the time less than once. The observed stability across seeds, low number of chance forgetting events and the tight confidence bounds suggest that it is unlikely for the ordering produced by the metric to be the by-product of another unrelated random cause.

First learning events. We investigate whether unforgettable and forgettable examples need to be presented different numbers of times in order to be learnt for the first time (i.e. for the first learning event to occur, as defined in Section 3). The distributions of the presentation numbers at which first learning events occur across all datasets can be seen in Supplemental Figure 8. We observe that, while both unforgettable and forgettable sets contain many examples that are learnt during the first 3-4 presentations, the forgettable examples contain a larger number of examples that are first learnt later in training. The Spearman rank correlation between the first learning event presentations and the number of forgetting events across all training examples is 0.56, indicating a moderate relationship.
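The seed-stability checks described above reduce to a few correlation computations; a small sketch of our own (not the paper's analysis code) follows, assuming a matrix of per-seed forgetting counts.

```python
# A sketch of the stability analysis: mean pairwise Pearson correlation
# between seeds, and correlation between two cumulated 5-seed splits.
import numpy as np
from scipy.stats import pearsonr

def seed_stability(counts):
    """counts: array of shape [num_seeds, num_examples]."""
    num_seeds = len(counts)
    pairwise = [pearsonr(counts[a], counts[b])[0]
                for a in range(num_seeds) for b in range(a + 1, num_seeds)]
    half = num_seeds // 2
    split_corr = pearsonr(counts[:half].sum(0), counts[half:].sum(0))[0]
    return np.mean(pairwise), split_corr
```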
Misclassification margin. The definition of forgetting events is binary and as such fairly crude compared to more sophisticated estimators of example relevance (Zhao & Zhang, 2015; Chang et al., 2017). In order to qualify its validity, we compute the misclassification margin of forgetting events. The misclassification margin of an example is defined as the mean classification margin (defined in Section 3) over all its forgetting events, a negative quantity by definition. The Spearman rank correlation between an example's number of forgetting events and its mean misclassification margin is -0.74 (computed over 5 seeds; see the corresponding 2D-histogram in Supplemental Figure 9). These results suggest that examples which are frequently forgotten have a large misclassification margin.

Visual inspection. We visualize some of the unforgettable examples in Figure 2 along with some examples that have been most forgotten in the CIFAR-10 dataset. Unforgettable samples are easily recognizable and contain the most obvious class attributes or centered objects, e.g., a plane on a clear sky. On the other hand, the most forgotten examples exhibit more ambiguous characteristics (as in the center image, a truck on a brown background) that may not align with the learning signal common to other examples from the same class.

[Figure 2: Pictures of unforgettable (Top) and forgettable examples (Bottom) of every CIFAR-10 class. Forgettable examples seem to exhibit peculiar or uncommon features. Additional examples are available in Supplemental Figure 15.]

Detection of noisy examples. We further investigate the observation that the most forgettable examples seem to exhibit atypical characteristics. We would expect that if highly forgettable examples have atypical class characteristics, then noisily-labeled examples will undergo more forgetting events. We randomly change the labels of 20% of CIFAR-10 and record the number of forgetting events of both the noisy and regular examples through training. The distributions of forgetting events across noisy and regular examples are shown in Figure 3. We observe that the most forgotten examples are those with noisy labels and that no noisy examples are unforgettable. We also compare the forgetting events of the noisy examples to that of the same set of examples with original labels and observe a much higher degree of forgetting in the noisy case. The results of these synthetic experiments support the hypothesis that highly forgettable examples exhibit atypical class characteristics.

[Figure 3: Distributions of forgetting events across training examples in CIFAR-10 when 20% of labels are randomly changed. Left: Comparison of forgetting events between examples with noisy and original labels. The most forgotten examples are those with noisy labels. No noisy examples are unforgettable. Right: Comparison of forgetting events between examples with noisy labels and the same examples with original labels. Examples exhibit more forgetting when their labels are changed.]
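A sketch of the label-corruption step used in this synthetic experiment; this is our own illustration, and the choice to always reassign to a different class (rather than uniformly over all classes) is an assumption.

```python
# A sketch of the 20%-label-noise setup: reassign a fifth of the labels.
import numpy as np

def corrupt_labels(labels, num_classes=10, frac=0.2, seed=0):
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    noisy_idx = rng.choice(len(labels), size=int(frac * len(labels)),
                           replace=False)
    # Draw a different label uniformly among the other classes (assumption).
    shift = rng.integers(1, num_classes, size=len(noisy_idx))
    labels[noisy_idx] = (labels[noisy_idx] + shift) % num_classes
    return labels, noisy_idx   # keep noisy_idx to compare the two groups
```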
4.1 CONTINUAL LEARNING SETUP

We observed that in harder tasks such as CIFAR-10, a significant portion of examples are forgotten at least once during learning. This leads us to believe that catastrophic forgetting may be observed, to some extent, even when considering examples coming from the same task distribution. To test this hypothesis, we perform an experiment inspired by the standard continual learning setup (McCloskey & Cohen, 1989; Kirkpatrick et al., 2017). We create two tasks by randomly sampling 10k examples from the CIFAR-10 training set and dividing them in two equally-sized partitions (5k examples each). We treat each partition as a separate "task" even though they should follow the same distribution. We then train a classifier for 20 epochs on each partition in an alternating fashion, while tracking performance on both partitions. The results are reported in Figure 4 (a). The background color represents which of the two partitions is currently used for training. We observe some forgetting of the second task when we only train on the first task (panel (a.2)). This is somewhat surprising as the two tasks contain examples from the same underlying distribution.

We contrast the results from training on random partitions of examples with ones obtained by partitioning the examples based on forgetting statistics (Figure 4 (b)). That is, we first compute the forgetting events for all examples based on Algorithm 1 and we create our tasks by sampling 5k examples that have zero forgetting events (named f0) and 5k examples that have non-zero forgetting events (named fN). We observe that examples that have been forgotten at least once suffer a more drastic form of forgetting than those included in a random split (compare (a.2) with (b.2)). In panels (b.3) and (c.2) we can observe that examples from task f0 suffer very mild forgetting when training on task fN. This suggests that examples that have been forgotten at least once may be able to "support" those that have never been forgotten. We observe the same pattern when we investigate the opposite alternating sequence of tasks in Figure 4 (b, right).

[Figure 4: Synthetic continual learning setup for CIFAR-10, with (a) random partitions and (b) partitioning by forgetting events. Background color in each column indicates the training partition; curves track performance on both partitions during interleaved training. Solid lines represent the average of 5 runs and dashed lines represent the standard error. The figure highlights that examples that have been forgotten at least once can "support" those that have never been forgotten, as shown in (c.2) and (b.3).]

5 REMOVING UNFORGETTABLE EXAMPLES

As shown in the previous section, learning on examples that have been forgotten at least once minimally impacts performance on those that are unforgettable. This appears to indicate that unforgettable examples are less informative than others, and, more generally, that the more an example is forgotten during training, the more useful it may be to the classification task. This seems to align with the observations in Chang et al. (2017), where the authors re-weight training examples by accounting for the variance of their predictive distribution. Here, we test whether it is possible to completely remove a given subset of examples during training.

[Figure 5: Left: Generalization performance on CIFAR-10 of ResNet18 where increasingly larger subsets of the training set are removed (mean +/- std error of 5 seeds). When the removed examples are selected at random, performance drops very fast. Selecting the examples according to our ordering can reduce the training set significantly without affecting generalization. The vertical line indicates the point at which all unforgettable examples are removed from the training set. Right: Difference in generalization performance when contiguous chunks of 5000 increasingly forgotten examples are removed from the training set. Most important examples tend to be those that are forgotten the most.]
In Fig. 5 (Left), we show the evolution of the generalization performance in CIFAR-10 when we artificially remove examples from the training dataset. We choose the examples to remove by increasing number of forgetting events. Each point in the figure corresponds to retraining the model from scratch on an increasingly smaller subset of the training data (with the same hyper-parameters as the base model). We observe that when removing a random subset of the dataset, performance rapidly decreases. Comparatively, by removing examples ordered by number of forgetting events, 30% of the dataset can be removed while maintaining comparable generalization performance as the base model trained on the full dataset, and up to 35% can be removed with marginal degradation (less than 0.2%). The results on the other datasets are similar: a large fraction of training examples can be ignored without hurting the final generalization performance of the classifiers (Figure 6).

In Figure 5 (Right), we show the evolution of the generalization error when we remove from the dataset 5,000 examples with increasing forgetting statistics. Each point in the figure corresponds to the generalization error of a model trained on the full dataset minus 5,000 examples, as a function of the average number of forgetting events in those 5,000 examples. As can be seen, removing the same number of examples with increasingly more forgetting events results in worse generalization for most of the curve. It is interesting to notice the rightmost part of the curve moving up, suggesting that some of the most forgotten examples actually hurt performance. Those could correspond to outliers or mislabeled examples (see Sec. 4). Finding a way to separate those points from very informative ones is an ancient but still active area of research (John, 1995; Jiang et al., 2018).

Support vectors. Various explanations of the implicit generalization of deep neural networks (Zhang et al., 2016) have been offered: flat minima generalize better and stochastic gradient descent converges towards them (Hochreiter & Schmidhuber, 1997; Kleinberg et al., 2018), gradient descent protects against overfitting (Advani & Saxe, 2017; Tachet et al., 2018), and deep networks' structure biases learning towards simple functions (Neyshabur et al., 2014; Perez et al., 2018). But it remains a poorly understood phenomenon. An interesting direction of research is to study the convergence properties of gradient descent in terms of maximum margin classifiers. It has been shown recently (Soudry et al., 2017) that on separable data, a linear network will learn such a maximum margin classifier. This supports the idea that stochastic gradient descent implicitly converges to solutions that maximally separate the dataset, and additionally, that some data points are more relevant than others to the decision boundary learnt by the classifier. Those points play a part equivalent to support vectors in the support vector machine paradigm.
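The removal experiment itself is a simple selection step on top of the forgetting counts; a sketch of our own (not the paper's code) follows, including the random tie-breaking mentioned in Section 3.1.

```python
# A sketch of the removal experiment: drop the least-forgotten fraction of
# the training set, then retrain from scratch on the remainder.
import numpy as np

def keep_most_forgotten(forgetting_counts, remove_frac):
    """Return indices of examples to keep, removing the least forgotten first.
    Ties (e.g. the many zero-forgetting examples) are broken at random."""
    n = len(forgetting_counts)
    tie_break = np.random.rand(n)
    order = np.lexsort((tie_break, forgetting_counts))  # ascending forgetting
    return order[int(remove_frac * n):]

# e.g. keep = keep_most_forgotten(counts, 0.30); retrain on dataset[keep]
```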
Support vectors. Various explanations of the implicit generalization of deep neural networks (Zhang et al., 2016) have been offered: flat minima generalize better and stochastic gradient descent converges towards them (Hochreiter & Schmidhuber, 1997; Kleinberg et al., 2018), gradient descent protects against overfitting (Advani & Saxe, 2017; Tachet et al., 2018), deep networks' structure biases learning towards simple functions (Neyshabur et al., 2014; Perez et al., 2018). But it remains a poorly understood phenomenon. An interesting direction of research is to study the convergence properties of gradient descent in terms of maximum margin classifiers. It has been shown recently (Soudry et al., 2017) that on separable data, a linear network will learn such a maximum margin classifier. This supports the idea that stochastic gradient descent implicitly converges to solutions that maximally separate the dataset, and additionally, that some data points are more relevant than others to the decision boundary learnt by the classifier. Those points play a part equivalent to support vectors in the support vector machine paradigm. Our results confirm that a significant portion of training data points have little to no influence on the generalization performance when the decision function is learnt with SGD. Forgettable training points may be considered as analogs to support vectors, important for the generalization performance of the model. The number of forgetting events of an example is a relevant metric to detect such support vectors. It also correlates well with the misclassification margin (see Sec. 4), which is a proxy for the distance to the decision boundary.

Figure 6: Decrease in generalization performance when fractions of the training sets are removed. When the subsets are selected appropriately, performance is maintained after removing up to 30% of CIFAR-10, 50% of permuted MNIST, and 80% of MNIST. The vertical black line indicates the point at which all unforgettable examples are removed from CIFAR-10. Right is a zoomed-in version of Left.

Figure 7: Left: Ranking of examples by forgetting events stabilizes after 75 epochs in CIFAR-10. Middle: Precision and recall of retrieving the unforgettable examples of ResNet18, using the example ordering of a simpler convolutional neural network. Right: Generalization performance on CIFAR-10 of a WideResNet using the example ordering of ResNet18.

Intrinsic dataset dimension. As mentioned above, the datasets we study have various fractions of unforgettable examples (91.7% for MNIST, 75.3% for permuted MNIST and 31.3% for CIFAR-10). We also see in Figure 6 that performance on those datasets starts to degrade at different fractions of removed examples: the number of support vectors varies from one dataset to the other, based on the complexity of the underlying data distribution. If we assume that we are in fact detecting analogs of support vectors, we can put these results in perspective with the intrinsic dataset dimension defined by Li et al. (2018) as the codimension in the parameter space of the solution set: for a given architecture, the higher the intrinsic dataset dimension, the larger the number of support vectors, and the fewer the number of unforgettable examples.

6 TRANSFERABLE FORGETTING EVENTS

Forgetting events rely on training a given architecture, with a given optimizer, for a given number of epochs. We investigate to what extent the forgetting statistics of examples depend on those factors.

Throughout training. We compute the Spearman rank correlation between the ordering obtained at the end of training (200 epochs) and the ordering after various numbers of epochs. As seen in Fig. 7 (Left), the ordering is very stable after 75 epochs, and we found a reasonable number of epochs to get a good correlation to be 25 (see the Supplementary Materials for precision-recall plots).
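The stability check above amounts to a rank correlation between orderings. A minimal sketch, assuming a hypothetical array of cumulative per-example forgetting counts recorded at each epoch (not something the paper's code exposes directly):

    import numpy as np
    from scipy.stats import spearmanr

    def ordering_stability(counts_by_epoch):
        """Spearman rank correlation between the forgetting-count ordering
        at each epoch and the ordering at the end of training."""
        counts_by_epoch = np.asarray(counts_by_epoch)  # (n_epochs, n_examples)
        final = counts_by_epoch[-1]
        return [spearmanr(c, final)[0] for c in counts_by_epoch]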
Between architectures. A limitation of our method is that it requires computing the ordering from a previous run. An interesting question is whether that ordering could be obtained from a simpler architecture than residual networks. We train a network with two convolutional layers followed by two fully connected ones (see the Supplementary for the full architecture) and compare the resulting ordering with the one obtained with ResNet18. Figure 7 (Middle) shows a precision-recall plot of the unforgettable examples computed with the residual network. We see a reasonably strong agreement between the unforgettable examples of the convolutional neural network and the ones of the ResNet18. Finally, we train a WideResNet (Zagoruyko & Komodakis, 2016) on truncated data sets using the example ordering from ResNet18. Using the same computing power (one Titan X GPU), ResNet18 requires 2 hours to train whereas WideResNet requires 8; estimating the forgetting statistics of WideResNet via ResNet18 can save up to 6 hours of training time if the estimate is accurate. We plot WideResNet's generalization performance using the ordering obtained by ResNet18 in Figure 7 (Right): the network still performs near optimally with 30% of the dataset removed. This opens up promising avenues of computing forgetting statistics with smaller architectures.

7 CONCLUSION AND FUTURE WORK

In this paper, inspired by the phenomenon of catastrophic forgetting, we investigate the learning dynamics of neural networks when training on single classification tasks. We show that catastrophic forgetting can occur in the context of what is usually considered to be a single task. Inspired by this result, we find that some examples within a task are more prone to being forgotten, while others are consistently unforgettable. We also find that forgetting statistics seem to be fairly stable with respect to the various characteristics of training, suggesting that they actually uncover intrinsic properties of the data rather than idiosyncrasies of the training schemes. Furthermore, the unforgettable examples seem to play little part in the final performance of the classifier, as they can be removed from the training set without hurting generalization. This supports recent research interpreting deep neural networks as max margin classifiers in the linear case. Future work involves understanding forgetting events better from a theoretical perspective, and exploring potential applications to other areas of supervised learning, such as speech or text, and to reinforcement learning, where forgetting is prevalent due to the continual shift of the underlying distribution.

8 ACKNOWLEDGMENTS

We acknowledge the anonymous reviewers for their insightful suggestions.
SyxkXUThiQ
Thorough experiments which prove there exist "support examples" in neural network training.
7: Good paper, accept
This paper studies the forgetting behavior of training examples during SGD. Empirically, it shows there are forgettable and unforgettable examples; unforgettable examples are like "support examples", and one can achieve similar performance by training only on these "support examples". The paper also shows this phenomenon is consistent across different network architectures.
Pros: This paper is written with high quality and is clearly presented. It is original in the sense that this is the first empirical study on the forgettability of examples during neural network training.
Comments and questions on the experiment details:
1. Is the dataset randomly shuffled after every epoch? One concern is that if the order is fixed, some of the examples will be unforgettable simply because the previous batches have similar examples, and training the model on the previous batches makes it good on some examples in the current batch.
2. It would be more interesting to also include datasets like CIFAR-100, which has more labels. The current datasets all have only 10 categories.
3. An additional figure can be provided which switches the order of training in Figure 4b, namely, starting with training on b.2.
Cons: Lack of insight. Subjectively, I usually expect empirical analysis papers to either come up with unexpected observations or provide guidance for practice. In my opinion, the findings of this work are within expectation, and there is a gap to practice.
Overall this paper is worth publishing for the systematic experiments which empirically verify that there are support examples in neural networks.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
B8mxkTzX2RY
ML_Reproducibility_Challenge/2021/Fall
2021
Replicating and Improving GAN2Shape Through Novel Shape Priors and Training Steps
["Alfred Nilsson", "Alessio Galatolo"]
SCOPE OF REPRODUCIBILITY
Pan et al. propose an unsupervised method named GAN2Shape that purportedly is able to recover 3D information stored in the weights of a pre-trained StyleGAN2 model, to produce 3D shapes from 2D images. We aim to reproduce the 3D shape recovery and identify its strengths and weaknesses.
METHODOLOGY
We re-implement the method proposed by Pan et al. with regard to 3D shape reconstruction, and extend their work. Our extensions include novel shape priors and two new training techniques. Our code is available at https://anonymous.4open.science/r/GAN-2D-to-3D-03EF. While the code-base relating to GAN2Shape was largely rewritten, many external dependencies, which the original authors relied on, had to be imported. The project used 189 GPU hours in total, mostly on a single Nvidia K80, T4 or P100 GPU, with a negligible number of runs on a Nvidia V100 GPU.
RESULTS
We replicate the results of Pan et al. on a subset of the LSUN Cat, LSUN Car and CelebA datasets and observe varying degrees of success. We perform several experiments and illustrate the successes and shortcomings of the method. Our novel shape priors improve the 3D shape recovery in certain cases where the original shape prior was unsuitable. Our generalized training approach shows initial promise but has to be confirmed with increased computational resources.
WHAT WAS EASY?
The original code is easily runnable on the correct machine type (Linux operating system and CUDA 9.2 compatible GPU) for the specific datasets used by the authors.
WHAT WAS DIFFICULT?
Porting the model to a new dataset, problem setting or a different machine type is far from trivial. The poor cohesion of the original code makes interpretation very difficult, which is why we took care to re-implement many parts of the code using the decoupling principle. The code depends on many external implementations which had to be made runnable, which caused a significant development bottleneck as we developed on Windows machines (contrary to the authors). The exact loss functions and the number of training steps were not properly reported in the original paper, which meant they had to be deduced from the authors' code. Certain calculations required advanced knowledge of light-transport theory, with which we had no familiarity, and had to be mimicked and could not be verified.
COMMUNICATION WITH THE ORIGINAL AUTHORS
We did not communicate with the original authors.
["Unsupervised Learning", "Deep Learning", "Generative Modeling", "Computer Vision", "3D Shape"]
Replicating and Improving GAN2Shape Through Novel Shape Priors and Training Steps
Anonymous Author(s)

Reproducibility Summary

Scope of Reproducibility
Pan et al. [2021] propose an unsupervised method named GAN2Shape that purportedly is able to recover 3D information stored in the weights of a pre-trained StyleGAN2 model, to produce 3D shapes from 2D images. We aim to reproduce the 3D shape recovery and identify its strengths and weaknesses.

Methodology
We re-implement the method proposed by Pan et al. [2021] with regard to 3D shape reconstruction, and extend their work. Our extensions include novel shape priors and two new training techniques. Our code is available at https://anonymous.4open.science/r/GAN-2D-to-3D-03EF. While the code-base relating to GAN2Shape was largely rewritten, many external dependencies, which the original authors relied on, had to be imported (all dependencies are declared in Section 3). The project used 189 GPU hours in total, mostly on a single Nvidia K80, T4 or P100 GPU, with a negligible number of runs on a Nvidia V100 GPU.

Results
We replicate the results of Pan et al. [2021] on a subset of the LSUN Cat, LSUN Car and CelebA datasets and observe varying degrees of success. We perform several experiments and illustrate the successes and shortcomings of the method. Our novel shape priors improve the 3D shape recovery in certain cases where the original shape prior was unsuitable. Our generalized training approach shows initial promise, but has to be confirmed with increased computational resources.

What was easy
The original code is easily runnable on the correct machine type (Linux operating system and CUDA 9.2 compatible GPU) for the specific datasets used by the authors.

What was difficult
Porting the model to a new dataset, problem setting or a different machine type is far from trivial. The poor cohesion of the original code makes interpretation very difficult, which is why we took care to re-implement many parts of the code using the decoupling principle. The code depends on many external implementations which had to be made runnable, which caused a significant development bottleneck as we developed on Windows machines (contrary to the authors). The exact loss functions and the number of training steps were not properly reported in the original paper, which meant they had to be deduced from the authors' code. Certain calculations required advanced knowledge of light-transport theory, with which we had no familiarity, and had to be mimicked and could not be verified.

Communication with original authors
We did not communicate with the original authors.

1 Introduction

Image generation has been a hot topic within generative models, as it represents an intuitive problem whose results are easily accessible to the public. One of the models that has received a lot of public attention is StyleGAN (Karras et al. [2019]). The network's architecture has been refined through multiple iterations in StyleGAN2 (Karras et al. [2020b]), StyleGAN2-ADA (Karras et al. [2020a]) and StyleGAN3 (Karras et al. [2021]). StyleGAN2 improves on the first version by, among other things, adding a projection method onto the latent space, which allows the inversion of an image into its latent representation.
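As an illustration of what such a projection can look like, here is a hypothetical sketch that inverts an image by optimizing w directly against a reconstruction loss. The actual StyleGAN2 projector is more elaborate (it uses a perceptual loss and noise regularization); G and the starting point w_init are assumed interfaces, not the real API.

    import torch
    import torch.nn.functional as F

    def invert(G, target, w_init, steps=500, lr=0.1):
        """Gradient-based projection of `target` onto the latent space of G."""
        w = w_init.clone().requires_grad_(True)
        opt = torch.optim.Adam([w], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            # a perceptual (LPIPS-style) loss is used in practice
            loss = F.mse_loss(G(w), target)
            loss.backward()
            opt.step()
        return w.detach()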
Methods like GAN2Shape (Pan et al. [2021]) aim at exploiting the information that is already stored in the generator of a pre-trained StyleGAN2 model to go beyond generating synthetic 2D images. In particular, this method aims to extract the 3D shape of the preeminent object in any image. This is intuitively possible due to the size of the training dataset of the StyleGAN2 model, and its ability to generate images of an object from multiple views and lighting directions by varying w. The authors of GAN2Shape use StyleGAN2 networks pre-trained on different dataset categories and five different feature extraction models to derive the shape information for images belonging to the same dataset categories. This method, compared to many others (Lunz et al. [2020], Henzler et al. [2019], Wu et al. [2015], Wang et al. [2019]), has the advantage of being completely unsupervised, and of not requiring a change in the training process of the classical 2D GAN.

In this article, we describe our replication of GAN2Shape (Pan et al. [2021]) and report mixed results. We perform several experiments and illustrate the successes and shortcomings of the method. Further, we extend the method, improving the original results in several cases.

2 Scope of reproducibility

The authors of GAN2Shape make the following claims:
1. Their framework does not require any kind of annotation, keypoints or assumption about the images.
2. Their framework recovers 3D shape with high precision on human faces, cats, cars, buildings, etc.
3. GAN2Shape utilizes the intrinsic knowledge of 2D GANs.
4. The 3D shape generated immediately allows for re-lighting and rotation of the image.

3 Methodology

Our initial intent of re-implementing the source code from the description of the paper had to be abandoned due to the lack of detailed information about some key points in the method. We therefore decided to follow a different approach, integrating both the details from the authors' code and the paper's description. While trying to always base our implementation on the paper's description, we found some parts (particularly the loss functions) that differed from the actual code and decided to follow the latter instead.

The resources we used were mainly the authors' code and the code and documentation of all the out-sourced methods the authors borrowed: StyleGAN2 Karras et al. [2020b] (code), Unsup3D Wu et al. [2020] (code), Semseg Zhao [2019] (code) and BiSeNet Yu et al. [2018, 2021] (code). The GPUs used were multiple and varied depending on availability: Nvidia Tesla K80, T4, V100, P100.

3.1 Model descriptions

To extract the implicit 3D knowledge of a pre-trained StyleGAN network, Pan et al. [2021] propose an elaborate scheme involving five different neural networks. Each network models a particular quantity corresponding to the view and lighting directions, the depth of the image, and the albedo. The View and Light (V and L, resp.) networks operate in an encoder-type manner, trying to obtain a low-dimensional vector representation of the camera view direction v and the direction of light l illuminating the object in the picture. The Depth and Albedo (D and A, resp.) networks utilize auto-encoder architectures (we refer to Tables 5-7 of the original paper, Pan et al. [2021], for the exact architectures) to obtain image-resolution depth maps d and diffuse reflections (albedo) a off the object's presumed surface.
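To fix ideas, the following is a minimal sketch of how the four image-to-component networks fit together. The tiny architectures below are stand-ins for those in Tables 5-7 of the original paper, and the output dimensionalities of v and l are our assumptions.

    import torch
    import torch.nn as nn

    class VecEncoder(nn.Module):
        """Stand-in for the View/Light nets: image -> low-dimensional vector."""
        def __init__(self, out_dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, out_dim))
        def forward(self, x):
            return self.net(x)

    class MapAutoencoder(nn.Module):
        """Stand-in for the Depth/Albedo nets: image -> image-resolution map."""
        def __init__(self, out_channels):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, out_channels, 4, stride=2, padding=1))
        def forward(self, x):
            return self.net(x)

    V, L = VecEncoder(out_dim=6), VecEncoder(out_dim=4)  # view, light (assumed dims)
    D, A = MapAutoencoder(1), MapAutoencoder(3)          # depth, albedo
    I = torch.randn(1, 3, 128, 128)
    v, l, d, a = V(I), L(I), D(I), A(I)
    # I_hat = renderer(v, l, d, a) would then reconstruct the image via the
    # neural renderer of Kato et al., denoted Phi in the text below.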
The real GAN knowledge extraction happens in the final network, the Offset encoder E, combined with the pre-trained StyleGAN2 generator, G. The offset encoder aims to learn a latent representation w of images with randomly sampled view and light directions, pseudo-samples. Paired with G, this allows the creation of new realistic samples \tilde{I}_i = G(w'_i) with new view and lighting directions, denoted projected samples. The projected samples then serve as extended training data, providing multiple view-light direction variations of the original image.

To use the components v, l, d and a to obtain a reconstructed image, the authors utilize a pretrained neural renderer developed by Kato et al. [2017], which we denote by \Phi.

3.1.1 Training Procedure

The training process of this method can be divided into 3 different steps, where the different networks involved are trained separately. In the original paper, these steps are done sequentially and for one image at a time, as shown in Figure 1, and each step is repeated multiple times before moving on to the following one. The result is a model that can predict the depth map for only one image. All of the networks are trained using the Adam optimization algorithm.

Prior pretraining. Before attempting to learn the true shape of an object, the depth network is initialized by pretraining it on a fixed prior shape. For this purpose, Pan et al. [2021] propose to use an ellipsoid shape as the shape prior. We utilized this ellipsoid prior to reproduce the results of Pan et al. [2021], and we extended their work by also evaluating two new priors.

Step 1 optimizes only the A network according to Equation 1. Given an input I, the first four networks predict their components v, l, d, a, and we obtain a reconstructed image \hat{I} = \Phi(v, l, d, a). Here, L_p is a neural network trained to predict similarities between images (Johnson et al. [2016]) and L_s is a term that encourages smoothness of the resulting depth maps (as described in Zhou et al. [2017]); we refer to our code for the weights \lambda_i.

L_{step1}(I, \hat{I}) = \|I - \hat{I}\|_1 + \lambda_s L_s(D(I)) + \lambda_p L_p(I, \hat{I})    (1)

Step 2 optimizes the E network according to Equation 2. Using the d and a components given by the last step 1 iteration, and random directions v'_i, l'_i, we generate N_p new pseudo-images I'_i. For each I'_i we predict \Delta w_i = E(I'_i), which serves as input to the StyleGAN generator network G to obtain the projected images \tilde{I}_i = G(w + \Delta w_i).

L_{step2}(I) = \frac{1}{N_p} \sum_{i=1}^{N_p} \left[ \|I'_i - G(w + E(I'_i))\|_1 + \lambda_1 \|E(I'_i)\|^2 \right]    (2)

Step 3 optimizes the L, V, D and A networks according to Equation 3. It consists in part of L_{step1}. The second part utilizes the projected samples from the last iteration of step 2. For each projected sample, \tilde{v}_i = V(\tilde{I}_i) and \tilde{l}_i = L(\tilde{I}_i) are calculated. Combined with d and a from the original image, they can be used to reconstruct each projected sample from the components, \bar{I}_i = \Phi(\tilde{v}_i, \tilde{l}_i, d, a).

L_{step3}(I, \bar{I}) = \frac{1}{N_p} \sum_{i=1}^{N_p} \left[ L_p(I, \bar{I}_i) + \|I - \bar{I}_i\|_1 \right] + L_{step1}(I, \hat{I}) + \lambda_2 L_s(D(I))    (3)

Stages. The steps are repeated for a number of stages. In each stage, the steps are trained for a different number of iterations (see Table 1 in Appendix A for details).

Figure 1: Schematic of the original training process: a single image goes through prior pre-training and then Steps 1-3, repeated for N_stages, yielding a single-image model that predicts the depth map for that one image.
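A minimal sketch of Equation (1), assuming the perceptual network L_p is given as a callable; the λ weights and the simple first-order smoothness term are placeholders standing in for the values and definitions in the authors' code.

    import torch

    def depth_smoothness(d):
        """L_s stand-in: penalize first-order differences of the depth map."""
        dx = (d[..., :, 1:] - d[..., :, :-1]).abs().mean()
        dy = (d[..., 1:, :] - d[..., :-1, :]).abs().mean()
        return dx + dy

    def step1_loss(I, I_hat, d, perceptual, lam_s=0.01, lam_p=1.0):
        """Equation (1): L1 reconstruction + smoothness of D(I) + perceptual term."""
        recon = (I - I_hat).abs().mean()
        return recon + lam_s * depth_smoothness(d) + lam_p * perceptual(I, I_hat)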
3.1.2 Novel Shape Priors

The first novel prior we consider is a masked box. Using the mask returned by the parsing model developed by Zhao et al. [2017], we extrude the relevant object from the background in a step-like manner. Improving on this idea, we also smooth the transition from the object to the background. This is done with three 2D convolutions, in which we convolve the masked box shape with an 11×11 filter of ones. Renormalizing the convolved shape, we obtain the prior of Figure 2c, denoted 'smoothed box'.

The last prior we tested is obtained by normalizing the score (or "confidence") that the parsing model assigns to each pixel. We use this confidence to project the object, i.e. a pixel that belongs to the category with higher confidence is projected farther. This prior is similarly smoothed by convolutions and is denoted 'confidence based'.

Figure 2 shows a visual representation of the prior shapes for an example image taken from the Celeba dataset.

Figure 2: Original vs. our novel shape priors, shown on the Celeba (face) dataset: (a) ellipsoid (original), (b) masked box, (c) masked and smoothed box, (d) confidence based.
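Both priors reduce to a few standard tensor operations. The sketch below is our own condensation; whether the renormalization happens after every convolution or only once at the end is an assumption on our part, and the mask and confidence tensors are assumed to come from the parsing models of Zhao et al. [2017] and Zhao [2019].

```python
# Sketch of the two novel priors from a (1, 1, H, W) parsing output.
import torch
import torch.nn.functional as F

def smoothed_box_prior(mask: torch.Tensor, n_convs: int = 3, k: int = 11):
    """Binary object mask -> extruded, smoothed box-shaped depth prior."""
    kernel = torch.ones(1, 1, k, k)                 # 11x11 filter of ones
    prior = mask.float()
    for _ in range(n_convs):                        # three 2D convolutions
        prior = F.conv2d(prior, kernel, padding=k // 2)
        prior = prior / prior.max()                 # renormalize to [0, 1]
    return prior

def confidence_prior(conf: torch.Tensor, n_convs: int = 3, k: int = 11):
    """Per-pixel confidence -> projection proportional to the normalized score."""
    prior = conf / conf.max()                       # normalize the scores
    kernel = torch.ones(1, 1, k, k)
    for _ in range(n_convs):                        # smooth like the box prior
        prior = F.conv2d(prior, kernel, padding=k // 2)
        prior = prior / prior.max()
    return prior
```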
3.2 Generalized Training Procedure

Motivated by our findings on the forgetting of previously seen images, explained at length in section 5.3.2 and appendix A.5, we propose an alternative training procedure that favors a general model M* usable for all images belonging to the same distribution as the training dataset D. We propose to pretrain the depth net D on all images first, instead of repeating the process for each image. We also modify Steps 1, 2 and 3: we greatly reduce the number of iterations given to a single image, breaking up the sequential training of the original method into a few iterations per example, and instead introduce N_e epochs and batch training to compensate, increasing resource utilization and training speed. To facilitate understanding of our modifications to the training procedure, we provide a schematic in Figure 3; it can be compared to the original shown in Figure 1.

Figure 3: Schematic of our new training process designed to favor generalization (training dataset → extract batch → prior pre-training (×N_B); extract batch → Step 1 (×N_B); extract image → Steps 2-3 (×batch size); repeated for ×N_e epochs → a general model predicting the depth map ∀ images in the category).
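In pseudocode closely following the schematic, the generalized procedure looks as follows; the model methods are hypothetical names for our trainer's operations, not an exact API.

```python
# Schematic sketch (not the exact implementation) of the generalized procedure:
# shared depth-prior pretraining, then N_e epochs of lightly iterated Steps 1-3.
def train_generalized(dataset, batches, n_epochs, iters_per_step, model):
    # Pretrain the shared depth network D on the prior for every image first
    for batch in batches(dataset):
        model.pretrain_depth_on_prior(batch)
    # Then cycle Steps 1-3 with only a few iterations per example
    for epoch in range(n_epochs):
        for batch in batches(dataset):
            for _ in range(iters_per_step[0]):      # Step 1 runs on the batch
                model.step1(batch)
            for image in batch:                     # Steps 2-3 stay per-image
                for _ in range(iters_per_step[1]):
                    model.step2(image)
                for _ in range(iters_per_step[2]):
                    model.step3(image)
    return model                                    # one general model M*
```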
3.3 Datasets

We aimed to reproduce the authors' results on LSUN Car, LSUN Cat (Yu et al. [2015]) and Celeba (Liu et al. [2015]). From these datasets, the authors selected a subset consisting of 10 images of cars, 216 images of cat faces, and 399 celebrity faces. Like the authors, we used RGB images with three color channels, resized to 128×128 pixel resolution. No further preprocessing was applied.

3.4 Hyperparameters

For replication purposes, the original hyperparameters of Pan et al. [2021] were used, but we also tried tuning some parameters that we believe are key to the method: the number of projected samples N_p for each image, and the number of epochs for pretraining the depth network. N_p was varied within {2, 4, 8, 16, 32}. In our tests we found the values 4, 8 and 8, respectively for the LSUN Car, LSUN Cat and Celeba datasets, to be the thresholds after which the improvements in image quality start greatly diminishing (see subsection A.8 in the Appendix for more details).

The number of epochs for the depth-network pretraining was varied within {100, 500, 1000, 2000}. This pretraining affects how irregular the depth-map predictions are. We believe that checking convergence against a loss threshold would be preferable, as the number of epochs selected by the authors (1000) is enough in most cases, but not in all. We attribute the irregularity of some of our results to this issue.

3.5 Experimental setup and code

For each dataset we ran our implementation of the framework of Pan et al. [2021] on the images selected by the authors; the procedure saves a checkpoint for each network. These checkpoints are later fed the original image to produce the generated result. The evaluation of the results was only qualitative, as none of the datasets we explored has a ground truth to compare against; we instead relied on manual evaluation.

Our code is available at https://anonymous.4open.science/r/GAN-2D-to-3D-03EF. Our results are available interactively under the docs folder.

3.6 Computational requirements

Most of the experiments were run on an Intel(R) Xeon(R) CPU @ 2.20GHz with 2 cores available and a Nvidia Tesla P100-PCIE-16GB. Since the framework described by Pan et al. [2021] is instance-specific, we report the average time for completing the projection of a single image: 96m 28s for an image in the Celeba dataset, 95m 43s for a LSUN Cat image, and 74m 32s for a LSUN Car image.

4 Results

The model correctly learned the shape and the texture of many images, while some examples were less successful than others. For example, the model converged to believable shapes for two of the cars in Figure 4, but the shape of the right-most car is debatable.

In the following sections we show the reconstructed depth map and 3D projection of some images chosen as representative of each dataset. All of the images that follow have the background cut away from the actual object; this was done only for ease of illustration and not for the actual training process, since the original authors do not mask the background in all cases. It is also difficult to illustrate the results fairly in 2D images, so we invite the reader to visit our website with interactive 3D plots.^5

^5 Due to the anonymization of the report, we instead refer to the html files under the docs folder in our code.

4.1 Results reproducing original paper

4.1.1 LSUN Car

We present the results on the LSUN Car dataset in Figure 4. Most features are projected in the right direction and show details that are correctly projected outward from the main object. This result supports all the claims made in section 2: we did not use any annotation or assumption about the images, many details were retrieved with high precision using the StyleGAN knowledge, and we were able to easily rotate the image (see the interactive web page).

4.1.2 LSUN Cat

The second experiment was executed on the LSUN Cat dataset. The results are slightly poorer compared to the LSUN Car dataset. The faces of the cats are properly recognized, but some details like the nose are not protruded from the rest of the face and generally lie on the same plane; see Figure 4. Some images present irregularities in the form of spikes and hills (d). The rotation (f) does not result in a completely natural image, as part of the face of the cat remains on the same plane. This experiment does not support claims 2 and 4 in some cases (e.g. Figures 4 (d) and (f) negate claims 2 and 4, respectively), while it does support claims 1 and 3 (section 2).

Additional results, such as those for the Celeba dataset, can be found in Appendix A.

Figure 4: LSUN Car and Cat, panels (a)-(f).

4.2 Results beyond the original paper

4.2.1 The effects of shape priors

No prior. To confirm our suspicion, briefly mentioned in 3.1.1, that this method would not work at all without a shape prior, we ran a test on one image from the LSUN Car dataset without any prior pretraining and with random initialization. The reconstruction objective is still satisfied very well, but the model converged to an extremely noisy depth map (see Figure 8 in Appendix A). This shows that the method needs a strong shape prior to guide it towards a reasonable shape.

Smoothed Box Prior. The first experiment tested the first of the prior shapes presented, the smoothed box prior. Figure 5 shows the smoothed box prior on the LSUN Cat and Celeba datasets, where it can be seen that it is better at capturing the structure of the nose and of the face in general (see Appendix A for more details).

Figure 5: Example results for two images from the LSUN Cat and Celeba datasets. For each example, the left-most figure corresponds to the ellipsoid prior and the right-most figure to the smoothed masked box prior.

4.2.2 Generalized Training Procedure

We demonstrate the results of our new training loop on LSUN Cat. We note again that the difference from the previous demonstration on LSUN Cat is that a single network D* was used to predict all of the images, as opposed to a different network D_i for each image I_i. The general model was trained on a limited subset of 30 images from LSUN Cat. It was trained for a modest 60 epochs, which results in approximately 60% of the weight updates per image of the original method. Figure 6 shows the projection of some images from the LSUN Cat dataset. One can observe that the method recognizes the general structure of the cat's face but also presents artefacts in some specific parts of the face, e.g. the second cat's cheek is projected farther than it should be, and similarly for the third cat's chin.

4.2.3 Improved initialization

Our final experiment is inspired by the observations reported in sections 5.3.1 and 3.4. We experiment with drastically increasing the number of pseudo-samples N_p from 16 to 128 for 10 short epochs, in which each training step is performed only once. We observe marginal improvement in the predicted shape (Figure 6) and larger improvements in the smaller details and features. See Appendix A.7 for further detail.

Training step 1 was not changed; it is allowed to converge in the first stage, as it does not involve the projected samples. See Table 2 in the appendix for an exact description of the number of iterations. All other parameters were left as in subsubsection 4.2.1, with the smoothed box prior. We experimented with two of the worst performers from the LSUN Cat dataset to evaluate whether this method could improve the results; see Figure 16. We applied the same idea to the general model described in sections 3.2 and 4.2.2 and saw improvements; see Figure 6. The results can be compared to Figure 14.

Figure 6: Depth map predictions, (a) reconstructed depth and (b) reconstructed 3D image, for a few image samples from the training set D ⊂ LSUN Cat, all using one and the same general model M* trained with initialization iterations.
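For concreteness, the schedule behind this experiment (cf. Table 2 in the appendix) can be written down directly; set_num_projected_samples is a hypothetical method name standing in for however the trainer exposes N_p.

```python
# Initialization-iteration schedule of Sec. 4.2.3, values taken from Table 2:
# many projected samples (N_p = 128) for 10 short epochs, then the usual 16.
schedule = (
    [{"iters": (700, 0, 0), "n_proj": 16}]            # stage 0: step 1 converges
    + [{"iters": (1, 1, 1), "n_proj": 128}] * 10      # initialization iterations
    + [{"iters": (1, 700, 600), "n_proj": 16}]        # stage 11
    + [{"iters": (200, 500, 400), "n_proj": 16}] * 3  # stages 12-14
)

def run(model, image, schedule):
    for stage in schedule:
        i1, i2, i3 = stage["iters"]
        model.set_num_projected_samples(stage["n_proj"])  # hypothetical setter
        for _ in range(i1):
            model.step1(image)
        for _ in range(i2):
            model.step2(image)
        for _ in range(i3):
            model.step3(image)
```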
5 Discussion

5.1 What was easy

The authors provide a clear specification of the Python package dependencies, as well as of other dependencies. Additionally, they provide scripts for easily downloading a select few datasets and pre-trained model weights. They precisely state how to execute the training script and how to run the model for evaluation. Note that this refers to running the original code; modifying and extending the code brought many difficulties, as explained in the next section.

5.2 What was difficult

The paper by Pan et al. [2021] did not contain enough information for a successful reimplementation. Many details had to be discerned or guessed from their code. Furthermore, the quality of said code does not allow for quick interpretation. For example, deducing the training loop and the number of iterations for each step was further complicated by the poor cohesion of the original code: the trainer script was heavily entangled with the model class, using class members of the model object to increment training steps, with nested function calls going back and forth between the trainer and model classes.

The components v, l, d and a were not enough to pass into the neural renderer to reconstruct an image. In reality, several quantities such as diffuse shading and texture needed to be computed and fed into the neural renderer, using concepts from light-transport theory that were not mentioned in the paper.

Another difficulty was the heavy reliance on external pre-trained neural networks. The neural renderer of Kato et al. [2017], in particular, posed several problems; the major one was incompatibility with Windows machines. To be able to develop on our personal machines, we had to make manual edits to the neural renderer script and to different CUDA files.

A further challenge with this method is the lack of objective quantitative metrics to evaluate the success of the models. One instead has to rely almost entirely on qualitatively gauging the shape reconstructions by eye.

5.3 Conclusions

5.3.1 Variability of the results

We observed that the method is very sensitive to various random factors, and identical runs may yield different results; see Figure 12. One factor may be the random initialization of the networks, but we do not believe it is the dominating one, since the depth network is pre-trained on a fixed prior shape in each run. Rather, as mentioned by Pan et al. [2021], the quality of the projected samples varies. Additionally, we only sample 8-16 different view-light directions in each step-2 iteration, which may be too few projected samples for a robust model. Since this sampling is random, increasing the number of samples should ensure the inclusion of meaningful view and light projections (experimental backing in Appendix A).

5.3.2 Catastrophic forgetting

We have observed that the instance-specific model forgets the previous training images (see Appendix A.5, Figure 13), and thus has no generalization capability. This is not necessarily a problem if one has the time and computational resources. It can also be argued that this is exactly what is intended with this model, and that generalization is up to the training dataset of the StyleGAN model. It does, however, limit the usefulness of the model. As an example, the training time for one 128×128 pixel RGB image using a Tesla K80 GPU was about 2.5 hours, which seems exceedingly costly for just one low-resolution depth map. We argue that a general model would be of more use. The ideal scenario would be a model D* trained on D that is able to accurately predict d_i = D*(I_i) ∀ I_i ∈ D, and even extends to unseen test data belonging to the same distribution as D. This discussion is what urged us to explore the altered training procedure of sections 3.2 and 4.2.2.
5.3.3 Final conclusions

We were able to replicate some of the results of Pan et al. [2021] on the LSUN Car, LSUN Cat and Celeba datasets. We identified several failure modes and limitations of the model, and backed them up with experimental evidence. Examples are the variability of and sensitivity to the projected samples, the heavy dependence on shape priors, and the computational costliness of the single-use model; none of these were adequately accounted for in the original paper.

We propose a new prior shape, the smoothed box prior, which has shown very promising results, especially for fine details and complex object structures. We propose a second prior shape, confidence-based, which has shown the best results on the face dataset. Finally, we suggest two new training procedures that produce better results and generalize better than the original model of Pan et al. [2021].

We recognize the limitations of this work: due to restricted computational power, we were only able to test the method on part of each dataset. For example, the cat dataset used by the authors contains more than 200 images, but we were able to test only a few of them. We speculate that some images in the dataset could yield better results than those reported here. However, we believe that a few badly projected images are enough to establish the ineffectiveness of the method in at least some particular cases.

Another limitation of our work is the lack of quantitative evaluation methods. The original authors also report results on the BFM benchmark (Paysan et al. [2009]), where metrics exist to evaluate the results accurately.

5.4 Future work

We speculate that it would be interesting to adapt the same method to StyleGAN3 (Karras et al. [2021]), where the network has been modified to support training with fewer samples, leaving open the question of whether the network still retains the information needed for GAN2Shape to work. Future work could also explore the use of our priors on datasets where the original method failed (e.g. the LSUN Horse dataset). We speculate that, since our priors capture the boundaries of the object very well (compared to the ellipsoid, where the boundaries are only used to position the origin), they could achieve better results on complex 3D objects whose shape cannot be simplified into an ellipse. A limitation of this method is that it does not use voxels but learns a height map. This disallows realistic shape reconstructions and more complex geometries with multiple z values for each (x, y) position, etc. Future work should investigate whether this model could be extended to predict voxels instead of height maps. Given our promising results with the generalizing trainer, which were obtained with only a few epochs of training, we believe that it should be explored further with more epochs and a larger training set.

References

Philipp Henzler, Niloy J Mitra, and Tobias Ritschel. Escaping plato's cave: 3d shape from adversarial rendering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9984-9993, 2019.

Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. 2016.
Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4401-4410, 2019.

Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Training generative adversarial networks with limited data. arXiv preprint arXiv:2006.06676, 2020a.

Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8110-8119, 2020b.

Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Alias-free generative adversarial networks. In Proc. NeurIPS, 2021.

Hiroharu Kato, Yoshitaka Ushiku, and Tatsuya Harada. Neural 3d mesh renderer. CoRR, abs/1711.07566, 2017. URL http://arxiv.org/abs/1711.07566.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.

Sebastian Lunz, Yingzhen Li, Andrew Fitzgibbon, and Nate Kushman. Inverse graphics gan: Learning to generate 3d shapes from unstructured 2d data. arXiv preprint arXiv:2002.12674, 2020.

Xingang Pan, Bo Dai, Ziwei Liu, Chen Change Loy, and Ping Luo. Do 2d gans know 3d shape? unsupervised 3d shape reconstruction from 2d image gans. 2021.

Pascal Paysan, Reinhard Knothe, Brian Amberg, Sami Romdhani, and Thomas Vetter. A 3d face model for pose and illumination invariant face recognition. In 2009 Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance, pages 296-301. IEEE, 2009.

Hanqing Wang, Jiaolong Yang, Wei Liang, and Xin Tong. Deep single-view 3d object reconstruction with visual hull embedding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 8941-8948, 2019.

Shangzhe Wu, Christian Rupprecht, and Andrea Vedaldi. Unsupervised learning of probably symmetric deformable 3d objects from images in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1-10, 2020.

Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1912-1920, 2015.

Changqian Yu, Jingbo Wang, Chao Peng, Changxin Gao, Gang Yu, and Nong Sang. Bisenet: Bilateral segmentation network for real-time semantic segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 325-341, 2018.

Changqian Yu, Changxin Gao, Jingbo Wang, Gang Yu, Chunhua Shen, and Nong Sang. Bisenet v2: Bilateral network with guided aggregation for real-time semantic segmentation. International Journal of Computer Vision, 129(11):3051-3068, 2021.

Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.

Hengshuang Zhao. semseg. https://github.com/hszhao/semseg, 2019.

Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid scene parsing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2881-2890, 2017.
Tinghui Zhou, Matthew Brown, Noah Snavely, and David G. Lowe. Unsupervised learning of depth and ego-motion from video. CoRR, abs/1704.07813, 2017. URL http://arxiv.org/abs/1704.07813.

A Appendix

A.1 Hyperparameters

Table 1: Specification of the different stages for the single-image model.

Stage    | Iterations/step
1        | [700, 700, 600]
2, 3, 4  | [200, 500, 400]

Table 2: Specification of the different stages for the single-image model with initialization iterations.

Stage       | Iterations/step | N_p
0           | [700, 0, 0]     | 16
1-10        | [1, 1, 1]       | 128
11          | [1, 700, 600]   | 16
12, 13, 14  | [200, 500, 400] | 16

Table 3: Specification of the iterations/step for the generalized model.

Epochs | Iterations/step | N_p
60     | [13, 22, 18]    | 16

Table 4: Specification of the iterations/step for the generalized model with initialization iterations.

Epochs | Iterations/step | N_p
10     | [13, 1, 1]      | 128
60     | [13, 22, 18]    | 16

Table 5: Hyperparameters for the general model with initialization iterations on the LSUN Cat dataset.

Parameter             | Value
n_epochs_prior        | 1000
n_epochs_generalized  | 70
n_epochs_init         | 10
n_init_iterations     | 8
batch_size            | 10
channel_multiplier    | 1
image_size            | 128
z_dim                 | 512
root_path             | data/cat
learning_rate         | 0.0001
view_scale            | 1
refinement_iterations | 1
n_proj_samples        | 16
rot_center_depth      | 1.0
fov                   | 10
tex_cube_size         | 2

We refer to our GitHub repository for a complete declaration of all hyperparameters for all datasets: https://anonymous.4open.science/w/GAN-2D-to-3D-03EF.

A.2 Additional replication results

A.2.1 Celeba

The third experiment, conducted on the Celeba dataset, shows that most of the faces are correctly portrayed, with the exception of the border of the face, e.g. the chin and forehead, which is sometimes not included in the projection (see Figure 7 (b)). We also found that the method does not behave well with faces viewed from the side (see Figure 7 (c)): the face still gets a projection as if it were viewed from the front. As a consequence, the rotation of side faces does not result in a good image. This experiment supports claims 1-4 (section 2) only for some faces, and claims 1 and 3 for those viewed from the side.

Figure 7: Celeba, panels (a)-(c).

A.3 Effects of shape priors

Figure 8 shows the effects of random initialization of the depth network.

Figure 8: Results with no shape prior: (a) textured shape, (b) 3D depth map, (c) reconstructed image.

Figure 9 shows the results on the first car, where it can be observed that our prior is even better than the ellipsoid at capturing fine details such as the side mirror.

Figure 9: Ellipsoid prior (top row) vs. the smoothed masked box prior (bottom row): (a, d) textured shape, (b, e) 3D depth, (c, f) 2D depth colormap.

Confidence-Based Prior. Another experiment focused on the performance of the second prior we presented, the confidence-based prior. Figure 10 shows some results on the datasets considered in this paper. The results are most promising on the Celeba dataset, where the image of a face is correctly projected even when viewed from the side.

Figure 10: Results with the confidence-based prior: (a) LSUN Car, (b) LSUN Cat, (c) Celeba.

Figure 11: Results for a few other images from the LSUN Cat dataset, for the ellipsoid (left) and smoothed masked box (right) priors.

A.4 Variability of identical runs

Figure 12: Several runs with identical configuration, panels (a)-(c).
A.5 Catastrophic forgetting

When the training process is complete for one image I_t, we have confirmed that the model M_t = {V, L, D, A}_t is able to construct a believable depth map (subsection 4.1). However, when training continues to the next image I_{t+1} and M_{t+1} is obtained, we have observed that the ability to predict the depth map of the previous image deteriorates, and the problem gets worse with an increasing time discrepancy between the model and the image. In other words, the depth network D_t at training step t is only usable for predicting the depth map d_t = D_t(I_t), and so suffers from catastrophic forgetting of the previous images. This is illustrated in Figure 13.

The training time for one 128×128 pixel RGB image using a Tesla K80 GPU was about 2.5 hours, which seems exceedingly costly for just one low-resolution depth map.

Figure 13: Depth map predictions (a) M_3(I_3), (b) M_3(I_2), (c) M_3(I_1) for a few image samples from the LSUN Car dataset, illustrating catastrophic forgetting for the model M.
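A check of this kind can be scripted in a few lines; the sketch below is illustrative (the function names are ours, not the repository's API) and simply records how each instance-specific model M_t predicts the depth of all images seen so far.

```python
# Hedged sketch of the experiment behind Figure 13: train image by image and
# re-evaluate every earlier image with the latest instance-specific model M_t.
def forgetting_check(train_on_image, predict_depth, images):
    depth_maps = []
    for t, I_t in enumerate(images):
        model_t = train_on_image(I_t)          # instance-specific model M_t
        # Predict the depth of the current and of all previous images with M_t;
        # the off-diagonal predictions degrade as t - s grows
        depth_maps.append([predict_depth(model_t, I_s)
                           for I_s in images[: t + 1]])
    return depth_maps
```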
A.6 Additional generalized training results

Figure 14: Depth map predictions, (a) reconstructed depth and (b) reconstructed 3D image, for a few image samples from the training set D ⊂ LSUN Cat, all using one and the same general model M*.

Figure 15: Depth map predictions for unseen image samples {I_199, I_201} ∉ D from the LSUN Cat dataset: (a) using the general model M*, (b) using the instance-specific model M_last.

A.7 Initialization iteration results

The observations of sections 5.3.1 and 3.4 can be condensed into two main points to form a hypothesis. Please note that our limited computational resources meant that we could not perform rigorous experimentation to confirm these observations with a large number of samples, so this section should be viewed as a speculative experiment.

- The initial few training iterations can be viewed as an initialization of the weights, which depends on which projected samples are generated by the StyleGAN2 model.
- The "features" (i.e. peaks and valleys) of the depth-map predictions do not qualitatively change with increasing iterations, but remain fixed except in size (i.e. the height of the peaks).

If one accepts these claims, then it is clear that the first few iterations determine the success of the shape reconstruction. That is why we experiment with drastically increasing the number of pseudo-samples during the first iterations. This reduces the bias of the initialization and reduces the relative impact that a poor projected sample generated by the GAN has on the model weights. Specifically, we increase the number of projected samples N_p from 16 to 128 for 10 short epochs, in which each training step is performed only once.

Ideally, one would of course permanently increase N_p, but at extreme cost in terms of training time. This method only added ∼4 minutes of training time using a Tesla T4 GPU.

Figure 16: Results for the worst performers of the single-image model using the smoothed box prior, from the LSUN Cat dataset: original initialization (top row, panels (a)-(b)) and using initialization iterations (bottom row, panels (c)-(d)). The leftmost cat saw the most drastic changes. While the result is a "spiky" depth map, we argue that the general shape bears a better resemblance to a cat, and less to a square box, than in the original initialization. The rightmost cat saw some improvement in details such as the ears and the mouth region.

A.8 Hyperparameter tuning

We found that N_p correlates with the quality of the predicted shapes. The trend tends to be that more is better, but with diminishing returns. The biggest benefit of a large N_p is that strange artefacts are less likely to persist. It is difficult to pinpoint an acceptable threshold for N_p, as it varies between datasets and even between images. We therefore believe a good compromise is to perform a few initialization iterations, as described in section 4.2.3, with a large N_p (i.e. 128) and then continue training with a lower number according to the aforementioned thresholds.

To illustrate the results when varying the number of projected samples N_p, we present the results on the LSUN Car and Celeba datasets. In Figure 18, the first two cars (corresponding to a low N_p) have more irregular surfaces and one has a large spike, while the third is more regular. The same is observed for the Celeba faces in Figure 17, where the first face (corresponding to a low N_p) has significant irregularities across the face. As described in subsubsection 5.3.1, we attribute this phenomenon to the lower relative impact of sampling poor view-light projections as N_p grows.

Figure 17: Face 1 when trained with 4, 8, 16 and 32 (from left to right) projected samples.

Figure 18: Car 1 when trained with 2, 4 and 8 (from left to right) projected samples.
rMzxjieWUW9
Good reproducibility study, writing and structure need to be improved
6: Marginally above acceptance threshold
This paper aims to reproduce "Do 2D GANs Know 3D Shape? Unsupervised 3D Shape Reconstruction from 2D Image GANs" by Pan et al. The authors provide a complete and clear summary of the work that has been done. They also state clearly which of the research claims made by Pan et al. are subject to this reproducibility study. The authors decide to re-implement the entire code-base, relying on external dependencies where necessary. Their initial attempt to write the code from scratch, relying only on the documentation in the original paper, had to be abandoned due to a lack of details. The code of the authors seems well enough documented, with clear instructions on how to run it. The authors try out additional hyperparameters that have not been reported by the original authors. They additionally experiment with the number of projected samples and the number of epochs for the pre-training. The authors did not communicate with the original authors. Clarifying the different implementation of the loss function compared to the paper would have been helpful. Despite the reproduction effort, the authors also go beyond the original work, trying out different shape priors and introducing a generalized training procedure. The motivation for the novel shape priors is missing. Nonetheless, this additional experiment is helpful in understanding the original work. Their newly proposed 3D shape priors seem to improve shape reconstruction in some cases. The proposal for a generalized training structure seems reasonable to improve generalization and is backed up with their experiments on catastrophic forgetting. The improvement achieved by their improved initialization seems debatable. The authors clearly discuss which parts of the original paper could be reproduced and which could not. The paper provides useful experiments to validate the research claims by Pan et al. They carry out many additional experiments which are useful. The structure of the paper is confusing. I urge the authors to improve the writing and especially the structure of the paper, avoiding heavy cross-referencing between results, earlier sections, appendix, and conclusions, which makes it very hard to follow.
General remarks:
- The paper contains many small experiments that would benefit from being better connected.
- The structure of the paper could be improved by first presenting the experiments, results, and conclusions of the reproduction, and discussing the authors' own experiments, results, and conclusions in the following section. This avoids references to findings later in the paper (see for example line 112). Generally, references to appendices and different sections are hard to follow and very confusing to the reader.
- The reference to 3.1.1 in line 173 does not contain the hypothesis "...that this method would not work at all without a shape prior...".
- When referring to the appendix, the exact subsection should be mentioned; otherwise, it is very hard to follow what the authors refer to, e.g. line 178.
- Generally, when referring to observations in different sections, it would be helpful to repeat them briefly, e.g. see section 4.2.3, line 189.
- Appendix A.4 does not seem to contain any content.
- The Confidence-Based Prior results are not referenced in the paper and can only be found in the appendix.
- When claiming superior results, it would be helpful to have reference images next to the improved ones to compare with directly. Figure 10.
Grammatical issues:
- use either "pre-training" or "pretraining" consistently
- we observe see marginal -> we observe marginal, line 191
- and larger improves in -> and larger improvements in, line 191
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Replicating and Improving GAN2Shape Through Novel Shape Priors and Training Steps ### Paper Abstract SCOPE OF REPRODUCIBILITY Pan et al. propose an unsupervised method named GAN2Shape that purportedly is able to recover 3D information stored in the weights of a pre-trained StyleGAN2 model, to produce 3D shapes from 2D images. We aim to reproduce the 3D shape recovery and identify its strengths and weaknesses. METHODOLOGY We re-implement the method proposed by Pan et al. with regards to 3D shape reconstruction, and extend their work. Our extensions include novel prior shapes and two new training techniques. Our code is available at https://anonymous.4open.science/r/GAN-2D-to-3D-03EF. While the code-base relating to GAN2Shape was largely rewritten, many external dependencies, which the original authors relied on, had to be imported. The project used 189 GPU hours in total, mostly using a single Nvidia K80, T4 or P100 GPU, and a negligible number of runs on a Nvidia V100 GPU. RESULTS We replicate the results of Pan et al. on a subset of the LSUN Cat, LSUN Car and CelebA datasets and observe varying degrees of success. We perform several experiments and illustrate the successes and shortcomings of the method. Our novel shape priors improve the 3D shape recovery in certain cases where the original shape prior was unsuitable. Our generalized training approach shows initial promise but has to be confirmed with increased computational resources. WHAT WAS EASY? The original code is easily runnable on the correct machine type (Linux operating system and CUDA 9.2 compatible GPU) for the specific datasets used by the authors. WHAT WAS DIFFICULT? Porting the model to a new dataset, problem setting or a different machine type is far from trivial. The poor cohesion of the original code makes interpretation very difficult, and that is why we took care to re-implement many parts of the code using the decoupling principle. The code depends on many external implementations which had to be made runnable, which caused a significant development bottleneck as we developed on Windows machines (contrary to the authors). The exact loss functions and the number of training steps were not properly reported in the original paper, which meant it had to be deduced from their code. Certain calculations required advanced knowledge of light-transport theory, which had no familiarity to us, and had to be mimicked and could not be verified. COMMUNICATION WITH THE ORIGINAL AUTHORS We did not communicate with the original authors. ### Paper Keywords ["Unsupervised Learning", "Deep Learning", "Generative Modeling", "Computer Vision", "3D Shape"] ### Paper Content Replicating and Improving GAN2Shape Through Novel ShapePriors and Training StepsAnonymous Author(s)AffiliationAddressemailReproducibility Summary 1Scope of Reproducibility 2Pan et al. [2021] propose an unsupervised method named GAN2Shape that purportedly is able to recover 3D information 3stored in the weights of a pre-trained StyleGAN2 model, to produce 3D shapes from 2D images. We aim to reproduce 4the 3D shape recovery and identify its strengths and weaknesses. 5Methodology 6We re-implement the method proposed by Pan et al. [2021] with regards to 3D shape reconstruction, and extend 7their work. 
Our extensions include novel prior shapes and two new training techniques1While the code-base relating 8to GAN2Shape was largely rewritten, many external dependencies, which the original authors relied on, had to be 9imported2. The project used 189 GPU hours in total, mostly using a single Nvidia K80, T4 or P100 GPU, and a 10negligible number of runs on a Nvidia V100 GPU. 11Results 12We replicate the results of Pan et al. [2021] on a subset of the LSUN Cat, LSUN Car and CelebA datasets and 13observe varying degrees of success. We perform several experiments and illustrate the successes and shortcomings 14of the method. Our novel shape priors improve the 3D shape recovery in certain cases where the original shape prior 15was unsuitable. Our generalized training approach shows initial promise, but has to be confirmed with increased 16computational resources. 17What was easy 18The original code is easily runnable on the correct machine type (Linux operating system and CUDA 9.2 compatible 19GPU) for the specific datasets used by the authors. 20What was difficult 21Porting the model to a new dataset, problem setting or a different machine type is far from trivial. The poor cohesion 22of the original code makes interpretation very difficult, and that is why we took care to re-implement many parts of 23the code using the decoupling principle. The code depends on many external implementations which had to be made 24runnable, which caused a significant development bottleneck as we developed on Windows machines (contrary to the 25authors). The exact loss functions and the number of training steps were not properly reported in the original paper, 26which meant it had to be deduced from their code. Certain calculations required advanced knowledge of light-transport 27theory, which had no familiarity to us, and had to be mimicked and could not be verified. 28Communication with original authors 29We did not communicate with the original authors. 301Our code is available at https://anonymous.4open.science/r/GAN-2D-to-3D-03EF .2All depencies are declared in section 3Submitted to ML Reproducibility Challenge 2020. Do not distribute.1 Introduction 31Image generation has been a hot topic within generative models as they represent an intuitive problem whose results are 32easily accessible by the public. One of the models that has received a lot of public attention is StyleGAN (Karras et al. 33[2019]). The network’s architecture has been refined through multiple iterations in StyleGAN2 (Karras et al. [2020b]), 34StyleGAN2-ADA (Karras et al. [2020a]) and StyleGAN3 (Karras et al. [2021]). StyleGAN2 improves on the first 35version by, among other things, adding a projection method onto the latent space, which allows the inversion an image 36into its latent representation. 37Methods like GAN2Shape (Pan et al. [2021]) aim at exploiting the information that is already stored in the generator of 38a pre-trained StyleGAN2 model to go beyond generating synthetic 2D images. In particular, this method aims to extract 39the 3D shape of the preeminent object in any image. This is intuitively possible due to the size of the training dataset of 40the StyleGAN2 model, and its ability to generate images of an object from multiple views and lighting directions by 41varying w. The authors of GAN2Shape use StyleGAN2 networks pre-trained on different dataset categories and five 42different feature extraction models to derive the shape information for images belonging to the same dataset categories. 
43This method, compared to many others (Lunz et al. [2020], Henzler et al. [2019], Wu et al. [2015], Wang et al. [2019]), 44has the advantage of being completely unsupervised, and not requiring a change in the training process of the classical 452D GAN. 46In this article, we describe our replication of GAN2Shape (Pan et al. [2021]) and report mixed results. We perform 47several experiments and we illustrate the successes and shortcomings of the method. Further, we extend the method 48improving the original results in several cases. 492 Scope of reproducibility 50The authors of GAN2Shape make the following claims: 511. Their framework does not require any kind of annotation, keypoints or assumption about the images 522. Their framework recovers 3D shape with high precision on human faces, cats, cars, buildings, etc. 533. GAN2Shape utilizes the intrinsic knowledge of 2D GANs 544. The 3D shape generated immediately allows for re-lighting and rotation of the image. 553 Methodology 56Our initial intent of re-implementing the source code from from the description of the paper had to be abandoned 57due to lack of detailed information of some key points in the method. We, therefore, decided to follow a different 58approach integrating both the details from the authors’ code and the paper’s description. While trying to always base 59our implementation on the paper’s description we found some parts (particularly, the loss functions) that differed from 60the actual code and decided to follow the latter instead. 61The resources we used were mainly the authors’ code, the code and documentation of all the out-sourced methods the 62authors borrowed: StyleGAN2 Karras et al. [2020b] (code), Unsup3D Wu et al. [2020] (code), Semseg Zhao [2019] 63(code) and BiSeNet Yu et al. [2018, 2021] (code). The GPUs used were multiple and varied depending on availability: 64Nvidia Tesla K80, T4, V100, P100. 653.1 Model descriptions 66To extract the implicit 3D knowledge of pre-trained StyleGAN network, Pan et al. [2021] propose an elaborate scheme 67involving five different neural networks. Each network models a particular quantity corresponding to the view and 68lighting directions, the depth of the image, and the albedo. The View andLight (VandL, resp.) networks operate in a 69encoder type manner, trying to obtain a low-dimensional vector representation of the camera view direction vand the 70direction of light lilluminating the object in the picture. The Depth andAlbedo (DandA, resp.) networks utilize 71auto-encoder architectures3to obtain image-resolution depth maps dand diffuse reflections (albedo) aoff the object’s 72presumed surface. 73The real GAN knowledge extraction happens in the final network, the Offset encoder E, combined with the pre-trained 74StyleGAN2 generator, G. The offset encoder aims to learn a latent representation wof images with randomly sampled 753We refer to tables 5-7 of the original paper (Pan et al. [2021]) for the exact architectures.2view and light directions, pseudo-samples . Paired with G, this allows the creation of new realistic samples ̃Ii=G(w′i) 76with new view and lighting directions, denoted projected samples . The projected samples then serve as extended 77training data, providing multiple view-light direction variations of the original image. 78To use the components v,l,dandato obtain a reconstructed image, the authors utilize a pretrained neural renderer 79developed by Kato et al. [2017], which we denote by Φ. 
803.1.1 Training Procedure 81The training process of this method can be divided into 3 different steps, where the different networks involved are 82trained separately. In the original paper, these steps are done sequentially and for one image at a time, as shown in 83Figure 1, and each step is repeated multiple times before moving into the following one. The result is a model that can 84predict the depth map for only one image. All of the networks are trained using the Adam optimization algorithm. 85Prior pretraining. Before attempting to learn the true shape of an object, the depth network is initialized by pretraining 86it on a fixed prior shape. For this purpose Pan et al. [2021] propose to use an ellipsoid shape as the shape prior. We 87utilized this ellipsoid prior to reproduce the results of Pan et al. [2021], and we extended their work by also evaluating 88two new different priors. 89Step 1 optimizes only the Anetwork according to Equation 1. Given an input I, the first four networks predict their 90components v,l,d,a, and we obtain a reconstructed image ˆI= Φ(v,l,d,a).491Lstep1(I,ˆI) =∥I−ˆI∥1+λsLs(D(I)) +λpLp(I,ˆI) (1)Step 2 optimizes the Enetwork according to Equation 2. Using the dandacomponents given in the last step 1 92iteration, and random directions v′i,l′i, we generate Npnew pseudo-images I′i. For each I′iwe predict ∆wi=E(I′i), 93which serves as input to the StyleGAN generator network Gand obtain the projected images ̃Ii. 94Lstep2(I) =1NpNpXi=1∥I′i−G(w+E(I′i))∥1+λ1∥E(I′i)∥2 (2)Step 3 optimizes the L,V,DandAnetworks according to Equation 3. It consists in part of Lstep1. The second part 95utilizes the projected samples from the last iteration of step 2. For each projected sample ̃ vi=V( ̃Ii), ̃li=L( ̃Ii)is 96calculated. Combined with dandafrom the original image, they can be used to reconstruct each projected sample 97from the components ̄I= Φ( ̃ vi, ̃li,d,a)). 98Lstep3(I, ̄I) =1NpNpXi=1[Lp(I, ̄Ii) +||I− ̄Ii||1] +Lstep 1(I,ˆI) +λ2Ls(D(I)) (3)Stages. The steps are repeated for a number of stages . In each, the steps are trained for a different number of iterations 99(see Table 1 in Appendix A for details). 100SingleimagePrior pre-trainingStep 1Step 2Step 3SingleimagemodelPredict depth mapforoneimage.×NstagesFigure 1: Schematic of the original training process.4Lpis a neural network trained to predict similarities between images (Johnson et al. [2016]) and Lsis a term that encouragessmoothness of the resulting depth maps (as described in Zhou et al. [2017]). We refer to our code for the weights λi.33.1.2 Novel Shape Priors 101The first novel prior we consider is a masked box. Using the mask returned by the parsing model developed by Zhao 102et al. [2017] we extrude the relevant object from the background, in a step-like manner. Improving on this idea, we 103also smooth the transition from the object to the background. This is done by using three 2D convolutions, where we 104convolve the masked box shape with a 11×11filter of ones. Renormalizing the convolved shape, we obtain Figure 2c 105denoted as ‘smoothed box’. 106The last prior we tested is obtained by normalizing the score (or "confidence") that the parsing model gives to each 107pixel. We use this confidence to project the object, i.e. a pixel that is within the category with more confidence will be 108farther projected. This prior is similarly smoothed by convolutions and is denoted as ‘confidence based’. 
109Figure 2 shows a visual representation of the prior shapes used for an example image taken from the Celeba dataset.(a) Ellipsoid (original) (b) Masked box (c) Masked and smoothedbox(d) Confidence basedFigure 2: Original vs. our novel shape priors, shown on the Celeba (face) dataset1103.2 Generalized Training Procedure 111Motivated by our findings on the forgetting of previously seen images, extensively explained in section 5.3.2 and the 112appendix A.5, we propose an alternative training procedure to favor a general model M∗usable for all images belonging 113to the same distribution as the training dataset D. We propose to pretrain the depth net Don all images first, instead of 114repeating the process for each image. We also modify Step 1, 2 and 3 by greatly lessening the number of iterations 115given to a single image and breaking up the sequential training of the original method into a few iterations per example, 116and instead introducing Neepochs and batch training to compensate, increase resource utilization and training speed. 117To facilitate understanding of our modifications to the training procedure, we provide a schematic in Figure 3. It can be 118compared to the original shown in Figure 1. 119TrainingdatasetExtractbatchPrior pre-trainingExtractbatchStep 1ExtractimageStep 2Step 3GeneralmodelPredict depthmap∀imagesin the category.×NB×batch size×NB×NeFigure 3: Schematic of our new training process designed to favor generalization.3.3 Datasets 120We aimed to reproduce the authors’ results on the LSUN Car, LSUN Cat (Yu et al. [2015]) and Celeba (Liu et al. 121[2015]). From these datasets, the authors selected a subset consisting of 10 images of cars, 216 images of cat faces, 122and 399 celebrity faces. Like the authors, we used RGB images of three color channels, resized to 128×128pixel 123resolution. No further preprocessing was applied. 12443.4 Hyperparameters 125For replication purposes, the original hyperparameters by Pan et al. [2021] were used, but we also tried tuning some 126parameters that we believe are key to the method: the number of projected samples, Np, for each image and the number 127of epochs for pre-training the depth network. Npwas varied within {2,4,8,16,32}. In our tests we found the values 4, 1288 and 8, respectively for the LSUN Car, LSUN Cat and Celeba dataset, to be the threshold after which the improvements 129in image quality start greatly decreasing (see subsection A.8 in Appendix for more details). 130The number of epochs for the depth network pretraining was varied within {100,500,1000,2000}. This pretraining 131affects how irregular the depth map predictions are. We believe that using a threshold for the loss to check the 132convergence would be preferable as the number of epochs selected by the authors (1000) is enough in most cases but 133not in all. We attribute irregularity in some of our results to this issue. 1343.5 Experimental setup and code 135For each dataset we run our implementation of the framework from Pan et al. [2021] on the images that were selected 136by the authors, the procedure saves a checkpoint for each network. These checkpoints are later fed the original image to 137get the generated result. The evaluation of the results was only qualitative as all the datasets we explored do not have a 138ground truth for comparison. We instead relied on a manual evaluation. 139Our code is available at https://anonymous.4open.science/r/GAN-2D-to-3D-03EF. Our results are available 140interactively under the docs folder. 
1413.6 Computational requirements 142Most of the experiments we ran were on a Intel(R) Xeon(R) CPU @ 2.20GHz with 2 cores available and a Nvidia Tesla 143P100-PCIE-16GB. Since the framework described by Pan et al. [2021] is instance-specific, we report the average time 144for completing the projection of a single image: 96m and 28s for an image in the Celeba dataset, 95m and 43s for a 145LSUN Cat image and 74m and 32s for a LSUN Car image. 1464 Results 147The model correctly learned the shape and the texture of many images, while some examples were less successful than 148others. For example, the model converged to believable shapes for two of the cars in Figure 4, but the shape of the 149right-most car is debatable. 150In the following sections we show the reconstructed depth map and 3D projection of some images chosen as repre- 151sentative of the dataset. All of the images that follow have the background cut from the actual object, this was only 152done for ease of illustration and was not done for the actual training process since the original authors do not mask the 153background in all cases. It is also difficult to illustrate the results fairly in 2D images, so we invite the reader to visit our 154website with interactive 3D plots5. 1554.1 Results reproducing original paper 1564.1.1 LSUN Car 157We present the results on LSUN Car dataset in Figure 4. Most features are projected in the right direction and show 158details that are correctly outward projected from the main object. This result supports all the claims made in section 2 159as we did not use any annotation or assumption for the images, many details were retrieved with high precision using 160the StyleGAN knowledge and we were able to easily make a rotation of the image (see interactive web-page). 1614.1.2 LSUN Cat 162The second experiment was executed on the LSUN Cat dataset. The results are a slightly poorer compared to the the 163LSUN Car dataset. The face of the cats gets properly recognized, but some details like the nose are not protruded from 164the rest of the face and are generally on the same plane, see Figure 4. Some images present some irregularities in the 165form of spikes and hills (d). The rotation (f) does not result in a completely natural image as part of the face of the cat 166appears on the same plane. This experiment does not support claims 2 and 4 in some cases (e.g. figures 4 (d) and (f) 167negate claims 2 and 4 respectively) while it does for claims 1 and 3 (section 2). 168Additional results such as for the Celeba dataset, can be found in the Appendix A. 1695Due to the anonymization of the report, we instead refer to the html files under the docs folder in our code5(a) (b) (c) (d) (e) (f)Figure 4: LSUN Car and Cat4.2 Results beyond the original paper 1704.2.1 The effects of shape priors 171No prior. To confirm our suspicions that this method would not work at all without a shape prior, briefly mentioned 172in 3.1.1, we ran a test on one image from the LSUN Car dataset without any prior pre-training, and with random 173initialization. The reconstruction objective is still satisfied very well, but it has converged to an extremely noisy depth 174map (see Figure 8 in Appendix A). It shows that this method would not work without a strong shape prior to guide it 175towards a reasonable shape. 176Smoothed Box Prior. The first experiment was done by testing the first of the prior shapes presented, the smoothed 177box prior. 
Figure 5 shows the smoothed box prior tested on the LSUN Cat and Celeba dataset where it can be seen how 178it is better at understanding the structure of the nose and face in general (see Appendix A for more details).Figure 5: Example result for two different image examples from the LSUN Cat and Celeba datasets. For each example,the left-most figure corresponds to the ellipsoid and right-most figure corresponds to the smoothed masked box prior.1794.2.2 Generalized Training Procedure 180We demonstrate the results of our new training loop on LSUN Cat. We note again that the difference to the previous 181demonstration on LSUN Cat, is that a single network D∗was used to predict all of the images, as opposed to a different 182network Difor each image Ii. The general model was trained on a limited subset of 30 images from LSUN Cat. It was 183trained for a modest 60epochs which results in approximately 60% of the weight updates per image of the original 184method. Figure 6 shows the projection of some images from the LSUN Cat dataset. One can observe that the method 185recognizes the general structure of the cat’s face but also presents some artefacts in some specific parts of the face e.g. 186the second cat’s cheek is further projected than where it should and similarly for the third cat’s chin. 1874.2.3 Improved initialization 188Our final experiment is inspired by the observations reported in sections 5.3.1 and 3.4. We experiment with drastically 189increasing the number of pseudo-samples Npfrom 16 to 128 for 10 short epochs, in which each training step is 190performed only once. We observe see marginal improvement in the predicted shape (Figure 6) and larger improves in 191the smaller details/features. See the appendix A.7 for further detail. 192Training step 1 was not changed and it is allowed to converge in the first stage, as it does not involve the projected 193samples. See Table 2 in the appendix for an exact description of the number of iterations. All other parameters were left 1946as in subsubsection 4.2.1, with the smoothed box prior. We experimented with two of the worst performers from the 195LSUN Cat dataset to evaluate whether this method could improve the results, see Figure 16. We applied the same idea 196to the general model described in sections 3.2, 4.2.2 and saw improvements, see Figure 6. The results can be compared 197to Figure 14.(a) Reconstructed depth(b) Reconstruced 3D imageFigure 6: Depth map predictions for a few image samples from the training set D ⊂ LSUN Cat dataset, all using oneand the same general model M∗trained with initialization iterations.1985 Discussion 1995.1 What was easy 200The authors provide a clear specification of the Python package dependencies, as well as other dependencies. Addition- 201ally, they provide scripts for easy downloading of a select few datasets and pre-trained model weights. They precisely 202state how to execute the training script and how to run the model for evaluation. Note that this refers to running the 203original code and that modifying and extending the code brought many difficulties, as explained in the next section. 2045.2 What was difficult 205The paper by Pan et al. [2021] did not contain enough information for a successful reimplementation. Many details had 206to be discerned or guessed from their code. Furthermore, the quality of said code does not allow for a quick interpretation. 
5 Discussion
5.1 What was easy
The authors provide a clear specification of the Python package dependencies, as well as other dependencies. Additionally, they provide scripts for easy downloading of a select few datasets and pre-trained model weights. They precisely state how to execute the training script and how to run the model for evaluation. Note that this refers to running the original code; modifying and extending the code brought many difficulties, as explained in the next section.

5.2 What was difficult
The paper by Pan et al. [2021] did not contain enough information for a successful reimplementation. Many details had to be discerned or guessed from their code. Furthermore, the quality of said code does not allow for a quick interpretation. For example, deducing the training loop and the number of iterations for each step was further complicated by the poor cohesion of the original code: the trainer script was heavily mingled with the model class, using class members of the model object to increment training steps, with nested function calls back and forth between the trainer and model classes.
The components v, l, d and a were not enough to pass to the neural renderer to reconstruct an image. In reality, several quantities such as diffuse shading and texture needed to be calculated and fed into the neural renderer, using concepts from light transport theory that were not mentioned in the paper.
Another difficulty was the heavy reliance on external pre-trained neural networks. The neural renderer of Kato et al. [2017], in particular, posed several problems. The major one was incompatibility with Windows machines. To be able to develop on our personal machines, we had to make manual edits to the neural renderer script and different CUDA files.
Another challenge with this method is the lack of objective quantitative metrics to evaluate the success of the models. One instead has to rely almost entirely on qualitatively gauging the shape reconstructions by eye.

5.3 Conclusions
5.3.1 Variability of the results
We observed that the method is very sensitive to various random factors, and identical runs may yield different results; see Figure 12. One factor may be the random initialization of the networks, but we do not believe it is the dominating factor, since the depth network is pre-trained on a fixed prior shape each run. Rather, as mentioned by the authors Pan et al. [2021], the quality of the projected samples varies. Additionally, we only sample 8-16 different view-light directions in each step 2 iteration, which may be too few projected samples for a robust model. Since this sampling is random, increasing the number of samples should assure the inclusion of meaningful view and light projections (experimental backing in Appendix A).

5.3.2 Catastrophic forgetting
We have observed that the instance-specific model forgets the previous training images (see Appendix A.5, Figure 13), and thus has no generalization capability. This is not necessarily a problem if one has time and computational resources. It can also be argued that this is exactly what is intended with this model, and that generalization is up to the training dataset of the StyleGAN model. It does, however, limit the usefulness of the model. As an example, the training time for one 128x128 pixel RGB image using a Tesla K80 GPU was about 2.5 hours, which seems exceedingly costly for just one low-resolution depth map. We argue that a general model would have more use. The ideal scenario would be a model D* trained on D that is able to accurately predict d_i = D*(I_i) for all I_i in D, and even extend to unseen testing data belonging to the same distribution as D. This discussion is what urged us to explore the altered training procedure of Sections 3.2 and 4.2.2.

5.3.3 Final conclusions
We were able to replicate some of the results of Pan et al. [2021] on the datasets LSUN Car, LSUN Cat and Celeba. We identified several failure modes and limitations of the model, and backed them up with experimental evidence.
Examples are the variability and sensitivity to the projected samples, the heavy dependence on shape priors, and the computational costliness of the single-use model, none of which were adequately accounted for in the original paper.
We propose a new prior shape, the smoothed box prior, which has shown very promising results, especially for fine details and complex object structures. We propose a second prior shape, the confidence-based prior, which has shown the best results on the face dataset. We finally suggest two new training procedures that produce better results and are better at generalizing than the original model by Pan et al. [2021].
We recognize the limitations of this work, as we were only able (due to the restricted computational power) to test the method on part of the dataset. For example, the Cat dataset used by the authors contains more than 200 images, but we were able to test only a few of them. We speculate that some images in the dataset could yield better results than those reported here. However, we believe that a few badly projected images should be enough to claim the ineffectiveness of the method, at least in some particular cases.
Another limitation of our work is the lack of quantitative evaluation methods. The original authors also present their results on the BFM benchmark Paysan et al. [2009], where it is possible to use some metrics to accurately evaluate the results.

5.4 Future work
We speculate that it would be interesting to adapt the same method to StyleGAN3 (Karras et al. [2021]), where the network has been modified to support training with fewer samples, leaving open the question of whether the network still retains enough of the information that is needed for GAN2Shape to work. Future work could also explore the use of our priors on datasets where the original method failed (e.g., the LSUN Horse dataset). We speculate that, since our prior captures the boundaries of the object very well (compared to the ellipsoid, where the boundaries are only used to position the origin), it could achieve better results on complex 3D objects whose shape cannot be simplified into an ellipse. A limitation of this method is that it does not use voxels, but learns a height map. This disallows realistic shape reconstructions of more complex geometries with multiple z values for each (x, y) position. Future work should investigate whether this model could be extended to predict voxels instead of height maps. Given our promising results with the generalizing trainer, which were obtained through only a few epochs of training, we believe that it should be further explored with increased epochs and training set size.

References
Philipp Henzler, Niloy J Mitra, and Tobias Ritschel. Escaping Plato's cave: 3D shape from adversarial rendering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9984–9993, 2019.
Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. 2016.
Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4401–4410, 2019.
Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Training generative adversarial networks with limited data. arXiv preprint arXiv:2006.06676, 2020a.
Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8110–8119, 2020b.
Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Alias-free generative adversarial networks. In Proc. NeurIPS, 2021.
Hiroharu Kato, Yoshitaka Ushiku, and Tatsuya Harada. Neural 3D mesh renderer. CoRR, abs/1711.07566, 2017. URL http://arxiv.org/abs/1711.07566.
Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.
Sebastian Lunz, Yingzhen Li, Andrew Fitzgibbon, and Nate Kushman. Inverse graphics GAN: Learning to generate 3D shapes from unstructured 2D data. arXiv preprint arXiv:2002.12674, 2020.
Xingang Pan, Bo Dai, Ziwei Liu, Chen Change Loy, and Ping Luo. Do 2D GANs know 3D shape? Unsupervised 3D shape reconstruction from 2D image GANs. 2021.
Pascal Paysan, Reinhard Knothe, Brian Amberg, Sami Romdhani, and Thomas Vetter. A 3D face model for pose and illumination invariant face recognition. In 2009 Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance, pages 296–301. IEEE, 2009.
Hanqing Wang, Jiaolong Yang, Wei Liang, and Xin Tong. Deep single-view 3D object reconstruction with visual hull embedding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 8941–8948, 2019.
Shangzhe Wu, Christian Rupprecht, and Andrea Vedaldi. Unsupervised learning of probably symmetric deformable 3D objects from images in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1–10, 2020.
Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3D ShapeNets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1912–1920, 2015.
Changqian Yu, Jingbo Wang, Chao Peng, Changxin Gao, Gang Yu, and Nong Sang. BiSeNet: Bilateral segmentation network for real-time semantic segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 325–341, 2018.
Changqian Yu, Changxin Gao, Jingbo Wang, Gang Yu, Chunhua Shen, and Nong Sang. BiSeNet V2: Bilateral network with guided aggregation for real-time semantic segmentation. International Journal of Computer Vision, 129(11):3051–3068, 2021.
Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
Hengshuang Zhao. semseg. https://github.com/hszhao/semseg, 2019.
Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid scene parsing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2881–2890, 2017.
Tinghui Zhou, Matthew Brown, Noah Snavely, and David G. Lowe. Unsupervised learning of depth and ego-motion from video. CoRR, abs/1704.07813, 2017. URL http://arxiv.org/abs/1704.07813.

A Appendix
A.1 Hyperparameters
Table 1: Specification of the different stages for the single-image model.
Stage | Iterations/step
1 | [700, 700, 600]
2, 3, 4 | [200, 500, 400]

Table 2: Specification of the different stages for the single-image model with initialization iterations.
Stage | Iterations/step | Np
0 | [700, 0, 0] | 16
1-10 | [1, 1, 1] | 128
11 | [1, 700, 600] | 16
12, 13, 14 | [200, 500, 400] | 16

Table 3: Specification of the iterations/step for the generalized model.
Epochs | Iterations/step | Np
60 | [13, 22, 18] | 16

Table 4: Specification of the iterations/step for the generalized model with initialization iterations.
Epochs | Iterations/step | Np
10 | [13, 1, 1] | 128
60 | [13, 22, 18] | 16

Table 5: Hyperparameters for the general model with initialization iterations on the LSUN Cat dataset.
Parameter | Value
n_epochs_prior | 1000
n_epochs_generalized | 70
n_epochs_init | 10
n_init_iterations | 8
batch_size | 10
channel_multiplier | 1
image_size | 128
z_dim | 512
root_path | data/cat
learning_rate | 0.0001
view_scale | 1
refinement_iterations | 1
n_proj_samples | 16
rot_center_depth | 1.0
fov | 10
tex_cube_size | 2

We refer to our GitHub repository for a complete declaration of all hyperparameters for all datasets: https://anonymous.4open.science/w/GAN-2D-to-3D-03EF.
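The flat parameter list in Table 5 maps directly onto a configuration mapping; the sketch below simply mirrors the table (the key names are the report's own, the dictionary wrapper is ours).

```python
# Hyperparameters of Table 5 as a flat config (values copied from the table).
GENERAL_MODEL_CAT_CONFIG = {
    "n_epochs_prior": 1000,
    "n_epochs_generalized": 70,
    "n_epochs_init": 10,
    "n_init_iterations": 8,
    "batch_size": 10,
    "channel_multiplier": 1,
    "image_size": 128,
    "z_dim": 512,
    "root_path": "data/cat",
    "learning_rate": 0.0001,
    "view_scale": 1,
    "refinement_iterations": 1,
    "n_proj_samples": 16,
    "rot_center_depth": 1.0,
    "fov": 10,
    "tex_cube_size": 2,
}
```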
31310A Appendix 314A.1 Hyperparameters 315Stage Iterations/step1 [700,700,600]2, 3, 4 [200,500,400]Table 1: Specification of the different stages for the single-image model.Stage Iterations/step Np0 [700,0,0] 161-10 [1,1,1] 12811 [1,700,600] 1612, 13, 14 [200,500,400] 16Table 2: Specification of the different stages for the single-image model with initialization iterationsEpochs Iterations/step Np60 [13,22,18] 16Table 3: Specification of the iterations/step for the generalized model.Epochs Iterations/step Np10 [13,1,1] 12860 [13,22,18] 16Table 4: Specification of the iterations/step for the generalized model with initialization iterationsTable 5: Hyperparameters for the general model with initialization iterations on the LSUN Cat dataset.Parameter Valuen_epochs_prior 1000n_epochs_generalized 70n_epochs_init 10n_init_iterations 8batch_size 10channel_multiplier 1image_size 128z_dim 512root_path data/catlearning_rate 0.0001view_scale 1refinement_iterations 1n_proj_samples 16rot_center_depth 1.0fov 10tex_cube_size 2We refer to our GitHub repository for a complete declaration of all hyperparameters for all datasets https:// 316anonymous.4open.science/w/GAN-2D-to-3D-03EF . 317A.2 Additional replication results 318A.2.1 Celeba 319The third experiment conducted on the Celeba dataset shows that most of the face are correctly portrayed with the only 320exception of the border of the face e.g. chin and forehead that sometimes is not included in the projection (see Figure 7 32111(b)). Also we found out that the method does not behave well with faces that are viewed from the side (see Figure 7 (c)) 322where the face still gets a projection as it was viewed from the front. As a consequence of this, the rotation of side faces 323does not result in a good image. This experiment supports claims 1-4 (section 2) only for some faces and claims 1 and 3 324for those viewed from the side.(a) (b) (c)Figure 7: Celeba325A.3 Effects of shape priors 326Figure 8 shows the effects of random initialization of the depth network. Figure 9 shows the results on the first car(a) Textured shape (b) 3D depth map (c) Reconstructed imageFigure 8: Results with no shape prior.327where it can be observed that our prior is even better the the ellipsoid at capturing fine details such as the side mirror. 328Confidence-Based Prior. Another experiment we performed focused on the performance of the second prior we 329presented, the confidence based prior. Figure 10 shows some results on the datasets considered in this paper. The results 330are most promising in the Celeba dataset where the image of a face is correctly projected even if viewed from the side. 33112(a) Textured shape (b) 3D depth (c) 2D depth colormap(d) Textured shape (e) 3D depth (f) 2D depth colormapFigure 9: Ellipsoid prior (top row) vs. the smoothed masked box (bottom row) prior.(a) LSUN Car (b) LSUN Cat (c) CelebaFigure 10: Results with the confidence based prior.A.4 Variability of identical runs 332A.5 Catastrophic forgetting 333When the training process is complete for one image Itwe have confirmed that the model Mt={V, L, D, A }tis able 334to construct a believable depth map (subsection 4.1). However, when training continues to the next image It+1and 335Mt+1is obtained, we have observed that the ability to predict the depth map of the previous image deteriorates, and 336the problem gets worse with an increasing time discrepancy between the model and image. 
A.6 Additional generalized training results
Figure 14: Depth map predictions for a few image samples from the training set D ⊂ LSUN Cat dataset, all using one and the same general model M*. (a) Reconstructed depth; (b) reconstructed 3D image.
Figure 15: Depth map predictions for unseen image samples {I_199, I_201} ∉ D from the LSUN Cat dataset. (a) Using the general model M*; (b) using the instance-specific model M_last.

A.7 Initialization iteration results
The observations of Sections 5.3.1 and 3.4 can be condensed into two main points to form a hypothesis. Please note that our limited computational resources meant that we could not perform rigorous experimentation to confirm these observations with a large number of samples, and that this section should be viewed as a speculative experiment.
- The initial few training iterations can be viewed as an initialization of the weights, which depends on what projected samples are generated by the StyleGAN2 model.
- The "features" (i.e., peaks and valleys) of the depth map predictions do not qualitatively change with increasing iterations, but remain fixed except in size (i.e., the height of the peaks).
If one accepts these claims, then it is clear that the first few iterations determine the success of the shape reconstruction. That is why we experiment with drastically increasing the number of pseudo-samples during the first iterations. This reduces the bias of the initialization and reduces the relative impact that a poor projected sample generated by the GAN has on the model weights. Specifically, we increase the number of projected samples Np from 16 to 128 for 10 short epochs, in which each training step is performed only once.
Ideally, one would of course permanently increase Np, but with extreme costs in terms of training time. This method only added ~4 minutes of training time using a Tesla T4 GPU.
Figure 16: Results for the worst performers for the single-image model using the smoothed box prior, from the LSUN Cat dataset. Original initialization (top row, (a)-(b)) and using initialization iterations (bottom row, (c)-(d)). The leftmost cat saw the most drastic changes. While the result is a "spikey" depth map, we argue that the general shape has a better resemblance to a cat, and less to a square box as in the original initialization. The rightmost cat saw some improvement in details such as the ears and the mouth region.
A.8 Hyperparameter tuning
We found that Np correlates with the quality of the predicted shapes. The trend tends to be that more is better, but with diminishing returns. The biggest benefit of a large Np is that strange artefacts are less likely to persist. It is difficult to pinpoint an acceptable threshold for Np, as it varies between datasets and even between images. Therefore we believe a good compromise is to perform a few initialization iterations as described in Section 4.2.3 with a large Np (i.e., 128) and then continue training with a lower number according to the aforementioned thresholds.
To illustrate the results when varying the number of projected samples Np, we present the results on the LSUN Car and Celeba datasets. In Figure 18, the first two cars (corresponding to a low Np) have more irregular surfaces and one has a large spike, while the third is more regular. The same is observed for the Celeba faces in Figure 17, where the first face (corresponding to a low Np) has significant irregularities across the face. As described in subsubsection 5.3.1, we attribute this phenomenon to the lower relative impact that sampling poor view-light projections has, the larger Np is.
Figure 17: Face 1 when trained with 4, 8, 16 and 32 (from left to right) projected samples.
Figure 18: Car 1 when trained with 2, 4 and 8 (from left to right) projected samples.<|im_end|>
<|im_start|>assistant
### Review Title
Good reproducibility study, writing and structure needs to be improved
### Review Text
This paper aims to reproduce "Do 2D GANs Know 3D Shape? Unsupervised 3D Shape Reconstruction from 2D Image GANs" by Pan et al. The authors provide a complete and clear summary of the work that has been done. They also state clearly which of the research claims made by Pan et al. are subject to this reproducibility study. The authors decided to re-implement the entire code-base, relying on external dependencies where necessary. Their initial attempt to write the code from scratch, relying only on the documentation in the original paper, had to be abandoned due to lack of details. The code of the authors seems well enough documented, with clear instructions on how to run it. The authors try out additional hyperparameters that have not been reported by the original authors. They additionally experiment with the number of projected samples and the number of epochs for the pre-training. The authors did not communicate with the original authors. Clarifying the different implementation of the loss function compared to the paper would have been helpful. Despite the effort of reproducing, the authors also go beyond the original work and try out different shape priors and introduce a generalized training procedure. The motivation for the novel shape priors is missing. Nonetheless, this additional experiment is helpful in understanding the original work. Their newly proposed 3D shape priors seem to improve shape reconstruction for some cases. The proposal for a generalized training structure seems reasonable to improve generalization and is backed up with their experiments on catastrophic forgetting. The improvement achieved by their improved initialization seems debatable. The authors clearly discuss which parts of the original paper could be reproduced and which could not. The paper provides useful experiments to validate the research claims by Pan et al. They carry out many additional experiments which are useful. The structure of the paper is confusing.
I urge the authors to improve the writing and especially the structure of the paper, avoiding heavy cross-referencing between results, earlier sections, appendix, and conclusions, which makes it very hard to follow.
General remarks:
- The paper contains many small experiments that would benefit from being better connected.
- The structure of the paper could be improved by first presenting their experiments, results, and conclusions of the reproduction, and in the following section discussing their own experiments, results, and conclusion. This avoids references to findings later in the paper (see for example line 112). Generally, references to appendices and different sections are hard to follow and are very confusing to the reader.
- The reference to 3.1.1 in line 173 does not contain the hypothesis "…that this method would not work at all without a shape prior…".
- When referring to the appendix, the exact subsection should be mentioned, otherwise it is very hard to follow what the authors refer to, e.g. line 178.
- Generally, when referring to observations in different sections it would be helpful to repeat them briefly, e.g. see section 4.2.3, line 189.
- Appendix 4 does not seem to contain any content.
- The Confidence-Based Prior results are not referenced in the paper and can only be found in the appendix.
- When claiming superior results, it would be helpful to have reference images next to the improved ones to compare with directly, Figure 10.
Grammatical issues:
- use either pre-training or pretraining consistently
- we observe see marginal -> we observe marginal, line 191
- and larger improves in -> and larger improvements in, line 191
### Review Rating
6: Marginally above acceptance threshold
### Review Confidence
4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
2VXyy9mIyU3
ICLR.cc/2021/Conference
2021
Learning with Instance-Dependent Label Noise: A Sample Sieve Approach
["Hao Cheng", "Zhaowei Zhu", "Xingyu Li", "Yifei Gong", "Xing Sun", "Yang Liu"]
Human-annotated labels are often prone to noise, and the presence of such noise will degrade the performance of the resulting deep neural network (DNN) models. Much of the literature (with several recent exceptions) of learning with noisy labels focuses on the case when the label noise is independent of features. Practically, annotation errors tend to be instance-dependent and often depend on the difficulty levels of recognizing a certain task. Applying existing results from instance-independent settings would require a significant amount of estimation of noise rates. Therefore, providing theoretically rigorous solutions for learning with instance-dependent label noise remains a challenge. In this paper, we propose CORES$^{2}$ (COnfidence REgularized Sample Sieve), which progressively sieves out corrupted examples. The implementation of CORES$^{2}$ does not require specifying noise rates and yet we are able to provide theoretical guarantees of CORES$^{2}$ in filtering out the corrupted examples. This high-quality sample sieve allows us to treat clean examples and the corrupted ones separately in training a DNN solution, and such a separation is shown to be advantageous in the instance-dependent noise setting. We demonstrate the performance of CORES$^{2}$ on CIFAR10 and CIFAR100 datasets with synthetic instance-dependent label noise and Clothing1M with real-world human noise. As of independent interest, our sample sieve provides a generic machinery for anatomizing noisy datasets and provides a flexible interface for various robust training techniques to further improve the performance. Code is available at https://github.com/UCSC-REAL/cores.
["Learning with noisy labels", "instance-based label noise", "deep neural networks."]
ABSTRACT
Human-annotated labels are often prone to noise, and the presence of such noise will degrade the performance of the resulting deep neural network (DNN) models. Much of the literature (with several recent exceptions) of learning with noisy labels focuses on the case when the label noise is independent of features. Practically, annotation errors tend to be instance-dependent and often depend on the difficulty levels of recognizing a certain task. Applying existing results from instance-independent settings would require a significant amount of estimation of noise rates. Therefore, providing theoretically rigorous solutions for learning with instance-dependent label noise remains a challenge. In this paper, we propose CORES$^2$ (COnfidence REgularized Sample Sieve), which progressively sieves out corrupted examples. The implementation of CORES$^2$ does not require specifying noise rates, and yet we are able to provide theoretical guarantees of CORES$^2$ in filtering out the corrupted examples. This high-quality sample sieve allows us to treat clean examples and the corrupted ones separately in training a DNN solution, and such a separation is shown to be advantageous in the instance-dependent noise setting. We demonstrate the performance of CORES$^2$ on CIFAR10 and CIFAR100 datasets with synthetic instance-dependent label noise and Clothing1M with real-world human noise. As of independent interest, our sample sieve provides a generic machinery for anatomizing noisy datasets and provides a flexible interface for various robust training techniques to further improve the performance. Code is available at https://github.com/UCSC-REAL/cores.

1 INTRODUCTION
Deep neural networks (DNNs) have gained popularity in a wide range of applications. The remarkable success of DNNs often relies on the availability of large-scale datasets. However, data annotation inevitably introduces label noise, and it is extremely expensive and time-consuming to clean up the corrupted labels. The existence of label noise can weaken the true correlation between features and labels, as well as introduce artificial correlation patterns. Thus, mitigating the effects of noisy labels becomes a critical issue that needs careful treatment.
It is challenging to avoid overfitting to noisy labels, especially when the noise depends on both true labels $Y$ and features $X$. Unfortunately, this often tends to be the case, as human annotations are prone to different levels of errors for tasks with varying difficulty levels. Recent work has also shown that the presence of instance-dependent noisy labels imposes additional challenges and cautions to training in this scenario (Liu, 2021). For such instance-dependent (or feature-dependent, instance-based) label noise settings, theory-supported works usually focus on loss correction, which requires estimating noise rates (Xia et al., 2020; Berthon et al., 2020). Recent work by Cheng et al. (2020) addresses bounded instance-based noise by first learning the noisy distribution and then distilling examples according to some thresholds.¹ However, with a limited size of datasets, learning an accurate noisy distribution for each example is a non-trivial task. Additionally, the size and the quality of distilled examples are sensitive to the thresholds for distillation.
*Equal contributions in alphabetical ordering. Hao leads experiments and Zhaowei leads theories.
†Corresponding authors: Y. Liu and Z. Zhu {yangliu,zwzhu}@ucsc.edu.
¹The proposed solution is primarily studied for the binary case in Cheng et al. (2020).
Departing from the above line of works, we design a sample sieve with theoretical guarantees to provide a high-quality splitting of clean and corrupted examples without the need to estimate noise rates. Instead of learning the noisy distributions or noise rates, we focus on learning the underlying clean distribution and design a regularization term to help improve the confidence of the learned classifier, which is proven to help safely sieve out corrupted examples. With the division between "clean" and "corrupted" examples, our training enjoys performance improvements by treating the clean examples (using a standard loss) and the corrupted ones (using an unsupervised consistency loss) separately.
We summarize our main contributions: 1) We propose to train a classifier using a novel confidence regularization (CR) term and theoretically guarantee that, under mild assumptions, minimizing the confidence regularized cross-entropy (CE) loss on the instance-based noisy distribution is equivalent to minimizing the pure CE loss on the corresponding "unobservable" clean distribution. This classifier is also shown to be helpful for evaluating each example to build our sample sieve. 2) We provide a theoretically sound sample sieve that simply compares each example's regularized loss with a closed-form threshold explicitly determined by predictions from the above model trained with our confidence regularized loss, without any extra estimates. 3) To the best of our knowledge, the proposed CORES$^2$ (COnfidence REgularized Sample Sieve) is the first method that is thoroughly studied for a multi-class classification problem, has theoretical guarantees to avoid overfitting to instance-dependent label noise, and provides a high-quality division without knowing or estimating noise rates. 4) By decoupling the regularized loss into separate additive terms, we also provide a novel and promising mechanism for understanding and controlling the effects of general instance-dependent label noise. 5) CORES$^2$ achieves competitive performance on multiple datasets, including CIFAR-10, CIFAR-100, and Clothing1M, under different label noise settings.
Other related works. In addition to recent works by Xia et al. (2020), Berthon et al. (2020), and Cheng et al. (2020), we briefly overview the other most relevant references. Detailed related work is left to Appendix A. Making the loss function robust to label noise is important for building a robust machine learning model (Zhang et al., 2016). One popular direction is to perform loss correction, which first estimates the transition matrix (Patrini et al., 2017; Vahdat, 2017; Xiao et al., 2015; Zhu et al., 2021b; Yao et al., 2020b), and then performs correction/reweighting via forward or backward propagation, or further revises the estimated transition matrix with controllable variations (Xia et al., 2019). The other line of work focuses on designing specific losses without estimating transition matrices (Natarajan et al., 2013; Xu et al., 2019; Liu & Guo, 2020; Wei & Liu, 2021). However, these works assume the label noise is instance-independent, which limits their extension. Another approach is sample selection (Jiang et al., 2017; Han et al., 2018; Yu et al., 2019; Northcutt et al., 2019; Yao et al., 2020a; Wei et al., 2020; Zhang et al., 2020a), which selects the "small loss" examples as clean ones. However, we find this approach only works well on instance-independent label noise. Approaches such as label correction (Veit et al., 2017; Li et al., 2017; Han et al., 2019) or semi-supervised learning (Li et al., 2020; Nguyen et al., 2019) also lack guarantees for instance-based label noise.

2 CORES$^2$: CONFIDENCE REGULARIZED SAMPLE SIEVE
Consider a classification problem on a set of $N$ training examples denoted by $D := \{(x_n, y_n)\}_{n\in[N]}$, where $[N] := \{1, 2, \ldots, N\}$ is the set of example indices. Examples $(x_n, y_n)$ are drawn according to random variables $(X, Y) \in \mathcal{X} \times \mathcal{Y}$ from a joint distribution $\mathcal{D}$. Let $\mathcal{D}_X$ and $\mathcal{D}_Y$ be the marginal distributions of $X$ and $Y$. The classification task aims to identify a classifier $f: \mathcal{X} \rightarrow \mathcal{Y}$ that maps $X$ to $Y$ accurately. One common approach is minimizing the empirical risk using DNNs with respect to the cross-entropy loss defined as $\ell(f(x), y) = -\ln(f_x[y]),\ y \in [K]$, where $f_x[y]$ denotes the $y$-th component of $f(x)$ and $K$ is the number of classes. In real-world applications, such as human-annotated images (Krizhevsky et al., 2012; Zhang et al., 2017) and medical diagnosis (Agarwal et al., 2016), the learner can only observe a set of noisy labels. For instance, human annotators may wrongly label some images containing cats as ones that contain dogs, accidentally or irresponsibly. The label noise of each instance is characterized by a noise transition matrix $T(X)$, where each element is $T_{ij}(X) := \mathbb{P}(\tilde{Y} = j \mid Y = i, X)$. The corresponding noisy dataset² and distribution are denoted by $\tilde{D} := \{(x_n, \tilde{y}_n)\}_{n\in[N]}$ and $\tilde{\mathcal{D}}$. Let $\mathbb{1}(\cdot)$ be the indicator function taking value 1 when the specified condition is satisfied and 0 otherwise.
²In this paper, the noisy dataset refers to a dataset with noisy examples. A noisy example is either a clean example (whose label is true) or a corrupted example (whose label is wrong).
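To make the role of $T(X)$ concrete, here is a small sketch of how a noisy label can be drawn once a per-example transition row is given; this only illustrates the definition above and is not the paper's noise-generation procedure (the experiments follow Xia et al. (2020) for instance-dependent noise).

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_label(clean_label, transition_row):
    """transition_row[j] = T_{y j}(x) = P(noisy label = j | clean label = y, x).
    The row must sum to one; the noisy label is a single draw from it."""
    return int(rng.choice(len(transition_row), p=transition_row))

# Toy example with K = 3: the clean class 0 keeps its label w.p. 0.8.
T_row_for_x = np.array([0.8, 0.15, 0.05])
noisy_label = corrupt_label(0, T_row_for_x)
```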
Approaches such as label correction (Veit et al., 2017; Li et al., 2017; Han et al., 2019) orsemi-supervised learning (Li et al., 2020; Nguyen et al., 2019) also lack guarantees for the instance-based label noise.2 CORES2: CO NFIDENCE REGULARIZED SAMPLE SIEVEConsider a classification problem on a set of Ntraining examples denoted by D:=f(xn;yn)gn2[N], where [N] :=f1;2;;Ngis the set of example indices. Examples (xn;yn)are drawn according to random variables (X;Y )2XY from a joint distribution D. LetDXandDYbe the marginal distributions of XandY. The classification task aims to identify a classifierf:X!Y that mapsXtoYaccurately. One common approach is minimizing the empirical riskusing DNNs with respect to the cross-entropy loss defined as `(f(x);y) =ln(fx[y]); y2[K];wherefx[y]denotes they-th component of f(x)andKis the number of classes. In real-world appli-cations, such as human-annotated images (Krizhevsky et al., 2012; Zhang et al., 2017) and medicaldiagnosis (Agarwal et al., 2016), the learner can only observe a set of noisy labels. For instance,human annotators may wrongly label some images containing cats as ones that contain dogs acciden-tally or irresponsibly. The label noise of each instance is characterized by a noise transition matrixT(X), where each element Tij(X) :=P(eY=jjY=i;X). The corresponding noisy dataset2anddistribution are denoted by eD:=f(xn;~yn)gn2[N]andeD. Let 1()be the indicator function taking2In this paper, the noisy dataset refers to a dataset with noisy examples. A noisy example is either a cleanexample (whose label is true) or a corrupted example (whose label is wrong).2Published as a conference paper at ICLR 2021value 1when the specified condition is satisfied and 0otherwise. Similar to the goals in surrogateloss (Natarajan et al., 2013), LDMI(Xu et al., 2019) and peer loss (Liu & Guo, 2020), we aim tolearn a classifier ffrom the noisy distribution eDwhich also minimizes P(f(X)6=Y);(X;Y )D.Beyond their results, we attempt to propose a theoretically sound approach addressing a generalinstance-based noise regime without knowing or estimating noise rates .2.1 C ONFIDENCE REGULARIZATIONIn this section, we present a new confidence regularizer (CR). Our design of the CR is mainlymotivated by a recently proposed robust loss function called peer loss (Liu & Guo, 2020). For eachexample (xn;~yn), peer loss has the following form:`PL(f(xn);~yn) :=`(f(xn);~yn)`(f(xn1);~yn2);where (xn1;~yn1)and(xn2;~yn2)are two randomly sampled and paired peer examples (with replace-ment) forn. LetXn1andeYn2be the corresponding random variables. Note Xn1;eYn2are twoindependent and uniform random variables being each xn0;n02[N]and~yn0;n02[N]with prob-ability1Nrespectively: P(Xn1=xn0jeD) =P(eYn2=yn0jeD) =1N;8n02[N]. LetDeYjeDbe thedistribution of eYn2given dataseteD. Peer loss then has the following equivalent form in expectation:1NXn2[N]EXn1;eYn2jeD[`(f(xn);~yn)`(f(Xn1);eYn2)]=1NXn2[N]`(f(xn);~yn)Xn02[N]P(Xn1=xn0jeD)EDeYjfD[`(f(xn0);eY)]=1NXn2[N]`(f(xn);~yn)EDeYjfD[`(f(xn);eY)]:This result characterizes a new loss denoted by `CA:`CA(f(xn);~yn) :=`(f(xn);~yn)EDeYjfD[`(f(xn);eY)]: (1)Though not studied rigorously by Liu & Guo (2020), we show, under conditions3,`CAdefined inEqn. (1) encourages confident predictions4fromfby analyzing the gradients:Theorem 1. For`CA(), solutions satisfying fxn[i]>0;8i2[K]are not locally optimal at (xn;~yn).See Appendix B.2 for the proof. Particularly, in binary cases, we have constraint f(xn)[0] +f(xn)[1] = 1 . 
See Appendix B.2 for the proof. Particularly, in binary cases, we have the constraint $f(x_n)[0] + f(x_n)[1] = 1$. Following Theorem 1, we know minimizing $\ell_{\rm CA}(f(x_n), \tilde{y}_n)$ w.r.t. $f$ under this constraint leads to either $f(x_n)[0] \rightarrow 1$ or $f(x_n)[1] \rightarrow 1$, indicating confident predictions. Therefore, the addition of the term $-\mathbb{E}_{\mathcal{D}_{\tilde{Y}|\tilde{D}}}[\ell(f(x_n), \tilde{Y})]$ helps improve the confidence of the learned classifier. Inspired by the above observation, we define the following confidence regularizer:
Confidence Regularizer: $\ell_{\rm CR}(f(x_n)) := -\beta\,\mathbb{E}_{\mathcal{D}_{\tilde{Y}|\tilde{D}}}\big[\ell(f(x_n), \tilde{Y})\big]$,
where $\beta$ is positive and $\ell(\cdot)$ refers to the CE loss. The prior probability $\mathbb{P}(\tilde{Y} \mid \tilde{D})$ is counted directly from the noisy dataset. In the remainder of this paper, $\ell(\cdot)$ denotes the CE loss by default.
Why are confident predictions important? Intuitively, when a model fits to the label noise, its predictions often become less confident, since the noise usually corrupts the signal encoded in the clean data. From this perspective, encouraging confident predictions plays against fitting to label noise. Compared to instance-independent noise, the difficulties in estimating the instance-dependent noise rates largely prevent us from applying existing techniques. In addition, as shown by Manwani & Sastry (2013), the 0-1 loss function is more robust to instance-based noise but hard to optimize with. To a certain degree, pushing confident predictions results in a differentiable loss function that approximates the 0-1 loss, and therefore restores the robustness property. Besides, as observed by Chatterjee (2020) and Zielinski et al. (2020), gradients from similar examples reinforce each other. When the overall label information is dominantly informative, i.e., $T_{ii}(X) > T_{ij}(X)$, DNNs will receive more correct information statistically. Encouraging confident predictions would discourage the memorization of the noisy examples (it makes it hard for noisy labels to reduce the confidence of predictions), and therefore further facilitates DNNs to learn the (clean) dominant information.
³Detailed conditions for Theorem 1 are specified at the end of the main text.
⁴Our observation can also help partially explain the robustness property of peer loss (Liu & Guo, 2020).
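Since the regularizer only needs the empirical noisy-label prior, it is a few lines on top of CE. A minimal PyTorch sketch of the confidence regularized loss (including the $\ln(f_x[y] + 10^{-8})$ stabilization the paper mentions for experiments; the function name is ours):

```python
import torch
import torch.nn.functional as F

def cores_loss(logits, noisy_labels, noisy_prior, beta=2.0, eps=1e-8):
    """CE on the noisy label plus the confidence regularizer
    l_CR(f(x)) = -beta * E_{Y~ ~ P(Y~|D~)}[ l(f(x), Y~) ].

    noisy_prior: length-K tensor with P(Y~ = j) counted from the noisy set.
    """
    probs = F.softmax(logits, dim=1)
    log_probs = torch.log(probs + eps)                 # ln(f_x[y] + eps)
    ce = F.nll_loss(log_probs, noisy_labels, reduction="none")
    # -beta * E[l] = beta * sum_j P(Y~=j) * ln f_x[j]  (a negative quantity)
    l_cr = beta * (noisy_prior.unsqueeze(0) * log_probs).sum(dim=1)
    return (ce + l_cr).mean()

# The prior is counted directly from the noisy labels, e.g.:
# noisy_prior = torch.bincount(noisy_labels, minlength=K).float() / len(noisy_labels)
```

The default beta=2.0 here follows the rough guideline given later in Section 3.2 for 10-class problems; it is a tunable hyperparameter, not a fixed constant of the method.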
In our formulation, the regularized loss`(f(xn);~yn) +`CR(f(xn))is employed to evaluate examples and nis used to specify thresholds.Specifically, we aim to solve the sample sieve problem in (2).Confidence Regularized Sample Sieveminf2F;v2f0;1gNXn2[N]vn[`(f(xn);~yn) +`CR(f(xn))n]s.t.`CR(f(xn)) :=EDeYjfD`(f(xn);eY);n:=1KX~y2[K]`(f(xn);~y) +`CR(f(xn)):(2)Sample Sieve-0Sample Sieve-1Figure 1: Dynamic sample sieves. Green circlesare clean examples. Red hexagons are corruptedexamples.The crucial components in (2) are:vn2f0;1gindicates whether example nis clean (vn= 1) or not (vn= 0);n(mimicking the aperture of a sieve) controls which example should be sieved out;fis a copy offand does not contribute to the back-propagation. Fis the search space of f.Dynamic sample sieve The problem in (2) is a combinatorial optimization which is hard to solvedirectly. A standard solution to (2) is to apply alternate search iteratively as follows:Starting att= 1,v(0)n= 1;8n2[N].Confidence-regularized model update (at iteration- t):f(t)= arg minf2FXn2[N]v(t1)n [`(f(xn);~yn) +`CR(f(xn))] ;(3)Sample sieve (at iteration- t):v(t)n= 1(`(f(t)(xn);~yn) +`CR(f(t)(xn))<n;t); (4)wheren;t=1KP~y2[K]`(f(t)(xn);~y) +`CR(f(t)(xn)),f(t)andv(t)refer to the specificclassifier and weight at iteration- t. Note the values of `CR(f(t)(xn))and`CR(f(t)(xn))arethe same. We keep both terms to be consistent with the objective in Eq. (2). In DNNs, weusually update model fwith one or several epochs of data instead of completely solving (3).Figure 1 illustrates the dynamic sample sieve, where the size of each example corresponds to theregularized loss and the aperture of a sieve is determined by n;t. In each iteration- t, sample sieve-4Published as a conference paper at ICLR 20214 2 0 20123# samples×103(a) CE Sieve (symm., epoch 20)1510505100246×103(b) CE Sieve (symm., epoch 70)1510505100123×103(c) CORES2 (symm., epoch 20)2002040600123×104(d) CORES2 (symm., epoch 70)cleancorrupted10 5 0loss0123# samples×103(e) CE Sieve (inst., epoch 20)151050510loss0246×103(f) CE Sieve (inst., epoch 70)151050510loss0.00.51.0×104(g) CORES2 (inst., epoch 20)200204060loss0123×104(h) CORES2 (inst., epoch 70)Figure 2: Loss distributions of training on CIFAR-10 with 40% symmetric noise (symm.) or 40%instance-based noise (inst.). The loss is given by `(f(t)(xn);~yn) +`CR(f(t)(xn))n;tas (4). CESieve represents the dynamic sample sieve with standard cross-entropy loss (without CR).t“blocks” some corrupted examples by comparing a regularized example loss with a closed-formthresholdn;t, which can be immediately obtained given current model f(t)and example (xn;~yn)(no extra estimation needed). In contrast, most sample selection works (Han et al., 2018; Yu et al.,2019; Wei et al., 2020) focus on controlling the number of the selected examples using an intuitivefunction where the overall noise rate may be required, or directly selecting examples by an empiri-cally set threshold (Zhang & Sabuncu, 2018). Intuitively, the specially designed thresholds n;tforeach example should be more accurate than a single threshold for the whole dataset. Besides, thegoal of existing works is often to select clean examples while our sample sieve focuses on removingthe corrupted ones. On a high level, we follow a different philosophy from these sample selectionworks. We coin our solution as COnfidence REgularized Sample Sieve (CORES2).More visualizations of the sample sieve In addition to Figure 1, we visualize the superiority of oursample sieve with numerical results as Figure 2. 
The sieved dataset is in the form of two clusters ofexamples. Particularly, from Figure 2(b) and Figure 2(f), we observe that CE suffers from providinga good division of clean and corrupted examples due to overfitting in the final stage of training. Onthe other hand, with `CR, there are two distinct clusters and can be separated by the threshold 0asshown in Figure 2(d) and Figure 2(h). Comparing Figure 2(a)-2(d) with Figure 2(e)-2(h), we findthe effect of instance-dependent noise on training is indeed different from the symmetric one, wherethe instance-dependent noise is more likely to cause overfitting.3 T HEORETICAL GUARANTEES OF CORES2In this section, we theoretically show the advantages of CORES2. The analyses focus on showingCORES2guarantees a quality division, i.e. vn= 1(yn= ~yn);8n, with a properly set . To showthe effectiveness of this solution, we call a model prediction on xnisbetter than random guess iffxn[yn]>1=K, and call it confident iffxn[y]2f0;1g;8y2[K], whereynis the clean label andyis an arbitrary label. The quality of sieving out corrupted examples is guaranteed in Theorem 2.Theorem 2. The sample sieve defined in (4) ensures that clean examples (xn;~yn=yn)will not beidentified as being corrupted if the model f(t)’s prediction on xnis better than random guess.Theorem 2 informs us that our sample sieve can progressively and safely filter out corrupted exam-ples, and therefore improves division quality, when the model prediction on each xnis better thanrandom guess. The full proof is left to Appendix B.3. In the next section, we provide evidences thatour trained model is guaranteed to achieve this requirement with sufficient examples.3.1 D ECOUPLING THE CONFIDENCE REGULARIZED LOSSThe discussion of performance guarantees of the sample sieve focuses on a general instance-basednoise transition matrix T(X), which can induce any specific noise regime such as symmetricnoise and asymmetric noise (Kim et al., 2019; Li et al., 2020). Note the feature-independencywas one critical assumption in state-of-the-art theoretically guaranteed noise-resistant literatures(Natarajan et al., 2013; Liu & Guo, 2020; Xu et al., 2019) while we do not require. Let Tij:=EDjY=i[Tij(X)];8i;j2[K]. Theorem 3 explicitly shows the contributions of clean examples,corrupted examples, and `CRduring training. See Appendix B.1 for the proof.Theorem 3. (Main Theorem: Decoupling the Expected Regularized CE Loss) In expectation, theloss with`CRcan be decoupled as three separate additive terms:5Published as a conference paper at ICLR 2021EeDh`(f(X);eY) +`CR(f(X))i=Term-1z}|{TED[`(f(X);Y)] +Term-2z}|{ED[`(f(X);Y)]+Xj2[K]Xi2[K]P(Y=i)EDjY=i[(Uij(X)P(eY=j))`(f(X);j)]| {z }Term-3;(5)whereT:= minj2[K]Tjj; :=Pj2[K]jP(Y=j);j:=TjjT,Uij(X) =Tij(X);8i6=j;Ujj(X) =Tjj(X)Tjj, andED[`(f(X);Y)] := 1(>0)Pj2[K]jP(Y=j)EDjY=j[`(f(X);j)].Equation (5) provides a generic machinery for anatomizing noisy datasets, where we show the ef-fects of instance-based label noise on the `CRregularized loss can be decoupled into three additiveterms: Term-1 reflects the expectation of CE on clean distribution D,Term-2 shifts the clean distri-bution by changing the prior probability of Y, and Term-3 characterizes how the corrupted examples(represented by Uij(X)) might mislead/mis-weight the loss, as well as the regularization ability of`CR(represented by P(eY=j)). 
In addition to the design of sample sieve, this additive decou-pling structure also provides a novel and promising perspective for understanding and controllingthe effects of generic instance-dependent label noise.3.2 G UARANTEES OF THE SAMPLE SIEVEBy decoupling the effects of instance-dependent noise into separate additive terms as shown inTheorem 3, we can further study under what conditions, minimizing the confidence regularized CEloss on the (instance-dependent) noisy distribution will be equivalent to minimizing the true lossincurred on the clean distribution, which is exactly encoded by Term-1. In other words, we wouldlike to understand when Term-2 and Term-3 in (5) can be controlled not to disrupt the minimizationof Term-1. Our next main result establishes this guarantee but will first need the following twoassumptions.Assumption 1. (Y=Y) Clean labels are Bayes optimal ( Y:= arg maxi2[K]P(Y=ijX)).Assumption 2. (Informative datasets) The noise rate is bounded as Tii(X)Tij(X)>0;8i2[K];j2[K];j6=i;XDX.Feasibility of assumptions: 1) Note for many popular image datasets, e.g. CIFAR, the label of eachfeature is well-defined and the corresponding distribution is well-separated by human annotation. Inthis case, each feature Xonly belongs to one particular class Y. Thus Assumption 1 is generallyheld in classification problems (Liu & Tao, 2015). Technically, this assumption could be relaxed.We use this assumption for clean presentations. 2) Assumption 2 shows the requirement of noiserates, i.e., for any feature X, a sufficient number of clean examples are necessary for dominant cleaninformation. For example, we require Tii(X)Tij(X)>0to ensure examples from class iareinformative (Liu & Chen, 2017).Before formally presenting the noise-resistant property of training with `CR, we discuss intuitionshere. As discussed earlier in Section 2.1, our `CRregularizes the CE loss to generate/incentivizeconfident prediction, and thus is able to approximate the 0-1 loss to obtain its robustness property.More explicitly, from (5), `CRaffects Term-3 with a scale parameter . Recall that Uij(X) =Tij(X);8i6=j, which is exactly the noise transition matrix. Although we have no informationabout this transition matrix, the confusion brought by Uij(X)can be canceled or reversed by asufficiently large such thatUij(X)P(eY=j)0. Intuitively, with an appropriate , all theeffects ofUij(X);i6=jcan be reversed, and we will get a negative loss punishing the classifier forpredicting class- jwhen the clean label is i. Formally, Theorem 4 shows the noise-resistant propertyof training with `CRand is proved in Appendix B.4.Theorem 4. (Robustness of the Confidence Regularized CE Loss) With Assumption 1 and 2, whenmaxi;j2[K];XDXUij(X)P(eY=j) minP(eY=i)>P(eY=j);XDXTii(X)Tij(X)P(eY=i)P(eY=j); (6)minimizing EeD[`(f(X);eY) +`CR(f(X))]is equivalent to minimizing ED[`(f(X);Y)].Theorem 4 shows a sufficient condition of for our confidence regularized CE loss to be robust toinstance-dependent label noise. The bound on LHS ensures the confusion from label noise could be6Published as a conference paper at ICLR 2021canceled or reversed by the weighted confidence regularizer, and the RHS bound guarantees themodel with the minimized regularized loss predicts the most frequent label in each feature w.p. 1.Theorem 4 also provides guidelines for tuning . Although we have no knowledge about Tij(X), wecan roughly estimate the range of possible . One possibly good setting of is linearly increasingwith the number of classes, e.g. 
= 2for10classes and= 20 for100classes.With infinite model capacity , minimizing ED[`(f(X);Y)]returns the Bayes optimal classifier (sinceCE is a calibrated loss) which predicts on each xnbetter than random guess. Therefore, with a suf-ficient number of examples, minimizing EeD[`(f(X);eY) +`CR(f(X))]will also return a model thatpredicts better than random guess, then satisfying the condition required in Theorem 2 to guaranteethe quality of sieved examples. Further, since the Bayes optimal classifier always predicts clean la-bels confidently when Assumption 1 holds, Theorem 4 also guarantees confident predictions. Withsuch predictions, the sample sieve in (4) will achieve 100% precision on both clean and corruptedexamples. This guaranteed division is summarized in Corollary 1:Corollary 1. When conditions in Theorem 4 hold, with infinite model capacity and sufficiently manyexamples, CORES2achievesvn= 1(yn= ~yn);8n2[N], i.e., all the sieved clean examples areeffectively clean.3.3 T RAINING WITH SIEVED SAMPLESWe discuss the necessity of a dynamic sample sieve in this subsection. Despite the strong guaran-tee in expectation as shown Theorem 4, performing direct Empirical Risk Minimization (ERM) ofthe regularized loss is likely to return a sub-optimal solution. Although Theorem 4 guarantees theequivalence of minimizing two first-order statistics, their second-order statistics are also importantfor estimating the expectation when examples are finite. Intuitively, Term-1 TED[`(f(X);Y)]primarily helps distinguish a good classifier from a bad one on the clean distribution. The existenceof the leading constant Treduces the power of the above discrimination, as effectively the gap be-tween the expected losses become smaller as noise increases ( Twill decrease). Therefore we wouldrequire more examples to recognize the better model. Equivalently, the variance of the selectionbecomes larger. In Appendix C.2, we also offer an explanation from the variance’s perspective. Forsome instances with extreme label noise, the satisfying Eqn. (6) in Theorem 4 may not exist.In such case, these instances cannot be properly used and other auxiliary techniques are necessary(e.g., sample pruning).Sieving out the corrupted examples from the clean ones allows us a couple of better solutions. First,we can focus on performing ERM using these sieved clean examples only. We derive the riskbound for training with these clean examples in Appendix C.3. Secondly, leveraging the samplesieve to distinguish clean examples from corrupted ones provides a flexible interface for variousrobust training techniques such that the performance can be further improved. For example, semi-supervised learning techniques can be applied (see section 4 for more details).4 E XPERIMENTSNow we present experimental evidences of how CORES2works.5Datasets: CORES2is evaluated on three benchmark datasets: CIFAR-10, CIFAR-100 (Krizhevskyet al., 2009) and Clothing1M (Xiao et al., 2015). Following the convention from Xu et al. (2019),we use ResNet34 for CIFAR-10 and CIFAR-100 and ResNet50 for Clothing1M.Noise type: We experiment with three types of label noise: symmetric, asymmetric and instance-dependent label noise. Symmetric noise is generated by randomly flipping a true label to the otherpossible labels w.p. "(Kim et al., 2019), where "is called the noise rate. Asymmetric noise isgenerated by flipping the true label to the next class ( i.e., labeli!i+1;modK) w.p.". 
Instance-dependent label noise is a more challenging setting and we generate instance-dependent label noisefollowing the method from Xia et al. (2020) (See Appendix D.3 for details). In expectation, thenoise rate"for all noise regimes is the overall ratio of corrupted examples in the whole dataset.Consistency training after the sample sieve: Letbe the last iteration of CORES2. DefineL() :=fnjn2[N];v()n= 1g,H() :=fnjn2[N];v()n= 0g,eDL():=f(xn;~yn) :n25The logarithmic function in `CRis adapted to ln(fx[y] + 108)for numerical stability.7Published as a conference paper at ICLR 2021204060801000.00.20.40.60.81.0F-score(a) 20% Symm.204060801000.00.20.40.60.81.0(b) 40% Symm.204060801000.00.20.40.60.81.0(c) 60% Symm.20406080100epoch0.00.20.40.60.81.0F-score(d) 20% Inst.CORES2Co-teachingCo-teaching+20406080100epoch0.00.20.40.60.81.0(e) 40% Inst.20406080100epoch0.00.20.40.60.81.0(f) 60% Inst.Figure 3: F-score comparisons on CIFAR10 under symmetric (Symm.) and instance-based (Inst.)label noise. F-score :=2PreRePre+Re, where Pre:=Pn2[N]1(vn=1;yn=~yn)Pn2[N]1(vn=1), and Re:=Pn2[N]1(vn=1;yn=~yn)Pn2[N]1(yn=~yn).L()g,eDH():=f(xn;~yn) :n2H()g. ThuseDL()is sieved as clean examples and eDH()is fil-tered out as corrupted ones. Examples (xn;~yn)2eDL()lead the training direction using the CE lossasPn2L()`(f(xn);~yn). Noting the labels in eDH()are supposed to be corrupted and can distractthe training, we simply drop them. On the other hand, feature information of these examples en-codes useful information that we can further leverage to improve the generalization ability of models.There are different ways to use this unsupervised information, in this paper, we chose to minimizethe KL-divergence between predictions on the original feature and the augmented feature to makepredictions consistent. This is a common option as chosen by Li et al. (2019), Xie et al. (2019), andZhang et al. (2020b). The consistency loss function in epoch- tisPn2H()`KL(f(xn);f(t)(xn;t)),where f(t)is a copy of the DNN at the beginning of epoch- tbut without gradients. Summing theclassification and consistency loss yields the total loss. See Appendix D.1 for an illustration.Other alternatives: Checking the consistency of noisy predictions is only one possible way toleverage the additional information after sample sieves. Our basic idea of first sieving the datasetand then treating corrupted examples differently from clean ones admits other alternatives. Thereare many other possible designs after sample sieves, e.g., estimating transition matrix using sievedexamples then applying loss-correction (Patrini et al., 2017; Vahdat, 2017; Xiao et al., 2015), makingthe consistency loss as another regularization term and retraining the model (Zhang et al., 2020b),correcting the sample selection bias in clean examples and retraining (Cheng et al., 2020; Fang et al.,2020), or relabeling those corrupted examples and retraining, etc. Additionally, clustering methodson the feature space (Han et al., 2019; Luo et al., 2020) or high-order information (Zhu et al., 2021a)can also be exploited along with the dynamic sample sieve. Besides, the current structure is readyto include other techniques such as mixup (Zhang et al., 2018).Quality of our sample sieve: Figure 3 shows the F-scores of sieved clean examples with trainingepochs on the symmetric and the instance-based label noise. 
F-score quantifies the quality of thesample sieve by the harmonic mean of precision (ratio of actual cleans examples in sieved cleanones) and recall (ratio of sieved cleans examples in actual clean ones). We compare CORES2withCo-teaching and Co-teaching+. Note the F-scores of CORES2and Co-teaching are consistently highon the symmetric noise, while CORES2achieves higher performance on the challenging instance-based label noise, especially with the 60% noise rate where the other two methods have low F-scores.Experiments on CIFAR-10, CIFAR-100 and Clothing1M: In this section, we compare CORES2with several state-of-the-art methods on CIFAR-10 and CIFAR-100 under instance-based, symmet-ric and asymmetric label noise settings, which is shown on Table 1 and Table 2. CORES2?denotesthat we apply consistency training on the corrupted examples after the sample sieve. For a fair com-parison, all the methods use ResNet-34 as the backbone. By comparing the performance of CE onthe symmetric and the instance-based label noise, we note the instance-based label noise is a morechallenging setting. Even though some methods (e.g., LDMI) behaves well on symmetric and asym-metric label noise, they may reach low test accuracies on the instance-based label noise, especiallywhen the noise rate is high or the dataset is more complex. However, CORES2consistently workswell on the instance-based label noise and adding the consistency training gets better results. Ta-ble 3 verifies CORES2on Clothing1M, a dataset with real human label noise. Compared to the other8Published as a conference paper at ICLR 2021Table 1: Comparison of test accuracies on clean datasets under instance-based label noise.MethodInst. CIFAR10 Inst. CIFAR100"= 0:2"= 0:4"= 0:6"= 0:2"= 0:4"= 0:6Cross Entropy 87.16 75.16 44.64 58.72 41.14 25.29ForwardT(Patrini et al., 2017) 88.08 82.67 41.57 58.95 41.68 22.83LDMI(Xu et al., 2019) 88.80 82.70 70.54 58.66 41.77 28.00Lq(Zhang & Sabuncu, 2018) 86.45 69.02 32.94 58.18 40.32 23.13SCE (Wang et al., 2019) 89.11 72.04 44.83 59.87 41.76 23.41Co-teaching (Han et al., 2018) 88.66 69.50 34.61 43.03 23.13 7.07Co-teaching+ (Yu et al., 2019) 89.04 69.15 33.33 41.84 24.40 8.74JoCoR (Wei et al., 2020) 88.71 68.97 30.27 44.28 22.77 7.54Peer Loss (Liu & Guo, 2020) 89.33 81.09 73.73 59.92 45.76 33.61CORES289.50 82.84 79.66 61.25 47.81 37.85CORES2?95.42 88.45 85.53 72.91 70.66 63.08Table 2: Comparison of test accuracies on clean datasets under symmetric/asymmetric label noise.MethodSymm. CIFAR10 Asymm. CIFAR10 Symm. CIFAR100 Asymm. CIFAR100"= 0:4"= 0:6"= 0:2"= 0:3"= 0:4"= 0:6"= 0:2"= 0:3Cross Entropy 81.88 74.14 88.59 86.14 48.20 37.41 59.20 51.40MAE (Ghosh et al., 2017) 61.63 41.98 59.67 57.62 7.68 6.45 11.16 8.97ForwardT(Patrini et al., 2017) 83.27 75.34 89.42 88.25 53.04 41.59 64.86 64.72Lq(Zhang & Sabuncu, 2018) 87.13 82.54 89.33 85.45 61.77 53.16 66.59 61.45LDMI(Xu et al., 2019) 83.04 76.51 89.04 87.88 52.32 40.00 60.04 52.82NLNL (Kim et al., 2019) 92.43 88.32 93.35 91.80 66.39 56.51 63.12 54.87SELF (Nguyen et al., 2019) 91.13 - 93.75 92.42 66.71 - 70.53 65.09CORES2?93.76 89.78 95.18 94.67 72.22 59.16 75.19 73.81Table 3: The best epoch (clean) test accuracy for each method on Clothing1M.MethodCE Forward T Co-teaching JoCoR LDMI PTD-R-V CORES2(Baseline) (Patrini et al., 2017) (Han et al., 2018) (Wei et al., 2020) (Xu et al., 2019) (Xia et al., 2020) (our)Acc. 68.94 70.83 69.21 70.30 72.46 71.67 73.24approaches, CORES2also works fairly well on the Clothing1M dataset. See more experiments inAppendix D. 
We also provide source code with detailed instructions in the supplementary materials.

5 CONCLUSIONS
This paper introduces CORES2, a sample sieve that is guaranteed to be robust to general instance-dependent label noise and to sieve out corrupted examples, without using explicit knowledge of the noise rates of labels. The analysis of CORES2 assumed that the Bayes optimal labels are the same as the clean labels. Future directions of this work include extensions to more general cases where the Bayes optimal labels may differ from the clean labels. We are also interested in exploring different possible designs of robust training with sieved examples.

Acknowledgement: This work is partially supported by the National Science Foundation (NSF) under grant IIS-2007951 and the Office of Naval Research under grant N00014-20-1-22.

CONDITIONS REQUIRED FOR THEOREM 1
Theorem 1 holds based on the following three assumptions:
A1. The model capacity is infinite (i.e., it can realize arbitrary variation).
A2. The model is updated using the gradient descent algorithm (i.e., updates follow the direction of decreasing $\mathbb{E}_{\mathcal{D}}[\ell(f(X), Y)] - \mathbb{E}_{\mathcal{D}_Y}[\mathbb{E}_{\mathcal{D}_X}[\ell(f(X), Y)]]$).
A3. The derivative of the network function, $\partial f(x; w)/\partial w_i$, is smooth (i.e., the network function has no singular point), where the $w_i$'s are model parameters.
UQ4bxPhGPDK
Review for "Learning with Instance-Dependent Label Noise: A Sample Sieve Approach"
6: Marginally above acceptance threshold
The authors of the paper propose a new method, CORES (COnfidence REgularized Sample Sieve), to tackle the important problem of learning under instance-dependent label noise. The proposed method, in essence, involves the use of a confidence regularization term that encourages more confident predictions and a sieving process to remove the samples with large losses. Theoretical justification and empirical experiments were conducted to demonstrate the effectiveness of the proposed method.

All in all, the paper is clearly written and easy to follow. The proposed method seems technically sound and the motivation for the proposal is explained clearly. One major complaint I have about the paper is its lack of novelty. The two important building blocks of the paper, the confidence regularizer and the sample sieve, are derived from previous papers. Specifically, in my opinion, the confidence regularizer is a marginal extension of the "peer loss" [1], and the sample sieve algorithm is essentially the same as that proposed in [2], the only difference being a different choice of loss function for training and sieving, to the best of my knowledge and understanding. I think it is worth commenting on this very relevant line of work in Section 2.2. In addition, it would be interesting if the authors of the paper could offer some insights on why the proposed sieving strategy works better than the one previously proposed in [2] based on softmax probability. All in all, with the lack of novelty addressed above, I think the submission is marginally below the acceptance threshold.

Other comments: 1. I find the intuitive justification for confidence regularization in Section 2.1 to be quite unconvincing. Specifically, it was stated that "when model overfits to the noise, its predictions often become less confident". From my understanding, this is not necessarily true at all. In fact, it was previously demonstrated that deep NNs can even perfectly overfit to datasets with randomly assigned labels. From this perspective, wouldn't encouraging confidence make the model overfit harder to the noisy labels? I would appreciate it if the authors of the paper could provide further insights and intuition on why the introduced confidence regularization improves noise robustness.

[1] Yang Liu and Hongyi Guo. Peer loss functions: Learning from noisy labels without knowing noise rates. In Proceedings of the 37th International Conference on Machine Learning, ICML '20, 2020.
[2] Zhilu Zhang and Mert Sabuncu. Generalized cross entropy loss for training deep neural networks with noisy labels. In Advances in neural information processing systems, pp. 8778-8788, 2018.
--------------------------------------------------------------------------------------------------------------
The authors of the paper carefully addressed the concerns I raised above. As such, I am raising my score to a 6, and would like to recommend accepting this paper.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
B1gZV1HYvS
ICLR.cc/2020/Conference
2020
Multi-Agent Interactions Modeling with Correlated Policies
["Minghuan Liu", "Ming Zhou", "Weinan Zhang", "Yuzheng Zhuang", "Jun Wang", "Wulong Liu", "Yong Yu"]
In multi-agent systems, complex interacting behaviors arise due to the high correlations among agents. However, previous work on modeling multi-agent interactions from demonstrations is primarily constrained by assuming the independence among policies and their reward structures. In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework with explicit modeling of correlated policies by approximating opponents' policies, which can recover agents' policies that can regenerate similar interactions. Consequently, we develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL), which allows for decentralized training and execution. Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to the demonstrators and outperforms state-of-the-art multi-agent imitation learning methods. Our code is available at https://github.com/apexrl/CoDAIL.
["Multi-agent reinforcement learning", "Imitation learning"]
ABSTRACT
In multi-agent systems, complex interacting behaviors arise due to the high correlations among agents. However, previous work on modeling multi-agent interactions from demonstrations is primarily constrained by assuming the independence among policies and their reward structures. In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework with explicit modeling of correlated policies by approximating opponents' policies, which can recover agents' policies that can regenerate similar interactions. Consequently, we develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL), which allows for decentralized training and execution. Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to the demonstrators and outperforms state-of-the-art multi-agent imitation learning methods. Our code is available at https://github.com/apexrl/CoDAIL.

1 INTRODUCTION
Modeling complex interactions among intelligent agents from the real world is essential for understanding and creating intelligent multi-agent behaviors, and it is typically formulated as a multi-agent learning (MAL) problem in multi-agent systems. When the system dynamics are agnostic and non-stationary due to adaptive agents with implicit goals, multi-agent reinforcement learning (MARL) is the most commonly used technique for MAL. MARL has recently drawn much attention and achieved impressive progress on various non-trivial tasks, such as multi-player strategy games (OpenAI, 2018; Jaderberg et al., 2018), traffic light control (Chu et al., 2019), taxi-order dispatching (Li et al., 2019), etc.

A central challenge in MARL is to specify a good learning goal, as the agents' rewards are correlated and thus cannot be maximized independently (Bu et al., 2008). Without explicit access to the reward signals, imitation learning could be the most intuitive solution for learning good policies directly from demonstrations. Conventional solutions such as behavior cloning (BC) (Pomerleau, 1991) learn the policy in a supervised manner, requiring large amounts of data while suffering from compounding error (Ross & Bagnell, 2010; Ross et al., 2011). Inverse reinforcement learning (IRL) (Ng et al., 2000; Russell, 1998) alleviates these shortcomings by recovering a reward function, but obtaining the optimal policy is always expensive due to the forward reinforcement learning procedure in its inner loop. Generative adversarial imitation learning (GAIL) (Ho & Ermon, 2016) offers a better candidate through its model-free structure without compounding error, which is highly effective and scalable. However, real-world multi-agent interactions can be much more challenging to imitate because of the strong correlations among adaptive agents' policies and rewards. Consider a football coach who wants to win the league: he must devise targeted tactics against various opponents, in addition to accounting for the situation of his own team. Moreover, the multi-agent environment tends to give rise to more severe compounding errors with more expensive running costs.

Motivated by these challenges, we investigate the problem of modeling complicated multi-agent interactions from a pile of off-line demonstrations and recovering their on-line policies, which can regenerate analogous multi-agent behaviors.
Prior studies of multi-agent imitation learning typically limit the complexity of demonstrated interactions by assuming isolated reward structures (Barrett et al., 2017; Le et al., 2017; Lin et al., 2014; Waugh et al., 2013) and independence among per-agent policies, which overlooks the high correlations among agents (Song et al., 2018; Yu et al., 2019). In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework with correlated policies by approximating opponents' policies, in order to reach the opponents' actions that are inaccessible due to the concurrent execution of actions among agents when making decisions. Consequently, with the approximated opponents model, we develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL) suitable for learning correlated policies under our proposed framework, which allows for decentralized training and execution. We prove that our framework treats the demonstrator interactions as one of the $\epsilon$-Nash Equilibrium ($\epsilon$-NE) solutions under the recovered reward.

In experiments, we conduct multi-dimensional comparisons of both the reward gap between learned agents and demonstrators and the distribution divergence between demonstrations and the interacted trajectories regenerated from learned policies. Furthermore, the results reveal that CoDAIL can better recover correlated multi-agent policy interactions than other state-of-the-art multi-agent imitation learning methods in several multi-agent scenarios. We further illustrate the distributions of regenerated interactions, which indicates that CoDAIL yields the interaction behaviors closest to the demonstrators.

2 PRELIMINARIES
2.1 MARKOV GAME AND $\epsilon$-NASH EQUILIBRIUM
A Markov game (MG), or stochastic game (Littman, 1994), can be regarded as an extension of the Markov Decision Process (MDP). Formally, we define an MG with $N$ agents as a tuple $\langle N, S, A^{(1)}, \dots, A^{(N)}, P, r^{(1)}, \dots, r^{(N)}, \rho_0, \gamma\rangle$, where $S$ is the set of states, $A^{(i)}$ represents the action space of agent $i$, with $i \in \{1, 2, \dots, N\}$, $P: S \times A^{(1)} \times A^{(2)} \times \dots \times A^{(N)} \times S \to \mathbb{R}$ is the state transition probability distribution, $\rho_0: S \to \mathbb{R}$ is the distribution of the initial state $s_0$, and $\gamma \in [0, 1]$ is the discount factor. Each agent $i$ holds its policy $\pi^{(i)}(a^{(i)}|s): S \times A^{(i)} \to [0, 1]$ to make decisions and receives rewards defined by $r^{(i)}: S \times A^{(1)} \times A^{(2)} \times \dots \times A^{(N)} \to \mathbb{R}$. We use $-i$ to represent the set of agents except $i$, and variables without superscript $i$ to denote the concatenation of the corresponding variables of all agents (e.g., $\pi$ represents the joint policy and $a$ denotes the joint action). For an arbitrary function $f: \langle s, a\rangle \to \mathbb{R}$, we have $\mathbb{E}_\pi[f(s, a)] = \mathbb{E}_{s\sim P, a\sim\pi}[f(s, a)] \triangleq \mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^t f(s_t, a_t)\right]$, where $s_0 \sim \rho_0$, $a_t \sim \pi$, and $s_{t+1} \sim P(s_{t+1}|a_t, s_t)$. The objective of agent $i$ is to maximize its own total expected return $R^{(i)} \triangleq \mathbb{E}_\pi[r^{(i)}(s, a)] = \mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^t r^{(i)}(s_t, a_t)\right]$.

In Markov games, however, the reward function of each agent depends on the joint action. This implies that one's optimal policy must also depend on others' policies. As a solution concept for Markov games, the $\epsilon$-Nash equilibrium ($\epsilon$-NE) is commonly used; it extends the Nash equilibrium (NE) (Nash, 1951).

Definition 1. An $\epsilon$-NE is a strategy profile $(\pi^{(i)*}, \pi^{(-i)*})$ such that $\exists\, \epsilon > 0$:
$$v^{(i)}(s, \pi^{(i)*}, \pi^{(-i)*}) \ge v^{(i)}(s, \pi^{(i)}, \pi^{(-i)*}) - \epsilon, \quad \forall \pi^{(i)} \in \Pi^{(i)}, \qquad (1)$$
where $v^{(i)}(s, \pi^{(i)}, \pi^{(-i)}) = \mathbb{E}_{\pi^{(i)}, \pi^{(-i)}, s_0 = s}\left[\sum_{t=0}^{\infty}\gamma^t r^{(i)}(s_t, a^{(i)}_t, a^{(-i)}_t)\right]$ is the value function of agent $i$ at state $s$, and $\Pi^{(i)}$ is the set of policies available to agent $i$.

An $\epsilon$-NE is weaker than an NE and can be seen as a sub-optimal NE. Every NE is equivalent to an $\epsilon$-NE with $\epsilon = 0$.
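As a toy illustration of Definition 1 (ours, not from the paper), the sketch below computes the smallest $\epsilon$ for which a strategy profile of a two-player one-shot matrix game is an $\epsilon$-NE; in this degenerate case the value function reduces to an expected payoff, and the game and strategies are made up for the example.

```python
import numpy as np

def ne_epsilon(payoffs, p1, p2):
    """Smallest eps such that (p1, p2) is an eps-NE of a two-player
    one-shot game. payoffs[i] is player i's payoff matrix, with rows
    indexed by player 1's actions and columns by player 2's actions."""
    v1 = p1 @ payoffs[0] @ p2           # player 1's value at the profile
    v2 = p1 @ payoffs[1] @ p2           # player 2's value at the profile
    best1 = np.max(payoffs[0] @ p2)     # player 1's best-response value
    best2 = np.max(p1 @ payoffs[1])     # player 2's best-response value
    return max(best1 - v1, best2 - v2, 0.0)

# Matching pennies: the uniform profile is an exact NE (eps = 0), while a
# biased player-1 strategy leaves player 2 exploitable (eps = 0.2).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
uniform = np.array([0.5, 0.5])
print(ne_epsilon([A, -A], uniform, uniform))               # 0.0
print(ne_epsilon([A, -A], np.array([0.6, 0.4]), uniform))  # 0.2 (up to float error)
```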
2.2 GENERATIVE ADVERSARIAL IMITATION LEARNING
Imitation learning aims to learn a policy directly from expert demonstrations without any access to reward signals. In single-agent settings, such demonstrations come from behavior trajectories sampled with the expert policy, denoted as $\mathcal{D}_E = \{(s_t, a^{(i)}_t)\}_{t=0}^{\infty}$. However, in multi-agent settings, demonstrations are often interrelated trajectories, that is, trajectories sampled from the interactions of the policies of all agents, denoted as $\mathcal{D}_E = \{(s_t, a^{(1)}_t, \dots, a^{(N)}_t)\}_{t=0}^{\infty}$. For simplicity, we will use the term interactions for the concept of interrelated trajectories and reserve trajectories for a single agent.

Typically, behavior cloning (BC) and inverse reinforcement learning (IRL) are the two main approaches to imitation learning. Although IRL theoretically alleviates compounding error and outperforms BC, it is less efficient since it requires solving an RL problem inside the learning loop. Recently proposed work aims to learn the policy without estimating the reward function directly; notably GAIL (Ho & Ermon, 2016), which takes advantage of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) and shows that IRL is the dual problem of occupancy measure matching. GAIL regards the environment as a black box, which is non-differentiable but can be leveraged through Monte-Carlo estimation of policy gradients. Formally, its objective can be expressed as
$$\min_{\pi}\max_{D}\ \mathbb{E}_{\pi_E}[\log D(s, a)] + \mathbb{E}_{\pi}[\log(1 - D(s, a))] - \lambda H(\pi), \qquad (2)$$
where $D$ is a discriminator that identifies the expert trajectories from those sampled with policy $\pi$, which in turn tries to maximize its evaluation by $D$; $H(\pi)$ is the causal entropy of the policy and $\lambda$ is a hyperparameter.

2.3 CORRELATED POLICY
In multi-agent learning tasks, each agent $i$ makes decisions independently, while the resulting reward $r^{(i)}(s_t, a^{(i)}_t, a^{(-i)}_t)$ depends on others' actions, which makes its cumulative return subject to the joint policy $\pi$. One common approach to modeling the joint policy is to decouple $\pi$ by assuming conditional independence of actions across agents (Albrecht & Stone, 2018):
$$\pi(a^{(i)}, a^{(-i)}|s) \approx \pi^{(i)}(a^{(i)}|s)\,\pi^{(-i)}(a^{(-i)}|s). \qquad (3)$$
However, such a non-correlated factorization of the joint policy is a vulnerable simplification that ignores the influence of opponents (Wen et al., 2019), and the learning process of agent $i$ lacks stability since the environment dynamics depend not only on the current state but also on the joint action of all agents (Tian et al., 2019). To address this, recent work has taken opponents into consideration by decoupling the joint policy into a correlated policy conditioned on the state $s$ and $a^{(-i)}$:
$$\pi(a^{(i)}, a^{(-i)}|s) = \pi^{(i)}(a^{(i)}|s, a^{(-i)})\,\pi^{(-i)}(a^{(-i)}|s), \qquad (4)$$
where $\pi^{(i)}(a^{(i)}|s, a^{(-i)})$ is the conditional policy, with which agent $i$ considers all potential actions of its opponent policies $\pi^{(-i)}(a^{(-i)}|s)$ and makes decisions through the marginal policy $\pi^{(i)}(a^{(i)}|s) = \int_{a^{(-i)}} \pi^{(i)}(a^{(i)}|s, a^{(-i)})\,\pi^{(-i)}(a^{(-i)}|s)\,\mathrm{d}a^{(-i)} = \mathbb{E}_{a^{(-i)}\sim\pi^{(-i)}}\big[\pi^{(i)}(a^{(i)}|s, a^{(-i)})\big]$.

3 METHODOLOGY
3.1 GENERALIZE CORRELATED POLICIES TO MULTI-AGENT IMITATION LEARNING
In multi-agent settings, agent $i$ with policy $\pi^{(i)}$ seeks to maximize its cumulative reward against demonstrator opponents equipped with the demonstrated policies $\pi^{(-i)}_E$ via reinforcement learning:
$$\mathrm{RL}^{(i)}(r^{(i)}) = \arg\max_{\pi^{(i)}}\ \lambda H(\pi^{(i)}) + \mathbb{E}_{\pi^{(i)}, \pi^{(-i)}_E}\big[r^{(i)}(s, a^{(i)}, a^{(-i)})\big], \qquad (5)$$
where $H(\pi^{(i)})$ is the $\gamma$-discounted entropy (Bloem & Bambos, 2014; Haarnoja et al., 2017) of policy $\pi^{(i)}$ and $\lambda$ is a hyperparameter.
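To illustrate how the marginal policy in Eq. (4) can be obtained in practice, here is a small Monte-Carlo sketch (with made-up toy distributions; none of this is the paper's code): opponent actions are sampled from $\pi^{(-i)}(\cdot|s)$ and the conditional policy is averaged over them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy opponent marginal pi^{(-i)}(a_opp | s) over 3 opponent actions.
def opponent_policy(s):
    return np.array([0.2, 0.5, 0.3])

# Toy conditional policy pi^{(i)}(a | s, a_opp) over 2 own actions.
COND = np.array([[0.9, 0.1],
                 [0.5, 0.5],
                 [0.2, 0.8]])

def conditional_policy(s, a_opp):
    return COND[a_opp]

def marginal_policy(s, n_samples=100_000):
    """Monte-Carlo estimate of
    pi^{(i)}(a | s) = E_{a_opp ~ pi^{(-i)}}[ pi^{(i)}(a | s, a_opp) ]."""
    a_opp = rng.choice(3, size=n_samples, p=opponent_policy(s))
    return conditional_policy(s, a_opp).mean(axis=0)

# Exact marginal: 0.2*[0.9,0.1] + 0.5*[0.5,0.5] + 0.3*[0.2,0.8] = [0.49, 0.51]
print(marginal_policy(s=None))
```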
By coupling with Eq. (5), we define an IRL procedure to find a reward function $r^{(i)}$ such that the demonstrated joint policy outperforms all other policies, with the regularizer $\psi: \mathbb{R}^{S \times A^{(1)} \times \dots \times A^{(N)}} \to \mathbb{R}$:

$$\mathrm{IRL}^{(i)}_{\psi}(\pi^{(-i)}_E) = \arg\max_{r^{(i)}} \ -\psi(r^{(i)}) - \max_{\pi^{(i)}}\Big(\lambda H(\pi^{(i)}) + \mathbb{E}_{\pi^{(i)}, \pi^{(-i)}_E}\big[r^{(i)}(s, a^{(i)}, a^{(-i)})\big]\Big) + \mathbb{E}_{\pi_E}\big[r^{(i)}(s, a^{(i)}, a^{(-i)})\big]. \qquad (6)$$

It is worth noting that we cannot obtain the demonstrated policies from the demonstrations directly. To solve this problem, we first introduce the occupancy measure, namely the unnormalized distribution of $\langle s, a\rangle$ pairs corresponding to the agent interactions navigated by the joint policy $\pi$:

$$\rho_{\pi}(s, a) = \pi(a|s) \sum_{t=0}^{\infty} \gamma^{t} P(s_t = s\,|\,\pi). \qquad (7)$$

With the definition in Eq. (7), we can further formulate $\rho_{\pi}$ from agent $i$'s perspective as

$$\rho_{\pi}(s, a^{(i)}, a^{(-i)}) = \pi(a^{(i)}, a^{(-i)}|s) \sum_{t=0}^{\infty} \gamma^{t} P(s_t = s\,|\,\pi^{(i)}, \pi^{(-i)}) = \rho_{\pi^{(i)}, \pi^{(-i)}}(s, a^{(i)}, a^{(-i)})$$
$$= \begin{cases} \underbrace{\pi^{(i)}(a^{(i)}|s)\, \pi^{(-i)}(a^{(-i)}|s)}_{\text{non-correlated form}}\ \sum_{t=0}^{\infty} \gamma^{t} P(s_t = s\,|\,\pi^{(i)}, \pi^{(-i)}) \\ \underbrace{\pi^{(i)}(a^{(i)}|s, a^{(-i)})\, \pi^{(-i)}(a^{(-i)}|s)}_{\text{correlated form}}\ \sum_{t=0}^{\infty} \gamma^{t} P(s_t = s\,|\,\pi^{(i)}, \pi^{(-i)}), \end{cases} \qquad (8)$$

where $a^{(i)} \sim \pi^{(i)}$ and $a^{(-i)} \sim \pi^{(-i)}$. Furthermore, with the support of Eq. (8), we have

$$\mathbb{E}_{\pi^{(i)}, \pi^{(-i)}}[\cdot] = \mathbb{E}_{s \sim P,\, a^{(-i)} \sim \pi^{(-i)}}\big[\mathbb{E}_{a^{(i)} \sim \pi^{(i)}}[\cdot]\big] = \sum_{s, a^{(i)}, a^{(-i)}} \rho_{\pi^{(i)}, \pi^{(-i)}}(s, a^{(i)}, a^{(-i)})\, [\cdot]. \qquad (9)$$

In analogy to the definition of the occupancy measure in a single-agent environment, we follow the derivation of Ho & Ermon (2016) and state the conclusion directly.¹

Proposition 1. The IRL regarding demonstrator opponents is a dual form of an occupancy-measure matching problem with regularizer $\psi$, and the induced optimal policy is the primal optimum. Specifically, the policy learned by RL on the reward recovered by IRL can be characterized by

$$\mathrm{RL}^{(i)} \circ \mathrm{IRL}^{(i)}_{\psi} = \arg\min_{\pi^{(i)}} \ -\lambda H(\pi^{(i)}) + \psi^{*}\big(\rho_{\pi^{(i)}, \pi^{(-i)}_E} - \rho_{\pi_E}\big). \qquad (10)$$

By setting the regularizer $\psi = \psi_{\mathrm{GA}}$ as in Ho & Ermon (2016), we can obtain a GAIL-like imitation algorithm that learns $\pi^{(i)}_E$ from $\tau_E$ given demonstrator counterparts $\pi^{(-i)}_E$ by introducing the adversarial training procedure of GANs, which leads to a saddle point $(\pi^{(i)}, D^{(i)})$:

$$\min_{\pi^{(i)}} \max_{D^{(i)}} \ -\lambda H(\pi^{(i)}) + \mathbb{E}_{\pi_E}\big[\log D^{(i)}(s, a^{(i)}, a^{(-i)})\big] + \mathbb{E}_{\pi^{(i)}, \pi^{(-i)}_E}\big[\log(1 - D^{(i)}(s, a^{(i)}, a^{(-i)}))\big], \qquad (11)$$

where $D^{(i)}$ denotes the discriminator for agent $i$, which plays the role of a surrogate cost function and guides the policy learning.

However, such an algorithm is not practical, since we are unable to access the policies of the demonstrator opponents $\pi^{(-i)}_E$: demonstrated policies are always given through sets of interaction data. To alleviate this deficiency, it is necessary to work with accessible counterparts. We therefore propose Proposition 2.

Proposition 2. Let $\hat{\pi}^{(-i)}$ be an arbitrary function of the same form as $\pi^{(-i)}$; then

$$\mathbb{E}_{\pi^{(i)}, \pi^{(-i)}}[\cdot] = \mathbb{E}_{\pi^{(i)}, \hat{\pi}^{(-i)}}\bigg[\frac{\rho_{\pi^{(i)}, \pi^{(-i)}}(s, a^{(i)}, a^{(-i)})}{\rho_{\pi^{(i)}, \hat{\pi}^{(-i)}}(s, a^{(i)}, a^{(-i)})}\, \cdot\bigg].$$

Proof. Substitute $\pi^{(-i)}$ within Eq. (9) by importance sampling.

¹ Note that Ho & Ermon (2016) proved the conclusion under the goal of minimizing the cost instead of maximizing the reward of an agent.

Proposition 2 makes the important point that a term of importance weights can quantify the demonstrator opponents. By replacing $\pi^{(-i)}_E$ with $\pi^{(-i)}$, Eq. (11) is equivalent to

$$\min_{\pi^{(i)}} \max_{D^{(i)}} \ -\lambda H(\pi^{(i)}) + \mathbb{E}_{\pi_E}\big[\log D^{(i)}(s, a^{(i)}, a^{(-i)})\big] + \mathbb{E}_{\pi^{(i)}, \pi^{(-i)}}\big[\alpha \log(1 - D^{(i)}(s, a^{(i)}, a^{(-i)}))\big], \qquad (12)$$

where $\alpha = \frac{\rho_{\pi^{(i)}, \pi^{(-i)}_E}(s, a^{(i)}, a^{(-i)})}{\rho_{\pi^{(i)}, \pi^{(-i)}}(s, a^{(i)}, a^{(-i)})}$ is the importance sampling weight. In practice, it is challenging to estimate these densities, and the learning method might suffer from large variance. Thus, we fix $\alpha = 1$ in our implementation; as the experimental results show, this has no significant influence on performance. A similar approach can be found in Kostrikov et al. (2018).
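The identity in Proposition 2 is ordinary importance sampling. The following toy check (an illustration, not the paper's implementation) contrasts the reweighted estimator with the $\alpha = 1$ approximation used in practice; the discrete distributions stand in for the occupancy measures.

```python
# A toy check of the importance-sampling identity behind Proposition 2:
# an expectation under p can be estimated from samples of a surrogate q by
# reweighting with alpha = p(x)/q(x). Distributions and the integrand are
# illustrative stand-ins, not the actual occupancy measures.
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.7, 0.2, 0.1])   # stands in for rho_{pi^(i), pi_E^(-i)}
q = np.array([0.3, 0.3, 0.4])   # stands in for rho_{pi^(i), pi^(-i)}
f = np.array([1.0, 5.0, -2.0])  # an arbitrary integrand

exact = np.sum(p * f)                      # true E_p[f] = 1.5

x = rng.choice(3, size=100_000, p=q)       # sample from the surrogate q
alpha = p[x] / q[x]                        # importance weights
weighted = np.mean(alpha * f[x])           # unbiased estimate of E_p[f]
unweighted = np.mean(f[x])                 # the alpha = 1 approximation (= E_q[f])

print(exact, weighted, unweighted)
```

Fixing $\alpha = 1$ trades an unbiased but high-variance estimator for a biased, low-variance one, which the paper reports to work well empirically.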
So far, we have built a multi-agent imitation learning framework that can be easily generalized to correlated or non-correlated policy settings. No prior has to be specified in advance, since the discriminator is able to learn the implicit goal of each agent.

3.2 LEARNING WITH THE OPPONENTS MODEL

With the objective shown in Eq. (11), demonstrated interactions can be imitated by alternately updating the discriminators, which offer surrogate rewards, and the policies. Formally, the update of the discriminator for each agent $i$ can be expressed as

$$\nabla_{\omega} J_D(\omega) = \mathbb{E}_{s \sim P,\, a^{(i)} \sim \pi^{(i)}}\bigg[\int_{a^{(-i)}} \pi^{(-i)}(a^{(-i)}|s, a^{(i)})\, \nabla_{\omega} \log\big(1 - D^{(i)}_{\omega}(s, a^{(i)}, a^{(-i)})\big)\, \mathrm{d}a^{(-i)}\bigg] + \mathbb{E}_{(s, a^{(i)}, a^{(-i)}) \sim \tau_E}\big[\nabla_{\omega} \log D^{(i)}_{\omega}(s, a^{(i)}, a^{(-i)})\big], \qquad (13)$$

and the update of the policy is

$$\nabla_{\theta} J_{\pi}(\theta) = \mathbb{E}_{s \sim P,\, a^{(i)} \sim \pi^{(i)}}\bigg[\nabla_{\theta^{(i)}} \int_{a^{(-i)}} \pi^{(-i)}(a^{(-i)}|s, a^{(i)})\, A^{(i)}(s, a^{(i)}, a^{(-i)})\, \mathrm{d}a^{(-i)}\bigg] - \nabla_{\theta^{(i)}} \lambda H(\pi^{(i)}), \qquad (14)$$

where the discriminator $D^{(i)}$ is parametrized by $\omega$ and the policy $\pi^{(i)}$ is parametrized by $\theta$. It is worth noting that agent $i$ considers the opponents' actions $a^{(-i)}$ while updating its policy and discriminator, integrating over all their possible decisions to find the optimal response. However, it is unrealistic for agent $i$ to have access to the opponents' joint policy $\pi^{(-i)}(a^{(-i)}|s)$. Instead, it is possible to estimate opponents' actions by approximating $\pi^{(-i)}(a^{(-i)}|s)$ via opponent modeling. To that end, we construct a function $\sigma^{(-i)}(a^{(-i)}|s): S \times A^{(1)} \times \dots \times A^{(i-1)} \times A^{(i+1)} \times \dots \times A^{(N)} \to [0, 1]^{N-1}$ as the approximation of the opponents for each agent $i$. Then we rewrite Eq. (13) and Eq. (14) as

$$\nabla_{\omega} J_D(\omega) \approx \mathbb{E}_{s \sim P,\, \hat{a}^{(-i)} \sim \sigma^{(-i)},\, a^{(i)} \sim \pi^{(i)}}\big[\nabla_{\omega} \log\big(1 - D^{(i)}_{\omega}(s, a^{(i)}, \hat{a}^{(-i)})\big)\big] + \mathbb{E}_{(s, a^{(i)}, a^{(-i)}) \sim \tau_E}\big[\nabla_{\omega} \log D^{(i)}_{\omega}(s, a^{(i)}, a^{(-i)})\big] \qquad (15)$$

and

$$\nabla_{\theta} J_{\pi}(\theta) \approx \mathbb{E}_{s \sim P,\, \hat{a}^{(-i)} \sim \sigma^{(-i)},\, a^{(i)} \sim \pi^{(i)}}\big[\nabla_{\theta^{(i)}} \log \pi^{(i)}(a^{(i)}|s, \hat{a}^{(-i)})\, A^{(i)}(s, a^{(i)}, \hat{a}^{(-i)})\big] - \nabla_{\theta^{(i)}} \lambda H(\pi^{(i)}), \qquad (16)$$

respectively. Therefore, each agent $i$ must infer the opponents model $\sigma^{(-i)}$ to approximate the unobservable policies $\pi^{(-i)}$, which can be achieved via supervised learning. Specifically, we learn $\sigma^{(-i)}$ in a discrete action space by minimizing a cross-entropy (CE) loss, and in a continuous action space by minimizing a mean-square-error (MSE) loss:

$$\mathcal{L} = \begin{cases} \frac{1}{2}\, \mathbb{E}_{s \sim p}\Big[\big(\sigma^{(-i)}(a^{(-i)}|s) - \pi^{(-i)}(a^{(-i)}|s)\big)^{2}\Big], & \text{continuous action space} \\ \mathbb{E}_{s \sim p}\Big[-\pi^{(-i)}(a^{(-i)}|s) \log \sigma^{(-i)}(a^{(-i)}|s)\Big], & \text{discrete action space}. \end{cases} \qquad (17)$$

With opponent modeling, agents can be trained in a fully decentralized manner. We name our algorithm Decentralized Adversarial Imitation Learning with Correlated policies (Correlated DAIL, a.k.a. CoDAIL) and present the training procedure in Appendix Algo. 1, which can easily be scaled to a distributed algorithm. As a comparison, we also present a non-correlated DAIL algorithm with the non-correlated policy assumption in Appendix Algo. 2.
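A minimal sketch of the discrete-action opponent model $\sigma^{(-i)}$ trained in the spirit of Eq. (17) might look as follows; the PyTorch architecture, sizes, and hyperparameters are illustrative assumptions rather than the authors' actual implementation, and the targets are observed opponent actions from rollouts.

```python
# A minimal sketch of a supervised opponent model sigma^(-i) for one
# opponent with a discrete action space: a small network maps states to a
# distribution over the opponent's actions and is fit by cross-entropy on
# opponent actions observed during interaction. Sizes are assumptions.
import torch
import torch.nn as nn

STATE_DIM, N_OPP_ACTIONS = 8, 5

opponent_model = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_OPP_ACTIONS),          # logits of sigma^(-i)(a^(-i)|s)
)
optimizer = torch.optim.Adam(opponent_model.parameters(), lr=1e-3)
ce_loss = nn.CrossEntropyLoss()

def update_opponent_model(states, opp_actions):
    """One supervised step on a batch of (s_t, a^(-i)_t) pairs."""
    logits = opponent_model(states)
    loss = ce_loss(logits, opp_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch standing in for data collected during interaction.
states = torch.randn(32, STATE_DIM)
opp_actions = torch.randint(0, N_OPP_ACTIONS, (32,))
print(update_opponent_model(states, opp_actions))
```

At execution time, sampled $\hat{a}^{(-i)}$ from such a model feed the conditional policy $\pi^{(i)}(a^{(i)}|s, \hat{a}^{(-i)})$, which is what makes fully decentralized training and execution possible.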
3.3 THEORETICAL ANALYSIS

In this section, we prove that the reinforcement learning objective against demonstrator counterparts shown in the last section is essentially equivalent to reaching an $\epsilon$-NE.

Since we fix the policies of agents $-i$ as $\pi^{(-i)}_E$, the RL procedure in Eq. (5) can be regarded as a single-agent RL problem. Similarly, with a fixed $\pi^{(-i)}_E$, the IRL process of Eq. (6) is cast to a single-agent IRL problem, which recovers an optimal reward function $r^{(i)}$ under which the demonstrated joint policy $\pi_E$ achieves the best performance. Thus we have

$$\mathrm{RL}^{(i)}(r^{(i)}) = \arg\max_{\pi^{(i)}} \ \lambda H(\pi^{(i)}) + \mathbb{E}_{\pi^{(i)}, \pi^{(-i)}_E}\big[r^{(i)}(s, a^{(i)}, a^{(-i)})\big] = \pi^{(i)}_E. \qquad (18)$$

We can also rewrite Eq. (18) as

$$\lambda H(\pi^{(i)}_E) + \mathbb{E}_{\pi^{(i)}_E, \pi^{(-i)}_E}\big[r^{(i)}(s, a^{(i)}, a^{(-i)})\big] \ge \lambda H(\pi^{(i)}) + \mathbb{E}_{\pi^{(i)}, \pi^{(-i)}_E}\big[r^{(i)}(s, a^{(i)}, a^{(-i)})\big] \qquad (19)$$

for all $\pi^{(i)} \in \Pi^{(i)}$, which is equivalent to

$$\mathbb{E}_{a^{(i)}_t \sim \pi^{(i)}_E,\, a^{(-i)}_t \sim \pi^{(-i)}_E,\, s_0 = s}\bigg[\sum_{t=0}^{\infty} \gamma^{t} r^{(i)}(s_t, a^{(i)}_t, a^{(-i)}_t)\bigg] \ge \mathbb{E}_{a^{(i)}_t \sim \pi^{(i)},\, a^{(-i)}_t \sim \pi^{(-i)}_E,\, s_0 = s}\bigg[\sum_{t=0}^{\infty} \gamma^{t} r^{(i)}(s_t, a^{(i)}_t, a^{(-i)}_t)\bigg] + \lambda\big(H(\pi^{(i)}) - H(\pi^{(i)}_E)\big), \quad \forall \pi^{(i)} \in \Pi^{(i)}. \qquad (20)$$

Given the value function defined in Eq. (1) for each agent $i$, for $\lambda\big(H(\pi^{(i)}) - H(\pi^{(i)}_E)\big) < 0$, $\forall \pi^{(i)} \in \Pi^{(i)}$, we have

$$v^{(i)}(s; \pi^{(i)}_E, \pi^{(-i)}_E) \ge v^{(i)}(s; \pi^{(i)}, \pi^{(-i)}_E) - \lambda\big(H(\pi^{(i)}_E) - H(\pi^{(i)})\big). \qquad (21)$$

For $\lambda\big(H(\pi^{(i)}) - H(\pi^{(i)}_E)\big) \ge 0$, $\forall \pi^{(i)} \in \Pi^{(i)}$, we have

$$v^{(i)}(s; \pi^{(i)}_E, \pi^{(-i)}_E) \ge v^{(i)}(s; \pi^{(i)}, \pi^{(-i)}_E) + \lambda\big(H(\pi^{(i)}) - H(\pi^{(i)}_E)\big) \ge v^{(i)}(s; \pi^{(i)}, \pi^{(-i)}_E) - \lambda\big(H(\pi^{(i)}) - H(\pi^{(i)}_E)\big). \qquad (22)$$

Let $\epsilon = \max\big\{\lambda\big|H(\pi^{(i)}) - H(\pi^{(i)}_E)\big|,\ \forall \pi^{(i)} \in \Pi^{(i)}\big\}$; then we finally obtain

$$v^{(i)}(s; \pi^{(i)}_E, \pi^{(-i)}_E) \ge v^{(i)}(s; \pi^{(i)}, \pi^{(-i)}_E) - \epsilon, \quad \forall \pi^{(i)} \in \Pi^{(i)}, \qquad (23)$$

which is exactly the $\epsilon$-NE defined in Definition 1. We can further argue that $\epsilon$ is bounded by small values, so that the $\epsilon$-NE solution concept remains meaningful. Generally, random policies of high entropy are not considered sub-optimal solutions or demonstrated policies $\pi^{(i)}_E$ in most reinforcement learning environments. Since we do not require such random policies, we can remove them from the candidate policy set $\Pi^{(i)}$, which implies that $H(\pi^{(i)})$ is bounded by small values, and so is $\epsilon$. Empirically, we adopt a small $\lambda$ and attain the demonstrator policy $\pi_E$ with an efficient learning algorithm, so that it becomes a close-to-optimal solution.

Thus, we conclude that the objective of our CoDAIL assumes that the demonstrated policies constitute an $\epsilon$-NE solution concept (not necessarily unique), which can be controlled by the hyperparameter $\lambda$, under some specific reward function from which the agent learns a policy. It is worth noting that Yu et al. (2019) claimed that NE is incompatible with maximum-entropy inverse reinforcement learning (MaxEnt IRL) because NE assumes that the agent never takes sub-optimal actions. Nevertheless, we prove that, given demonstrator opponents, the multi-agent MaxEnt IRL defined in Eq. (6) is equivalent to finding an $\epsilon$-NE.
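As a toy illustration of the bound above (not from the paper), one can compute $\epsilon = \max \lambda|H(\pi^{(i)}) - H(\pi^{(i)}_E)|$ over a finite candidate set of single-state discrete policies; the candidate set and $\lambda$ below are assumptions.

```python
# A toy illustration of the final step of Section 3.3: over a finite set of
# (single-state, discrete) candidate policies, eps is lambda times the
# largest entropy gap to the demonstrator policy. Values are assumptions.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    return -np.sum(p[p > 0] * np.log(p[p > 0]))

lam = 0.05
pi_E = np.array([0.85, 0.10, 0.05])          # a close-to-deterministic demonstrator
candidates = [
    np.array([0.90, 0.05, 0.05]),
    np.array([0.60, 0.30, 0.10]),
    np.array([1.00, 0.00, 0.00]),
]

eps = lam * max(abs(entropy(pi) - entropy(pi_E)) for pi in candidates)
print(eps)  # stays small if high-entropy (near-uniform) policies are excluded
```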
4 RELATED WORK

Although non-correlated policy learning guided by a centralized critic has shown excellent properties in a number of methods, including MADDPG (Lowe et al., 2017), COMA (Foerster et al., 2018), and MA Soft-Q (Wei et al., 2018), it falls short in modeling complex interactions because its decision making relies on the independent-policy assumption, which considers only private observations while ignoring the impact of opponent behaviors. To behave more rationally, agents must take other agents into consideration, which leads to the study of opponent modeling (Albrecht & Stone, 2018), where an agent models how its opponents behave based on the interaction history when making decisions (Claus & Boutilier, 1998; Greenwald et al., 2003; Wen et al., 2019; Tian et al., 2019).

For multi-agent imitation learning, however, prior works fail to learn from complicated demonstrations, and many of them are bound to particular reward assumptions. For instance, Bhattacharyya et al. (2018) proposed Parameter Sharing Generative Adversarial Imitation Learning (PS-GAIL), which adopts a parameter-sharing trick to extend GAIL to multi-agent problems directly, but it does not utilize the properties of Markov games and imposes strong constraints on the action space and the reward function. Besides, many works built on Markov games are restricted to tabular representations and known dynamics with specific priors on reward structures, such as fully cooperative games (Barrett et al., 2017; Le et al., 2017; Šošić et al., 2016; Bogert & Doshi, 2014), two-player zero-sum games (Lin et al., 2014), two-player general-sum games (Lin et al., 2018), and linear combinations of specific features (Reddy et al., 2012; Waugh et al., 2013).

Recently, some researchers have taken advantage of GAIL to solve Markov games. Inspired by a specific choice of Lagrange multipliers for a constrained optimization problem (Yu et al., 2019), Song et al. (2018) derived a performance gap for multiple agents from NE and proposed multi-agent GAIL (MA-GAIL), where the reward function of each agent is formulated using private actions and observations. As an improvement, Yu et al. (2019) presented multi-agent adversarial inverse reinforcement learning (MA-AIRL), based on the logistic stochastic best response equilibrium and MaxEnt IRL. However, both are inadequate for modeling agent interactions with correlated policies, as they use independent discriminators. By contrast, our approach generalizes correlated policies to model the interactions from demonstrations and employs a fully decentralized training procedure without access to the specific opponent policies.

Besides recovering agents' policies from demonstrations so that similar interaction data can be regenerated, other works consider different aspects of interactions. Grover et al. (2018) proposed to learn a policy representation function of the agents based on their interactions, together with sets of generalization tasks that use the learned policy embeddings. They regarded interactions as episodes containing only k agents (k = 2 in their paper), which constructs an agent-interaction graph. Different from us, they focused on the potential relationships among agents to help characterize agent behaviors. Besides, Kuhnt et al. (2016) and Gindele et al. (2015) proposed to use a Dynamic Bayesian Model that describes physical relationships among vehicles and driving behaviors to model interaction-dependent behaviors in autonomous-driving scenarios.

Correlated policy structures that help agents consider the influence of other agents usually require opponent modeling (Albrecht & Stone, 2018) to infer others' actions. Opponent modeling has a rich history in MAL (Billings et al., 1998; Ganzfried & Sandholm, 2011), and much recent research has developed useful approaches for different settings in deep MARL, e.g., DRON (He et al., 2016) and ROMMEO (Tian et al., 2019). In this paper, we focus on imitation learning with correlated policies, and we adopt a natural and straightforward form of opponent modeling: learning opponents' policies by supervised learning on historical trajectories. Opponent models are used in both the training and the execution stages.

5 EXPERIMENTS

5.1 EXPERIMENTAL SETTINGS

Environment Description. We test our method on the Particle World Environments (Lowe et al., 2017), a popular benchmark for evaluating multi-agent algorithms that includes several cooperative and competitive tasks.

Table 1: Average reward gaps between demonstrators and learned agents in 2 cooperative tasks. Means and standard deviations are taken across different random seeds.

  Algorithm     | Coop.-Comm.      | Coop.-Navi.
  Demonstrators | 0 ± 0            | 0 ± 0
  MA-AIRL       | 0.780 ± 0.917    | 6.696 ± 3.646
  MA-GAIL       | 0.638 ± 0.624    | 7.596 ± 3.088
  NC-DAIL       | 0.692 ± 0.597    | 6.912 ± 3.971
  CoDAIL        | 0.632 ± 0.685    | 6.249 ± 2.779
  Random        | 186.001 ± 16.710 | 322.112 ± 415.358

Table 2: Average reward gaps between demonstrators and learned agents in 2 competitive tasks, where 'agent+' and 'agent-' represent the 2 teams of agents and 'total' is their sum. Means and standard deviations are taken across different random seeds.

  Algorithm     | Keep-away: Total | Agent+          | Agent-          | Pred.-Prey: Total | Agent+          | Agent-
  Demonstrators | 0 ± 0            | 0 ± 0           | 0 ± 0           | 0 ± 0             | 0 ± 0           | 0 ± 0
  MA-AIRL       | 12.273 ± 1.817   | 4.149 ± 1.912   | 8.998 ± 4.345   | 279.535 ± 77.903  | 35.100 ± 1.891  | 174.235 ± 73.168
  MA-GAIL       | 1.963 ± 1.689    | 1.104 ± 1.212   | 1.303 ± 0.798   | 15.788 ± 10.887   | 4.800 ± 2.718   | 8.826 ± 3.810
  NC-DAIL       | 1.805 ± 1.695    | 1.193 ± 0.883   | 1.539 ± 1.188   | 27.611 ± 14.645   | 8.260 ± 7.087   | 6.975 ± 5.130
  CoDAIL        | 0.269 ± 0.078    | 0.064 ± 0.041   | 0.219 ± 0.084   | 10.456 ± 6.762    | 4.500 ± 3.273   | 4.359 ± 2.734
  Random        | 28.272 ± 2.968   | 25.183 ± 2.150  | 53.455 ± 2.409  | 100.736 ± 6.870   | 37.980 ± 2.396  | 13.204 ± 8.444
Specifically, we consider two cooperative scenarios and two competitive ones, as follows: 1) Cooperative-communication, with 2 agents and 3 landmarks, where an unmovable speaker, knowing the goal, cooperates with a listener to reach a particular landmark; the listener achieves the goal only through the messages from the speaker. 2) Cooperative-navigation, with 3 agents and 3 landmarks, where agents must cooperate via physical actions, and each agent is required to reach one landmark while avoiding collisions. 3) Keep-away, with 1 agent, 1 adversary, and 1 landmark, where the agent has to get close to the landmark, while the adversary is rewarded for pushing the agent away from the landmark without knowing the target. 4) Predator-prey, with 1 prey agent and 3 adversary predators, where the slower predator agents must cooperate to chase the faster prey agent, which tries to run away from the adversaries.

Experimental Details. We aim to compare the quality of interaction modeling in several aspects. To obtain interaction demonstrations sampled from correlated policies, we train the demonstrator agents via a MARL algorithm with opponent modeling, which takes others' policies into account in each agent's decision making, since the ground-truth reward in these simulated environments is accessible. Specifically, we modify the multi-agent version of ACKTR (Wu et al., 2017; Song et al., 2018), an efficient model-free policy gradient algorithm, by keeping an auxiliary opponents model and a conditioned policy for each agent, which transforms the originally centralized on-policy learning algorithm into a decentralized one. Note that we do not necessarily need experts that perform well in our designated environments. Instead, any demonstrator is treated as coming from an $\epsilon$-NE strategy concept under some unknown reward functions, which will be recovered by the discriminator. In our training procedure, we first obtain the demonstrator policies induced by the ground-truth rewards and then generate the demonstrations, i.e., the interaction data for imitation training. We then train the agents through the surrogate rewards from the discriminators. We compare CoDAIL with MA-AIRL, MA-GAIL, non-correlated DAIL (NC-DAIL; the only difference between MA-GAIL and NC-DAIL is whether the reward function depends on joint actions or individual actions), and a random agent. We do not apply any prior on the reward structure for any task, letting the discriminator learn the implicit goals. All training procedures are pre-trained via behavior cloning to reduce the sample complexity, and we use 200 episodes of demonstrations, each with a maximum of 50 timesteps.

5.2 REWARD GAP

Tab. 1 and Tab. 2 show the average absolute reward differences between learned agents and the demonstrators in cooperative and competitive tasks, respectively. The learned interactions are considered superior when the reward gaps are smaller. Since cooperative tasks are reward-sharing, we show only a group reward for each task in Tab. 1.
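A minimal sketch of the reported metric might look as follows; the exact aggregation protocol is an assumption here, and the return arrays are dummy placeholders for actual evaluation rollouts.

```python
# A minimal sketch of the metric in Tables 1 and 2: the absolute gap between
# the average episode return of learned agents and that of the demonstrators,
# aggregated over random seeds. Return values are dummy placeholders.
import numpy as np

def reward_gap(demo_returns, learned_returns_per_seed):
    """Mean and std over seeds of |mean learned return - mean demo return|."""
    demo_mean = np.mean(demo_returns)
    gaps = [abs(np.mean(r) - demo_mean) for r in learned_returns_per_seed]
    return np.mean(gaps), np.std(gaps)

demo_returns = np.array([-12.1, -11.8, -12.4])                 # dummy numbers
learned = [np.array([-12.9, -12.5]), np.array([-11.2, -11.6])]  # one array per seed
print(reward_gap(demo_returns, learned))                        # (mean, std) gap
```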
Compared to the baselines, CoDAIL achieves smaller gaps in both cooperative and competitive tasks, which suggests that our algorithm has a robust imitation learning capability for modeling the demonstrated interactions. It is also worth noting that CoDAIL achieves larger performance margins on competitive tasks than on cooperative ones; we attribute this to conflicting goals motivating more complicated interactions than a shared goal. Besides, MA-GAIL and NC-DAIL perform about the same, indicating that the surrogate reward structure matters less in these multi-agent scenarios. To our surprise, MA-AIRL does not perform well in some environments and even fails in Predator-prey. We list the raw obtained rewards in Appendix C, and we provide more hyperparameter sensitivity results in Appendix D.

Table 3: KL divergence between the learned agents' position distribution and the demonstrators' position distribution from an individual perspective in different scenarios. 'Total' is the KL divergence for state-action pairs of all agents, and 'Per' is the averaged KL divergence of each agent. Experiments are conducted under the same random seed. Note that unmovable agents are not recorded since they never move from the start point, and there is only one movable agent in Cooperative-communication.

  Algorithm     | Coop.-Comm.: Total/Per | Coop.-Navi.: Total | Per     | Keep-away: Total | Per     | Pred.-Prey: Total | Per
  Demonstrators | 0       | 0       | 0       | 0       | 0       | 0      | 0
  MA-AIRL       | 35.525  | 18.071  | 47.241  | 69.146  | 98.248  | 71.568 | 118.511
  MA-GAIL       | 34.681  | 15.034  | 45.550  | 51.721  | 69.820  | 9.998  | 27.116
  NC-DAIL       | 38.002  | 16.202  | 46.040  | 46.563  | 61.780  | 16.698 | 33.307
  CoDAIL        | 6.427   | 9.033   | 23.100  | 3.113   | 5.735   | 8.621  | 22.600
  Random        | 217.456 | 174.892 | 221.209 | 191.344 | 234.829 | 37.555 | 82.361

5.3 DIVERGENCE OVER INTERACTIONS

Since we aim to recover the interactions of agents generated by the learned policies, it is proper to evaluate the closeness between the distributions of the regenerated interactions and of the demonstration data. Specifically, we collect the positions of agents over hundreds of state-action tuples, which can be regarded as a low-dimensional projection of the state-action interactions. We start each episode from a different initial state, but the same one across algorithms for a given episode. We run all experiments under the same random seed and collect the positions of each agent over 100 episodes in total, each with a maximum of 50 timesteps.

We first estimate the distribution of positions $(x, y)$ via Kernel Density Estimation (KDE) (Rosenblatt, 1956) with a Gaussian kernel to compute the Kullback-Leibler (KL) divergence between the generated interactions and the demonstrated ones, shown in Tab. 3. In terms of the KL divergence between the regenerated and demonstrator interactions, CoDAIL evidently generates the interaction data with the minimum gap to the demonstration interactions and highly outperforms the other baseline methods. Besides, MA-GAIL and NC-DAIL show about-the-same ability to model complex interactions, while MA-AIRL behaves the worst, even worse than random agents on Predator-prey.
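The evaluation of Section 5.3 can be sketched as follows (an illustration with dummy position data, not the authors' script): fit Gaussian KDEs to the 2-D positions and Monte-Carlo estimate KL(demo || learned) from demonstration samples.

```python
# A minimal sketch of the Section 5.3 evaluation: fit Gaussian KDEs to 2-D
# agent positions from demonstrated and regenerated interactions, then
# Monte-Carlo estimate KL(demo || learned) from demonstration samples.
# The position arrays are dummy placeholders for collected rollout data.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
demo_xy = rng.normal(loc=[0.5, -0.5], scale=0.3, size=(2000, 2))     # dummy
learned_xy = rng.normal(loc=[0.4, -0.3], scale=0.4, size=(2000, 2))  # dummy

kde_demo = gaussian_kde(demo_xy.T)        # gaussian_kde expects shape (d, n)
kde_learned = gaussian_kde(learned_xy.T)

# KL(p || q) ~= mean over x ~ p of [log p(x) - log q(x)]
x = demo_xy.T
kl = np.mean(np.log(kde_demo(x)) - np.log(kde_learned(x)))
print(kl)
```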
5.4 VISUALIZATIONS OF INTERACTIONS

To further understand the interactions generated by the learned policies compared with the demonstrators, we visualize the interactions for the demonstrator policies and all learned ones. We plot the density distribution of positions $(x, y)$ and the marginal distributions of the $x$-position and $y$-position. We illustrate the results for Keep-away in Fig. 1; the other scenarios can be found in Appendix E. Higher-frequency positions in the collected data are colored darker in the plane, and they correspond to higher values in the marginal distributions.

[Figure 1 panels (density plots; images omitted): (a) Demonstrators (KL = 0); (b) MA-GAIL (KL = 51.721); (c) MA-AIRL (KL = 69.146); (d) CoDAIL (KL = 3.113); (e) NC-DAIL (KL = 46.563); (f) Random (KL = 191.344).]

Figure 1: The density and marginal distributions of agent positions over 100 repeated episodes with different initial states, generated from different learned policies on Keep-away. The top row of each sub-figure is drawn from the state-action pairs of all agents, while the bottom row shows each individual agent (KL denotes the KL divergence between the generated interactions shown in the top row and the demonstrators').

As shown in Fig. 1, the interaction densities of the demonstrators and the CoDAIL agents are highly similar (with the smallest KL divergence); both tend to move toward the lower-right side. In contrast, the other learned agents fail to recover the demonstrator interactions. It is worth noting that different policies can earn similar rewards through interaction yet still produce vastly different interactions. Such a result reminds us that the real reward is not the best metric for evaluating the quality of modeling demonstrated interactions, or of imitation learning in general (Li et al., 2017).

6 CONCLUSION

In this paper, we focus on modeling complex multi-agent interactions via imitation learning on demonstration data. We develop a decentralized adversarial imitation learning algorithm with correlated policies (CoDAIL) and approximated opponent modeling. CoDAIL allows for decentralized training and execution and is more capable of modeling correlated interactions from demonstrations, as shown by multi-dimensional comparisons against other state-of-the-art multi-agent imitation learning methods in several experimental scenarios. In the future, we will consider covering more imitation learning tasks and modeling the latent variables of policies for diverse multi-agent imitation learning.

ACKNOWLEDGEMENT

We sincerely thank Yaodong Yang for helpful discussion. The corresponding author Weinan Zhang is supported by NSFC (61702327, 61772333, 61632017). The author Minghuan Liu is supported by the Wu Wen Jun Honorary Doctoral Scholarship, AI Institute, Shanghai Jiao Tong University.
B1gXtGEb5S
Official Blind Review #3
6: Weak Accept
The authors propose a decentralized adversarial imitation learning algorithm with correlated policies, which recovers each agent's policy by approximating opponents' actions using opponent modeling. Extensive experimental results showed that the proposed framework, CoDAIL, better fits scenarios with correlated multi-agent policies. Generally, the paper follows the idea of GAIL and MA-GAIL. Differing from previous works, the paper introduces \epsilon-Nash equilibrium as the solution concept for multi-agent imitation learning in Markov games. It shows that using the concept of \epsilon-Nash equilibrium as a constraint is consistent with, and equivalent to, adding the difference between the causal entropy of the expert policy and the causal entropy of a candidate policy in the RL procedure. This makes sense. Below, I have a few concerns about the current status of the paper. 1. The authors propose \epsilon-Nash equilibrium to model the convergent state in multi-agent scenarios; however, in Section 3.1 the objective function of MA-RL (Equation 5) is still the discounted causal entropy of the policy, the same as in the MA-GAIL paper. It is unclear how the \epsilon-NE is considered in modeling the MA-RL problem. 2. Rather than assuming conditional independence of actions from different agents, the authors model the joint policy as a correlated policy conditioned on the state and all opponents' actions. With this new assumption, the paper re-defines the occupancy measure and introduces an approach to approximate the unobservable opponents' policies, in order to access opponents' actions. However, in Section 3.2, when discussing opponent modeling, the paper does not clearly explain how the joint opponent function \sigma^{(i)} is designed; the description of \sigma^{(i)} is confusing. 3. Typos: in Equation 14, "i" or "-i"; Appendix Algorithm 1, line 3, "pi" or "\pi".
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Multi-Agent Interactions Modeling with Correlated Policies ### Paper Abstract In multi-agent systems, complex interacting behaviors arise due to the high correlations among agents. However, previous work on modeling multi-agent interactions from demonstrations is primarily constrained by assuming the independence among policies and their reward structures. In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework with explicit modeling of correlated policies by approximating opponents’ policies, which can recover agents' policies that can regenerate similar interactions. Consequently, we develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL), which allows for decentralized training and execution. Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to the demonstrators and outperforms state-of-the-art multi-agent imitation learning methods. Our code is available at \url{https://github.com/apexrl/CoDAIL}. ### Paper Keywords ["Multi-agent reinforcement learning", "Imitation learning"] ### Paper Content ABSTRACTIn multi-agent systems, complex interacting behaviors arise due to the high cor-relations among agents. However, previous work on modeling multi-agent inter-actions from demonstrations is primarily constrained by assuming the indepen-dence among policies and their reward structures. In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning frame-work with explicit modeling of correlated policies by approximating opponents’policies, which can recover agents’ policies that can regenerate similar interac-tions. Consequently, we develop a Decentralized Adversarial Imitation Learn-ing algorithm with Correlated policies (CoDAIL), which allows for decentralizedtraining and execution. Various experiments demonstrate that CoDAIL can bet-ter regenerate complex interactions close to the demonstrators and outperformsstate-of-the-art multi-agent imitation learning methods. Our code is available athttps://github.com/apexrl/CoDAIL .1 I NTRODUCTIONModeling complex interactions among intelligent agents from the real world is essential for under-standing and creating intelligent multi-agent behaviors, which is typically formulated as a multi-agent learning (MAL) problem in multi-agent systems. When the system dynamics are agnostic andnon-stationary due to the adaptive agents with implicit goals, multi-agent reinforcement learning(MARL) is the most commonly used technique for MAL. MARL has recently drawn much atten-tion and achieved impressive progress on various non-trivial tasks, such as multi-player strategygames (OpenAI, 2018; Jaderberg et al., 2018), traffic light control (Chu et al., 2019), taxi-orderdispatching (Li et al., 2019) etc.A central challenge in MARL is to specify a good learning goal, as the agents’ rewards are cor-related and thus cannot be maximized independently (Bu et al., 2008). Without explicit access tothe reward signals, imitation learning could be the most intuitive solution for learning good policiesdirectly from demonstrations. 
Conventional solutions such as behavior cloning (BC) (Pomerleau,1991) learn the policy in a supervised manner by requiring numerous data while suffering from com-pounding error (Ross & Bagnell, 2010; Ross et al., 2011). Inverse reinforcement learning (IRL) (Nget al., 2000; Russell, 1998) alleviates these shortcomings by recovering a reward function but isalways expensive to obtain the optimal policy due to the forward reinforcement learning procedurein an inner loop. Generative adversarial imitation learning (GAIL) (Ho & Ermon, 2016) leaves abetter candidate for its model-free structure without compounding error, which is highly effectiveand scalable. However, real-world multi-agent interactions could be much challenging to imitate be-cause of the strong correlations among adaptive agents’ policies and rewards. Consider if a footballcoach wants to win the league, he must make targeted tactics against various opponents, in additionto the situation of his team. Moreover, the multi-agent environment tends to give rise to more severecompounding errors with more expensive running costs.Motivated by these challenges, we investigate the problem of modeling complicated multi-agentinteractions from a pile of off-line demonstrations and recover their on-line policies, which can re-generate analogous multi-agent behaviors. Prior studies for multi-agent imitation learning typicallylimit the complexity in demonstrated interactions by assuming isolated reward structures (Barrettet al., 2017; Le et al., 2017; Lin et al., 2014; Waugh et al., 2013) and independence in per-agent1Published as a conference paper at ICLR 2020policies that overlook the high correlations among agents (Song et al., 2018; Yu et al., 2019). In thispaper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learningframework with correlated policies by approximating opponents’ policies, in order to reach inacces-sible opponents’ actions due to concurrently execution of actions among agents when making deci-sions. Consequently, with approximated opponents model, we develop a Decentralized AdversarialImitation Learning algorithm with Correlated policies (CoDAIL) suitable for learning correlatedpolicies under our proposed framework, which allows for decentralized training and execution. Weprove that our framework treats the demonstrator interactions as one of -Nash Equilibrium ( -NE)solutions under the recovered reward.In experiments, we conduct multi-dimensional comparisons for both the reward gap between learnedagents and demonstrators, along with the distribution divergence between demonstrations and regen-erated interacted trajectories from learned policies. Furthermore, the results reveal that CoDAIL canbetter recover correlated multi-agent policy interactions than other state-of-the-art multi-agent im-itation learning methods in several multi-agent scenarios. We further illustrate the distributions ofregenerated interactions, which indicates that CoDAIL yields the closest interaction behaviors to thedemonstrators.2 P RELIMINARIES2.1 M ARKOV GAME AND-NASH EQUILIBRIUMMarkov game (MG), or stochastic game (Littman, 1994), can be regarded as an extensionof Markov Decision Process (MDP). Formally, we define an MG with Nagents as a tuplehN;S;A(1);:::;A(N);P;r(1);:::;r(N);0;i, whereSis the set of states, A(i)represents theaction space of agent i, wherei2f1;2;:::;Ng,P:SA(1)A(2)A(N)S! 
Risthe state transition probability distribution, 0:S!Ris the distribution of the initial state s0, and2[0;1]is the discounted factor. Each agent iholds its policy (i)(a(i)js) :SA(i)![0;1]tomake decisions and receive rewards defined as r(i):SA(1)A(2)A(N)!R. We useito represent the set of agents except i, and variables without superscript ito denote the concatenationof all variables for all agents (e.g., represents the joint policy and adenotes actions of all agents).For an arbitrary function f:hs;ai!R, there is a fact that E[f(s;a)] =EsP;a[f(s;a)],E[P1t=0tf(st;at)], wheres00,at,st+1P(st+1jat;st). The objective of agent iis tomaximize its own total expected return R(i),E[r(i)(s;a)] =EP1t=0tr(i)(st;at).In Markov games, however, the reward function for each agent depends on the joint agent actions.Such a fact implies that one’s optimal policy must also depend on others’ policies. For the solutionto the Markov games, -Nash equilibrium ( -NE) is a commonly used concept that extends Nashequilibrium (NE) (Nash, 1951).Definition 1. An-NE is a strategy profile ((i);(i))such that9>0:v(i)(s;(i);(i))v(i)(s;(i);(i));8(i)2(i); (1)wherev(i)(s;(i);(i)) =E(i);(i);s0=shr(i)(st;a(i)t;a(i)t)iis the value function of agent iunder states, and (i)is the set of policies available to agent i.-NE is weaker than NE, which can be seen as sub-optimal NE. Every NE is equivalent to an -NEwhere= 0.2.2 G ENERATIVE ADVERSARIAL IMITATION LEARNINGImitation learning aims to learn the policy directly from expert demonstrations without any accessto the reward signals. In single-agent settings, such demonstrations come from behavior trajectoriessampled with the expert policy, denoted as E=f(st;a(i)t)g1t=0. However, in multi-agent settings,demonstrations are often interrelated trajectories, that is, which are sampled from the interactions ofpolicies among all agents, denoted as E=f(st;a(1)t;:::;a(N)t)g1t=0. For simplicity, we will usethe term interactions directly as the concept of interrelated trajectories, and we refer to trajectoriesfor a single agent.2Published as a conference paper at ICLR 2020Typically, behavior cloning (BC) and inverse reinforcement learning (IRL) are two main approachesfor imitation learning. Although IRL theoretically alleviates compounding error and outperformsto BC, it is less efficient since it requires resolving an RL problem inside the learning loop. Re-cent proposed work aims to learn the policy without estimating the reward function directly, no-tably, GAIL (Ho & Ermon, 2016), which takes advantage of Generative Adversarial Networks(GAN (Goodfellow et al., 2014)), showing that IRL is the dual problem of occupancy measurematching. GAIL regards the environment as a black-box, which is non-differentiable but can beleveraged through Monte-Carlo estimation of policy gradients. Formally, its objective can be ex-pressed asminmaxDEE[logD(s;a)] +E[log (1D(s;a))]H(); (2)whereDis a discriminator that identifies the expert trajectories with agents’ sampled from policy, which tries to maximize its evaluation from D;His the causal entropy for the policy and is thehyperparameter.2.3 C ORRELATED POLICYIn multi-agent learning tasks, each agent imakes decisions independently while the resulting rewardr(i)(st;a(i)t;a(i)t)depends on others’ actions, which makes its cumulative return subjected to thejoint policy . 
One common joint policy modeling method is to decouple the with assumingconditional independence of actions from different agents (Albrecht & Stone, 2018):(a(i);a(i)js)(i)(a(i)js)(i)(a(i)js): (3)However, such a non-correlated factorization on the joint policy is a vulnerable simplification whichignores the influence of opponents (Wen et al., 2019). And the learning process of agent ilacks sta-bility since the environment dynamics depends on not only the current state but also the joint actionsof all agents (Tian et al., 2019). To solve this, recent work has taken opponents into considerationby decoupling the joint policy as a correlated policy conditioned on state sanda(i)as(a(i);a(i)js) =(i)(a(i)js;a(i))(i)(a(i)js); (4)where(i)(a(i)js;a(i))is the conditional policy, with which agent iregards all potential ac-tions from its opponent policies (i)(a(i)js), and makes decisions through the marginal policy(i)(a(i)js) =Ra(i)(i)(a(i)js;a(i))(i)(a(i)js) da(i)=Ea(i)(i)(a(i)js;a(i)).3 M ETHODOLOGY3.1 G ENERALIZE CORRELATED POLICIES TO MULTI -AGENT IMITATION LEARNINGIn multi-agent settings, for agent iwith policy (i), it seeks to maximize its cumulative rewardagainst demonstrator opponents who equip with demonstrated policies (i)E via reinforcementlearning:RL(i)(r(i)) = arg max(i)H((i)) +E(i);(i)E[r(i)(s;a(i);a(i))]; (5)whereH((i))is the-discounted entropy (Bloem & Bambos, 2014; Haarnoja et al., 2017) of policy(i)andis the hyperparameter. By coupling with Eq. (5), we define an IRL procedure to find areward function r(i)such that the demonstrated joint policy outperforms all other policies, with theregularizer :RSA(1)A(N)!R:IRL(i) ((i)E) = arg maxr(i) (r(i))max(i)(H((i)) +E(i);(i)E[r(i)(s;a(i);a(i))])+EE[r(i)(s;a(i);a(i))]:(6)3Published as a conference paper at ICLR 2020It is worth noting that we cannot obtain the demonstrated policies from the demonstrations directly.To solve this problem, we first introduce the occupancy measure, namely, the unnormalized distri-bution ofhs;aipairs correspond to the agent interactions navigated by joint policy :(s;a) =(ajs)1Xt=0tP(st=sj): (7)With the definition in Eq. (7), we can further formulate from agenti’s perspective as(s;a(i);a(i)) =(a(i);a(i)js)1Xt=0tP(st=sj(i);(i))=(i);(i)(s;a(i);a(i))=8>>>><>>>>:(i)(a(i)js)(i)(a(i)js)|{z}non-correlated formP1t=0tP(st=sj(i);(i))(i)(a(i)js;a(i))(i)(a(i)js)| {z }correlated formP1t=0tP(st=sj(i);(i));(8)wherea(i)(i)anda(i)(i). Furthermore, with the support of Eq. (8), we haveE(i);(i)[] =EsP;a(i)(i)[Ea(i)(i)[]]=Xs;a(i);a(i)(i);(i)(s;a(i);a(i))[]: (9)In analogy to the definition of occupancy measure of that in a single-agent environment, we followthe derivation from Ho & Ermon (2016) and state the conclusion directly1.Proposition 1. 
The IRL regarding demonstrator opponents is a dual form of a occupancy measurematching problem with regularizer , and the induced optimal policy is the primal optimum, specif-ically, the policy learned by RL on the reward recovered by IRL can be characterize by the followingequation:RL(i)IRL(i)= arg min(i)H((i)) + ((i);(i)EE): (10)With setting the regularizer = GAsimilar to Ho & Ermon (2016), we can obtain a GAIL-likeimitation algorithm to learn (i)EfromEgiven demonstrator counterparts (i)Eby introducing theadversarial training procedures of GANs which lead to a saddle point ((i);D(i)):min(i)maxD(i)H((i)) +EEhlogD(i)(s;a(i);a(i))i+E(i);(i)Ehlog (1D(i)(s;a(i);a(i)))i;(11)whereD(i)denotes the discriminator for agent i, which plays a role of surrogate cost function andguides the policy learning.However, such an algorithm is not practical, since we are unable to access the policies of demonstra-tor opponents (i)Ebecause the demonstrated policies are always given through sets of interactionsdata. To alleviate this deficiency, it is necessary to deal with accessible counterparts. Thereby wepropose Proposition 2.Proposition 2. Letbe an arbitrary function such that holds a similar form as (i), thenE(i);(i)[] =E(i);h(i);(i)(s;a(i);a(i))(i);(s;a(i);a(i))i.Proof. Substituting (i)within Eq. (9) by importance sampling.1Note that Ho & Ermon (2016) proved the conclusion under the goal to minimize the cost instead of maxi-mizing the reward of an agent.4Published as a conference paper at ICLR 2020Proposition 2 raises an important point that a term of importance weight can quantify the demon-strator opponents. By replacing (i)Ewith(i), Eq. (11) is equivalent withmin(i)maxD(i)H((i)) +EEhlogD(i)(s;a(i);a(i))i+E(i);(i)hlog (1D(i)(s;a(i);a(i)))i;(12)where=(i);(i)E(s;a(i);a(i))(i);(i)(s;a(i);a(i))is the importance sampling weight. In practice, it is challenging toestimate the densities and the learning methods might suffer from large variance. Thus, we fix = 1in our implementation, and as the experimental results have shown, it has no significant influenceson performance. Besides, a similar approach can be found in Kostrikov et al. (2018).So far, we’ve built a multi-agent imitation learning framework, which can be easily generalized tocorrelated or non-correlated policy settings. No prior has to be considered in advance since thediscriminator is able to learn the implicit goal for each agent.3.2 L EARN WITH THE OPPONENTS MODELWith the objective shown in Eq. (11), demonstrated interactions can be imitated by updating dis-criminators to offer surrogate rewards and learning their policies alternately. Formally, the updateof discriminator for each agent ican be expressed as:r!JD(!) =EsP;a(i)(i)Za(i)(i)(a(i)js;a(i))r!log (1D(i)!(s;a(i);a(i))) da(i)+E(s;a(i);a(i))Ehr!logD(i)!(s;a(i);a(i))i;(13)and the update of policy is:rJ() =EsP;a(i)(i)r(i)Za(i)(i)(a(i)js;a(i))A(i)(s;a(i);a(i)) da(i)r(i)H((i));(14)where discriminator D(i)is parametrized by !, and the policy (i)is parametrized by . It is worthnoting that the agent iconsiders opponents’ action a(i)while updating its policy and discriminator,with integrating all its possible decisions to find the optimal response. However, it is unrealistic tohave the access to opponent joint policy (a(i)js)for agenti. Thus, it is possible to estimateopponents’ actions via approximating (i)(a(i)js)using opponent modeling. To that end, weconstruct a function (i)(a(i)js) :SA(1)A(i1)A(i+1)A(N)![0;1]N1,as the approximation of opponents for each agent i. Then we rewrite Eq. (13) and Eq. 
(14) as:r!JD(!)EsP;^a(i)(i);a(i)(i)hr!(i)log(1D(i)!(s;a(i);^a(i)))i+E(s;a(i);a(i))Ehr!logD(i)!(s;a(i);a(i))i (15)andrJ()EsP;^a(i)(i);a(i)(i)hr(i)log(i)(a(i)js;^a(i))A(i)(s;a(i);^a(i))ir(i)H((i))(16)respectively. Therefore, each agent imust infer the opponents model (i)to approximate the un-observable policies (i), which can be achieved via supervised learning. Specifically, we learn indiscrete action space by minimizing a cross-entropy (CE) loss, and a mean-square-error (MSE) lossin continuous action space:L=(12Esph(i)(a(i)js)(i)(a(i)js)2i;continuous action spaceEsp(i)(a(i)js) log(i)(a(i)js); discrete action space :(17)With opponents modeling, agents are able to be trained in a fully decentralized manner. We nameour algorithm as Decentralized Adversarial Imitation Learning with Correlated policies (CorrelatedDAIL, a.k.a. CoDAIL) and present the training procedure in Appendix Algo. 1, which can be easilyscaled to a distributed algorithm. As a comparison, we also present a non-correlated DAIL algorithmwith non-correlated policy assumption in Appendix Algo. 2.5Published as a conference paper at ICLR 20203.3 T HEORETICAL ANALYSISIn this section, we prove that the reinforcement learning objective against demonstrator counterpartsshown in the last section is essentially equivalent to reaching an -NE.Since we fix the policies of agents ias(i)E, the RL procedure mentioned in Eq. (5) can beregarded as a single-agent RL problem. Similarly, with a fixed (i)E, the IRL process of Eq. (6) iscast to a single-agent IRL problem, which recovers an optimal reward function r(i)which achievesthe best performance following the joint action E. Thus we haveRL(i)(r(i)) = arg max(i)H((i)) +E(i);(i)E[r(i)(s;a(i);a(i))]=(i)E:(18)We can also rewrite Eq. (18) asH((i)E) +E(i)E;(i)E[r(i)(s;a(i);a(i))]H((i)) +E(i);(i)E[r(i)(s;a(i);a(i))] (19)for all(i)2(i), which is equivalent toEa(i)t(i)E;a(i)t(i)E;s0=s"1Xt=0tr(i)(st;a(i)t;a(i)t)# (20)Ea(i)t(i);a(i)t(i)E;s0=s"1Xt=0tr(i)(st;a(i)t;a(i)t)#+(H((i))H((i)E));8(i)2(i):Given the value function defined in Eq. (1) for each agent i, forH((i))H((i)E)<0,8(i)2(i),we havev(i)(s;(i)E;(i)E)v(i)(s;(i);(i)E)(H((i)E)H((i))): (21)ForH((i))H((i)E)0,8(i)2(i)we havev(i)(s;(i)E;(i)E)v(i)(s;(i);(i)E) +(H((i))H((i)E))v(i)(s;(i);(i)E)(H((i))H((i)E)):(22)Let=maxnH((i))H((i)E);8(i)2(i)o, then we finally obtainv(i)(s;(i)E;(i)E)v(i)(s;(i);(i)E);8(i)2(i); (23)which is exactly the -NE defined in Definition 1. We can always prove that is bounded in smallvalues such that the -NE solution concept is meaningful. Generally, random policies that keep vastentropy are not always considered as sub-optimal solutions or demonstrated policies (i)Ein mostreinforcement learning environments. As we do not require those random policies, we can removethem from the candidate policy set (i), which indicates that H((i))is bounded in small values,so as. Empirically, we adopt a small , and attain the demonstrator policy Ewith an efficientlearning algorithm to become a close-to-optimal solution.Thus, we conclude that the objective of our CoDAIL assumes that demonstrated policies institutean-NE solution concept (but not necessarily unique) that can be controlled the hyperparameter under some specific reward function, from which the agent learns a policy. It is worth noting that Yuet al. (2019) claimed that NE is incompatible with maximum entropy inverse reinforcement learning(MaxEnt IRL) because NE assumes that the agent never takes sub-optimal actions. 
Nevertheless,we prove that given demonstrator opponents, the multi-agent MaxEnt IRL defined in Eq. (6) isequivalent to finding an -NE.6Published as a conference paper at ICLR 20204 R ELATED WORKAlbeit non-correlated policy learning guided by a centralized critic has shown excellent propertiesin couple of methods, including MADDPG (Lowe et al., 2017), COMA (Foerster et al., 2018), MASoft-Q (Wei et al., 2018), it lacks in modeling complex interactions because its decisions makingrelies on the independent policy assumption which only considers private observations while ignoresthe impact of opponent behaviors. To behave more rational, agents must take other agents into con-sideration, which leads to the studies of opponent modeling (Albrecht & Stone, 2018) where an agentmodels how its opponents behave based on the interaction history when making decisions (Claus &Boutilier, 1998; Greenwald et al., 2003; Wen et al., 2019; Tian et al., 2019).For multi-agent imitation learning, however, prior works fail to learn from complicated demon-strations, and many of them are bounded with particular reward assumptions. For instance, Bhat-tacharyya et al. (2018) proposed Parameter Sharing Generative Adversarial Imitation Learning (PS-GAIL) that adopts parameter sharing trick to extend GAIL to handle multi-agent problems directly,but it does not utilize the properties of Markov games with strong constraints on the action space andthe reward function. Besides, there are many works built-in Markov games that are restricted un-der tabular representation and known dynamics but with specific prior of reward structures, as fullycooperative games (Barrett et al., 2017; Le et al., 2017; ˇSoˇsic et al., 2016; Bogert & Doshi, 2014),two-player zero-sum games (Lin et al., 2014), two-player general-sum games (Lin et al., 2018), andlinear combinations of specific features (Reddy et al., 2012; Waugh et al., 2013).Recently, some researchers take advantage of GAIL to solve Markov games. Inspired by a spe-cific choice of Lagrange multipliers for a constraint optimization problem (Yu et al., 2019), Songet al. (2018) derived a performance gap for multi-agent from NE. It proposed multi-agent GAIL(MA-GAIL), where they formulated the reward function for each agent using private actions andobservations. As an improvement, Yu et al. (2019) presented a multi-agent adversarial inverse rein-forcement learning (MA-AIRL) based on logistic stochastic best response equilibrium and MaxEntIRL. However, both of them are inadequate to model agent interactions with correlated policies withindependent discriminators. By contrast, our approach can generalize correlated policies to modelthe interactions from demonstrations and employ a fully decentralized training procedure without toget access to know the specific opponent policies.Except for the way of modeling multi-agent interactions as recovering agents’ policies from demon-strations, which can regenerate similar interacted data, some other works consider different effectsof interactions. Grover et al. (2018) proposed to learn a policy representation function of the agentsbased on their interactions and sets of generalization tasks using the learned policy embeddings.They regarded interactions as the episodes that contain only k(in the paper they used 2agents),which constructs an agent-interaction graph. Different from us, they focused on the potential re-lationships among agents to help characterize agent behaviors. Besides, Kuhnt et al. (2016) andGindele et al. 
(2015) proposed to use the Dynamic Bayesian Model that describes physical rela-tionships among vehicles and driving behaviors to model interaction-dependent behaviors in au-tonomous driving scenario.Correlated policy structures that can help agents consider the influence of other agents usually needopponents modeling (Albrecht & Stone, 2018) to infer others’ actions. Opponent modeling has arich history in MAL (Billings et al., 1998; Ganzfried & Sandholm, 2011), and lots of researches haverecently worked out various useful approaches for different settings in deep MARL, e.g., DRON (Heet al., 2016) and ROMMEO (Tian et al., 2019). In this paper, we focus on imitation learning withcorrelated policies, and we choose a natural and straightforward idea of opponent modeling thatlearning opponents’ policies in the way of supervised learning with historical trajectories. Opponentmodels are used both in the training and the execution stages.5 E XPERIMENTS5.1 E XPERIMENTAL SETTINGSEnvironment Description We test our method on the Particle World Environments (Lowe et al.,2017), which is a popular benchmark for evaluating multi-agent algorithms, including several co-operative and competitive tasks. Specifically, we consider two cooperative scenarios and two com-7Published as a conference paper at ICLR 2020Table 1: Average reward gaps between demonstrators and learned agents in 2 cooperative tasks. Means andstandard deviations are taken across different random seeds.Algorithm Coop.-Comm. Coop.-Navi.Demonstrators 00 00MA-AIRL 0.7800.917 6.6963.646MA-GAIL 0.6380.624 7.5963.088NC-DAIL 0.6920.597 6.9123.971CoDAIL 0.6320.685 6.2492.779Random 186.00116.710 322.112415.358Table 2: Average reward gaps between demonstrators and learned agents in 2 competitive tasks, where ‘agent+’and ‘agent-’ represent 2 teams of agents and ‘total’ is their sum. 
Means and standard deviations are taken acrossdifferent random seeds.AlgorithmKeep-away Pred.-PreyTotal Agent+ Agent- Total Agent+ Agent-Demonstrators 00 00 00 00 00 00MA-AIRL 12.2731.817 4.1491.912 8.9984.345 279.53577.903 35.1001.891 174.23573.168MA-GAIL 1.9631.689 1.1041.212 1.3030.798 15.78810.887 4.8002.718 8.8263.810NC-DAIL 1.8051.695 1.1930.883 1.5391.188 27.61114.645 8.2607.087 6.9755.130CoDAIL 0.2690.078 0.0640.041 0.2190.084 10.4566.762 4.5003.273 4.3592.734Random 28.2722.968 25.1832.150 53.4552.409 100.7366.870 37.9802.396 13.2048.444petitive ones as follows: 1) Cooperative-communication, with 2 agents and 3 landmarks, where anunmovable speaker knowing the goal, cooperates with a listener to reach a particular landmarkswho achieves the goal only through the message from the speaker; 2) Cooperative-navigation, with3 agents and 3 landmarks, where agents must cooperate via physical actions and it requires eachagent to reach one landmark while avoiding collisions; 3) Keep-away, with 1 agent, 1 adversary and1 landmark, where the agent has to get close to the landmark, while the adversary is rewarded bypushing away the agent from the landmark without knowing the target; 4) Predator-prey, with 1 preyagent with 3 adversary predators, where the slower predactor agents must cooperate to chase theprey agent that moves faster and try to run away from the adversaries.Experimental Details We aim to compare the quality of interactions modeling in different aspects.To obtain the interacted demonstrations sampled from correlated policies, we train the demonstra-tor agent via a MARL learning algorithm with opponents modeling to regard others’ policies intoone’s decision making, since the ground-truth reward in those simulated environments is accessi-ble. Specifically, we modify the multi-agent version ACKTR (Wu et al., 2017; Song et al., 2018),an efficient model-free policy gradient algorithm, by keeping an auxiliary opponents model and aconditioned policy for each agent, which can transform the original centralized on-policy learningalgorithm to be decentralized. Note that we do not necessarily need experts that can do well in ourdesignated environments. Instead, any demonstrator will be treated as it is from an -NE strategyconcept under some unknown reward functions, which will be recovered by the discriminator. Inour training procedure, we first obtain demonstrator policies induced by the ground-truth rewardsand then generate demonstrations, i.e., the interactions data for imitation training. Then we trainthe agents through the surrogate rewards from discriminators. We compare CoDAIL with MA-AIRL, MA-GAIL, non-correlated DAIL (NC-DAIL) (the only difference between MA-GAIL andNC-DAIL is whether the reward function depends on joint actions or individual action) and a randomagent. We do not apply any prior to the reward structure for all tasks to let the discriminator learnthe implicit goals. All training procedures are pre-trained via behavior cloning to reduce the samplecomplexity, and we use 200 episodes of demonstrations, each with a maximum of 50 timesteps.5.2 R EWARD GAPTab. 1 and Tab. 2 show the averaged absolute differences of reward for learned agents compared tothe demonstrators in cooperative and competitive tasks, respectively. The learned interactions areconsidered superior if there are smaller reward gaps. Since cooperative tasks are reward-sharing,we show only a group reward for each task in Tab. 1. 
Compared to the baselines, CoDAIL achievessmaller gaps in both cooperative and competitive tasks, which suggests that our algorithm has a8Published as a conference paper at ICLR 2020Table 3: KL divergence of learned agents position distribution and demonstrators position distribution from anindividual perspective in different scenarios. ‘Total’ is the KL divergence for state-action pairs of all agents,and ‘Per’ is the averaged KL divergence of each agent. Experiments are conducted under the same randomseed. Note that unmovable agents are not recorded since they never move from the start point, and there is onlyone movable agent in Cooperative-communication.AlgorithmCoop.-Comm. Coop.-Navi. Keep-away Pred.-PreyTotal/Per Total Per Total Per Total PerDemonstrators 0 0 0 0 0 0 0MA-AIRL 35.525 18.071 47.241 69.146 98.248 71.568 118.511MA-GAIL 34.681 15.034 45.550 51.721 69.820 9.998 27.116NC-DAIL 38.002 16.202 46.040 46.563 61.780 16.698 33.307CoDAIL 6.427 9.033 23.100 3.113 5.735 8.621 22.600Random 217.456 174.892 221.209 191.344 234.829 37.555 82.361robust imitation learning capability of modeling the demonstrated interactions. It is also worthnoting that CoDAIL achieves higher performance gaps in competitive tasks than cooperative ones,for which we think that conflict goals motivate more complicated interactions than a shared goal.Besides, MA-GAIL and NC-DAIL are about the same, indicating that less important is the surrogatereward structure on these multi-agent scenarios. To our surprise, MA-AIRL does not perform well insome environments, and even fails in Predator-prey. We list the raw obtained rewards in Appendix C,and we provide more hyperparameter sensitivity results in Appendix D.5.3 D IVERGENCE OVER INTERACTIONSSince we aim to recover the interactions of agents generated by the learned policies, it is properto evaluate the relevance between distributions of regenerated interactions and demonstration data.Specifically, we collect positions of agents over hundreds of state-action tuples, which can be re-garded as the low-dimension projection of the state-action interactions. We start each episode froma different initial state but the same for each algorithm in one episode. We run all the experimentsunder the same random seed, and collect positions of each agent in the total 100 episodes, each witha maximum of 50 timesteps.We first estimate the distribution of position (x;y)via Kernel Density Estimation (KDE) (Rosen-blatt, 1956) with Gaussian kernel to compute the Kullback-Leibler (KL) divergence between thegenerated interactions with the demonstrated ones, shown in Tab. 3. It is evident that in terms ofthe KL divergence between regenerated interactions with demonstrator interactions, CoDAIL gen-erates the interaction data that obtains the minimum gap with the demonstration interaction, andhighly outperforms other baseline methods. Besides, MA-GAIL and NC-DAIL reflect about-the-same performance to model complex interactions, while MA-AIRL behaves the worst, even worsethan random agents on Predator-prey.5.4 V ISUALIZATIONS OF INTERACTIONSTo further understand the interactions generated by learned policies compared with the demonstra-tors, we visualize the interactions for demonstrator policies and all learned ones. We plot the densitydistribution of positions, (x;y)and marginal distributions of x-position and y-position. We illus-trate the results conducted on Keep-away in Fig. 
5.4 VISUALIZATIONS OF INTERACTIONS

To further understand the interactions generated by the learned policies compared with the demonstrators, we visualize the interactions for the demonstrator policies and all learned ones. We plot the density distribution of positions (x, y) and the marginal distributions of the x-position and y-position. We illustrate the results conducted on Keep-away in Fig. 1; other scenarios can be found in Appendix E. Higher-frequency positions in the collected data are colored darker in the plane, and correspondingly higher are the values of the marginal distributions.

As shown in Fig. 1, the interaction densities of the demonstrators and the CoDAIL agents are highly similar (and with the smallest KL divergence); both tend to walk on the lower-right side. In contrast, the other learned agents fail to recover the demonstrator interactions. It is worth noting that even policies that earn similar rewards through interaction can still differ vastly in the interactions they generate. Furthermore, such a result reminds us that the real reward is not the best metric to evaluate the quality of modeling the demonstrated interactions or of imitation learning (Li et al., 2017).

[Figure 1: six density plots; only the axis ticks survived extraction. Panels: (a) Demonstrators (KL = 0); (b) MA-GAIL (KL = 51.721); (c) MA-AIRL (KL = 69.146); (d) CoDAIL (KL = 3.113); (e) NC-DAIL (KL = 46.563); (f) Random (KL = 191.344).]

Figure 1: The density and marginal distributions of agent positions, in 100 repeated episodes with different initialized states, generated from different learned policies on Keep-away. The top row of each sub-figure is drawn from state-action pairs of all agents, while the bottom row accounts for each individual (KL denotes the KL divergence between the generated interactions shown in the top row and the demonstrators').

6 CONCLUSION

In this paper, we focus on modeling complex multi-agent interactions via imitation learning on demonstration data. We develop a decentralized adversarial imitation learning algorithm with correlated policies (CoDAIL) with approximated opponent modeling. CoDAIL allows for decentralized training and execution and is more capable of modeling correlated interactions from demonstrations, as shown by multi-dimensional comparisons against other state-of-the-art multi-agent imitation learning methods on several experimental scenarios. In the future, we will consider covering more imitation learning tasks and modeling the latent variables of policies for diverse multi-agent imitation learning.

ACKNOWLEDGEMENT

We sincerely thank Yaodong Yang for helpful discussion. The corresponding author Weinan Zhang is supported by NSFC (61702327, 61772333, 61632017). The author Minghuan Liu is supported by the Wu Wen Jun Honorary Doctoral Scholarship, AI Institute, Shanghai Jiao Tong University.

### Review Title
Official Blind Review #3
### Review Text
The authors propose a decentralized adversarial imitation learning algorithm with correlated policies, which recovers each agent's policy by approximating opponents' actions using opponent modeling. Extensive experimental results show that the proposed framework, CoDAIL, better fits scenarios with correlated multi-agent policies. Generally, the paper follows the idea of GAIL and MAGAIL. Differing from previous works, the paper introduces the \epsilon-Nash equilibrium as the solution to multi-agent imitation learning in Markov games. It shows that using the concept of \epsilon-Nash equilibrium as a constraint is consistent with and equivalent to adding the difference between the causal entropy of the expert policy and the causal entropy of a possible policy to the RL procedure. It makes sense.
Below, I have a few concerns about the current status of the paper.

1. The authors propose the \epsilon-Nash equilibrium to model the convergent state in multi-agent scenarios; however, in Section 3.1 the objective function of MA-RL (Equation 5) is still the discounted causal entropy of the policy, the same as in the MA-GAIL paper. It is unclear how the \epsilon-NE is taken into account in modeling the MA-RL problem.

2. Rather than assuming conditional independence of actions from different agents, the authors consider the joint policy as a correlated policy conditioned on the state and all opponents' actions. With this new assumption, the paper re-defines the occupancy measure and introduces an approach to approximate the unobservable opponents' policies in order to access opponents' actions. However, in Section 3.2, when discussing opponent modeling, the paper does not clearly explain how the joint opponent function \sigma^{(i)} is designed. The description of \sigma^{(i)} is confusing.

3. Typos: in Equation 14, "i" or "-i"; Appendix Algorithm 1, line 3, "pi" or "\pi".

### Review Rating
6: Weak Accept
### Review Confidence
pqZV_srUVmK
ICLR.cc/2021/Conference
2021
Single-Timescale Actor-Critic Provably Finds Globally Optimal Policy
["Zuyue Fu", "Zhuoran Yang", "Zhaoran Wang"]
We study the global convergence and global optimality of actor-critic, one of the most popular families of reinforcement learning algorithms. While most existing works on actor-critic employ bi-level or two-timescale updates, we focus on the more practical single-timescale setting, where the actor and critic are updated simultaneously. Specifically, in each iteration, the critic update is obtained by applying the Bellman evaluation operator only once while the actor is updated in the policy gradient direction computed using the critic. Moreover, we consider two function approximation settings where both the actor and critic are represented by linear or deep neural networks. For both cases, we prove that the actor sequence converges to a globally optimal policy at a sublinear $O(K^{-1/2})$ rate, where $K$ is the number of iterations. To the best of our knowledge, we establish the rate of convergence and global optimality of single-timescale actor-critic with linear function approximation for the first time. Moreover, under the broader scope of policy optimization with nonlinear function approximation, we prove that actor-critic with deep neural network finds the globally optimal policy at a sublinear rate for the first time.
["optimal policy", "actor", "critic", "provably", "global optimality", "rate", "first time", "global convergence", "popular families", "reinforcement"]
ABSTRACT

We study the global convergence and global optimality of actor-critic, one of the most popular families of reinforcement learning algorithms. While most existing works on actor-critic employ bi-level or two-timescale updates, we focus on the more practical single-timescale setting, where the actor and critic are updated simultaneously. Specifically, in each iteration, the critic update is obtained by applying the Bellman evaluation operator only once, while the actor is updated in the policy gradient direction computed using the critic. Moreover, we consider two function approximation settings where both the actor and critic are represented by linear or deep neural networks. For both cases, we prove that the actor sequence converges to a globally optimal policy at a sublinear $O(K^{-1/2})$ rate, where $K$ is the number of iterations. To the best of our knowledge, we establish the rate of convergence and global optimality of single-timescale actor-critic with linear function approximation for the first time. Moreover, under the broader scope of policy optimization with nonlinear function approximation, we prove that actor-critic with deep neural networks finds the globally optimal policy at a sublinear rate for the first time.

1 INTRODUCTION

In reinforcement learning (RL) (Sutton et al., 1998), the agent aims to make sequential decisions that maximize the expected total reward through interacting with the environment and learning from the experiences, where the environment is modeled as a Markov Decision Process (MDP) (Puterman, 2014). To learn a policy that achieves the highest possible total reward in expectation, the actor-critic method (Konda and Tsitsiklis, 2000) is among the most commonly used algorithms. In actor-critic, the actor refers to the policy and the critic corresponds to the value function that characterizes the performance of the actor. This method directly optimizes the expected total return over the policy class by iteratively improving the actor, where the update direction is determined by the critic. In particular, actor-critic combined with deep neural networks (LeCun et al., 2015) has recently achieved tremendous empirical successes in solving large-scale RL tasks, such as the game of Go (Silver et al., 2017), StarCraft (Vinyals et al., 2019), Dota (OpenAI, 2018), Rubik's cube (Agostinelli et al., 2019; Akkaya et al., 2019), and autonomous driving (Sallab et al., 2017). See Li (2017) for a detailed survey of the recent developments of deep reinforcement learning.

Despite these great empirical successes of actor-critic, there is still an evident chasm between theory and practice. Specifically, to establish convergence guarantees for actor-critic, most existing works focus on either the bi-level setting or the two-timescale setting, which are seldom adopted in practice. In particular, under the bi-level setting (Yang et al., 2019a; Wang et al., 2019; Agarwal et al., 2019; Fu et al., 2019; Liu et al., 2019; Abbasi-Yadkori et al., 2019a;b; Cai et al., 2019; Hao et al., 2020; Mei et al., 2020; Bhandari and Russo, 2020), the actor is updated only after the critic solves the policy evaluation sub-problem completely, which is equivalent to applying the Bellman evaluation operator to the previous critic infinitely many times. Consequently, actor-critic under the bi-level setting is a double-loop iterative algorithm where the inner loop is allocated for solving the policy evaluation sub-problem of the critic.
In terms of theoretical analysis, such a double-loop structure decouples the analysis of the actor and the critic. For the actor, the problem is essentially reduced to analyzing the convergence of a variant of the policy gradient method (Sutton et al., 2000; Kakade, 2002), where the error of the gradient estimate depends on the policy evaluation error of the critic. Besides, under the two-timescale setting (Borkar and Konda, 1997; Konda and Tsitsiklis, 2000; Xu et al., 2020; Wu et al., 2020; Hong et al., 2020), the actor and the critic are updated simultaneously, but with disparate stepsizes. More concretely, the stepsize of the actor is set to be much smaller than that of the critic, with the ratio between these stepsizes converging to zero. In an asymptotic sense, such a separation between stepsizes ensures that the critic completely solves its policy evaluation sub-problem asymptotically. In other words, such a two-timescale scheme results in a separation between the actor and the critic in an asymptotic sense, which leads to asymptotically unbiased policy gradient estimates. In sum, in terms of convergence analysis, the existing theory of actor-critic hinges on decoupling the analysis of the critic and the actor, which is ensured by focusing on the bi-level or two-timescale settings.

However, most practical implementations of actor-critic are under the single-timescale setting (Peters and Schaal, 2008a; Schulman et al., 2015; Mnih et al., 2016; Schulman et al., 2017; Haarnoja et al., 2018), where the actor and critic are updated simultaneously and, in particular, the actor is updated without the critic reaching an approximate solution to the policy evaluation sub-problem. Meanwhile, in comparison with the two-timescale setting, the actor is equipped with a much larger stepsize in the single-timescale setting, so that the asymptotic separation between the analyses of the actor and the critic is no longer valid.

Furthermore, when it comes to function approximation, most existing works only analyze the convergence of actor-critic with either linear function approximation (Xu et al., 2020; Wu et al., 2020; Hong et al., 2020) or shallow-neural-network parameterization (Wang et al., 2019; Liu et al., 2019). In contrast, practically used actor-critic methods such as asynchronous advantage actor-critic (Mnih et al., 2016) and soft actor-critic (Haarnoja et al., 2018) oftentimes represent both the actor and critic using deep neural networks.

Thus, the following question is left open:

Does single-timescale actor-critic provably find a globally optimal policy under the function approximation setting, especially when deep neural networks are employed?

To answer this question, we make the first attempt to investigate the convergence and global optimality of single-timescale actor-critic with linear and neural network function approximation. In particular, we focus on the family of energy-based policies and aim to find the optimal policy within this class. Here we represent both the energy function and the critic as linear or deep neural network functions. In our actor-critic algorithm, the actor update follows proximal policy optimization (PPO) (Schulman et al., 2017) and the critic update is obtained by applying the Bellman evaluation operator only once to the current critic iterate. As a result, the actor is updated before the critic solves the policy evaluation sub-problem.
Such a coupled updating structure persists even when the number of iterations goes to infinity, which implies that the update direction of the actor is always biased compared with the policy gradient direction. This brings an additional challenge that is absent in the bi-level and two-timescale settings, where the actor and critic are decoupled asymptotically.

To tackle such a challenge, our analysis captures the joint effect of the actor and critic updates on the objective function, dubbed the "double contraction" phenomenon, which plays a pivotal role in the success of single-timescale actor-critic. Specifically, thanks to the discount factor of the MDP, the Bellman evaluation operator is contractive, which implies that, after each update, the critic makes noticeable progress by moving towards the value function associated with the current actor. As a result, although we use a biased estimate of the policy gradient, thanks to the contraction brought by the discount factor, the accumulative effect of the biases is controlled. Such a phenomenon enables us to characterize the progress of each iteration of the joint actor and critic update, and thus yields convergence to the globally optimal policy. In particular, for both the linear and neural settings, we prove that single-timescale actor-critic finds an $O(K^{-1/2})$-globally optimal policy after $K$ iterations. To the best of our knowledge, we establish the first theoretical guarantee of global convergence and global optimality for actor-critic with function approximation in the single-timescale setting. Moreover, under the broader scope of policy optimization with nonlinear function approximation, our work provides, to the best of our knowledge, the first convergence and optimality guarantees for actor-critic with deep neural networks.

Contribution. Our contribution is two-fold. First, in the single-timescale setting with linear function approximation, we prove that, after $K$ iterations of actor and critic updates, actor-critic returns a policy that is at most $O(K^{-1/2})$ inferior to the globally optimal policy. Second, when both the actor and critic are represented by deep neural networks, we prove a similar $O(K^{-1/2})$ rate of convergence to the globally optimal policy when the architectures of the neural networks are properly chosen.

Related Work. Our work extends the line of works on the convergence of actor-critic under the function approximation setting. In particular, actor-critic is first introduced in Sutton et al. (2000); Konda and Tsitsiklis (2000). Later, Kakade (2002); Peters and Schaal (2008b) propose the natural actor-critic method, which updates the policy in the natural gradient (Amari, 1998) direction. The convergence of (natural) actor-critic with linear function approximation is studied in Bhatnagar et al. (2008; 2009); Bhatnagar (2010); Castro and Meir (2010); Maei (2018). However, these works only characterize the asymptotic convergence of actor-critic, and their proofs all resort to tools from stochastic approximation via ordinary differential equations (Borkar, 2008). As a result, these works only show that actor-critic with linear function approximation converges to the set of stable equilibria of a set of ordinary differential equations. Recently, Zhang et al. (2019) propose a variant of actor-critic where Monte-Carlo sampling is used to ensure that the critic and the policy gradient estimates are unbiased.
Although they incorporate nonlinear function approximation in the actor, they only establish a finite-time convergence result to a stationary point of the expected total reward. Moreover, due to having an inner loop for solving the policy evaluation sub-problem, they focus on the bi-level setting. Besides, under the two-timescale setting, Wu et al. (2020); Xu et al. (2020) show that actor-critic with linear function approximation finds an $\epsilon$-stationary point with $\widetilde{O}(\epsilon^{-5/2})$ samples, where $\epsilon$ measures the squared norm of the policy gradient. All of these results establish the convergence of actor-critic without characterizing the optimality of the policy obtained by actor-critic.

In terms of the global optimality of actor-critic, Fazel et al. (2018); Malik et al. (2018); Tu and Recht (2018); Yang et al. (2019a); Bu et al. (2019); Fu et al. (2019) show that policy gradient and bi-level actor-critic methods converge to the globally optimal policies under the linear-quadratic setting, where the state transitions follow a linear dynamical system and the reward function is quadratic. For general MDPs, Bhandari and Russo (2019) recently prove the global optimality of vanilla policy gradient under the assumption that the families of policies and value functions are both convex. In addition, our work is also related to Liu et al. (2019) and Wang et al. (2019), which establish the global optimality of proximal policy optimization and (natural) actor-critic, respectively, where both the actor and critic are parameterized by two-layer neural networks. Our work is also related to Agarwal et al. (2019); Abbasi-Yadkori et al. (2019a;b); Cai et al. (2019); Hao et al. (2020); Mei et al. (2020); Bhandari and Russo (2020), which focus on characterizing the optimality of natural policy gradient in tabular and/or linear settings. However, these aforementioned works all focus on bi-level actor-critic, where the actor is updated only after the critic solves the policy evaluation sub-problem to an approximate optimum. Besides, these works consider linear or two-layer neural network function approximation, whereas we focus on the setting with deep neural networks. Furthermore, under the two-timescale setting, Xu et al. (2020); Hong et al. (2020) prove that linear actor-critic requires a sample complexity of $\widetilde{O}(\epsilon^{-4})$ for obtaining an $\epsilon$-globally optimal policy. In comparison, our $O(K^{-1/2})$ convergence for single-timescale actor-critic can be translated into a similar $\widetilde{O}(\epsilon^{-4})$ sample complexity directly. Moreover, when reusing the data, our result leads to an improved $\widetilde{O}(\epsilon^{-2})$ sample complexity. In addition, our work is also related to Geist et al. (2019), which proposes a variant of the policy iteration algorithm with Bregman divergence regularization. Without considering an explicit form of function approximation, their algorithm is shown to converge to the globally optimal policy at a similar $O(K^{-1/2})$ rate, where $K$ is the number of policy updates. In contrast, our method is single-timescale actor-critic with linear or deep neural network function approximation, which enjoys both global convergence and global optimality. Meanwhile, our proof is based on a finite-sample analysis, which involves dealing with the algorithmic errors that track the performance of the actor and critic updates as well as the statistical error due to having finite data.

Our work is also related to the literature on deep neural networks.
Previous works (Daniely, 2017; Jacot et al., 2018; Wu et al., 2018; Allen-Zhu et al., 2018a;b; Du et al., 2018; Zou et al., 2018; Chizat and Bach, 2018; Li and Liang, 2018; Cao and Gu, 2019a;b; Arora et al., 2019; Lee et al., 2019; Gao et al., 2019) analyze the computational and statistical rates of supervised learning methods with overparameterized neural networks. In contrast, our work employs overparameterized deep neural networks in actor-critic for solving RL tasks, which is significantly more challenging than supervised learning due to the interplay between the actor and the critic.

Notation. We denote by $[n]$ the set $\{1, 2, \ldots, n\}$. For any measure $\mu$ and $1 \le p \le \infty$, we denote by $\|f\|_{\mu,p} = (\int_{\mathcal{X}} |f(x)|^p \,\mathrm{d}\mu)^{1/p}$ and $\|f\|_p = (\int_{\mathcal{X}} |f(x)|^p \,\mathrm{d}x)^{1/p}$, where the latter integral is with respect to the Lebesgue measure.

2 BACKGROUND

In this section, we introduce the background on discounted Markov decision processes (MDPs) and actor-critic methods.

2.1 DISCOUNTED MDP

A discounted MDP is defined by a tuple $(\mathcal{S}, \mathcal{A}, P, \zeta, r, \gamma)$. Here $\mathcal{S}$ and $\mathcal{A}$ are the state and action spaces, respectively, $P \colon \mathcal{S} \times \mathcal{S} \times \mathcal{A} \to [0,1]$ is the Markov transition kernel, $\zeta \colon \mathcal{S} \to [0,1]$ is the initial state distribution, $r \colon \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is the deterministic reward function, and $\gamma \in [0,1)$ is the discount factor. A policy $\pi(a\,|\,s)$ measures the probability of taking the action $a$ at the state $s$. We focus on a family of parameterized policies defined as follows,
$$\Pi_\Theta = \{\pi_\theta(\cdot\,|\,s) \in \mathcal{P}(\mathcal{A}) \colon s \in \mathcal{S}\}, \qquad (2.1)$$
where $\mathcal{P}(\mathcal{A})$ is the probability simplex on the action space $\mathcal{A}$ and $\theta$ is the parameter of the policy $\pi_\theta$. For any state-action pair $(s,a) \in \mathcal{S} \times \mathcal{A}$, we define the action-value function as follows,
$$Q^\pi(s,a) = (1-\gamma)\,\mathbb{E}_\pi\Bigl[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \,\Big|\, s_0 = s,\, a_0 = a\Bigr], \qquad (2.2)$$
where $s_{t+1} \sim P(\cdot\,|\,s_t, a_t)$ and $a_{t+1} \sim \pi(\cdot\,|\,s_{t+1})$ for any $t \ge 0$. We use $\mathbb{E}_\pi[\cdot]$ to denote that the actions follow the policy $\pi$, which further affects the transition of the states. We aim to find an optimal policy $\pi^*$ such that $Q^{\pi^*}(s,a) \ge Q^{\pi}(s,a)$ for any policy $\pi$ and state-action pair $(s,a) \in \mathcal{S} \times \mathcal{A}$. That is to say, such an optimal policy $\pi^*$ attains a higher expected total reward than any other policy, regardless of the initial state-action pair $(s,a)$. For notational convenience, we denote by $Q^*(s,a) = Q^{\pi^*}(s,a)$ for any $(s,a) \in \mathcal{S} \times \mathcal{A}$ hereafter.

Meanwhile, we denote by $\nu_\pi(s)$ and $\varsigma_\pi(s,a) = \nu_\pi(s)\,\pi(a\,|\,s)$ the stationary state distribution and stationary state-action distribution of the policy $\pi$, respectively, for any $(s,a) \in \mathcal{S} \times \mathcal{A}$. Correspondingly, we denote by $\nu^*(s)$ and $\varsigma^*(s,a)$ the stationary state distribution and stationary state-action distribution of the optimal policy $\pi^*$, respectively. For ease of presentation, given any functions $g_1 \colon \mathcal{S} \to \mathbb{R}$ and $g_2 \colon \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, we define two operators $P$ and $P^\pi$ as follows,
$$[P g_1](s,a) = \mathbb{E}\bigl[g_1(s_1) \,\big|\, s_0 = s,\, a_0 = a\bigr], \qquad [P^\pi g_2](s,a) = \mathbb{E}\bigl[g_2(s_1, a_1) \,\big|\, s_0 = s,\, a_0 = a\bigr], \qquad (2.3)$$
where $s_1 \sim P(\cdot\,|\,s_0, a_0)$ and $a_1 \sim \pi(\cdot\,|\,s_1)$. Intuitively, given the current state-action pair $(s_0, a_0)$, the operator $P$ pushes the agent to its next state $s_1$ following the Markov transition kernel $P(\cdot\,|\,s_0, a_0)$, while the operator $P^\pi$ pushes the agent to its next state-action pair $(s_1, a_1)$ following the Markov transition kernel $P(\cdot\,|\,s_0, a_0)$ and the policy $\pi(\cdot\,|\,s_1)$. These operators also relate to the Bellman evaluation operator $T^\pi$, which is defined for any function $g \colon \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ as follows,
$$T^\pi g = (1-\gamma)\, r + \gamma P^\pi g. \qquad (2.4)$$
The Bellman evaluation operator $T^\pi$ is used to characterize the actor-critic method in the following section. By the definition in (2.2), it is straightforward to verify that the action-value function $Q^\pi$ is the fixed point of the Bellman evaluation operator $T^\pi$ defined in (2.4), that is, $Q^\pi = T^\pi Q^\pi$ for any policy $\pi$. For notational convenience, we let $P^\ell$ denote the $\ell$-fold composition $P \circ \cdots \circ P$, where $\ell$ operators $P$ are composed together. Such notation is also adopted for other linear operators such as $P^\pi$ and $T^\pi$.
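As a numerical sanity check of the two facts just stated — that $Q^\pi$ is the fixed point of $T^\pi$ and that $T^\pi$ is a $\gamma$-contraction in the sup norm — here is a small tabular sketch on a randomly generated MDP. It is an illustration of the definitions only, not part of the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_s, n_a, gamma = 5, 3, 0.9

# Random tabular MDP and policy.
P = rng.random((n_s, n_a, n_s)); P /= P.sum(-1, keepdims=True)   # P(s'|s,a)
r = rng.random((n_s, n_a))                                       # r(s,a)
pi = rng.random((n_s, n_a)); pi /= pi.sum(-1, keepdims=True)     # pi(a|s)

def bellman_eval(Q):
    """T^pi Q = (1 - gamma) r + gamma E[Q(s', a')], with a' ~ pi(.|s')."""
    v = (pi * Q).sum(-1)              # V(s') = sum_a' pi(a'|s') Q(s', a')
    return (1 - gamma) * r + gamma * P @ v

# Iterating T^pi converges to its unique fixed point, the normalized Q^pi of (2.2).
Q = np.zeros((n_s, n_a))
for _ in range(2000):
    Q = bellman_eval(Q)
assert np.allclose(bellman_eval(Q), Q, atol=1e-10)   # Q^pi = T^pi Q^pi

# gamma-contraction: ||T^pi Q1 - T^pi Q2||_inf <= gamma ||Q1 - Q2||_inf.
Q1, Q2 = rng.random((n_s, n_a)), rng.random((n_s, n_a))
gap = np.abs(bellman_eval(Q1) - bellman_eval(Q2)).max()
assert gap <= gamma * np.abs(Q1 - Q2).max() + 1e-12
```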
2.2 ACTOR-CRITIC METHOD

To obtain an optimal policy $\pi^*$, the actor-critic method (Konda and Tsitsiklis, 2000) aims to maximize the expected total reward as a function of the policy, which is equivalent to solving the following maximization problem,
$$\max_{\theta \in \Theta}\; J(\theta) = \mathbb{E}_{s \sim \zeta,\, a \sim \pi_\theta(\cdot|s)}\bigl[Q^{\pi_\theta}(s,a)\bigr], \qquad (2.5)$$
where $\zeta$ is the initial state distribution, $Q^{\pi_\theta}$ is the action-value function defined in (2.2), and the family of parameterized policies $\Pi_\Theta$ is defined in (2.1). The actor-critic method solves the maximization problem in (2.5) via first-order optimization using an estimator of the policy gradient $\nabla_\theta J(\theta)$. Here $\theta$ is the parameter of the policy $\pi_\theta$. In detail, by the policy gradient theorem (Sutton et al., 2000), we have
$$\nabla_\theta J(\theta) = \mathbb{E}_{(s,a) \sim \varrho_\theta}\bigl[Q^{\pi_\theta}(s,a)\, \nabla_\theta \log \pi_\theta(a\,|\,s)\bigr]. \qquad (2.6)$$
Here $\varrho_\theta$ is the state-action visitation measure of the policy $\pi_\theta$, which is defined as $\varrho_\theta(s,a) = (1-\gamma) \sum_{t=0}^{\infty} \gamma^t \Pr[s_t = s, a_t = a]$. Based on the closed form of the policy gradient in (2.6), the actor-critic method consists of the following two parts: (i) the critic update, where a policy evaluation algorithm is invoked to estimate the action-value function $Q^{\pi_\theta}$, e.g., by applying the Bellman evaluation operator $T^{\pi_\theta}$ to the current estimator of $Q^{\pi_\theta}$, and (ii) the actor update, where a policy improvement algorithm, e.g., the policy gradient method, is invoked using the updated estimator of $Q^{\pi_\theta}$.

In this paper, we consider the following variant of the actor-critic method,
$$\theta_{k+1} \leftarrow \operatorname*{argmax}_{\theta \in \Theta}\; \mathbb{E}_{\nu_k}\Bigl[\bigl\langle Q_{\omega_k}(s, \cdot),\, \pi_\theta(\cdot\,|\,s)\bigr\rangle - \beta\, \mathrm{KL}\bigl(\pi_\theta(\cdot\,|\,s) \,\big\|\, \pi_{\theta_k}(\cdot\,|\,s)\bigr)\Bigr],$$
$$Q_{\omega_{k+1}}(s,a) \leftarrow \mathbb{E}_{\pi_{\theta_{k+1}}}\bigl[(1-\gamma)\, r(s_0, a_0) + \gamma\, Q_{\omega_k}(s_1, a_1) \,\big|\, s_0 = s,\, a_0 = a\bigr], \qquad (2.7)$$
for any $(s,a) \in \mathcal{S} \times \mathcal{A}$, where $s_1 \sim P(\cdot\,|\,s_0, a_0)$, $a_1 \sim \pi_{\theta_{k+1}}(\cdot\,|\,s_1)$, and we write $\mathbb{E}_{\nu_k}[\cdot] = \mathbb{E}_{s \sim \nu_k}[\cdot]$ for notational convenience. Here $\Pi_\Theta$ is defined in (2.1), $\beta > 0$ is a penalty parameter, and $\mathrm{KL}(\pi_\theta(\cdot\,|\,s) \,\|\, \pi_{\theta_k}(\cdot\,|\,s))$ is the Kullback-Leibler (KL) divergence between $\pi_\theta(\cdot\,|\,s)$ and $\pi_{\theta_k}(\cdot\,|\,s)$, which is defined for any $s \in \mathcal{S}$ as $\mathrm{KL}(\pi_\theta(\cdot\,|\,s) \,\|\, \pi_{\theta_k}(\cdot\,|\,s)) = \sum_{a \in \mathcal{A}} \log\bigl(\pi_\theta(a\,|\,s) / \pi_{\theta_k}(a\,|\,s)\bigr)\, \pi_\theta(a\,|\,s)$. In (2.7), the actor update uses the proximal policy optimization (PPO) method (Schulman et al., 2017), while the critic update applies the Bellman evaluation operator $T^{\pi_{\theta_{k+1}}}$ defined in (2.4) to $Q_{\omega_k}$, the current estimator of the action-value function, only once. Furthermore, we remark that the updates in (2.7) provide a general framework in the following two aspects. First, the critic update can be extended to letting $Q_{\omega_{k+1}} \leftarrow (T^{\pi_{\theta_{k+1}}})^{\ell} Q_{\omega_k}$ for any fixed $\ell \ge 1$, which corresponds to updating the value function via $\ell$-step rollouts following $\pi_{\theta_{k+1}}$. Here we only focus on the case with $\ell = 1$ for simplicity; our theory can be easily modified for any fixed $\ell$. Moreover, the KL divergence used in the actor step can also be replaced by other Bregman divergences between probability distributions over $\mathcal{A}$. Second, the actor and critic updates in (2.7) are a general template that admits both on- and off-policy evaluation methods and various function approximators in the actor and critic. In the next section, we present an incarnation of (2.7) with on-policy sampling and linear and neural network function approximation.
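To make the coupled update (2.7) concrete, the following tabular sketch runs one actor step and one critic step per iteration, with exact expectations in place of sampling. The closed-form actor step (multiply the current policy by $\exp\{Q/\beta\}$ and renormalize) anticipates Proposition 3.1 in the next section; the tabular setting and all names are our own simplification, not the paper's implementation.

```python
import numpy as np

def single_timescale_step(pi, Q, P, r, gamma, beta):
    """One iteration of (2.7) on a tabular MDP.

    Actor: KL-regularized improvement; in closed form the new policy is
    proportional to pi_k(a|s) * exp(Q_k(s,a) / beta).
    Critic: a single application of the Bellman operator T^{pi_{k+1}} to Q_k.
    """
    logits = np.log(pi) + Q / beta
    pi_new = np.exp(logits - logits.max(-1, keepdims=True))
    pi_new /= pi_new.sum(-1, keepdims=True)
    v = (pi_new * Q).sum(-1)                    # E_{a' ~ pi_{k+1}} Q_k(s', a')
    Q_new = (1 - gamma) * r + gamma * P @ v     # T^{pi_{k+1}} Q_k, applied once
    return pi_new, Q_new

# Illustrative run on a random MDP: the two iterates improve jointly even
# though the critic never fully solves its policy evaluation sub-problem.
rng = np.random.default_rng(1)
n_s, n_a, gamma, beta = 4, 3, 0.9, 10.0
P = rng.random((n_s, n_a, n_s)); P /= P.sum(-1, keepdims=True)
r = rng.random((n_s, n_a))
pi, Q = np.full((n_s, n_a), 1.0 / n_a), np.zeros((n_s, n_a))
for _ in range(500):
    pi, Q = single_timescale_step(pi, Q, P, r, gamma, beta)
```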
Furthermore, for analyzing the actor-critic method, most existing works (Yang et al., 2019a; Wang et al., 2019; Agarwal et al., 2019; Fu et al., 2019; Liu et al., 2019) rely on (approximately) obtaining $Q^{\pi_{\theta_{k+1}}}$ at each iteration, which is equivalent to applying the Bellman evaluation operator $T^{\pi_{\theta_{k+1}}}$ infinitely many times to $Q_{\omega_k}$. This is usually achieved by minimizing the mean-squared Bellman error $\|Q - T^{\pi_{\theta_{k+1}}} Q\|^2_{\varsigma_{k+1},2}$ using stochastic semi-gradient descent, e.g., as in the temporal-difference method (Sutton, 1988), to update the critic for sufficiently many iterations. The unique global minimizer of the mean-squared Bellman error gives the action-value function $Q^{\pi_{\theta_{k+1}}}$, which is used in the actor update. Meanwhile, the two-timescale setting is also considered in existing works (Borkar and Konda, 1997; Konda and Tsitsiklis, 2000; Xu et al., 2019; 2020; Wu et al., 2020; Hong et al., 2020), which require the actor to be updated more slowly than the critic in an asymptotic sense. Such a requirement is usually satisfied by forcing the ratio between the stepsizes of the actor and critic updates to go to zero asymptotically.

In comparison with the setting with bi-level updates, we consider the single-timescale actor and critic updates in (2.7), where the critic involves only one step of update, that is, applying the Bellman evaluation operator $T^{\pi_{\theta_{k+1}}}$ to $Q_{\omega_k}$ only once. Meanwhile, in comparison with the two-timescale setting, where the actor and critic are updated simultaneously but with the ratio between their stepsizes asymptotically going to zero, the single-timescale setting is able to achieve a faster rate of convergence by allowing the actor to be updated with a larger stepsize while updating the critic simultaneously. In particular, such a single-timescale setting better captures a broader range of practical algorithms (Peters and Schaal, 2008a; Schulman et al., 2015; Mnih et al., 2016; Schulman et al., 2017; Haarnoja et al., 2018), where the stepsize of the actor is not asymptotically zero. In §3, we discuss the implementation of the updates in (2.7) for different schemes of function approximation. In §4, we compare the rates of convergence between the two-timescale and single-timescale settings.

3 ALGORITHMS

We consider two settings, where the actor and critic are parameterized using linear functions and deep neural networks (the latter is deferred to §A of the appendix), respectively. We consider the energy-based policy $\pi_\theta(a\,|\,s) \propto \exp\{\tau^{-1} f_\theta(s,a)\}$, where the energy function $f_\theta(s,a)$ is parameterized with the parameter $\theta$ and $\tau > 0$ is a temperature parameter. Also, for the (estimated) action-value function, we consider the parameterization $Q_\omega(s,a)$ for any $(s,a) \in \mathcal{S} \times \mathcal{A}$, where $\omega$ is the parameter. For such parameterizations of the actor and critic, the updates in (2.7) have the following forms.

Actor Update. The following proposition gives the closed form of $\pi_{\theta_{k+1}}$ in (2.7).

Proposition 3.1. Let $\pi_{\theta_k}(a\,|\,s) \propto \exp\{\tau_k^{-1} f_{\theta_k}(s,a)\}$ be an energy-based policy and $\widetilde{\pi}_{k+1} = \operatorname{argmax}_{\pi} \mathbb{E}_{\nu_k}[\langle Q_{\omega_k}(s,\cdot), \pi(\cdot\,|\,s)\rangle - \beta\,\mathrm{KL}(\pi(\cdot\,|\,s) \,\|\, \pi_{\theta_k}(\cdot\,|\,s))]$. Then $\widetilde{\pi}_{k+1}$ has the following closed form: $\widetilde{\pi}_{k+1}(a\,|\,s) \propto \exp\{\beta^{-1} Q_{\omega_k}(s,a) + \tau_k^{-1} f_{\theta_k}(s,a)\}$ for any $(s,a) \in \mathcal{S} \times \mathcal{A}$, where $\nu_k = \nu_{\pi_{\theta_k}}$ is the stationary state distribution of $\pi_{\theta_k}$.

See §G.1 for a detailed proof of Proposition 3.1. Motivated by Proposition 3.1, to implement the actor update in (2.7), we update the actor parameter $\theta$ by solving the following minimization problem,
$$\theta_{k+1} \leftarrow \operatorname*{argmin}_{\theta}\; \mathbb{E}_{\varsigma_k}\Bigl[\bigl(f_\theta(s,a) - \tau_{k+1}\bigl(\beta^{-1} Q_{\omega_k}(s,a) + \tau_k^{-1} f_{\theta_k}(s,a)\bigr)\bigr)^2\Bigr], \qquad (3.1)$$
where $\varsigma_k = \varsigma_{\pi_{\theta_k}}$ is the stationary state-action distribution of $\pi_{\theta_k}$ and $\tau_{k+1}$ is the temperature parameter of $\pi_{\theta_{k+1}}$.

Critic Update. To implement the critic update in (2.7), we update the critic parameter $\omega$ by solving the following minimization problem,
$$\omega_{k+1} \leftarrow \operatorname*{argmin}_{\omega}\; \mathbb{E}_{\varsigma_{k+1}}\Bigl[\bigl([Q_\omega - (1-\gamma) r - \gamma P^{\pi_{\theta_{k+1}}} Q_{\omega_k}](s,a)\bigr)^2\Bigr], \qquad (3.2)$$
where $\varsigma_{k+1} = \varsigma_{\pi_{\theta_{k+1}}}$ is the stationary state-action distribution of $\pi_{\theta_{k+1}}$ and the operator $P^\pi$ is defined in (2.3).
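Before specializing to linear function approximation, here is a sketch of the one-line computation behind Proposition 3.1 above, stated per state with a Lagrange multiplier for the normalization constraint; see §G.1 of the paper for the full proof.

```latex
% For each state s, the actor step maximizes a strictly concave objective
% over the simplex; the first-order condition with multiplier lambda(s) reads
%   Q_{omega_k}(s,a) - beta (log pi(a|s) - log pi_{theta_k}(a|s) + 1) + lambda(s) = 0,
% so that
\[
\widetilde{\pi}_{k+1}(a \,|\, s)
\;\propto\; \pi_{\theta_k}(a \,|\, s)\,
\exp\bigl\{\beta^{-1} Q_{\omega_k}(s,a)\bigr\}
\;\propto\; \exp\bigl\{\beta^{-1} Q_{\omega_k}(s,a)
 + \tau_k^{-1} f_{\theta_k}(s,a)\bigr\},
\]
% where the last step substitutes the energy-based parameterization
% pi_{theta_k}(a|s) propto exp{ f_{theta_k}(s,a) / tau_k }.
```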
3.1 LINEAR FUNCTION APPROXIMATION

In this section, we consider linear function approximation. More specifically, we parameterize the action-value function as $Q_\omega(s,a) = \omega^\top \varphi(s,a)$ and the energy function of the energy-based policy $\pi_\theta$ as $f_\theta(s,a) = \theta^\top \varphi(s,a)$. Here $\varphi(s,a) \in \mathbb{R}^d$ is the feature vector, where $d > 0$ is the dimension. Without loss of generality, we assume that $\|\varphi(s,a)\|_2 \le 1$ for any $(s,a) \in \mathcal{S} \times \mathcal{A}$, which can be achieved by normalization.

Actor Update. The minimization problem in (3.1) admits the following closed-form solution,
$$\theta_{k+1} = \tau_{k+1}\bigl(\beta^{-1} \omega_k + \tau_k^{-1} \theta_k\bigr), \qquad (3.3)$$
which corresponds to a step of the natural policy gradient method (Kakade, 2002).

Critic Update. The minimization problem in (3.2) admits the following closed-form solution,
$$\widetilde{\omega}_{k+1} = \mathbb{E}_{\varsigma_{k+1}}\bigl[\varphi(s,a)\varphi(s,a)^\top\bigr]^{-1}\, \mathbb{E}_{\varsigma_{k+1}}\Bigl[\bigl[(1-\gamma) r + \gamma P^{\pi_{\theta_{k+1}}} Q_{\omega_k}\bigr](s,a)\, \varphi(s,a)\Bigr]. \qquad (3.4)$$
Since the closed-form solution $\widetilde{\omega}_{k+1}$ in (3.4) involves the expectation over the stationary state-action distribution $\varsigma_{k+1}$ of $\pi_{\theta_{k+1}}$, we use data to approximate such an expectation. More specifically, we sample $\{(s_{\ell,1}, a_{\ell,1})\}_{\ell \in [N]}$ and $\{(s_{\ell,2}, a_{\ell,2}, r_{\ell,2}, s'_{\ell,2}, a'_{\ell,2})\}_{\ell \in [N]}$ such that $(s_{\ell,1}, a_{\ell,1}) \sim \varsigma_{k+1}$, $(s_{\ell,2}, a_{\ell,2}) \sim \varsigma_{k+1}$, $r_{\ell,2} = r(s_{\ell,2}, a_{\ell,2})$, $s'_{\ell,2} \sim P(\cdot\,|\,s_{\ell,2}, a_{\ell,2})$, and $a'_{\ell,2} \sim \pi_{\theta_{k+1}}(\cdot\,|\,s'_{\ell,2})$, where $N$ is the sample size. We approximate $\widetilde{\omega}_{k+1}$ using $\omega_{k+1}$, which is defined as follows,
$$\omega_{k+1} = \Pi_R\biggl\{\Bigl(\sum_{\ell=1}^{N} \varphi(s_{\ell,1}, a_{\ell,1})\varphi(s_{\ell,1}, a_{\ell,1})^\top\Bigr)^{-1} \sum_{\ell=1}^{N} \bigl((1-\gamma)\, r_{\ell,2} + \gamma\, Q_{\omega_k}(s'_{\ell,2}, a'_{\ell,2})\bigr)\, \varphi(s_{\ell,2}, a_{\ell,2})\biggr\}. \qquad (3.5)$$
Here $\Pi_R$ is the projection operator, which projects the parameter onto the centered ball with radius $R$ in $\mathbb{R}^d$. Such a projection operator stabilizes the algorithm (Konda and Tsitsiklis, 2000; Bhatnagar et al., 2009). It is worth mentioning that one may also view the update in (3.5) as one step of the least-squares temporal difference method (Bradtke and Barto, 1996), which can be modified for the off-policy setting (Antos et al., 2007; Yu, 2010; Liu et al., 2018; Nachum et al., 2019; Xie et al., 2019; Zhang et al., 2020; Uehara and Jiang, 2019; Nachum and Dai, 2020). Such a modification allows the data points in (3.5) to be reused in the subsequent iterations, which further improves the sample complexity. Specifically, let $\varsigma_{\mathrm{bhv}} \in \mathcal{P}(\mathcal{S} \times \mathcal{A})$ be the stationary state-action distribution induced by a behavioral policy $\pi_{\mathrm{bhv}}$. We replace the actor and critic updates in (3.1) and (3.2) by
$$\theta_{k+1} \leftarrow \operatorname*{argmin}_{\theta}\; \mathbb{E}_{\varsigma_{\mathrm{bhv}}}\Bigl[\bigl(f_\theta(s,a) - \tau_{k+1}\bigl(\beta^{-1} Q_{\omega_k}(s,a) + \tau_k^{-1} f_{\theta_k}(s,a)\bigr)\bigr)^2\Bigr], \qquad (3.6)$$
$$\omega_{k+1} \leftarrow \operatorname*{argmin}_{\omega}\; \mathbb{E}_{\varsigma_{\mathrm{bhv}}}\Bigl[\bigl([Q_\omega - (1-\gamma) r - \gamma P^{\pi_{\theta_{k+1}}} Q_{\omega_k}](s,a)\bigr)^2\Bigr], \qquad (3.7)$$
respectively. With linear function approximation, the actor update in (3.6) reduces to (3.3), while the critic update in (3.7) admits the closed-form solution
$$\widetilde{\omega}_{k+1} = \mathbb{E}_{\varsigma_{\mathrm{bhv}}}\bigl[\varphi(s,a)\varphi(s,a)^\top\bigr]^{-1}\, \mathbb{E}_{\varsigma_{\mathrm{bhv}}}\Bigl[\bigl[(1-\gamma) r + \gamma P^{\pi_{\theta_{k+1}}} Q_{\omega_k}\bigr](s,a)\, \varphi(s,a)\Bigr],$$
which can be well approximated using state-action pairs drawn from $\varsigma_{\mathrm{bhv}}$. See §4 for a detailed discussion. Finally, by assembling the updates in (3.3) and (3.5), we present the linear actor-critic method in Algorithm 1, which is deferred to §B of the appendix.
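The sample-based critic update (3.5) is a least-squares regression followed by a projection, which the following sketch makes explicit. The array names are illustrative and the features are assumed to be precomputed; this is a reading of (3.5), not code from the paper.

```python
import numpy as np

def critic_update(phi1, phi2, r2, phi_next, omega_k, gamma, R):
    """Sample-based critic update in the spirit of (3.5).

    phi1:     (N, d) features phi(s_{l,1}, a_{l,1})
    phi2:     (N, d) features phi(s_{l,2}, a_{l,2})
    r2:       (N,)   rewards r(s_{l,2}, a_{l,2})
    phi_next: (N, d) features phi(s'_{l,2}, a'_{l,2}), with a' ~ pi_{theta_{k+1}}
    """
    A = phi1.T @ phi1                                         # sum_l phi phi^T
    target = (1 - gamma) * r2 + gamma * (phi_next @ omega_k)  # bootstrapped target
    b = phi2.T @ target                                       # sum_l target_l * phi_l
    omega = np.linalg.solve(A, b)
    norm = np.linalg.norm(omega)
    return omega if norm <= R else omega * (R / norm)         # projection Pi_R

# The linear actor step (3.3) is then a convex combination in parameter space:
# theta_{k+1} = tau_{k+1} * (omega_k / beta + theta_k / tau_k).
```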
4 THEORETICAL RESULTS

In this section, we upper bound the regret of the linear actor-critic method. We defer the analysis of the deep neural actor-critic method to §C of the appendix. Hereafter we assume that $|r(s,a)| \le r_{\max}$ for any $(s,a) \in \mathcal{S} \times \mathcal{A}$, where $r_{\max}$ is a positive absolute constant. First, we impose the following assumptions. Recall that $\varsigma^*$ is the stationary state-action distribution of $\pi^*$, while $\varsigma_k$ is the stationary state-action distribution of $\pi_{\theta_k}$. Moreover, let $\varrho^* \in \mathcal{P}(\mathcal{S} \times \mathcal{A})$ be a state-action distribution with respect to which we aim to characterize the performance of the actor-critic algorithm. Specifically, after $K+1$ actor updates, we are interested in upper bounding the following regret,
$$\mathbb{E}\Bigl[\sum_{k=0}^{K} \bigl\|Q^* - Q^{\pi_{\theta_{k+1}}}\bigr\|_{\varrho^*,1}\Bigr] = \mathbb{E}\Bigl[\sum_{k=0}^{K} Q^*(s,a) - Q^{\pi_{\theta_{k+1}}}(s,a)\Bigr], \qquad (4.1)$$
where the expectation is taken with respect to $\{\theta_k\}_{k \in [K+1]}$ and $(s,a) \sim \varrho^*$. Here we allow $\varrho^*$ to be any fixed distribution for generality, which might be different from $\varsigma^*$.

Assumption 4.1 (Concentrability Coefficient). The following statements hold.

(i) There exists a positive absolute constant $\phi^*$ such that $\phi_k \le \phi^*$ for any $k \ge 1$, where $\phi_k = \|\mathrm{d}\varsigma^* / \mathrm{d}\varsigma_k\|_{\varsigma_k,2}$.

(ii) We assume that, for any $k \ge 1$ and any sequence of policies $\{\pi_i\}_{i \ge 1}$, the $k$-step future-state-action distribution $\varrho^* P^{\pi_1} \cdots P^{\pi_k}$ is absolutely continuous with respect to $\varsigma^*$, where $\varrho^*$ is the same as the one in (4.1). Also, it holds for such $\varrho^*$ that $C_{\varrho^*,\varsigma^*} = (1-\gamma)^2 \sum_{k=1}^{\infty} k^2 \gamma^k c(k) < \infty$, where $c(k) = \sup_{\{\pi_i\}_{i \in [k]}} \|\mathrm{d}(\varrho^* P^{\pi_1} \cdots P^{\pi_k}) / \mathrm{d}\varsigma^*\|_{\varsigma^*,\infty}$.

In Assumption 4.1, $C_{\varrho^*,\varsigma^*}$ is known as the discounted-average concentrability coefficient of the future-state-action distributions. Such an assumption measures the stochastic stability properties of the MDP, and the class of MDPs with such properties is quite large. See Szepesvári and Munos (2005); Munos and Szepesvári (2008); Antos et al. (2008a;b); Scherrer (2013); Scherrer et al. (2015); Farahmand et al. (2016); Yang et al. (2019b); Geist et al. (2019); Chen and Jiang (2019) for more examples and discussion.

Assumption 4.2 (Zero Approximation Error). It holds for any $\omega, \theta \in \mathcal{B}(0,R)$ that $\inf_{\omega' \in \mathcal{B}(0,R)} \mathbb{E}_{\varsigma_{\pi_\theta}}\bigl[\bigl([T^{\pi_\theta} Q_\omega - \omega'^\top \varphi](s,a)\bigr)^2\bigr] = 0$, where $T^{\pi_\theta}$ is defined in (2.4).

Assumption 4.2 imposes a structural assumption on the MDP under the linear setting. Specifically, it assumes that the Bellman operator of each policy maps a linear value function to a linear function. Therefore, the value function associated with each policy (which is the fixed point of the corresponding Bellman operator) lies in the linear function class. Since the value functions are linear here, the energy-based policy class approximately covers the optimal policy as the temperature parameter $\tau$ goes to zero. In summary, Assumption 4.2 ensures that the energy-based policy class approximately captures the optimal policy, and thus there is no approximation error. When Assumption 4.2 does not hold, we only need to add an additional bias term to the regret upper bound in our theorem, without much change in the proof.

Assumption 4.3 (Well-Conditioned Feature). The minimum singular value of the matrix $\mathbb{E}_{\varsigma_k}[\varphi(s,a)\varphi(s,a)^\top]$ is uniformly lower bounded by a positive absolute constant $\underline{\sigma}$ for any $k \ge 1$.

Assumption 4.3 ensures that the minimization problem in (3.2) admits a unique minimizer, which is used in the critic update. Similar assumptions are commonly imposed in the literature (Bhandari et al., 2018; Xu et al., 2019; Zou et al., 2019; Wu et al., 2020).

Under Assumptions 4.1, 4.2, and 4.3, we upper bound the regret of Algorithm 1 in the following theorem.

Theorem 4.4. Suppose that Assumptions 4.1, 4.2, and 4.3 hold. Let $\varrho^*$ be a state-action distribution satisfying (ii) of Assumption 4.1. Also, for any sufficiently large $K > 0$, let $\beta = K^{1/2}$, $N = \Omega\bigl(K\, C^2_{\varrho^*,\varsigma^*}\, (\phi^*/\underline{\sigma})^2 \log^2 N\bigr)$, and let the sequence of policy parameters $\{\theta_k\}_{k \in [K+1]}$ be generated by Algorithm 1. It holds that
$$\mathbb{E}\Bigl[\sum_{k=0}^{K} Q^*(s,a) - Q^{\pi_{\theta_{k+1}}}(s,a)\Bigr] \le \Bigl(\frac{2}{(1-\gamma)^3}\log|\mathcal{A}| + O(1)\Bigr)\cdot K^{1/2}, \qquad (4.2)$$
where the expectation is taken with respect to $\{\theta_k\}_{k \in [K+1]}$ and $(s,a) \sim \varrho^*$.

We sketch the proof in §D; see §E.1 for a detailed proof. Theorem 4.4 establishes an $O(K^{1/2})$ regret of Algorithm 1, where $K$ is the total number of iterations, that is, an $O(K^{-1/2})$ optimality gap on average. Here $O(\cdot)$ omits terms involving $(1-\gamma)^{-1}$ and $\log|\mathcal{A}|$. To better understand Theorem 4.4, we consider the ideal setting where we have access to the action-value function $Q^\pi$ of any policy $\pi$. In such an ideal setting, the critic update is unnecessary.
However, the natural policy gradient method, which only uses the actor update, achieves the same $O(K^{1/2})$ regret in this ideal setting (Liu et al., 2019; Agarwal et al., 2019; Cai et al., 2019). In other words, in terms of the iteration complexity, Theorem 4.4 shows that, in the single-timescale setting, using only one step of the critic update along with one step of the actor update is as efficient as the natural policy gradient method in the ideal setting.

Furthermore, by the regret bound in (4.2), to obtain an $\epsilon$-globally optimal policy, it suffices to set $K \gtrsim (1-\gamma)^{-6}\, \epsilon^{-2} \log^2|\mathcal{A}|$ in Algorithm 1 and output a randomized policy drawn uniformly from $\{\pi_{\theta_k}\}_{k=1}^{K+1}$. Plugging such a $K$ into $N = \Omega(K\, C^2_{\varrho^*,\varsigma^*}\, (\phi^*/\underline{\sigma})^2 \log^2 N)$, we obtain $N = \widetilde{O}(\epsilon^{-2})$, where $\widetilde{O}(\cdot)$ omits the logarithmic terms. Thus, to achieve an $\epsilon$-globally optimal policy, the total sample complexity of Algorithm 1 is $\widetilde{O}(\epsilon^{-4})$. This matches the sample complexity results established in Xu et al. (2020); Hong et al. (2020) for two-timescale actor-critic methods. Meanwhile, notice that here the critic updates are on-policy and we draw $N$ new data points in each critic update. As discussed in §3.1, under the off-policy setting, the critic updates given in (3.7) can be implemented using a fixed dataset sampled from $\varsigma_{\mathrm{bhv}}$, the stationary state-action distribution induced by the behavioral policy. In this scenario, the total number of data points used by the algorithm is equal to $N$. Moreover, by imposing assumptions on $\varsigma_{\mathrm{bhv}}$ similar to (i) of Assumption 4.1 and Assumption 4.3, we can establish a similar $O(K^{1/2})$ regret as in (4.2) for the off-policy setting. As a result, with data reuse, to obtain an $\epsilon$-globally optimal policy, the sample complexity of Algorithm 1 is essentially $\widetilde{O}(\epsilon^{-2})$, which demonstrates the advantage of our single-timescale actor-critic method. Besides, focusing only on the convergence to an $\epsilon$-stationary point, Wu et al. (2020); Xu et al. (2020) establish a sample complexity of $\widetilde{O}(\epsilon^{-5/2})$ for two-timescale actor-critic, where $\epsilon$ measures the squared Euclidean norm of the policy gradient. In contrast, by adopting the natural policy gradient (Kakade, 2002) in the actor updates, we achieve convergence to the globally optimal policy. We remark that the idea of off-policy evaluation cannot be applied to the typical two-timescale setting (Wu et al., 2020; Xu et al., 2020), where the critic is updated using TD learning (e.g., TD(0) and TD($\lambda$)), since off-policy TD methods may diverge even with linear function approximation (Baird et al., 1995; Sutton et al., 2008). To the best of our knowledge, we establish the rate of convergence and global optimality of the actor-critic method with function approximation in the single-timescale setting for the first time.

Furthermore, as we show in Theorem C.5 of §C, when both the actor and the critic are represented using overparameterized deep neural networks, we establish a similar $O\bigl((1-\gamma)^{-3} \log|\mathcal{A}| \cdot K^{1/2}\bigr)$ regret when the architectures of the actor and critic neural networks are properly chosen. To the best of our knowledge, this is the first theoretical guarantee for the actor-critic method with deep neural network function approximation in terms of the rate of convergence and global optimality.
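The sample-complexity bookkeeping discussed above can be summarized in two lines; this is only a restatement of the counts already given, with logarithmic factors absorbed into $\widetilde{O}(\cdot)$.

```latex
% eps-optimality of the averaged iterates requires K^{1/2}/K = K^{-1/2} <= eps:
\[
K \;\asymp\; (1-\gamma)^{-6}\,\epsilon^{-2}\log^{2}|\mathcal{A}|,
\qquad
N \;=\; \widetilde{\Omega}(K) \;=\; \widetilde{O}(\epsilon^{-2}).
\]
% On-policy (N fresh transitions per critic step): total = K * N = O~(eps^{-4}).
% Off-policy (one fixed dataset, reused):          total = N     = O~(eps^{-2}).
```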
AljIbLyWOjd
Theory for a one-timescale AC
8: Top 50% of accepted papers, clear accept
This is a theoretical paper. The paper studies the convergence and optimality of actor-critic with function approximation in a single-timescale setting. The proposed single-timescale AC applies PPO for the actor and updates the critic by applying the Bellman operator to the critic one time. Despite the actor-critic coupling, the paper establishes global convergence with sublinear rates for the proposed AC method under both linear and neural network function approximation.

Strong points:
(1) The paper is well-written, and the motivation is easy to follow. The paper also provides detailed literature on related works.
(2) The studied AC is practical in the sense that both the actor and the critic can take function approximation and their updates work in one timescale. The proposed scheme could complement the previous study of AC methods in the two-timescale setting.
(3) The provided convergence theory could offer new insights into dealing with the coupling of the actor and the critic. The provided intuition makes sense to me, although I didn't get time to check the proof details.

Weak points and comments:
(1) Although the paper provides a general setup for the one-timescale AC method, it is worth providing some generic examples to explain the theory, e.g., the energy-based policy with direct parametrization.
(2) The theory seems to be specific to the energy-based policy. Are there any other types of policies that can also be considered? If not, it would be helpful to comment on the importance of the energy-based policy in practice or from the theoretical point of view.
(3) The proposed AC methods rely on population quantities, e.g., expectations over the state-action visitation probability. How practical are they? Or how is the proposed AC related to the practical one-timescale AC methods mentioned in the Introduction?
(4) In (2.7), the critic depends on the current policy at k+1. What do you mean by saying they are updated simultaneously in the Abstract?
(5) It would be helpful to make a table comparing the proposed AC with others in terms of sampling assumptions, policy classes, and convergence.

I believe my concerns can be addressed during reviewing. For me, the development of this paper is new, and I am more inclined to agree with acceptance.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Single-Timescale Actor-Critic Provably Finds Globally Optimal Policy ### Paper Abstract We study the global convergence and global optimality of actor-critic, one of the most popular families of reinforcement learning algorithms. While most existing works on actor-critic employ bi-level or two-timescale updates, we focus on the more practical single-timescale setting, where the actor and critic are updated simultaneously. Specifically, in each iteration, the critic update is obtained by applying the Bellman evaluation operator only once while the actor is updated in the policy gradient direction computed using the critic. Moreover, we consider two function approximation settings where both the actor and critic are represented by linear or deep neural networks. For both cases, we prove that the actor sequence converges to a globally optimal policy at a sublinear $O(K^{-1/2})$ rate, where $K$ is the number of iterations. To the best of our knowledge, we establish the rate of convergence and global optimality of single-timescale actor-critic with linear function approximation for the first time. Moreover, under the broader scope of policy optimization with nonlinear function approximation, we prove that actor-critic with deep neural network finds the globally optimal policy at a sublinear rate for the first time. ### Paper Keywords ["optimal policy", "actor", "critic", "provably", "global optimality", "rate", "first time", "global convergence", "popular families", "reinforcement"] ### Paper Content ABSTRACTWe study the global convergence and global optimality of actor-critic, one of themost popular families of reinforcement learning algorithms. While most exist-ing works on actor-critic employ bi-level or two-timescale updates, we focus onthe more practical single-timescale setting, where the actor and critic are updatedsimultaneously. Specifically, in each iteration, the critic update is obtained by ap-plying the Bellman evaluation operator only once while the actor is updated in thepolicy gradient direction computed using the critic. Moreover, we consider twofunction approximation settings where both the actor and critic are represented bylinear or deep neural networks. For both cases, we prove that the actor sequenceconverges to a globally optimal policy at a sublinear O(K1=2)rate, where Kis the number of iterations. To the best of our knowledge, we establish the rateof convergence and global optimality of single-timescale actor-critic with linearfunction approximation for the first time. Moreover, under the broader scope ofpolicy optimization with nonlinear function approximation, we prove that actor-critic with deep neural network finds the globally optimal policy at a sublinear ratefor the first time.1 I NTRODUCTIONIn reinforcement learning (RL) (Sutton et al., 1998), the agent aims to make sequential decisions thatmaximize the expected total reward through interacting with the environment and learning from theexperiences, where the environment is modeled as a Markov Decision Process (MDP) (Puterman,2014). To learn a policy that achieves the highest possible total reward in expectation, the actor-criticmethod (Konda and Tsitsiklis, 2000) is among the most commonly used algorithms. In actor-critic,the actor refers to the policy and the critic corresponds to the value function that characterizes theperformance of the actor. 
This method directly optimizes the expected total return over the policyclass by iteratively improving the actor, where the update direction is determined by the critic. Inparticular, recently, actor-critic combined with deep neural networks (LeCun et al., 2015) achievestremendous empirical successes in solving large-scale RL tasks, such as the game of Go (Silveret al., 2017), StarCraft (Vinyals et al., 2019), Dota (OpenAI, 2018), Rubik’s cube (Agostinelli et al.,2019; Akkaya et al., 2019), and autonomous driving (Sallab et al., 2017). See Li (2017) for a detailedsurvey of the recent developments of deep reinforcement learning.Despite these great empirical successes of actor-critic, there is still an evident chasm between theoryand practice. Specifically, to establish convergence guarantees for actor-critic, most existing workseither focus on the bi-level setting or the two-timescale setting, which are seldom adopted in practice.In particular, under the bi-level setting (Yang et al., 2019a; Wang et al., 2019; Agarwal et al., 2019;Fu et al., 2019; Liu et al., 2019; Abbasi-Yadkori et al., 2019a;b; Cai et al., 2019; Hao et al., 2020;Mei et al., 2020; Bhandari and Russo, 2020), the actor is updated only after the critic solves thepolicy evaluation sub-problem completely, which is equivalent to applying the Bellman evaluationoperator to the previous critic for infinite times. Consequently, actor-critic under the bi-level setting1Published as a conference paper at ICLR 2021is a double-loop iterative algorithm where the inner loop is allocated for solving the policy evaluationsub-problem of the critic. In terms of theoretical analysis, such a double-loop structure decouplesthe analysis for the actor and critic. For the actor, the problem is essentially reduced to analyzing theconvergence of a variant of the policy gradient method (Sutton et al., 2000; Kakade, 2002) wherethe error of the gradient estimate depends on the policy evaluation error of the critic. Besides, underthe two-timescale setting (Borkar and Konda, 1997; Konda and Tsitsiklis, 2000; Xu et al., 2020;Wu et al., 2020; Hong et al., 2020), the actor and the critic are updated simultaneously, but withdisparate stepsizes. More concretely, the stepsize of the actor is set to be much smaller than thatof the critic, with the ratio between these stepsizes converging to zero. In an asymptotic sense,such a separation between stepsizes ensures that the critic completely solves its policy evaluationsub-problem asymptotically. In other words, such a two-timescale scheme results in a separationbetween actor and critic in an asymptotic sense, which leads to asymptotically unbiased policygradient estimates. 
In sum, in terms of convergence analysis, the existing theory of actor-critichinges on decoupling the analysis for critic and actor, which is ensured via focusing on the bi-levelor two-timescale settings.However, most practical implementations of actor-critic are under the single-timescale setting (Pe-ters and Schaal, 2008a; Schulman et al., 2015; Mnih et al., 2016; Schulman et al., 2017; Haarnojaet al., 2018), where the actor and critic are simultaneously updated, and particularly, the actor isupdated without the critic reaching an approximate solution to the policy evaluation sub-problem.Meanwhile, in comparison with the two-timescale setting, the actor is equipped with a much largerstepsize in the the single-timescale setting such that the asymptotic separation between the analysisof actor and critic is no longer valid.Furthermore, when it comes to function approximation, most existing works only analyze the con-vergence of actor-critic with either linear function approximation (Xu et al., 2020; Wu et al., 2020;Hong et al., 2020), or shallow-neural-network parameterization (Wang et al., 2019; Liu et al., 2019).In contrast, practically used actor-critic methods such as asynchronous advantage actor-critic (Mnihet al., 2016) and soft actor-critic (Haarnoja et al., 2018) oftentimes represent both the actor and criticusing deep neural networks.Thus, the following question is left open:Does single-timescale actor-critic provably find a globally optimal policy under the functionapproximation setting, especially when deep neural networks are employed?To answer such a question, we make the first attempt to investigate the convergence and globaloptimality of single-timescale actor-critic with linear and neural network function approximation. Inparticular, we focus on the family of energy-based policies and aim to find the optimal policy withinthis class. Here we represent both the energy function and the critic as linear or deep neural networkfunctions. In our actor-critic algorithm, the actor update follows proximal policy optimization (PPO)(Schulman et al., 2017) and the critic update is obtained by applying the Bellman evaluation operatoronly once to the current critic iterate. As a result, the actor is updated before the critic solves thepolicy evaluation sub-problem. Such a coupled updating structure persists even when the numberof iterations goes to infinity, which implies that the update direction of the actor is always biasedcompared with the policy gradient direction. This brings an additional challenge that is absent in thebi-level and the two-timescale settings, where the actor and critic are decoupled asymptotically.To tackle such a challenge, our analysis captures the joint effect of actor and critic updates on theobjective function, dubbed as the “double contraction” phenomenon, which plays a pivotal role forthe success of single-timescale actor-critic. Specifically, thanks to the discount factor of the MDP,the Bellman evaluation operator is contractive, which implies that, after each update, the critic makesnoticeable progress by moving towards the value function associated with the current actor. As aresult, although we use a biased estimate of the policy gradient, thanks to the contraction broughtby the discount factor, the accumulative effect of the biases is controlled. Such a phenomenonenables us to characterize the progress of each iteration of joint actor and critic update, and thusyields the convergence to the globally optimal policy. 
In particular, for both the linear and neuralsettings, we prove that, single-timescale actor-critic finds a O(K1=2)-globally optimal policy afterKiterations. To the best of our knowledge, we seem to establish the first theoretical guarantee ofglobal convergence and global optimality for actor-critic with function approximation in the single-timescale setting. Moreover, under the broader scope of policy optimization with nonlinear function2Published as a conference paper at ICLR 2021approximation, our work seems to prove convergence and optimality guarantees for actor-critic withdeep neural network for the first time.Contribution. Our contribution is two-fold. First, in the single-timescale setting with linear functionapproximation, we prove that, after Kiterations of actor and critic updates, actor-critic returns apolicy that is at most O(K1=2)inferior to the globally optimal policy. Second, when both theactor and critic are represented by deep neural networks, we prove a similar O(K1=2)rate ofconvergence to the globally optimal policy when the architecture of the neural networks are properlychosen.Related Work. Our work extends the line of works on the convergence of actor-critic under thefunction approximation setting. In particular, actor-critic is first introduced in Sutton et al. (2000);Konda and Tsitsiklis (2000). Later, Kakade (2002); Peters and Schaal (2008b) propose the naturalactor-critic method which updates the policy via the natural gradient (Amari, 1998) direction. Theconvergence of (natural) actor-critic with linear function approximation are studied in Bhatnagaret al. (2008; 2009); Bhatnagar (2010); Castro and Meir (2010); Maei (2018). However, these worksonly characterize the asymptotic convergence of actor-critic and their proofs all resort to tools fromstochastic approximation via ordinary differential equations (Borkar, 2008). As a result, these worksonly show that actor-critic with linear function approximation converges to the set of stable equilibriaof a set of ordinary differential equations. Recently, Zhang et al. (2019) propose a variant of actor-critic where Monte-Carlo sampling is used to ensure the critic and the policy gradient estimatesare unbiased. Although they incorporate nonlinear function approximation in the actor, they onlyestablish finite-time convergence result to a stationary point of the expected total reward. Moreover,due to having an inner loop for solving the policy evaluation sub-problem, they focus on the bi-levelsetting. Moreover, under the two-timescale setting, Wu et al. (2020); Xu et al. (2020) show that actor-critic with linear function approximation finds an "-stationary point with eO("5=2)samples, where"measures the squared norm of the policy gradient. All of these results establish the convergenceof actor-critic, without characterizing the optimality of the policy obtained by actor-critic.In terms of the global optimality of actor-critic, Fazel et al. (2018); Malik et al. (2018); Tu andRecht (2018); Yang et al. (2019a); Bu et al. (2019); Fu et al. (2019) show that policy gradient andbi-level actor-critic methods converge to the globally optimal policies under the linear-quadraticsetting, where the state transitions follow a linear dynamical system and the reward function isquadratic. For general MDPs, Bhandari and Russo (2019) recently prove the global optimality ofvanilla policy gradient under the assumption that the families of policies and value functions areboth convex. In addition, our work is also related to Liu et al. 
(2019) and Wang et al. (2019),where they establish the global optimality of proximal policy optimization and (natural) actor-critic,respectively, where both the actor and critic are parameterized by two-layer neural networks. Ourwork is also related to Agarwal et al. (2019); Abbasi-Yadkori et al. (2019a;b); Cai et al. (2019);Hao et al. (2020); Mei et al. (2020); Bhandari and Russo (2020), which focus on characterizing theoptimality of natural policy gradient in tabular and/or linear settings. However, these aforementionedworks all focus on bi-level actor-critic, where the actor is updated only after the critic solves thepolicy evaluation sub-problem to an approximate optimum. Besides, these works consider linearor two-layer neural network function approximations whereas we focus on the setting with deepneural networks. Furthermore, under the two-timescale setting, Xu et al. (2020); Hong et al. (2020)prove that linear actor-critic requires a sample complexity of eO("4)for obtaining an "-globallyoptimal policy. In comparison, our O(K1=2)convergence for single-timescale actor-critic can betranslated into a similar eO("4)sample complexity directly. Moreover, when reusing the data, ourresult leads to an improved eO("2)sample complexity. In addition, our work is also related toGeist et al. (2019), which proposes a variant of policy iteration algorithm with Bregman divergenceregularization. Without considering an explicit form of function approximation, their algorithmis shown to converge to the globally optimal policy at a similar O(K1=2)rate, where Kis thenumber of policy updates. In contrast, our method is single-timescale actor-critic with linear ordeep neural network function approximation, which enjoys both global convergence and globaloptimality. Meanwhile, our proof is based on a finite-sample analysis, which involves dealing withthe algorithmic errors that track the performance of actor and critic updates as well as the statisticalerror due to having finite data.3Published as a conference paper at ICLR 2021Our work is also related to the literature on deep neural networks. Previous works (Daniely, 2017;Jacot et al., 2018; Wu et al., 2018; Allen-Zhu et al., 2018a;b; Du et al., 2018; Zou et al., 2018; Chizatand Bach, 2018; Jacot et al., 2018; Li and Liang, 2018; Cao and Gu, 2019a;b; Arora et al., 2019; Leeet al., 2019; Gao et al., 2019) analyze the computational and statistical rates of supervised learningmethods with overparameterized neural networks. In contrast, our work employs overparameterizeddeep neural networks in actor-critic for solving RL tasks, which is significantly more challengingthan supervised learning due to the interplay between the actor and the critic.Notation. We denote by [n]the setf1;2;:::;ng. For any measure and1p1 , we denote bykfk;p= (RXjf(x)jpd)1=pandkfkp= (RXjf(x)jpd)1=p, whereis the Lebesgue measure.2 B ACKGROUNDIn this section, we introduce the background on discounted Markov decision processes (MDPs) andactor-critic methods.2.1 D ISCOUNTED MDPA discounted MDP is defined by a tuple (S;A;P;;r; ). HereSandAare the state and actionspaces, respectively, P:SSA! [0;1]is the Markov transition kernel, :S! [0;1]is theinitial state distribution, r:SA! Ris the deterministic reward function, and 2[0;1)is thediscount factor. A policy (ajs)measures the probability of taking the action aat the states. 
We focus on a family of parameterized policies defined as follows,
$$\Pi = \big\{\pi_\theta(\cdot \,|\, s) \in \mathcal{P}(\mathcal{A}) : s \in \mathcal{S}\big\}, \qquad (2.1)$$
where $\mathcal{P}(\mathcal{A})$ is the probability simplex on the action space $\mathcal{A}$ and $\theta$ is the parameter of the policy $\pi_\theta$. For any state-action pair $(s,a) \in \mathcal{S} \times \mathcal{A}$, we define the action-value function as follows,
$$Q^\pi(s,a) = (1-\gamma)\,\mathbb{E}_\pi\Big[\sum_{t=0}^{\infty} \gamma^t\, r(s_t, a_t) \,\Big|\, s_0 = s,\, a_0 = a\Big], \qquad (2.2)$$
where $s_{t+1} \sim \mathcal{P}(\cdot \,|\, s_t, a_t)$ and $a_{t+1} \sim \pi(\cdot \,|\, s_{t+1})$ for any $t \ge 0$. We use $\mathbb{E}_\pi[\cdot]$ to denote that the actions follow the policy $\pi$, which in turn affects the transition of the states. We aim to find an optimal policy $\pi^*$ such that $Q^{\pi^*}(s,a) \ge Q^{\pi}(s,a)$ for any policy $\pi$ and state-action pair $(s,a) \in \mathcal{S} \times \mathcal{A}$. That is to say, such an optimal policy $\pi^*$ attains a higher expected total reward than any other policy, regardless of the initial state-action pair $(s,a)$. For notational convenience, we denote by $Q^*(s,a) = Q^{\pi^*}(s,a)$ for any $(s,a) \in \mathcal{S} \times \mathcal{A}$ hereafter.
Meanwhile, we denote by $\nu_\pi(s)$ and $\sigma_\pi(s,a) = \nu_\pi(s)\,\pi(a \,|\, s)$ the stationary state distribution and stationary state-action distribution of the policy $\pi$, respectively, for any $(s,a) \in \mathcal{S} \times \mathcal{A}$. Correspondingly, we denote by $\nu^*(s)$ and $\sigma^*(s,a)$ the stationary state distribution and stationary state-action distribution of the optimal policy $\pi^*$, respectively, for any $(s,a) \in \mathcal{S} \times \mathcal{A}$. For ease of presentation, given any functions $g_1 \colon \mathcal{S} \to \mathbb{R}$ and $g_2 \colon \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, we define two operators $\mathcal{P}$ and $\mathcal{P}^\pi$ as follows,
$$[\mathcal{P} g_1](s,a) = \mathbb{E}\big[g_1(s_1) \,\big|\, s_0 = s,\, a_0 = a\big], \qquad [\mathcal{P}^\pi g_2](s,a) = \mathbb{E}\big[g_2(s_1, a_1) \,\big|\, s_0 = s,\, a_0 = a\big], \qquad (2.3)$$
where $s_1 \sim \mathcal{P}(\cdot \,|\, s_0, a_0)$ and $a_1 \sim \pi(\cdot \,|\, s_1)$. Intuitively, given the current state-action pair $(s_0, a_0)$, the operator $\mathcal{P}$ pushes the agent to its next state $s_1$ following the Markov transition kernel $\mathcal{P}(\cdot \,|\, s_0, a_0)$, while the operator $\mathcal{P}^\pi$ pushes the agent to its next state-action pair $(s_1, a_1)$ following the Markov transition kernel $\mathcal{P}(\cdot \,|\, s_0, a_0)$ and the policy $\pi(\cdot \,|\, s_1)$. These operators also relate to the Bellman evaluation operator $\mathcal{T}^\pi$, which is defined for any function $g \colon \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ as follows,
$$\mathcal{T}^\pi g = (1-\gamma)\, r + \gamma\, \mathcal{P}^\pi g. \qquad (2.4)$$
The Bellman evaluation operator $\mathcal{T}^\pi$ is used to characterize the actor-critic method in the following section. By the definition in (2.2), it is straightforward to verify that the action-value function $Q^\pi$ is the fixed point of the Bellman evaluation operator $\mathcal{T}^\pi$ defined in (2.4), that is, $Q^\pi = \mathcal{T}^\pi Q^\pi$ for any policy $\pi$. For notational convenience, we let $(\mathcal{P}^\pi)^\ell$ denote the $\ell$-fold composition $\mathcal{P}^\pi \circ \cdots \circ \mathcal{P}^\pi$, where $\ell$ operators $\mathcal{P}^\pi$ are composed together. Such notation is also adopted for other linear operators such as $\mathcal{P}$ and $\mathcal{T}^\pi$.
2.2 ACTOR-CRITIC METHOD
To obtain an optimal policy $\pi^*$, the actor-critic method (Konda and Tsitsiklis, 2000) aims to maximize the expected total reward as a function of the policy, which is equivalent to solving the following maximization problem,
$$\max_{\pi_\theta \in \Pi} \; J(\pi_\theta) = \mathbb{E}_{s \sim \zeta,\, a \sim \pi_\theta(\cdot | s)}\big[Q^{\pi_\theta}(s,a)\big], \qquad (2.5)$$
where $\zeta$ is the initial state distribution, $Q^{\pi_\theta}$ is the action-value function defined in (2.2), and the family $\Pi$ of parameterized policies is defined in (2.1). The actor-critic method solves the maximization problem in (2.5) via first-order optimization using an estimator of the policy gradient $\nabla_\theta J(\pi_\theta)$, where $\theta$ is the parameter of the policy $\pi_\theta$. In detail, by the policy gradient theorem (Sutton et al., 2000), we have
$$\nabla_\theta J(\pi_\theta) = \mathbb{E}_{(s,a) \sim \varrho_{\pi_\theta}}\big[Q^{\pi_\theta}(s,a)\, \nabla_\theta \log \pi_\theta(a \,|\, s)\big]. \qquad (2.6)$$
Here $\varrho_{\pi_\theta}$ is the state-action visitation measure of the policy $\pi_\theta$, which is defined as $\varrho_{\pi_\theta}(s,a) = (1-\gamma) \sum_{t=0}^{\infty} \gamma^t \Pr[s_t = s,\, a_t = a]$.
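To make these objects concrete, the following minimal tabular sketch (all sizes, seeds, and variable names are illustrative and not from the paper) computes $Q^\pi$ as the fixed point of $\mathcal{T}^\pi$ from (2.4) and evaluates the policy gradient (2.6) through the visitation measure $\varrho_\pi$ for a softmax parameterization:

```python
import numpy as np

# Hypothetical small MDP: P[s, a, s'] is the transition kernel, r[s, a] the
# reward, gamma the discount, zeta the initial state distribution.
S, A, gamma = 4, 3, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))     # each P[s, a] sums to 1 over s'
r = rng.uniform(size=(S, A))
zeta = np.full(S, 1.0 / S)
pi = rng.dirichlet(np.ones(A), size=S)         # pi[s, a] = pi(a|s)

def bellman_eval(Q, pi):
    """One application of T^pi from (2.4):
    (T^pi Q)(s,a) = (1-gamma) r(s,a) + gamma E[Q(s', a')]."""
    V = (pi * Q).sum(axis=1)                   # V(s') = <pi(.|s'), Q(s', .)>
    return (1 - gamma) * r + gamma * P @ V

# Q^pi is the fixed point of T^pi; repeated application contracts at rate gamma.
Q = np.zeros((S, A))
for _ in range(500):
    Q = bellman_eval(Q, pi)

# Visitation measure rho(s,a) = (1-gamma) sum_t gamma^t Pr[s_t=s, a_t=a],
# obtained in closed form from the state-action transition matrix under pi.
P_sa = (P[:, :, :, None] * pi[None, None, :, :]).reshape(S * A, S * A)
d0 = (zeta[:, None] * pi).reshape(S * A)
rho = (1 - gamma) * np.linalg.solve(np.eye(S * A) - gamma * P_sa.T, d0)
assert np.isclose(rho.sum(), 1.0)              # rho is a probability measure

# Policy gradient (2.6) for a tabular softmax policy pi(a|s) ~ exp(theta[s,a]);
# grad log pi gives grad[s,b] = rho(s,b) Q(s,b) - pi(b|s) sum_a rho(s,a) Q(s,a),
# up to the paper's (1-gamma) normalization of Q.
rho_sa = rho.reshape(S, A)
grad = rho_sa * Q - (rho_sa * Q).sum(axis=1, keepdims=True) * pi
```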
Based on the closed form of the policy gradient in (2.6), the actor-critic method consists of two parts: (i) the critic update, where a policy evaluation algorithm is invoked to estimate the action-value function $Q^{\pi_\theta}$, e.g., by applying the Bellman evaluation operator $\mathcal{T}^{\pi_\theta}$ to the current estimator of $Q^{\pi_\theta}$, and (ii) the actor update, where a policy improvement algorithm, e.g., the policy gradient method, is invoked using the updated estimator of $Q^{\pi_\theta}$.
In this paper, we consider the following variant of the actor-critic method,
$$\theta_{k+1} \leftarrow \operatorname*{argmax}_{\pi_\theta \in \Pi} \; \mathbb{E}_k\Big[\big\langle Q_k(s, \cdot),\, \pi_\theta(\cdot \,|\, s) \big\rangle - \eta \cdot \mathrm{KL}\big(\pi_\theta(\cdot \,|\, s) \,\big\|\, \pi_{\theta_k}(\cdot \,|\, s)\big)\Big],$$
$$Q_{k+1}(s,a) \leftarrow \mathbb{E}_{\pi_{\theta_{k+1}}}\big[(1-\gamma)\, r(s_0, a_0) + \gamma\, Q_k(s_1, a_1) \,\big|\, s_0 = s,\, a_0 = a\big], \qquad (2.7)$$
for any $(s,a) \in \mathcal{S} \times \mathcal{A}$, where $s_1 \sim \mathcal{P}(\cdot \,|\, s_0, a_0)$, $a_1 \sim \pi_{\theta_{k+1}}(\cdot \,|\, s_1)$, $\eta > 0$ is a penalty parameter, and we write $\mathbb{E}_k[\cdot] = \mathbb{E}_{s \sim \nu_k}[\cdot]$ for notational convenience, with $\nu_k = \nu_{\pi_{\theta_k}}$. Here $\Pi$ is defined in (2.1) and $\mathrm{KL}(\pi_\theta(\cdot \,|\, s) \,\|\, \pi_{\theta_k}(\cdot \,|\, s))$ is the Kullback-Leibler (KL) divergence between $\pi_\theta(\cdot \,|\, s)$ and $\pi_{\theta_k}(\cdot \,|\, s)$, which is defined for any $s \in \mathcal{S}$ as $\mathrm{KL}(\pi_\theta(\cdot \,|\, s) \,\|\, \pi_{\theta_k}(\cdot \,|\, s)) = \sum_{a \in \mathcal{A}} \log\big(\pi_\theta(a \,|\, s) / \pi_{\theta_k}(a \,|\, s)\big)\, \pi_\theta(a \,|\, s)$. In (2.7), the actor update uses the proximal policy optimization (PPO) method (Schulman et al., 2017), while the critic update applies the Bellman evaluation operator $\mathcal{T}^{\pi_{\theta_{k+1}}}$ defined in (2.4) to $Q_k$, the current estimator of the action-value function, only once. Furthermore, we remark that the updates in (2.7) provide a general framework in the following two aspects. First, the critic update can be extended to $Q_{k+1} \leftarrow (\mathcal{T}^{\pi_{\theta_{k+1}}})^{\kappa}\, Q_k$ for any fixed $\kappa \ge 1$, which corresponds to updating the value function via $\kappa$-step rollouts following $\pi_{\theta_{k+1}}$. Here we focus on the case with $\kappa = 1$ for simplicity; our theory can be easily modified for any fixed $\kappa$. Moreover, the KL divergence used in the actor step can also be replaced by other Bregman divergences between probability distributions over $\mathcal{A}$. Second, the actor and critic updates in (2.7) form a general template that admits both on- and off-policy evaluation methods and various function approximators in the actor and critic. In the next section, we present an incarnation of (2.7) with on-policy sampling and linear and neural network function approximation.
Furthermore, for analyzing the actor-critic method, most existing works (Yang et al., 2019a; Wang et al., 2019; Agarwal et al., 2019; Fu et al., 2019; Liu et al., 2019) rely on (approximately) obtaining $Q^{\pi_{\theta_{k+1}}}$ at each iteration, which is equivalent to applying the Bellman evaluation operator $\mathcal{T}^{\pi_{\theta_{k+1}}}$ infinitely many times to $Q_k$. This is usually achieved by minimizing the mean-squared Bellman error $\|Q - \mathcal{T}^{\pi_{\theta_{k+1}}} Q\|_{\sigma_{k+1},2}^2$ using stochastic semi-gradient descent, e.g., as in the temporal-difference method (Sutton, 1988), to update the critic for sufficiently many iterations. The unique global minimizer of the mean-squared Bellman error gives the action-value function $Q^{\pi_{\theta_{k+1}}}$, which is then used in the actor update. Meanwhile, the two-timescale setting is also considered in existing works (Borkar and Konda, 1997; Konda and Tsitsiklis, 2000; Xu et al., 2019; 2020; Wu et al., 2020; Hong et al., 2020), which require the actor to be updated more slowly than the critic in an asymptotic sense. Such a requirement is usually satisfied by forcing the ratio between the stepsizes of the actor and critic updates to go to zero asymptotically.
In comparison with the setting with bi-level updates, we consider the single-timescale actor and critic updates in (2.7), where the critic involves only one step of update, that is, applying the Bellman evaluation operator $\mathcal{T}^{\pi_{\theta_{k+1}}}$ to $Q_k$ only once.
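In the tabular case, the updates in (2.7) can be written out exactly, since the KL-regularized actor step has a closed-form multiplicative-weights solution. The sketch below (illustrative constants; exact expectations rather than samples) shows one possible instantiation of the single-timescale loop:

```python
import numpy as np

# Tabular instantiation of (2.7): the actor step has a closed-form
# multiplicative-weights solution, and the critic applies the Bellman
# evaluation operator T^{pi_{k+1}} exactly once per iteration.
S, A, gamma, eta = 4, 3, 0.9, 10.0   # eta is the KL penalty; values illustrative
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(S), size=(S, A))
r = rng.uniform(size=(S, A))

pi = np.full((S, A), 1.0 / A)        # pi_0: uniform policy
Q = np.zeros((S, A))                 # Q_0: initial critic

for k in range(200):
    # Actor: argmax_pi <Q_k(s,.), pi(.|s)> - eta * KL(pi || pi_k) solves to
    # pi_{k+1}(a|s) proportional to pi_k(a|s) * exp(Q_k(s,a) / eta).
    logits = np.log(pi) + Q / eta
    logits -= logits.max(axis=1, keepdims=True)   # for numerical stability
    pi = np.exp(logits)
    pi /= pi.sum(axis=1, keepdims=True)

    # Critic: a single application of T^{pi_{k+1}} to Q_k (no inner loop).
    V = (pi * Q).sum(axis=1)
    Q = (1 - gamma) * r + gamma * P @ V
```

Note that the critic deliberately lags the exact $Q^{\pi_{\theta_{k+1}}}$, since only one Bellman backup is performed per iteration; this is precisely the actor-critic coupling that the analysis in §4 has to control.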
Meanwhile, in comparison with the two-timescale setting, where the actor and critic are updated simultaneously but with the ratio between their stepsizes asymptotically going to zero, the single-timescale setting is able to achieve a faster rate of convergence by allowing the actor to be updated with a larger stepsize while the critic is updated simultaneously. In particular, such a single-timescale setting better captures a broad range of practical algorithms (Peters and Schaal, 2008a; Schulman et al., 2015; Mnih et al., 2016; Schulman et al., 2017; Haarnoja et al., 2018), where the stepsize of the actor is not asymptotically zero. In §3, we discuss the implementation of the updates in (2.7) for different schemes of function approximation. In §4, we compare the rates of convergence between the two-timescale and single-timescale settings.
3 ALGORITHMS
We consider two settings, where the actor and critic are parameterized using linear functions and deep neural networks (the latter is deferred to §A of the appendix), respectively. We consider the energy-based policy $\pi_\theta(a \,|\, s) \propto \exp(\tau^{-1} f_\theta(s,a))$, where the energy function $f_\theta(s,a)$ is parameterized with the parameter $\theta$ and $\tau > 0$ is a temperature parameter. Also, for the (estimated) action-value function, we consider the parameterization $Q_\omega(s,a)$ for any $(s,a) \in \mathcal{S} \times \mathcal{A}$, where $\omega$ is the parameter. For such parameterizations of the actor and critic, the updates in (2.7) take the following forms.
Actor Update. The following proposition gives the closed form of $\pi_{\theta_{k+1}}$ in (2.7).
Proposition 3.1. Let $\pi_{\theta_k}(a \,|\, s) \propto \exp(\tau_k^{-1} f_{\theta_k}(s,a))$ be an energy-based policy and let $\widetilde{\pi}_{k+1} = \operatorname{argmax}_\pi \mathbb{E}_k\big[\langle Q_{\omega_k}(s, \cdot),\, \pi(\cdot \,|\, s)\rangle - \eta \cdot \mathrm{KL}(\pi(\cdot \,|\, s) \,\|\, \pi_{\theta_k}(\cdot \,|\, s))\big]$. Then $\widetilde{\pi}_{k+1}$ has the following closed form: $\widetilde{\pi}_{k+1}(a \,|\, s) \propto \exp\big(\eta^{-1} Q_{\omega_k}(s,a) + \tau_k^{-1} f_{\theta_k}(s,a)\big)$ for any $(s,a) \in \mathcal{S} \times \mathcal{A}$, where the expectation $\mathbb{E}_k$ is taken over $\nu_k = \nu_{\pi_{\theta_k}}$, the stationary state distribution of $\pi_{\theta_k}$.
See §G.1 for a detailed proof of Proposition 3.1. Motivated by Proposition 3.1, to implement the actor update in (2.7), we update the actor parameter $\theta$ by solving the following minimization problem,
$$\theta_{k+1} \leftarrow \operatorname*{argmin}_\theta \; \mathbb{E}_{\sigma_k}\Big[\Big(f_\theta(s,a) - \tau_{k+1}\big(\eta^{-1} Q_{\omega_k}(s,a) + \tau_k^{-1} f_{\theta_k}(s,a)\big)\Big)^2\Big], \qquad (3.1)$$
where $\sigma_k = \sigma_{\pi_{\theta_k}}$ is the stationary state-action distribution of $\pi_{\theta_k}$.
Critic Update. To implement the critic update in (2.7), we update the critic parameter $\omega$ by solving the following minimization problem,
$$\omega_{k+1} \leftarrow \operatorname*{argmin}_\omega \; \mathbb{E}_{\sigma_{k+1}}\Big[\big[Q_\omega - (1-\gamma)\, r - \gamma\, \mathcal{P}^{\pi_{\theta_{k+1}}} Q_{\omega_k}\big](s,a)^2\Big], \qquad (3.2)$$
where $\sigma_{k+1} = \sigma_{\pi_{\theta_{k+1}}}$ is the stationary state-action distribution of $\pi_{\theta_{k+1}}$ and the operator $\mathcal{P}^{\pi_{\theta_{k+1}}}$ is defined in (2.3).
3.1 LINEAR FUNCTION APPROXIMATION
In this section, we consider linear function approximation. More specifically, we parameterize the action-value function as $Q_\omega(s,a) = \omega^\top \varphi(s,a)$ and the energy function of the energy-based policy as $f_\theta(s,a) = \theta^\top \varphi(s,a)$. Here $\varphi(s,a) \in \mathbb{R}^d$ is the feature vector, where $d > 0$ is the dimension. Without loss of generality, we assume that $\|\varphi(s,a)\|_2 \le 1$ for any $(s,a) \in \mathcal{S} \times \mathcal{A}$, which can be achieved by normalization.
Actor Update. The minimization problem in (3.1) admits the following closed-form solution,
$$\theta_{k+1} = \tau_{k+1}\big(\eta^{-1} \omega_k + \tau_k^{-1} \theta_k\big), \qquad (3.3)$$
which corresponds to a step of the natural policy gradient method (Kakade, 2002).
Critic Update. The minimization problem in (3.2) admits the following closed-form solution,
$$\widetilde{\omega}_{k+1} = \mathbb{E}_{\sigma_{k+1}}\big[\varphi(s,a)\, \varphi(s,a)^\top\big]^{-1}\, \mathbb{E}_{\sigma_{k+1}}\Big[\big[(1-\gamma)\, r + \gamma\, \mathcal{P}^{\pi_{\theta_{k+1}}} Q_{\omega_k}\big](s,a)\, \varphi(s,a)\Big]. \qquad (3.4)$$
Since the closed-form solution $\widetilde{\omega}_{k+1}$ in (3.4) involves expectations over the stationary state-action distribution $\sigma_{k+1}$ of $\pi_{\theta_{k+1}}$, we use data to approximate such expectations. More specifically, we sample $\{(s_{\ell,1}, a_{\ell,1})\}_{\ell \in [N]}$ and $\{(s_{\ell,2}, a_{\ell,2}, r_{\ell,2}, s'_{\ell,2}, a'_{\ell,2})\}_{\ell \in [N]}$ such that $(s_{\ell,1}, a_{\ell,1}) \sim \sigma_{k+1}$, $(s_{\ell,2}, a_{\ell,2}) \sim \sigma_{k+1}$, $r_{\ell,2} = r(s_{\ell,2}, a_{\ell,2})$, $s'_{\ell,2} \sim \mathcal{P}(\cdot \,|\, s_{\ell,2}, a_{\ell,2})$, and $a'_{\ell,2} \sim \pi_{\theta_{k+1}}(\cdot \,|\, s'_{\ell,2})$, where $N$ is the sample size. (A sketch of one sample-based iteration under this scheme is given below; the resulting projected estimator is stated formally in (3.5).)
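The following sketch assembles one sample-based iteration of the linear scheme: the natural policy gradient actor step (3.3), followed by a projected least-squares estimate of the critic target (3.4). The function and variable names are illustrative, the sampling plumbing (drawing the two batches from $\sigma_{k+1}$) is assumed to be provided, and a small ridge term is added purely for numerical safety in this sketch (Assumption 4.3 below guarantees invertibility only at the population level):

```python
import numpy as np

def linear_ac_step(theta, omega, tau_k, eta, R, batch1, batch2, phi, gamma):
    """One single-timescale iteration with linear function approximation.

    theta, omega : actor / critic parameters in R^d (omega is omega_k)
    batch1       : [(s, a)] drawn from the stationary distribution of pi_{k+1}
    batch2       : [(s, a, r, s_next, a_next)] with a_next ~ pi_{k+1}(.|s_next)
    phi          : feature map phi(s, a) -> R^d with ||phi(s, a)||_2 <= 1
    """
    d = theta.shape[0]

    # Actor update (3.3): a natural policy gradient step in parameter space,
    # with the new temperature satisfying 1/tau_{k+1} = 1/eta + 1/tau_k.
    tau_next = 1.0 / (1.0 / eta + 1.0 / tau_k)
    theta_next = tau_next * (omega / eta + theta / tau_k)

    # Critic update: sample estimate of (3.4), then projection onto the ball
    # of radius R, which yields the estimator (3.5).
    Phi1 = np.stack([phi(s, a) for (s, a) in batch1])          # N x d
    M = Phi1.T @ Phi1                                          # sum phi phi^T
    b = np.zeros(d)
    for (s, a, rew, s_next, a_next) in batch2:
        target = (1 - gamma) * rew + gamma * omega @ phi(s_next, a_next)
        b += target * phi(s, a)
    omega_next = np.linalg.solve(M + 1e-8 * np.eye(d), b)      # ridge: sketch only
    norm = np.linalg.norm(omega_next)
    if norm > R:                                               # projection Pi_R
        omega_next *= R / norm
    return theta_next, omega_next, tau_next
```

In a full implementation, $\pi_{\theta_{k+1}}$ would be the energy-based policy $\propto \exp(\tau_{k+1}^{-1}\, \theta_{k+1}^\top \varphi(s, \cdot))$, so the two batches must be collected after the actor step, which is exactly the on-policy sampling order described above.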
We approximate $\widetilde{\omega}_{k+1}$ by $\omega_{k+1}$, which is defined as follows,
$$\omega_{k+1} = \Pi_R\bigg\{\Big(\sum_{\ell=1}^{N} \varphi(s_{\ell,1}, a_{\ell,1})\, \varphi(s_{\ell,1}, a_{\ell,1})^\top\Big)^{-1} \sum_{\ell=1}^{N} \big((1-\gamma)\, r_{\ell,2} + \gamma\, Q_{\omega_k}(s'_{\ell,2}, a'_{\ell,2})\big)\, \varphi(s_{\ell,2}, a_{\ell,2})\bigg\}. \qquad (3.5)$$
Here $\Pi_R$ is the projection operator, which projects the parameter onto the centered ball of radius $R$ in $\mathbb{R}^d$. Such a projection operator stabilizes the algorithm (Konda and Tsitsiklis, 2000; Bhatnagar et al., 2009). It is worth mentioning that one may also view the update in (3.5) as one step of the least-squares temporal-difference method (Bradtke and Barto, 1996), which can be modified for the off-policy setting (Antos et al., 2007; Yu, 2010; Liu et al., 2018; Nachum et al., 2019; Xie et al., 2019; Zhang et al., 2020; Uehara and Jiang, 2019; Nachum and Dai, 2020). Such a modification allows the data points in (3.5) to be reused in subsequent iterations, which further improves the sample complexity. Specifically, let $\sigma_{\mathrm{bhv}} \in \mathcal{P}(\mathcal{S} \times \mathcal{A})$ be the stationary state-action distribution induced by a behavioral policy $\pi_{\mathrm{bhv}}$. We replace the actor and critic updates in (3.1) and (3.2) by
$$\theta_{k+1} \leftarrow \operatorname*{argmin}_\theta \; \mathbb{E}_{\sigma_{\mathrm{bhv}}}\Big[\Big(f_\theta(s,a) - \tau_{k+1}\big(\eta^{-1} Q_{\omega_k}(s,a) + \tau_k^{-1} f_{\theta_k}(s,a)\big)\Big)^2\Big], \qquad (3.6)$$
$$\omega_{k+1} \leftarrow \operatorname*{argmin}_\omega \; \mathbb{E}_{\sigma_{\mathrm{bhv}}}\Big[\big[Q_\omega - (1-\gamma)\, r - \gamma\, \mathcal{P}^{\pi_{\theta_{k+1}}} Q_{\omega_k}\big](s,a)^2\Big], \qquad (3.7)$$
respectively. With linear function approximation, the actor update in (3.6) reduces to (3.3), while the critic update in (3.7) admits the closed-form solution
$$\widetilde{\omega}_{k+1} = \mathbb{E}_{\sigma_{\mathrm{bhv}}}\big[\varphi(s,a)\, \varphi(s,a)^\top\big]^{-1}\, \mathbb{E}_{\sigma_{\mathrm{bhv}}}\Big[\big[(1-\gamma)\, r + \gamma\, \mathcal{P}^{\pi_{\theta_{k+1}}} Q_{\omega_k}\big](s,a)\, \varphi(s,a)\Big],$$
which can be well approximated using state-action pairs drawn from $\sigma_{\mathrm{bhv}}$. See §4 for a detailed discussion. Finally, by assembling the updates in (3.3) and (3.5), we present the linear actor-critic method in Algorithm 1, which is deferred to §B of the appendix.
4 THEORETICAL RESULTS
In this section, we upper bound the regret of the linear actor-critic method. We defer the analysis of the deep neural actor-critic method to §C of the appendix. Hereafter we assume that $|r(s,a)| \le r_{\max}$ for any $(s,a) \in \mathcal{S} \times \mathcal{A}$, where $r_{\max}$ is a positive absolute constant. First, we impose the following assumptions. Recall that $\sigma^*$ is the stationary state-action distribution of $\pi^*$, while $\sigma_k$ is the stationary state-action distribution of $\pi_{\theta_k}$. Moreover, let $\nu \in \mathcal{P}(\mathcal{S} \times \mathcal{A})$ be a state-action distribution with respect to which we aim to characterize the performance of the actor-critic algorithm. Specifically, after $K+1$ actor updates, we are interested in upper bounding the following regret,
$$\mathbb{E}\Big[\sum_{k=0}^{K} \big\|Q^* - Q^{\pi_{\theta_{k+1}}}\big\|_{\nu,1}\Big] = \mathbb{E}\Big[\sum_{k=0}^{K} Q^*(s,a) - Q^{\pi_{\theta_{k+1}}}(s,a)\Big], \qquad (4.1)$$
where the expectation is taken with respect to $\{\theta_k\}_{k \in [K+1]}$ and $(s,a) \sim \nu$. Here we allow $\nu$ to be any fixed distribution for generality, which may differ from $\sigma^*$.
Assumption 4.1 (Concentrability Coefficient). The following statements hold.
(i) There exists a positive absolute constant $\phi$ such that $\phi_k \le \phi$ for any $k \ge 1$, where $\phi_k = \|\mathrm{d}\sigma^* / \mathrm{d}\sigma_k\|_{\sigma_k, 2}$.
(ii) For any $k \ge 1$ and any sequence of policies $\{\pi_i\}_{i \ge 1}$, the $k$-step future state-action distribution $\mathcal{P}^{\pi_1} \cdots \mathcal{P}^{\pi_k} \nu$ is absolutely continuous with respect to $\sigma^*$, where $\nu$ is the same distribution as the one in (4.1). Also, it holds that $C_{\nu,\sigma^*} = (1-\gamma)^2 \sum_{k=1}^{\infty} k^2 \gamma^k\, c(k) < \infty$, where $c(k) = \sup_{\{\pi_i\}_{i \in [k]}} \|\mathrm{d}(\mathcal{P}^{\pi_1} \cdots \mathcal{P}^{\pi_k} \nu) / \mathrm{d}\sigma^*\|_{\sigma^*, \infty}$.
In Assumption 4.1, $C_{\nu,\sigma^*}$ is known as the discounted-average concentrability coefficient of the future state-action distributions. Such an assumption measures the stochastic stability properties of the MDP, and the class of MDPs with such properties is quite large. See Szepesvári and
Munos (2005); Munos and Szepesvári (2008); Antos et al. (2008a;b); Scherrer (2013); Scherrer et al. (2015); Farahmand et al. (2016); Yang et al. (2019b); Geist et al. (2019); Chen and Jiang (2019) for more examples and discussion.
Assumption 4.2 (Zero Approximation Error). It holds for any $\omega, \theta \in \mathcal{B}(0, R)$ that $\inf_{\omega' \in \mathcal{B}(0,R)} \mathbb{E}_{\sigma_{\pi_\theta}}\big[\big[\mathcal{T}^{\pi_\theta} Q_\omega - \omega'^\top \varphi\big](s,a)^2\big] = 0$, where $\mathcal{T}^{\pi_\theta}$ is defined in (2.4).
Assumption 4.2 imposes a structural assumption on the MDP in the linear setting. Specifically, it assumes that the Bellman operator of each policy maps a linear value function to a linear function. Therefore, the value function associated with each policy (which is the fixed point of the corresponding Bellman operator) lies in the linear function class. Since the value functions are linear here, the energy-based policy class approximately covers the optimal policy as the temperature parameter $\tau$ goes to zero. In summary, Assumption 4.2 ensures that the energy-based policy class approximately captures the optimal policy and thus there is no approximation error. When Assumption 4.2 does not hold, we only need to add an additional bias term to the regret upper bound in our theorem, without much change in the proof.
Assumption 4.3 (Well-Conditioned Feature). The minimum singular value of the matrix $\mathbb{E}_{\sigma_k}[\varphi(s,a)\, \varphi(s,a)^\top]$ is uniformly lower bounded by a positive absolute constant $\lambda$ for any $k \ge 1$.
Assumption 4.3 ensures that the minimization problem in (3.2) admits a unique minimizer, which is used in the critic update. Similar assumptions are commonly imposed in the literature (Bhandari et al., 2018; Xu et al., 2019; Zou et al., 2019; Wu et al., 2020).
Under Assumptions 4.1, 4.2, and 4.3, we upper bound the regret of Algorithm 1 in the following theorem.
Theorem 4.4. Suppose that Assumptions 4.1, 4.2, and 4.3 hold. Let $\nu$ be a state-action distribution satisfying (ii) of Assumption 4.1. Also, for any sufficiently large $K > 0$, let $\eta = K^{1/2}$ and $N = \Omega\big(K\, C_{\nu,\sigma^*}^2\, (\phi / \lambda)^2 \log^2 N\big)$, and let the sequence of policy parameters $\{\theta_k\}_{k \in [K+1]}$ be generated by Algorithm 1. It holds that
$$\mathbb{E}\Big[\sum_{k=0}^{K} Q^*(s,a) - Q^{\pi_{\theta_{k+1}}}(s,a)\Big] \le \bigg(\frac{2 \log|\mathcal{A}|}{(1-\gamma)^3} + O(1)\bigg) \cdot K^{1/2}, \qquad (4.2)$$
where the expectation is taken with respect to $\{\theta_k\}_{k \in [K+1]}$ and $(s,a) \sim \nu$.
We sketch the proof in §D; see §E.1 for a detailed proof. Theorem 4.4 establishes an $O(K^{1/2})$ regret for Algorithm 1, where $K$ is the total number of iterations, which translates into an $O(K^{-1/2})$ average optimality gap. Here $O(\cdot)$ omits terms involving $(1-\gamma)^{-1}$ and $\log|\mathcal{A}|$. To better understand Theorem 4.4, consider the ideal setting where we have access to the action-value function $Q^\pi$ of any policy $\pi$. In such an ideal setting, the critic update is unnecessary. However, the natural policy gradient method, which uses only the actor update, achieves the same $O(K^{1/2})$ regret (Liu et al., 2019; Agarwal et al., 2019; Cai et al., 2019). In other words, in terms of the iteration complexity, Theorem 4.4 shows that in the single-timescale setting, using only one step of the critic update along with one step of the actor update is as efficient as the natural policy gradient method in the ideal setting.
Furthermore, by the regret bound in (4.2), to obtain an $\varepsilon$-globally optimal policy, it suffices to set $K \gtrsim (1-\gamma)^{-6}\, \varepsilon^{-2} \log^2|\mathcal{A}|$ in Algorithm 1 and output a randomized policy drawn uniformly from $\{\pi_{\theta_k}\}_{k=1}^{K+1}$. Plugging such a $K$ into $N = \Omega\big(K\, C_{\nu,\sigma^*}^2\, (\phi / \lambda)^2 \log^2 N\big)$, we obtain $N = \widetilde{O}(\varepsilon^{-2})$, where $\widetilde{O}(\cdot)$ omits logarithmic terms. Thus, to achieve an $\varepsilon$-globally optimal policy, the total sample complexity of Algorithm 1 is $\widetilde{O}(\varepsilon^{-4})$. This matches the sample complexity results established in Xu et al. (2020); Hong et al. (2020) for two-timescale actor-critic methods. Meanwhile, notice that here the critic updates are on-policy and we draw $N$ new data points in each critic update.
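To make the bookkeeping behind these sample-complexity claims explicit, the following worked calculation restates the chain of choices above (absolute constants and the $(1-\gamma)^{-1}$ and $\log|\mathcal{A}|$ factors are suppressed inside $\widetilde{O}(\cdot)$):

```latex
\begin{align*}
&\text{regret (4.2):} &&
  \mathbb{E}\Big[\textstyle\sum_{k=0}^{K} Q^{*}-Q^{\pi_{\theta_{k+1}}}\Big]
  \,\lesssim\, \frac{\log|\mathcal{A}|}{(1-\gamma)^{3}}\, K^{1/2},\\
&\text{average gap} \le \varepsilon: &&
  \frac{\log|\mathcal{A}|}{(1-\gamma)^{3}}\, K^{-1/2} \le \varepsilon
  \;\Longleftarrow\;
  K \,\gtrsim\, \frac{\log^{2}|\mathcal{A}|}{(1-\gamma)^{6}\,\varepsilon^{2}},\\
&\text{per-iteration batch:} &&
  N \,=\, \widetilde{O}(K) \,=\, \widetilde{O}(\varepsilon^{-2}),\\
&\text{total samples:} &&
  \underbrace{K \cdot N \,=\, \widetilde{O}(\varepsilon^{-4})}_{\text{on-policy, fresh data each iteration}}
  \qquad\text{vs.}\qquad
  \underbrace{N \,=\, \widetilde{O}(\varepsilon^{-2})}_{\text{off-policy, one reused dataset}}.
\end{align*}
```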
As discussed in §3.1, under the off-policy setting, the critic updates given in (3.7) can be implemented using a fixed dataset sampled from $\sigma_{\mathrm{bhv}}$, the stationary state-action distribution induced by the behavioral policy. In this scenario, the total number of data points used by the algorithm equals $N$. Moreover, by imposing on $\sigma_{\mathrm{bhv}}$ assumptions similar to (i) of Assumption 4.1 and Assumption 4.3, we can establish a similar $O(K^{1/2})$ regret as in (4.2) for the off-policy setting. As a result, with data reuse, to obtain an $\varepsilon$-globally optimal policy, the sample complexity of Algorithm 1 is essentially $\widetilde{O}(\varepsilon^{-2})$, which demonstrates the advantage of our single-timescale actor-critic method. Besides, focusing only on convergence to an $\varepsilon$-stationary point, Wu et al. (2020); Xu et al. (2020) establish a sample complexity of $\widetilde{O}(\varepsilon^{-5/2})$ for two-timescale actor-critic, where $\varepsilon$ measures the squared Euclidean norm of the policy gradient. In contrast, by adopting the natural policy gradient (Kakade, 2002) in the actor updates, we achieve convergence to the globally optimal policy. We remark that the idea of off-policy evaluation cannot be applied to the typical two-timescale setting (Wu et al., 2020; Xu et al., 2020), where the critic is updated using TD learning (e.g., TD(0) and TD($\lambda$)), since the off-policy TD method may diverge even with linear function approximation (Baird et al., 1995; Sutton et al., 2008). To the best of our knowledge, we establish the rate of convergence and global optimality of the actor-critic method with function approximation in the single-timescale setting for the first time.
Furthermore, as we show in Theorem C.5 of §C, when both the actor and the critic are represented by overparameterized deep neural networks, we establish a similar $O\big((1-\gamma)^{-3} \log|\mathcal{A}| \cdot K^{1/2}\big)$ regret when the architectures of the actor and critic neural networks are properly chosen. To the best of our knowledge, this is the first theoretical guarantee for the actor-critic method with deep neural network function approximation in terms of the rate of convergence and global optimality.

### Review Title
Theory for a one-timescale AC

### Review Text
This is a theoretical paper. The paper studies the convergence and optimality of actor-critic with function approximation in a single-timescale setting. The proposed single-timescale AC applies PPO for the actor and updates the critic by applying the Bellman operator to the critic once. Despite the actor-critic coupling, the paper establishes global convergence with sublinear rates for the proposed AC method under both linear and neural network function approximation.

Strong points:
(1) The paper is well-written, and the motivation is easy to follow. The paper also provides a detailed review of related works.
(2) The studied AC is practical in the sense that both the actor and the critic can use function approximation and their updates run on a single timescale. The proposed scheme could complement previous studies of AC methods in the two-timescale setting.
(3) The provided convergence theory could offer new insights into dealing with the coupling of the actor and the critic. The provided intuition makes sense to me, although I did not have time to check the proof details.

Weak points and comments:
(1) Although the paper provides a general setup for the one-timescale AC method, it is worth providing some generic examples to explain the theory, e.g., the energy-based policy with direct parametrization.
(2) The theory seems specific to the energy-based policy. Are there any other types of policies that could also be considered? If not, it would be helpful to comment on the importance of the energy-based policy in practice or from a theoretical point of view.
(3) The proposed AC methods rely on population quantities, e.g., expectations over the state-action visitation probability. How practical are they? Or how is the proposed AC related to the practical one-timescale AC methods mentioned in the Introduction?
(4) In (2.7), the critic depends on the current policy at iteration k+1. What do you mean by saying they are updated simultaneously in the Abstract?
(5) It would be helpful to include a table comparing the proposed AC with others in terms of sampling assumptions, policy classes, and convergence guarantees.

I believe my concerns can be addressed during the review period. To me, the development of this paper is new, and I am inclined to recommend acceptance.

### Review Rating
8: Top 50% of accepted papers, clear accept

### Review Confidence
3: The reviewer is fairly confident that the evaluation is correct